New York's AI Regulation Initiative

The RAISE Act is coming for Big Tech's AI fever dreams. New York lawmakers just fired a warning shot across the bow of companies racing to build ever-more-powerful AI systems. The bill targets "frontier AI models" – those super-advanced systems costing over $100 million to train or requiring massive computational resources. It's now awaiting the governor's signature, and once signed, New York becomes the first state to slap specific safety regulations on AI giants.

The rules are simple. Don’t create AI that kills people or causes billions in damages. Seems reasonable, right? Companies developing these monster models must implement safety protocols to prevent misuse. Think: stopping your AI from helping make bioweapons or going rogue. They’ll need to test continuously for risks like loss of control or – everyone’s favorite sci-fi nightmare – self-replication.


These tech titans can’t hide behind closed doors anymore. They’ll have to publish safety plans (minus the secret sauce) and report serious incidents within 72 hours. State officials get to peek at the unredacted plans whenever they want. Mess up once? That’s a $10 million fine. Do it again? $30 million. Ouch.
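To put those numbers in one place, here's a minimal Python sketch of the 72-hour reporting window and the fine schedule described above. Every name here is a hypothetical for illustration, and the flat first/repeat fine structure is an assumption; in practice, enforcement runs through state officials, not code.

```python
from datetime import datetime, timedelta

# Figures as cited above; names and structure are illustrative assumptions.
REPORTING_WINDOW = timedelta(hours=72)  # window for reporting a serious incident
FIRST_VIOLATION_FINE = 10_000_000       # USD, first violation
REPEAT_VIOLATION_FINE = 30_000_000      # USD, each violation after the first

def reporting_deadline(incident_time: datetime) -> datetime:
    """Latest moment an incident can be reported under the 72-hour rule."""
    return incident_time + REPORTING_WINDOW

def fine_for_violation(prior_violations: int) -> int:
    """Fine owed for a new violation, given the number of prior violations."""
    return FIRST_VIOLATION_FINE if prior_violations == 0 else REPEAT_VIOLATION_FINE

# Example: an incident at noon must be reported by noon three days later.
print(reporting_deadline(datetime(2025, 6, 12, 12, 0)))      # 2025-06-15 12:00:00
print(fine_for_violation(0), fine_for_violation(1))          # 10000000 30000000
```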

The law has teeth, but they aren't as sharp as California's proposal: New York's version skips third-party audits and whistleblower protections. Still, it passed with overwhelming bipartisan support – a rare thing these days. Companies must also retain unredacted copies of their safety protocols for five years after deployment, so the paper trail outlives the launch.

The impact will ripple through AI development. Even distilled models derived from bigger ones face regulation if they cost $5 million or more. The act zeroes in on harms that would count as crimes if a human caused them, carried out by an AI acting with little or no human intervention. Companies will need detailed plans to prevent their AI from enabling crimes or developing dangerous capabilities.
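Putting the article's two cost thresholds side by side, a back-of-the-envelope coverage check might look like the sketch below. The function and its cost-only test are assumptions for illustration; the bill also counts computational resources, which this ignores.

```python
FRONTIER_COST_THRESHOLD = 100_000_000  # USD, "over $100 million" to train
DISTILLED_COST_THRESHOLD = 5_000_000   # USD, "$5 million or more" for derivatives

def crosses_cost_threshold(training_cost_usd: float, is_distilled: bool = False) -> bool:
    """Rough check of whether a model's training cost crosses the cited thresholds."""
    if is_distilled:
        return training_cost_usd >= DISTILLED_COST_THRESHOLD
    return training_cost_usd > FRONTIER_COST_THRESHOLD

# Example: a $120M frontier model and a $5M distilled derivative are both covered.
print(crosses_cost_threshold(120_000_000))                   # True
print(crosses_cost_threshold(5_000_000, is_distilled=True))  # True
```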

Will this actually work? Who knows. But New York is taking a stand while the feds dither with executive orders and voluntary commitments. The message is crystal clear: develop responsibly or pay up. The question now is whether $30 million is enough to make tech billionaires blink.
