Act Now on AI

Alaska’s lawmakers are taking a swing at artificial intelligence with SB 177, a sweeping bill that basically tells state agencies how to handle their shiny new AI toys. The legislation, currently sitting in committee, reads like a parent’s rulebook for teenagers with smartphones—lots of restrictions, mandatory check-ins, and zero tolerance for sketchy behavior.

The bill’s got teeth. State agencies can’t just slap AI onto decisions that affect people’s lives without telling them first. Want to use someone’s sensitive data? Better ask nicely and get permission. Facial recognition for important government decisions? Forget it. That’s banned outright, along with any AI system that might accidentally ship data to China or Russia. Because apparently, that needed to be spelled out.

Here’s where it gets interesting. If a state agency’s AI screws up and hurts someone through negligence or recklessness, they’re on the hook for civil liability. No more hiding behind the “computer did it” excuse. Agencies also have to inventory their AI systems every two years, run impact assessments, and publish the results online for everyone to see. Transparency theater at its finest, but at least it’s something.
The election folks aren’t left out either. Campaign communications using deepfakes must come with warning labels. It’s like those cigarette warnings, except for political content that might be completely fabricated. Welcome to 2025, where reality needs a disclaimer.

Meanwhile, Alaska’s lawyers got their own reality check. The Bar’s Ethics Opinion 2025-1 basically says don’t be an idiot with AI tools—client confidentiality still matters when you’re chatting with ChatGPT. Groundbreaking stuff, really.

The cybersecurity provisions read like someone actually thought about consequences for once. Regular inventories, risk assessments, privacy protections—all the boring stuff that keeps systems from imploding. The bill even requires human review for appeals when AI makes consequential decisions. Imagine that, actual humans in the loop. The legislation also mandates that agencies break down inter-agency data sharing barriers to help AI systems actually work effectively across departments.

Whether SB 177 becomes law remains uncertain. But Alaska’s attempting something most states haven’t—comprehensive AI governance that acknowledges both the technology’s promise and its capacity for spectacular failure. The framework specifically targets high-impact areas like employment decisions, public services, and law enforcement operations where AI mistakes hit hardest. It’s ambitious, complicated, and probably necessary. Alaska’s initiative is especially critical given that only 10% of executives recognize the potential discrimination risks associated with AI systems in workplace settings.
