Act Now on AI

Alaska’s lawmakers are taking a swing at artificial intelligence with SB 177, a sweeping bill that basically tells state agencies how to handle their shiny new AI toys. The legislation, currently sitting in committee, reads like a parent’s rulebook for teenagers with smartphones—lots of restrictions, mandatory check-ins, and zero tolerance for sketchy behavior.

The bill’s got teeth. State agencies can’t just slap AI onto decisions that affect people’s lives without telling them first. Want to use someone’s sensitive data? Better ask nicely and get permission. Facial recognition for important government decisions? Forget it. That’s banned outright, along with any AI system that might accidentally ship data to China or Russia. Because apparently, that needed to be spelled out.

Here’s where it gets interesting. If a state agency’s AI screws up and hurts someone through negligence or recklessness, they’re on the hook for civil liability. No more hiding behind the “computer did it” excuse. Agencies also have to inventory their AI systems every two years, run impact assessments, and publish the results online for everyone to see. Transparency theater at its finest, but at least it’s something.
The election folks aren’t left out either. Campaign communications using deepfakes must come with warning labels. It’s like those cigarette warnings, except for political content that might be completely fabricated. Welcome to 2025, where reality needs a disclaimer.

Meanwhile, Alaska’s lawyers got their own reality check. The Bar’s Ethics Opinion 2025-1 basically says don’t be an idiot with AI tools—client confidentiality still matters when you’re chatting with ChatGPT. Groundbreaking stuff, really.

The cybersecurity provisions read like someone actually thought about consequences for once. Regular inventories, risk assessments, privacy protections—all the boring stuff that keeps systems from imploding. The bill even requires human review for appeals when AI makes consequential decisions. Imagine that, actual humans in the loop. The legislation also mandates that agencies break down inter-agency data sharing barriers to help AI systems actually work effectively across departments.

Whether SB 177 becomes law remains uncertain. But Alaska’s attempting something most states haven’t—comprehensive AI governance that acknowledges both the technology’s promise and its capacity for spectacular failure. The framework specifically targets high-impact areas like employment decisions, public services, and law enforcement operations where AI mistakes hit hardest. It’s ambitious, complicated, and probably necessary. Alaska’s initiative is especially critical given that only 10% of executives recognize the potential discrimination risks associated with AI systems in workplace settings.
