Act Now on AI

Alaska’s lawmakers are taking a swing at artificial intelligence with SB 177, a sweeping bill that basically tells state agencies how to handle their shiny new AI toys. The legislation, currently sitting in committee, reads like a parent’s rulebook for teenagers with smartphones—lots of restrictions, mandatory check-ins, and zero tolerance for sketchy behavior.

The bill’s got teeth. State agencies can’t just slap AI onto decisions that affect people’s lives without telling them first. Want to use someone’s sensitive data? Better ask nicely and get permission. Facial recognition for important government decisions? Forget it. That’s banned outright, along with any AI system that might accidentally ship data to China or Russia. Because apparently, that needed to be spelled out.

Here’s where it gets interesting. If a state agency’s AI screws up and hurts someone through negligence or recklessness, they’re on the hook for civil liability. No more hiding behind the “computer did it” excuse. Agencies also have to inventory their AI systems every two years, run impact assessments, and publish the results online for everyone to see. Transparency theater at its finest, but at least it’s something.


The election folks aren’t left out either. Campaign communications using deepfakes must come with warning labels. It’s like those cigarette warnings, except for political content that might be completely fabricated. Welcome to 2025, where reality needs a disclaimer.

Meanwhile, Alaska’s lawyers got their own reality check. The Bar’s Ethics Opinion 2025-1 basically says don’t be an idiot with AI tools—client confidentiality still matters when you’re chatting with ChatGPT. Groundbreaking stuff, really.

The cybersecurity provisions read like someone actually thought about consequences for once. Regular inventories, risk assessments, privacy protections—all the boring stuff that keeps systems from imploding. The bill even requires human review for appeals when AI makes consequential decisions. Imagine that, actual humans in the loop. The legislation also mandates that agencies break down inter-agency data sharing barriers to help AI systems actually work effectively across departments.

Whether SB 177 becomes law remains uncertain. But Alaska’s attempting something most states haven’t—comprehensive AI governance that acknowledges both the technology’s promise and its capacity for spectacular failure. The framework specifically targets high-impact areas like employment decisions, public services, and law enforcement operations where AI mistakes hit hardest. It’s ambitious, complicated, and probably necessary. Alaska’s initiative is especially critical given that only 10% of executives recognize the potential discrimination risks associated with AI systems in workplace settings.
