OpenAI Casts Itself as AI Watchdog While Facing Restructuring Criticism

While facing intense criticism over its recent corporate restructuring, OpenAI has positioned itself as an AI industry watchdog, reporting that since February 2024 it has disrupted more than 40 networks that violated its usage policies. The company says these actions targeted attempts to use AI for authoritarian population control and other harmful activities, and that it has shared insights with partners about AI-enabled scams and suspicious cyber activity.

Battling AI misuse while weathering corporate criticism, OpenAI disrupts dozens of networks exploiting its technology for nefarious purposes.

The company’s watchdog claims come amid significant structural changes. In October 2025, OpenAI reorganized into a public benefit corporation after getting approval from California and Delaware officials. The for-profit arm became OpenAI Group PBC, while the non-profit was renamed OpenAI Foundation with a 26% stake. Microsoft now holds 27%, with employees and investors owning the remaining 47%. This transition follows OpenAI’s 2019 shift from a non-profit to a capped for-profit model designed to attract investment while limiting returns to 100 times the initial funding.

The restructuring has drawn sharp criticism. Public Citizen, a consumer advocacy group, condemned the arrangement as enabling “unaccountable leadership,” citing the company’s conduct over the past two years. The group pointed specifically to OpenAI’s pattern of rushing products like Sora 2 to market without adequate safety measures. Experts argue that technical guardrails are essential to prevent harmful AI applications while preserving innovation, warning in particular that the technology can produce realistic deepfakes that undermine public trust in visual media.

OpenAI has simultaneously proposed an international AI watchdog modeled after the IAEA for monitoring high-capability AI systems. This suggestion comes as the company expands its government connections, including a $200 million Department of Defense contract awarded in June 2025 for military AI tools.

Legal challenges continue to mount. The FTC investigated OpenAI in 2023 over its data-scraping practices and its generation of false information about individuals. More recently, the company issued subpoenas to nonprofit critics involved in the OpenAI Files investigation.

Product safety remains contentious. Critics have demanded the withdrawal of Sora 2, citing deepfake risks and threats to democratic discourse. Reports indicate that Sora-generated videos depicting violence against women have circulated despite content restrictions.

As OpenAI balances its dual roles as both AI developer and self-appointed industry watchdog, questions persist about whether any organization can effectively predict and prevent the catastrophic risks posed by increasingly powerful AI systems.
