AI Watchdog for Risks

Battling AI misuse while weathering corporate criticism, OpenAI has disrupted dozens of networks exploiting its technology for nefarious purposes.

While facing intense criticism over its recent corporate restructuring, OpenAI has positioned itself as an AI industry watchdog, reporting that it has disrupted more than 40 networks that violated its usage policies since February 2024. The company says these operations targeted attempts to use AI for authoritarian population control and other harmful activities, and that it has shared insights with partners about AI-enabled scams and suspicious cyber activity.

The company’s watchdog claims come amid significant structural changes. In October 2025, OpenAI reorganized into a public benefit corporation after winning approval from California and Delaware officials. The for-profit arm became OpenAI Group PBC, while the non-profit was renamed OpenAI Foundation and holds a 26% stake. Microsoft now holds 27%, with employees and investors owning the remaining 47%. The transition follows OpenAI’s 2019 shift from a non-profit to a “capped-profit” model designed to attract investment while limiting investor returns to 100 times the initial funding.

This restructuring has drawn sharp criticism. Public Citizen, a consumer advocacy group, condemned the arrangement as enabling “unaccountable leadership,” citing the company’s conduct over the past two years. The group specifically pointed to OpenAI’s pattern of rushing out products such as Sora 2 without adequate safety measures. Experts argue that technical guardrails are essential to prevent harmful AI applications without stifling innovation. Their concerns include the technology’s potential to create realistic deepfakes that undermine public trust in visual media.

OpenAI has simultaneously proposed an international AI watchdog, modeled after the International Atomic Energy Agency (IAEA), to monitor high-capability AI systems. The suggestion comes as the company deepens its government ties, including a $200 million Department of Defense contract awarded in June 2025 for military AI tools.

Legal challenges continue to mount. In 2023, the FTC opened an investigation into OpenAI’s data-scraping practices and its models’ generation of false statements about individuals. More recently, the company has issued subpoenas to nonprofit critics involved in the OpenAI Files investigation.

Product safety remains contentious. Critics have demanded the withdrawal of Sora 2 over deepfake dangers and threats to democracy. Reports indicate that Sora-generated videos depicting violence against women have circulated despite content restrictions.

As OpenAI balances its dual role as AI developer and self-appointed industry watchdog, questions persist about whether any organization can effectively predict and prevent the catastrophic risks posed by increasingly powerful AI systems.
