Controversial AI Tool for Lawyers

While ChatGPT keeps spreading through law firms like wildfire, most people think it has no business giving legal advice. Legal adoption of AI tools nearly tripled from 11% to 30% in just one year, and at 28%, lawyers now lead every other profession in generative AI use.

Yet here’s the kicker: 63.6% of people believe ChatGPT shouldn’t touch legal or medical advice with a ten-foot pole.


The trust issue is glaring. Only 28.3% of people trust ChatGPT for legal questions. That’s pathetic. Compare that to career advice at 61.7% or product recommendations at 60%, and you see the problem. People get nervous when real consequences are on the line. Makes sense – ChatGPT can’t interpret laws with context, lacks credentials, and might spit out dangerous nonsense that lands someone in jail.

But lawyers aren’t using it for client advice anyway. They’re smart enough to know better. Instead, they’re leveraging ChatGPT for the boring stuff: drafting documents, reviewing contracts, and doing routine research. Document review leads the pack with 74% adoption, followed closely by legal research at 73%. It’s all about efficiency. Why waste billable hours on repetitive tasks when a robot can handle it? This frees up time for actual lawyering – the kind that requires a brain and a law degree.

The whole situation screams for regulation, but nobody’s figured that out yet. What happens when non-lawyers use AI for legal work? That’s unauthorized practice of law, technically. The legal profession is scrambling to create guidelines while firms experiment with AI-powered client services and subscription models. With 92% of Fortune 500 companies already using OpenAI products, corporate legal departments are leading the charge in AI adoption. Educational institutions face similar challenges in establishing ethical AI use protocols while integrating the technology themselves.

Experience changes everything though. People who actually use ChatGPT tend to trust it more – 55% of users think it benefits humanity versus just 16% of non-users. Direct interaction builds confidence, apparently. Still, that confidence vanishes when legal questions arise.

Law firms face a simple reality: adapt or get left behind. The competitive edge goes to those using AI effectively. Traditional firms are feeling the heat to modernize. Meanwhile, lawyers keep downloading ChatGPT, unable to resist those sweet productivity gains. The transformation is happening whether anyone likes it or not.
