Cursor’s AI chatbot recently created a major problem for the company. The bot told customers about a “30-day money-back guarantee” that didn’t actually exist. When users tried to get refunds based on this fake policy, human staff had to deny their requests. This sparked anger as screenshots of the false guarantee spread online. Cursor has temporarily shut down the bot while it adds safeguards to prevent future misinformation. The incident raises questions about AI reliability in customer service.
How can customers trust AI chatbots when they might be making things up? That’s the question many Cursor users are asking after the company’s AI support bot invented a refund policy that doesn’t exist. The incident has triggered widespread anger among customers who acted on the false information.
The trouble began when users asked about refund options. Instead of checking official company guidelines, the AI bot confidently described a “30-day money-back guarantee” that Cursor never offered. Dozens of customers submitted refund requests based on this fictional policy, only to be denied by human staff who explained no such guarantee existed.
Cursor’s AI confidently invented a non-existent refund policy, leaving customers frustrated when human staff rejected their claims.
“It’s like the AI just made up a policy that sounded reasonable,” said one affected customer. Screenshots of the fabricated policy quickly spread across social media platforms, damaging Cursor’s reputation and triggering a flood of complaints.
This incident follows a pattern of AI “hallucinations” seen in other companies’ support systems. Last year, Air Canada faced a similar crisis when its chatbot invented refund terms for a bereavement fare, triggering customer backlash; a tribunal ultimately ordered the airline to honor the fabricated terms after it unsuccessfully argued that the chatbot was a separate legal entity responsible for its own misinformation.
Tech experts explain that generative AI tends to fill knowledge gaps with plausible-sounding but incorrect information. Without guardrails that verify responses against official policy documents, bots confidently present fiction as fact, especially when handling complex queries. The incident highlights why grounding answers in verified sources, and keeping a human in the loop for anything it cannot verify, is essential when deploying generative AI in customer-facing applications.
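A minimal sketch of this kind of guardrail, in Python, might intercept a drafted reply and refuse to send any policy claim that cannot be matched to pre-approved policy wording. The policy entries, term list, and exact-match rule below are illustrative assumptions, not Cursor’s or any vendor’s actual implementation.

```python
# Illustrative guardrail for a support chatbot: escalate any draft reply that
# makes policy claims not backed by an approved policy statement.
# Policy text, risky-term list, and matching rule are hypothetical.

APPROVED_POLICIES = {
    # Hypothetical approved wording the bot is allowed to repeat verbatim.
    "refunds": "refund requests are reviewed case by case by our support team",
    "cancellation": "you can cancel your subscription at any time from the dashboard",
}

RISKY_TERMS = ("refund", "guarantee", "money-back", "cancellation", "trial")


def is_grounded(draft: str) -> bool:
    """Crude grounding check: a policy-sensitive draft must contain the exact
    wording of at least one approved policy statement."""
    lowered = draft.lower()
    mentions_policy = any(term in lowered for term in RISKY_TERMS)
    quotes_policy = any(policy in lowered for policy in APPROVED_POLICIES.values())
    return (not mentions_policy) or quotes_policy


def safe_reply(draft: str) -> str:
    """Escalate to a human agent instead of sending an unverified policy claim."""
    if is_grounded(draft):
        return draft
    return "Let me check that with our support team and get back to you."


# The fabricated guarantee never reaches the customer:
print(safe_reply("Yes! We offer a 30-day money-back guarantee on all plans."))

# A draft that quotes approved wording passes through unchanged:
print(safe_reply("Refund requests are reviewed case by case by our support team."))
```

Real systems typically rely on retrieval over official policy documents and semantic matching rather than exact substrings, but the underlying pattern is the same: unverifiable claims get routed to a human rather than sent to the customer.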
For Cursor, the consequences extend beyond angry tweets. The company may face pressure to honor the fabricated policy for customers who relied on it. Legal experts note that companies can sometimes be held accountable for false information their automated systems provide.
Cursor has temporarily disabled its AI support feature while implementing stricter guardrails to prevent future policy fabrications. The company issued an apology stating, “We’re taking immediate steps to guarantee our AI only provides information that aligns with our actual policies.”
This incident highlights the growing challenge companies face: balancing the efficiency of AI support with the critical need for accuracy and reliability. Studies estimate that AI hallucinations occur in roughly 3–27% of outputs, depending on the model, creating significant verification challenges for businesses relying on these technologies. Without proper safeguards, the very tools designed to improve customer service can seriously damage customer trust.