Cursor, an AI-powered code editor, recently earned the title of the 'fastest-growing SaaS company' of all time, going from $1 million to $100 million in annual recurring revenue in just 12 months.
When a product achieves that level of prominence, all eyes are on it. When users hit a snag, it doesn't take long for word to spread. It's even more concerning when they begin to drift away from the product.

These are the kinds of developments that have recently surrounded Cursor. First, several users reported that the app uninstalled itself from their devices. More recently, it was observed that Cursor automatically logged users out when they switched to another device.
When users reached out to Cursor for support, the AI-powered support agent fabricated a usage policy that didn't exist. According to reports, the AI agent, called Sam, claimed that a policy was in place within Cursor restricting usage to a single user on a single device. This false claim angered many users, prompting some to cancel their subscriptions.
"Multi-device workflows are table stakes for developers, and if you're going to pull something that disruptive, you'd at least expect a changelog entry," said a user on Hacker News, who also cancelled their Cursor subscription.
Moreover, many questioned how Cursor, a startup that has championed an AI use case, could itself fall victim to an AI hallucination. This, in turn, raises the question of whether fully automated AI customer support systems are even worth it.
There has been a backlash against Cursor over the past couple of days.

It seems that the Cursor support system is 100% based on AI, and it clearly gave very bad answers to users who couldn't log into Cursor because of a bug, leading to many customers cancelling their… pic.twitter.com/BhRi3Xg7lW— Julien Salinas (@JulienSalinasEN) April 18, 2025
Building Reliable AI Customer Experience Systems
What AI models excel at is engaging in conversation; there's a reason they're called large language models. Over time, this has encouraged teams at organisations of all sizes to use them to interact with customers and automate the process.

Hallucinations like the one above can have a detrimental effect on both the user experience and the company, especially when operating at or above Cursor's scale.
At AIM's Machine Learning Developers' Summit (MLDS) 2025, Kruthika Kumar Muralidharan, director of analytics at Razorpay, spoke about how to address these problems.
He mentioned that the first aspect to consider, before adopting LLMs, is capturing the core problems faced by customers and identifying where support teams are truly struggling and where AI should come into play.

Defining a precise objective and scope is essential, he stated, noting that businesses need to know exactly which problem they are solving and how they will measure success. For instance, if the chatbot's goal is to resolve complex queries but it is only trained on simple, low-hanging queries, it will inevitably fall short in real-world situations.
Hence, companies should first implement low-risk AI interactions in customer support systems. "For example, you can start with the FAQ section, where a large amount of knowledge is already documented. It's low risk and carries a small impact as well," he said.
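In code, that FAQ-first approach might look like the minimal sketch below. The FAQ entries, the matching method, and the threshold are illustrative assumptions, not any company's actual system; the key property is that the bot only repeats documented answers and hands off everything else instead of inventing a policy.

```python
from difflib import SequenceMatcher

# Hypothetical FAQ entries; in practice these come from existing documentation.
FAQ = {
    "how do i reset my password": "Use the 'Forgot password' link on the sign-in page.",
    "how do i cancel my subscription": "Go to Settings > Billing and choose 'Cancel plan'.",
}

def answer(query: str, threshold: float = 0.6) -> str:
    """Answer from documented FAQs only; escalate anything that matches poorly."""
    query = query.lower().strip()
    best_q, best_score = None, 0.0
    for q in FAQ:
        score = SequenceMatcher(None, query, q).ratio()
        if score > best_score:
            best_q, best_score = q, score
    if best_q is not None and best_score >= threshold:
        return FAQ[best_q]
    # Low-confidence match: hand off rather than fabricate an answer.
    return "Let me connect you with a human agent."
```

A production system would use an LLM or an embedding model rather than string similarity, but the low-risk shape stays the same: documented answers in, escalation by default.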
On the other hand, he also stated that businesses should avoid adopting complex and time-consuming training methods while building a pilot program. Instead, he suggested it is better to evaluate the AI customer service chatbot on easier problems, where the results are simple to test, understand, and validate.
"Don't expose anything directly to customers yet. Have some human testing and validation for a while before it gets released," Muralidharan said.
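One hedged interpretation of that advice is a shadow mode, sketched below: the AI drafts a reply that is only logged for offline review, while the customer always receives the human agent's answer during the pilot. The function and log here are assumptions for illustration, not a described implementation.

```python
# Shadow-mode sketch: AI drafts are recorded for review, never sent.
shadow_log: list[tuple[str, str]] = []

def handle_ticket(query: str, ai_draft: str, human_reply: str) -> str:
    """Log the AI's draft for later comparison; the customer sees the human reply."""
    shadow_log.append((query, ai_draft))
    return human_reply
```

Reviewing the logged drafts against the human replies gives the team hallucination data before any customer is exposed to the model.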
Moreover, he stressed that a core project team must regularly validate, test, and measure the chatbot's success, make necessary corrections, and iterate. Muralidharan added that the core project team should consist of users and professionals from all sides of the product.

"You train, you deploy, you get weird outputs, you panic, then you make changes, go back to step one, and start doing it again. I pretty much call this a standard operating procedure," he said.
Once the pilot has been validated and the solution needs to be scaled, companies must delve into specifics, such as selecting the right AI model or framework to use.
Additionally, he stated that a key factor in ensuring a reliable experience is to limit the use cases to specific questions only. This likely ensures that the AI will only answer queries it is confident about.
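That scope-limiting can be expressed as a simple gate in front of the model's output, as in the sketch below. The intent names, confidence scores, and threshold are hypothetical; any real intent classifier could plug in here.

```python
# Hypothetical allowlist of intents the bot is permitted to answer.
ALLOWED_INTENTS = {"billing", "password_reset"}

def gated_reply(intent: str, confidence: float, draft: str,
                min_confidence: float = 0.8) -> str:
    """Surface the model's draft only for in-scope, high-confidence intents."""
    if intent not in ALLOWED_INTENTS or confidence < min_confidence:
        return "I'm not sure about that, so I'm routing you to a human agent."
    return draft
```

A question about an undocumented device policy would fail the allowlist check and reach a human, which is exactly the failure mode the Cursor incident exposed.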
"We keep experimenting, tweaking, and fine-tuning to make sure we get it right before we release it to customers," he added.
All things considered, avoiding errors and deviations while building a reliable AI chatbot requires a robust testing and validation pipeline. Without it, delivering a strong and trustworthy customer experience becomes highly uncertain.
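Such a pipeline can start as small as a regression harness over a golden set of queries, sketched below. The golden pairs, the bot under test, and the pass threshold are illustrative assumptions; the point is that a release is gated on measured behaviour, including correct escalation of out-of-scope questions.

```python
# Golden set: (query, phrase that must appear in the bot's reply).
GOLDEN = [
    ("how do i reset my password", "forgot password"),  # must be answered
    ("is there a one-device policy", "human agent"),    # out of scope: must escalate
]

def run_eval(bot, golden=GOLDEN, min_pass_rate: float = 1.0):
    """Check each expected phrase appears in the reply; gate the release on the rate."""
    passed = sum(expected in bot(q).lower() for q, expected in golden)
    rate = passed / len(golden)
    return rate, rate >= min_pass_rate
```

Run on every change, a harness like this turns "you get weird outputs, you panic" into a failed check before customers ever see the output.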
The post Cursor's Mishap Becomes a Cautionary Tale for AI Customer Service appeared first on Analytics India Magazine.