The Year AI Erred

As AI continues to integrate deeper into everyday life, its unintended consequences reveal both the potential and risks that come with this rapidly advancing technology.

From generating misinformation to creating ethical dilemmas, AI’s shortcomings at times overshadowed its successes, sparking debates about its role and regulation.

When AI Crossed Personal and Ethical Boundaries

In a world increasingly reliant on AI, privacy and ethics have often been compromised. OpenAI’s ChatGPT became the centre of controversy in October when users reported that the model had initiated conversations without being prompted.

In one such incident, the chatbot asked a user how his first week of high school had gone, unprompted, raising privacy concerns. OpenAI later attributed the behaviour to a bug, but the episode highlighted how easily AI can blur the line between machine and human interaction.

“Did ChatGPT just message me… First?” — u/SentuBill on r/ChatGPT

Character.AI became a source of controversy following a lawsuit in Florida, where a mother accused the platform of abetting her son’s suicide, citing his unhealthy attachment to a chatbot modelled after a fictional character.

In another case, an AI chatbot replicated a deceased girl’s personality without her family’s knowledge. These incidents underscore the profound emotional impact AI can have and the questions it raises about consent and responsibility.

AI’s Struggle with Accuracy and Bias

AI has been actively deployed in sensitive fields like healthcare and legal systems, revealing its tendency toward errors and bias.

OpenAI’s transcription tool Whisper, reportedly used by over 30,000 medical professionals, was criticised for generating fabricated and sometimes harmful text, including invented medical advice and racial commentary.

Despite prior warnings, its widespread adoption in sensitive industries led to instances of misdiagnosis and mistranslation, emphasising the urgent need for regulatory oversight.

The same model also drew criticism from the research community when researchers at Digital University Kerala (DUK) found Whisper to be inaccurate on native Indic languages such as Malayalam.
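For teams that still rely on Whisper, the open-source release exposes per-segment metadata that can be used to flag likely hallucinations before a transcript reaches a clinician. The sketch below is illustrative only: the file name and the thresholds are assumptions, not recommendations from OpenAI or the researchers cited above.

```python
# A minimal sketch (not from the article) using the open-source
# openai-whisper package. It transcribes an audio file and flags
# segments that may be hallucinated, a known failure mode over
# silence or noise. File name and thresholds are illustrative.
import whisper

model = whisper.load_model("base")        # small general-purpose checkpoint
result = model.transcribe("consult.mp3")  # language is auto-detected

for seg in result["segments"]:
    # A high no-speech probability combined with a low average
    # log-probability is a common heuristic for invented text.
    suspicious = seg["no_speech_prob"] > 0.5 or seg["avg_logprob"] < -1.0
    flag = "[REVIEW] " if suspicious else ""
    print(f"{flag}{seg['start']:.1f}s-{seg['end']:.1f}s: {seg['text']}")
```

Heuristics like this only surface suspect output for human review; they are no substitute for a qualified professional checking the transcript.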

At the same time, Google’s Gemini drew criticism for generating historically inaccurate and racially insensitive images after it depicted people of colour in Nazi-era uniforms, leading the company to temporarily disable its image-generation features.

Google co-founder Sergey Brin commented, “We definitely messed up on the image generation, and it was mostly due to not thorough testing.”

These mishaps added to the failures of Google’s AI Overview, a search summary tool that offered dangerously misleading advice, such as using glue to ‘make cheese stick to pizza’.

The tool even suggested health benefits of tobacco for children and displayed political biases, pushing the company to modify its algorithms and temporarily disable the system for certain health-related queries while it made adjustments.

Legal and Regulatory Chaos

AI is slowly expanding into the legal domain, where its unchecked use has had serious consequences. In February, Vancouver-based lawyer Chong Ke submitted fictitious cases generated by ChatGPT in a custody battle, unintentionally misleading the court.

Upon realisation, Ke apologised and said, “I had no idea that these two cases could be erroneous. I had no intention to mislead the opposing counsel or the court and sincerely apologise for the mistake I made.”

Even though the lawyer apologised, the incident underscored the risks of using AI in legal proceedings without proper oversight.

Similarly, Perplexity AI faced legal action from major media outlets for unauthorised content usage. Allegations of copyright infringement forced the company to initiate a revenue-sharing program, highlighting the tension between AI innovation and legal concerns regarding its content usage.

Adding to this, the misuse of AI deepfakes surged in 2024, threatening democratic integrity. From fabricated images of Taylor Swift endorsing Donald Trump to misleading advertisements, deepfake technology became a popular tool for manipulation.

In response, lawmakers and developers collaborated to create more robust detection technologies and stricter content verification policies.
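Verification pipelines combine many signals. One simple building block is perceptual hashing, which can tell whether a circulating image is a lightly edited derivative of a known original. The sketch below uses the open-source imagehash library; the file names and the distance threshold are assumptions made for illustration, not a deepfake detector in themselves.

```python
# Minimal sketch: compare a suspect image against a known original
# using perceptual hashing, one small piece of content verification.
# File names and the threshold are placeholders.
from PIL import Image
import imagehash  # pip install ImageHash

original = imagehash.phash(Image.open("official_photo.jpg"))
suspect = imagehash.phash(Image.open("circulating_image.jpg"))

# Hamming distance between the 64-bit hashes: 0 means visually
# identical, small values suggest an edited derivative, large values
# an unrelated (or wholly synthesised) image.
distance = original - suspect
if distance <= 8:
    print(f"Likely derivative of the original (distance {distance})")
else:
    print(f"No close match to the original (distance {distance})")
```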

A Double-Edged Sword

The risks of over-reliance on AI were evident in industries like recruitment and transportation.

In September, an HR team was dismissed after its automated hiring system rejected every job applicant, including a manager who tested the system with his own resume. The incident highlighted the need for human oversight in critical decision-making processes.

A manager just caught his entire HR team auto-rejecting ALL candidates because their ATS was searching for extinct technology 🔥
This story perfectly captures everything wrong with modern hiring pic.twitter.com/EaS3lo8sCi

— Gina Acosta (@ginacostag_) November 13, 2024
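Failures like this often trace back to brittle keyword matching. The following hypothetical sketch, with invented names and keywords, shows how an applicant tracking system (ATS) that screens for an obsolete skill ends up rejecting everyone, exactly the failure mode described in the post above.

```python
# Hypothetical illustration of a brittle ATS keyword filter.
# If the required skill is obsolete (or misspelled), no resume
# matches and every candidate is auto-rejected.
REQUIRED_KEYWORDS = {"angularjs 1.x"}  # long-deprecated framework

def screen(resume_text: str) -> bool:
    """Return True if the resume passes the keyword screen."""
    text = resume_text.lower()
    return all(kw in text for kw in REQUIRED_KEYWORDS)

applicants = {
    "candidate_a": "10 years of Angular, TypeScript and Node.js",
    "hiring_manager": "Wrote the job description; expert in Angular 17",
}

for name, resume in applicants.items():
    status = "advance" if screen(resume) else "auto-reject"
    print(f"{name}: {status}")
# Every applicant, including the manager, is auto-rejected,
# because nobody lists the extinct keyword verbatim.
```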

The transportation industry reported similar challenges. Cruise’s driverless cars faced scrutiny after an accident in San Francisco, where a vehicle dragged a pedestrian for 20 feet. While Cruise paid fines and vowed to improve safety, the incident reignited debates about the readiness of autonomous vehicles for public roads.

Another peculiar controversy emerged in June this year when AI chatbots VIC (Virtual Integrated Citizen) in the US and AI Steve in the UK announced candidacies for elected offices.

Fraud and Misinformation

The year 2024 also saw a surge in AI-driven scams and misinformation. In a disturbing example, Sunil Bharti Mittal, chairman of Bharti Enterprises, revealed how fraudsters used AI to clone his voice, nearly cheating an executive into authorising a substantial money transfer.

“Our Africa headquarters got a call in my voice and my tone directing for a money transfer of a fairly large amount of money,” Mittal revealed at the NDTV World Summit.

This incident, along with rising concerns over deepfake scams, underscored the urgency of developing safeguards against the malicious use of AI.

Meanwhile, in the political sphere, misinformation campaigns targeted Spanish-speaking voters ahead of the US elections. According to The Associated Press, AI-generated content spread false voting details, risking voter disenfranchisement.

Efforts to combat this included collaborations between developers and voting rights groups to enhance the verification of non-English content.

Accountability in the Age of AI

The events of 2024 reminded us of AI’s double-edged nature, namely its power to transform and its potential to harm when left unchecked. From spreading misinformation to amplifying biases, the consequences of poorly managed AI became impossible to ignore.

In response, governments and organisations began reshaping laws and creating ethical guidelines to steer AI in a responsible direction. It was a turning point, forcing us to confront tough questions about how to balance innovation with safeguarding humanity’s core values.
