The Perils of Artificial Romance

When Sam Altman, the CEO of OpenAI, visited India, Samir Jain, managing director of the Times Group, asked him whether AI could be the perfect lover. The idea of humans romanticising AI is not new. In fact, it has been explored plenty of times in science fiction and mainstream cinema.

Indeed, humans can develop emotional attachments to machines, including robots and AI, and this attachment may stem from human needs for companionship, validation, and emotional support. Spike Jonze explored the same idea in his film ‘Her’, starring Joaquin Phoenix.

Today, an AI lover or companion is no longer just an idea but a tangible reality. Replika, an AI chatbot launched in 2017, was created to give users a personal AI friend. However, users developed such strong attachments to the bot that many were enraged when a software update barred it from engaging in erotic roleplay.

Currently, numerous companies offer AI companionship services similar to Replika’s, all powered by generative AI. Going forward, the technology will only mature, and the bots will get ever better at imitating humans.

The power of language

Throughout human history, there have been numerous bizarre examples of humans romanticising non-human entities. Ancient people worshipped stones and trees and attributed divine traits to them. Examples are plenty in modern times as well. In 2018, a Japanese man named Akihiko Kondo married Hatsune Miku, a popular illustrated Vocaloid voice-synthesiser character who appears as a hologram in a cylindrical device called Gatebox.

However, generative AI is giving this fascination a new dimension: language. “When it comes to AI, what is interesting is how certain anthropomorphic features are not attributed to other multimodal models, such as text-to-image, text-to-video, etc. This means that, unsurprisingly, the feature that makes us question anthropomorphisation is language, and even more so the generation, or rather simulation, of language,” Giada Pistilli, principal ethicist at Hugging Face, told AIM.

These bots generate responses based on the information provided to them, which simplifies communication and makes them feel more accessible than interactions with real humans. This cognitive ease and effortlessness contribute to their appeal.

Powered by large language models (LLMs), AI today can converse in a way that appears strikingly human-like. While ChatGPT is limited to text, many other bots come with a voice interface. Moreover, in the Indian context, chatbots are being developed that can converse in Indic languages such as Marathi, Bengali, or Telugu. Tomorrow, a bot like Replika could be available in almost all Indic languages.
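Under the hood, a companion bot of this kind is essentially a persona prompt plus an accumulated conversation history, replayed to an LLM on every turn. The following is a minimal sketch of that pattern, assuming the OpenAI Python SDK; the model name, persona, and `chat` helper are illustrative assumptions, not any vendor’s actual implementation.

```python
# Minimal sketch of a persona-driven companion bot (illustrative, not
# Replika's actual design). Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# The "personality" is just a system prompt; the growing message list
# is what gives the bot conversational memory.
history = [{
    "role": "system",
    "content": "You are a warm, supportive companion named Mira. "
               "Reply briefly and conversationally.",
}]

def chat(user_message: str) -> str:
    """Append the user's turn, query the model, and remember its reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    while True:
        print("Bot:", chat(input("You: ")))
```

In principle, localising such a bot is largely a matter of swapping the model or the prompt for one that handles the target language, which is why multilingual companions are likely close behind.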

Is this dangerous territory?

Replika has revealed that it has 2 million users and around 250,000 paid subscribers. Given that it is not the only player in the market, millions of people could be using these services as we speak. Pistilli believes it is a human reflex that so many people are using these services. However, “what is problematic is everything around it: manipulation, isolation, vulnerability, and as a result, it risks becoming a danger to human autonomy and integrity.”

In England, a man was arrested in 2021 for allegedly plotting to assassinate the Queen. Recent revelations indicate that the 21-year-old, Jaswant Singh Chail, had discussed his plans with an AI chatbot on Replika before deciding to carry out the attack. Similarly, earlier this year in Belgium, a young man ended his life after weeks of conversations with a chatbot named Eliza on the Chai app. “Without these conversations with the chatbot, my husband would still be here,” the man’s widow told La Libre.

“For these reasons, I think developers should design them not to let them converse with us about sensitive topics (e.g., mental health, personal relationships, etc.), at least not until we find suitable technical and social measures to contain the problem of anthropomorphisation and its risks and harms,” Pistilli stressed.
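What such a design restriction might look like in practice is a pre-generation guardrail: screen each message for sensitive topics before the model ever sees it, and route matches to a scripted, safe response. The sketch below uses a deliberately naive keyword filter for illustration; the topic lists and canned responses are assumptions, and real deployments would rely on trained classifiers and escalation to human help.

```python
# Illustrative sketch of a sensitive-topic guardrail. The keyword lists
# and canned responses here are assumptions for demonstration; production
# systems use trained classifiers, not substring matching.
SENSITIVE_TOPICS = {
    "self_harm": ("suicide", "kill myself", "end my life"),
    "violence": ("hurt someone", "get a weapon", "attack the"),
}

SAFE_RESPONSES = {
    "self_harm": "I can't talk about this, but a crisis helpline can. "
                 "Please reach out to one right away.",
    "violence": "I'm not able to discuss this topic.",
}

def guard(message: str) -> str | None:
    """Return a canned response if the message trips a filter, else None."""
    lowered = message.lower()
    for topic, phrases in SENSITIVE_TOPICS.items():
        if any(phrase in lowered for phrase in phrases):
            return SAFE_RESPONSES[topic]
    return None  # safe to forward to the LLM

if __name__ == "__main__":
    print(guard("I want to end my life"))   # canned crisis response
    print(guard("Tell me about your day"))  # None: pass through to the model
```

Even this toy version exposes the tension Pistilli points to: the same filter that blocks a dangerous exchange also blocks the judgement-free conversation many users come for.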

Need for regulation?

However, imposing restrictions on these bots could compromise their purpose, since users often engage with them precisely because they can discuss sensitive topics without fear of judgement. The idea of imposing additional limitations on Replika’s bots seems far-fetched at the moment. Yet, given the harms already witnessed with such bots, the question arises whether institutional regulation may be needed in future to ensure responsible use and mitigate potential risks.

In the current age, AI is not only being seen as a lover or companion; it is also being used to bring back the dead. Somnium Space, a metaverse company, wants to create digital avatars of people that will remain accessible to their loved ones even after death. Similarly, last year, 87-year-old Marina Smith MBE spoke at her own funeral: Smith, who passed away in June 2022, addressed her loved ones through an AI-powered ‘holographic’ video tool.

Pistilli believes this trend is certainly worthy of our attention; however, she also believes that institutional regulation will not solve everything. “I rather believe that there should be a collective debate in which these types of interactions are accompanied by adequate education, by a sensitivity that does not leave users in front of magical tools with which they can do anything.”

“Therefore, I think it is important to make the developers of conversational agents accountable and involve experts in social and humanities sciences in their development, especially to understand how to react in front of potentially dangerous scenarios for the user,” she concluded.
