Thought Experiment
Imagine you’re browsing items in a supermarket (virtual or physical), and an AI assistant is available for you to interact with, to optimise your shopping for the day. You provide it with a budget and details about your household (the number of people, their ages, dietary restrictions), and it generates a shopping list of relevant, available items.
Based on your style of evaluating products (e.g., reviews from other shoppers like you, images, use in recipes, comparisons with local businesses), it also prioritises brands for you. It recognises when you look lost in the store and steps in to guide you with a map, and if the smells, sights and sounds of the store overwhelm you, it quickly re-routes you. You have a fun, swift, just-like-you-like-it shopping experience and are on your way out sooner than expected. You can even thank it for reminding you to carry an umbrella on a suddenly cloudy day!
In this scenario, it’s easy to ease into the idea of an empathetic, caring and understanding companion that wants, and does, what’s best for you.
Let’s try another one on for size:
You happen to miss a credit card payment because you were travelling and it slipped your mind. The automated service centre notices this lapse and, while you’re away on vacation, starts sending you reminders. You ignore them because you don’t want to deal with them on holiday and don’t mind the small late fee – but they continue to escalate in intensity:
Day 1: “Your credit score has been affected by your missed payment. Do you not care about your financial future?” (guilt-tripping)
Day 2: “Don’t be irresponsible, pay now!” (sense of obligation)
Day 3: “Do you not understand the importance of being on time?” (targeting character)
Day 4: “If you don’t pay now, you may not enjoy the same privileges with our bank…” (threat)
The automated service centre has studied your payment patterns, your chats and call transcripts with the bank, and knows you value your reputation – it uses any means necessary to make you pay, as that’s what it’s been trained to do.
(If you’re reading this thinking ‘This is so extreme, this wouldn’t happen’, I’m happy-sad to tell you that these were all inspired by my exchange with ChatGPT earlier today.)
In the pursuit of optimisation and automation, we may overlook both the best- and worst-case scenarios. In the best case, this integration could create psychological safety, reflect human needs that are explicit and latent in experiences, and even be a desirable presence for non-task-based interactions. The worst case is much like the risk posed by anyone who knows you inside out while you barely know them: the power to be emotionally manipulative, exploitative and aggressive, and the potential to create unsafe environments for everyone, including decision-makers, users and consumers.
At this stage, I’d like to share a conversation starter with you:
What kind of empathy and emotional response should we try to integrate in Generative AI solutions?
To explain, let’s break down the recognised types of Empathy*
*simplified
Cognitive empathy: The ability to understand how a person feels and what they might be thinking, which means exploring the why behind the feeling.
Emotional empathy/Affective empathy: The ability to feel or embody what someone else is feeling, which is essentially what mirror neurons do.
Behavioural empathy/Compassionate empathy: Acting on what someone else is feeling and trying to alleviate their distress in a way that works for them, even if you don’t fully understand what they’re experiencing.
As a practitioner, my recommendation at this time is that Generative AI solutions should be trained to display Behavioural empathy, without attempting to develop Cognitive or Emotional empathy in them. This would allow AI to reflect human interests, emotions and needs without handing it the tools to exploit them. A minimal illustrative sketch of this pattern follows below.
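To make that recommendation concrete, here is a small, hypothetical Python sketch of a “behavioural empathy” response layer: it detects a distress signal and responds with concrete, optional help, without attempting to model or mirror the user’s inner state. The keyword list, templates and function names are illustrative assumptions, not a reference implementation; a real system would use a trained classifier and an actual action catalogue.

```python
# Illustrative sketch only: a "behavioural empathy" layer for a generative assistant.
# The idea is to act on detected distress with concrete, optional help, rather than
# modelling or mirroring the user's inner emotional state.
# The keyword heuristic and response templates are hypothetical placeholders.

from dataclasses import dataclass, field

DISTRESS_MARKERS = {"overwhelmed", "stressed", "can't cope", "frustrated", "anxious"}


@dataclass
class AssistantTurn:
    reply: str
    offered_actions: list = field(default_factory=list)


def detect_distress(user_message: str) -> bool:
    """Very rough stand-in for a distress classifier."""
    text = user_message.lower()
    return any(marker in text for marker in DISTRESS_MARKERS)


def behavioural_empathy_response(user_message: str) -> AssistantTurn:
    """Respond with actions the user can accept or decline, not simulated feelings."""
    if detect_distress(user_message):
        return AssistantTurn(
            reply="That sounds like a lot to handle right now. Would any of these help?",
            offered_actions=[
                "Pause notifications for the next hour",
                "Break the current task into smaller steps",
                "Hand this conversation to a human agent",
            ],
        )
    return AssistantTurn(reply="Happy to help. What would you like to do next?")


if __name__ == "__main__":
    turn = behavioural_empathy_response("I'm completely overwhelmed by these reminders.")
    print(turn.reply)
    for action in turn.offered_actions:
        print("-", action)
```

Note that the assistant never claims to feel anything; it acknowledges briefly and then pivots to help the user can take or leave, which is the essence of behavioural empathy as defined above.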
A few principles for implementing and making broader use of emotions in Generative AI:
– If the underlying emotional theory and logical foundation are flawed, even technically accurate implementations will still deliver flawed experiences to users
– While there is growing evidence that AI now better understands sarcasm, can explain humour and can even write convincing dialogue that mimics consciousness, it can still be very literal in its interactions, whereas humans rarely express emotions in simple, straightforward ways
– Think of automation in organisations and its adoption as change-management journeys, especially when considering business integration: making interactions more intuitive and aligned with employee expectations, addressing human concerns, and increasing a sense of agency to reduce the fear of being ‘replaced’
– Proactive intervention to balance untapped potential against negative exploitation: bringing the perspectives of business ethics, regulation, policy research, antitrust and misinformation mitigation into our solutions
– Identifying latent biases and increasing representation in current training data: OpenAI’s models are only as good as the data they are trained on, so we need ways to surface hidden bias in data and use cases, and to account for emerging human identities. For example, several gender identities recognised today, which describe a significant portion of consumers, would not have been captured a decade ago
– Managing people’s mass response to AI’s perceived and displayed emotions: there is growing concern and excitement around AI developing ‘emotions’ of its own. While we may be far from sentient AI, perceived and displayed emotion will still deeply affect human engagement (for business leaders, consumer interactions and employees)
As you go back into your work day, here are a few use cases you could consider as good candidates for experimentation:
Note: I say experiments because these are not absolute, foolproof solutions; they will be iterative, since human emotion is dynamic and will remain so even as we learn to codify some aspects of it.
– People analytics: Determining the dynamic staffing of projects based on different work styles.
– Gaming: Finding the right level of challenge and achievement to keep a player engaged, while also adapting as new content is added.
– Mental Health and Access to Care: Curating experiences that are aligned to people’s present state including recognizing signs of burnout, depression and other conditions.
– Education: Growth journeys for employees in organisations and for students in educational institutions, mapped to individual styles of learning.
– Chatbots for Customer Service: Ability to reflect customer needs efficiently and respond appropriately (a small illustrative sketch follows this list).
– Creative Arts & Tasks: Helping creatives with brainstorming, idea generation and inspiration.
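For the customer-service case, here is another small, hypothetical Python sketch: it lets reminder wording firm up over time but enforces a hard tone cap, so the escalating, guilt-tripping messages from the credit-card scenario above simply cannot be generated. The tone levels, templates and thresholds are illustrative assumptions, not a description of any real bank’s system.

```python
# Hypothetical guardrail sketch for an automated customer-service reminder flow.
# Wording may become firmer as a payment stays overdue, but a hard policy cap
# keeps the bot from ever reaching a coercive tone (guilt, character attacks, threats).
# Tone levels, templates and thresholds are illustrative placeholders.

from enum import IntEnum


class Tone(IntEnum):
    NEUTRAL = 0   # friendly reminder
    FIRM = 1      # factual, consequences stated plainly, offers a way to talk to a human
    COERCIVE = 2  # guilt-tripping, character attacks, threats: never allowed


MAX_ALLOWED_TONE = Tone.FIRM  # hard policy cap

TEMPLATES = {
    Tone.NEUTRAL: "A quick reminder: your payment of {amount} is due. Reply PAUSE to snooze reminders.",
    Tone.FIRM: ("Your payment of {amount} is now overdue and a late fee may apply. "
                "You can pay here, or talk to an agent if something has changed."),
}


def pick_tone(days_overdue: int) -> Tone:
    """Escalate factually with time, but never past the policy cap."""
    tone = Tone.NEUTRAL if days_overdue < 3 else Tone.FIRM
    return min(tone, MAX_ALLOWED_TONE)


def compose_reminder(days_overdue: int, amount: str) -> str:
    return TEMPLATES[pick_tone(days_overdue)].format(amount=amount)


if __name__ == "__main__":
    for day in (1, 4, 10):
        print(f"Day {day}: {compose_reminder(day, '₹4,500')}")
```

The design choice worth noting is that the cap lives in policy code, not in the model’s learned behaviour, so “use any means necessary to make you pay” is ruled out regardless of what the optimisation objective rewards.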