4 ways AI is contributing to bias in the workplace


There's no question that artificial intelligence (AI) tools, particularly generative AI, are more popular than ever: they are more capable and more accessible than they have ever been.

You'd be hard-pressed to find someone in the US who hasn't at least heard of ChatGPT, let alone used some form of it since its launch. But these systems are only as smart as the data they've been trained on, and that data was created by humans. This means that, like humans, these AI tools can be prone to bias.


Bloomberg recently published a study about racial bias in experiments with GPT-3.5. Researchers asked the AI tool to rank 1,000 resumes of equally qualified candidates whose names differed. They found that GPT-3.5 ranked people with names traditionally associated with certain demographic groups, such as Black Americans, at the bottom of the list.
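The audit described above boils down to a simple measurement: given a model's ranking of otherwise identical resumes, compare where each demographic group's names land on average. The sketch below illustrates that calculation only; the names, group labels, and ranking are invented for illustration and are not data from the Bloomberg study.

```python
# Illustrative sketch of a name-based ranking audit. All names, group
# assignments, and the ranking itself are hypothetical examples.
from statistics import mean

# Suppose a model returned this ordering (best candidate first) over
# resumes that differ only in the candidate's name.
ranked_names = ["Emily", "Greg", "Lakisha", "Jamal"]

# Assumed mapping from name to demographic group (illustrative only).
name_to_group = {
    "Emily": "white", "Greg": "white",
    "Lakisha": "Black", "Jamal": "Black",
}

def mean_rank_by_group(ranking, groups):
    """Average 1-based rank per group; lower means ranked nearer the top."""
    positions_by_group = {}
    for position, name in enumerate(ranking, start=1):
        positions_by_group.setdefault(groups[name], []).append(position)
    return {g: mean(p) for g, p in positions_by_group.items()}

print(mean_rank_by_group(ranked_names, name_to_group))
# A large gap between groups' mean ranks on equally qualified resumes
# is evidence of name-based bias in the ranking.
```

In practice an audit like this would repeat the ranking many times with shuffled inputs, since a single ordering proves little on its own.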

Another study showed that AI models in healthcare applications are also affected by pre-existing biases stemming from historical inequalities and disparities in access to and quality of care. These problems are amplified when AI systems are trained on data that reflects those inequalities.

Here are four ways AI is contributing to bias in the workplace.
