Microsoft’s Copilot Enhances AI Accessibility for Everyone, with Special Focus on the Disabled

A senior graphic and visual designer at Accenture, Sai Kaustuv Dasgupta, aka the Wheelchair Warrior of India, told AIM how he has been using Microsoft Copilot for voice commands and content-creation tasks.

Dasgupta lives with osteogenesis imperfecta (OI), a rare genetic disorder that has caused over 50 fractures, 90% locomotor impairment, and 80% hearing loss.

He is not alone in using this tool; many users with disabilities rely on it.

Microsoft’s chief accessibility officer, Jenny Lay-Flurrie, stated, “Accessibility is about more than just technology; it’s about creating a culture of inclusion. When we design with accessibility in mind, we create better experiences for everyone.”

The 2022 Global Report on Assistive Technology, jointly published by WHO and UNICEF, reveals that only 3% of people in some low-income countries have the assistive products they need, compared to 90% in some high-income countries.

Historically, products like Microsoft Office and Windows have incorporated accessibility features after years of development. However, Copilot was designed with accessibility from the beginning, reshaping human-computer interactions to be more inclusive.

Integrated into Microsoft 365’s core apps, Copilot supports assistive technologies such as screen readers, magnifiers, contrast themes, and voice input, providing a seamless user experience.

For example, it helps users with disabilities quickly draft emails and create PowerPoint presentations via voice commands, and supports neurodiverse users in organising thoughts, writing, and processing information, enhancing both communication and productivity.

The Accessibility Assistant in Word, Outlook, and PowerPoint helps authors create accessible content by detecting and resolving issues early on. It also features in-canvas notifications for readability, quick-fix cards, and per-slide toggles for PowerPoint.

Making AI Accessible for Shared Humanity

Microsoft’s commitment to making generative AI accessible to all, especially disabled individuals, is longstanding.

Five years ago, at Microsoft Build, the company announced ‘AI for Accessibility,’ a $25 million five-year program to inspire developers to create AI-based products for the disabled.

“Accessibility is not a bolt-on. It’s something that must be built into every product we make so that our products work for everyone. Only then will we empower every person and every organisation on the planet to achieve more. This is the inclusive culture we aspire to create,” said Microsoft CEO Satya Nadella.

This year at Microsoft Build, the company announced that the Turkey-based AI startup From Your Eyes won the 2024 Imagine Cup student competition. The startup developed a mobile app and API using GPT-4 and image recognition technology to provide real-time visual explanations for users with impaired vision.

From Your Eyes was created out of a profound personal need and a visionary goal. “After losing my vision completely at the age of ten, I knew I would never be able to see biologically again, but I believed it could be possible with technology,” says From Your Eyes founder and CEO Zülal Tannur.

Through Microsoft’s Seeing AI initiative, Zülal met visually impaired developers worldwide, inspiring her to learn to code. Seeing AI, which debuted in 2017, is a free Microsoft app designed for people with visual impairments. Enhanced video descriptions, powered by GPT-4 Turbo with Vision, and alternative communication tools such as Cboard’s picture-board app, powered by Azure Neural Voices, further expand the breadth of impact.

The event’s runners-up, JRE and PlanRoadmap, are using AI to build a greener steel industry and an AI productivity coach that helps people with ADHD overcome task paralysis, respectively.

Similarly, the company is working with global advertising giant WPP to improve opportunities for people with visual impairments. Together they have built an application that lets users upload a video and, using OpenAI’s GPT-4 Vision and Azure AI services, returns it with spoken descriptions layered over the top.

“The first time I heard audio descriptions, it just brought me to light. It was this opportunity of ‘Oh my gosh, I’m seeing!’ Through the power of AI, we’ll be able to do things we only dreamt about until recently,” said WPP global head of inclusive design Josh Loebner.

Microsoft is not alone in taking accessibility seriously. Just a week ago, Google introduced new features for Maps, Lookout, and Android. Lookout now lets users search for specific objects and get distance information, while Look to Speak lets users select phrases with their eyes and customise symbols. Meanwhile, Project Gameface, which uses facial gestures to control the mouse cursor, is now available on Android.

Google Maps will also offer detailed walking instructions and screen reader capabilities. Android’s updated sound notifications alert users to sounds like fire alarms.

Similarly, Apple introduced new accessibility features for iPads and iPhones to better support users with diverse needs. These include eye tracking, music haptics, vocal shortcuts, vehicle motion cues, and major updates to Apple Vision Pro’s visionOS.

Tim Cook once said, “When we work on making our devices accessible by the blind, I don’t consider the bloody ROI.” So for Microsoft, Apple and others, designing for diverse needs is about creating real, tangible impact, not just ticking boxes.

The post Microsoft’s Copilot Enhances AI Accessibility for Everyone, with Special Focus on the Disabled appeared first on AIM.
