Apple to bring eye tracking to iPhones for accessibility

15 May 2024


The company is also working on a feature that uses on-device machine learning to recognise the speech patterns of people affected by cerebral palsy, ALS or stroke.

Apple is working on a host of new features to make its iPhone and iPad more accessible for people with disabilities, it announced today (15 May).

These features, coming later this year, include eye tracking, which lets users control a device with the motion of their eyes; music haptics, which helps people who are deaf or hard of hearing “experience” music through what Apple calls the Taptic Engine on the iPhone; and vocal shortcuts, which let users perform tasks by making a custom sound.

Apple said the new in-built eye tracking feature, powered by AI, uses the front camera to track a person’s eye movements so they can navigate their device and move through the elements of an app – similar to the way one navigates the immersive world of Apple Vision Pro. The company said data used during this process would be kept securely and would not be shared with Apple.

Meanwhile, music haptics will play taps, textures and “refined” vibrations in time with the audio of a song. The company claims the feature works across millions of songs in the Apple Music catalogue and will be available as an API for developers to make music more accessible in their apps.
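
Apple has not yet published details of that API, so the sketch below instead uses the existing Core Haptics framework, which already drives the Taptic Engine. It is a minimal illustration of mapping audio events to taps and vibrations under that assumption, not the Music Haptics API itself.

```swift
import CoreHaptics

// Minimal sketch: play a sharp tap followed by a softer, longer vibration,
// the kind of pattern Music Haptics is described as producing alongside audio.
func playSampleHaptics() throws {
    // The Taptic Engine is only available on supported hardware.
    guard CHHapticEngine.capabilitiesForHardware().supportsHaptics else { return }

    let engine = try CHHapticEngine()
    try engine.start()

    // A short, intense transient "tap".
    let tap = CHHapticEvent(
        eventType: .hapticTransient,
        parameters: [
            CHHapticEventParameter(parameterID: .hapticIntensity, value: 1.0),
            CHHapticEventParameter(parameterID: .hapticSharpness, value: 0.8)
        ],
        relativeTime: 0
    )

    // A gentler continuous vibration that follows the tap.
    let rumble = CHHapticEvent(
        eventType: .hapticContinuous,
        parameters: [
            CHHapticEventParameter(parameterID: .hapticIntensity, value: 0.4),
            CHHapticEventParameter(parameterID: .hapticSharpness, value: 0.2)
        ],
        relativeTime: 0.1,
        duration: 0.5
    )

    let pattern = try CHHapticPattern(events: [tap, rumble], parameters: [])
    let player = try engine.makePlayer(with: pattern)
    try player.start(atTime: CHHapticTimeImmediate)
}
```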

“We believe deeply in the transformative power of innovation to enrich lives,” said CEO Tim Cook. “That’s why for nearly 40 years, Apple has championed inclusive design by embedding accessibility at the core of our hardware and software.

“We’re continuously pushing the boundaries of technology and these new features reflect our long-standing commitment to delivering the best possible experience to all of our users.”

The vocal shortcuts feature will allow iPhone and iPad users to assign custom utterances that voice assistant Siri can understand to launch shortcuts and complete complex tasks.
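
Apple has not said how, if at all, developers will hook into vocal shortcuts, but apps already expose tasks to Siri and the Shortcuts app through the App Intents framework, which is presumably what such utterances would trigger. The sketch below is a hypothetical example under that assumption; the intent name and phrase are illustrative only.

```swift
import AppIntents

// Hypothetical intent exposing one app task to Siri and Shortcuts.
// A custom utterance could then be assigned to launch it.
struct StartFocusSessionIntent: AppIntent {
    static var title: LocalizedStringResource = "Start Focus Session"

    func perform() async throws -> some IntentResult {
        // App-specific work would go here, such as starting a timer.
        return .result()
    }
}

// Registers a spoken phrase so the intent shows up in Shortcuts and Siri.
struct FocusAppShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: StartFocusSessionIntent(),
            phrases: ["Start a focus session in \(.applicationName)"],
            shortTitle: "Start Focus Session",
            systemImageName: "timer"
        )
    }
}
```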

Meanwhile, the atypical speech feature will use on-device machine learning to recognise the speech patterns of users with conditions that affect speech, such as cerebral palsy, amyotrophic lateral sclerosis (ALS) or stroke.
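
Apple has not detailed how this will work under the hood. For context, on-device transcription is already possible with the existing Speech framework; the sketch below only illustrates the general pattern of keeping recognition on the device, not the atypical speech feature itself.

```swift
import Foundation
import Speech

// Minimal sketch: transcribe an audio file while keeping all processing on-device.
func recognizeOnDevice(from audioFileURL: URL) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
              recognizer.supportsOnDeviceRecognition else { return }

        let request = SFSpeechURLRecognitionRequest(url: audioFileURL)
        request.requiresOnDeviceRecognition = true  // no audio leaves the device

        // In a real app the returned task would be retained so it can be cancelled.
        _ = recognizer.recognitionTask(with: request) { result, _ in
            if let result = result, result.isFinal {
                print(result.bestTranscription.formattedString)
            }
        }
    }
}
```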

“Each year, we break new ground when it comes to accessibility,” said Sarah Herrlinger, senior director of global accessibility policy and initiatives at Apple. “These new features will make an impact in the lives of a wide range of users, providing new ways to communicate, control their devices and move through the world.”


Vish Gain is a journalist with Silicon Republic
