Apple unveils new Accessibility features to empower users of all abilities (coming later this year)

  • Apple introduces upcoming software features to enhance accessibility for individuals with cognitive, speech, and vision impairments.
  • Assistive Access simplifies app usage for individuals with cognitive disabilities, featuring streamlined interfaces and high contrast buttons.
  • Live Speech converts typed text into spoken words during phone and FaceTime calls, empowering those who are unable to speak.
  • Personal Voice enables users at risk of losing their ability to speak to create a personalised synthetic voice.
  • Detection Mode in Magnifier introduces Point and Speak functionality, aiding individuals with vision disabilities in interacting with text on physical objects.
  • The new features are expected to arrive later in 2023.

Apple has recently announced a series of upcoming software features designed to significantly enhance accessibility for individuals with cognitive, speech, and vision impairments.

With a strong commitment to inclusivity and user-friendly technology, Apple’s latest innovations leverage advanced hardware capabilities and on-device machine learning.

Assistive Access has been designed specifically for the needs of individuals with cognitive disabilities.

Drawing upon valuable feedback from the cognitive disability community, this feature empowers users to connect with loved ones, capture and enjoy photos, and listen to music with newfound ease and independence.

The customised experience includes a streamlined Calls app, alongside revamped interfaces for Messages, Camera, Photos, and Music, all of which feature high contrast buttons and large text labels to ensure optimal accessibility.
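Assistive Access itself is a system feature rather than a developer API, but the design principle it applies, high contrast controls with large text labels, is easy to illustrate. The SwiftUI sketch below is a hypothetical recreation of that style, not Apple's implementation:

```swift
import SwiftUI

// Hypothetical illustration of the Assistive Access design principle:
// a stack of high contrast, large label buttons. Not Apple's implementation.
struct HighContrastHomeView: View {
    // Sample entries standing in for the streamlined Calls, Messages,
    // Camera, Photos, and Music experiences described above.
    let actions = ["Calls", "Messages", "Camera", "Photos", "Music"]

    var body: some View {
        VStack(spacing: 16) {
            ForEach(actions, id: \.self) { name in
                Button(name) {
                    // Launch the corresponding simplified experience here.
                }
                .font(.system(size: 34, weight: .bold)) // large text label
                .frame(maxWidth: .infinity, minHeight: 80)
                .foregroundColor(.white)
                .background(Color.black) // high contrast button
                .cornerRadius(12)
            }
        }
        .padding()
    }
}
```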

Live Speech is a new addition that improves speech accessibility for users of iPhones, iPads, and Macs. With Live Speech, individuals who are unable to speak or have gradually lost their ability to do so can now express themselves by typing their desired text, which is then automatically converted into spoken words during phone and FaceTime calls.

This innovative feature aims to empower millions of users worldwide, ensuring that their voices are heard and allowing them to actively engage in conversations. Live Speech also offers the convenience of saving frequently used phrases, facilitating seamless and efficient communication.
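Live Speech is a system-level feature, but the underlying capability of turning typed text into speech has long been available to developers through AVFoundation. The sketch below shows the general idea using the public AVSpeechSynthesizer API; it is an illustration, not Apple's Live Speech code:

```swift
import AVFoundation

// Minimal sketch of the idea behind Live Speech: converting typed text
// into spoken audio, with a list of saved phrases for quick reuse.
final class TypedSpeechController {
    private let synthesizer = AVSpeechSynthesizer()

    // Frequently used phrases, analogous to Live Speech's saved phrases.
    var savedPhrases = ["Hello!", "I'll call you back.", "Thank you."]

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate
        synthesizer.speak(utterance)
    }
}

// Usage: speak freshly typed text, or a saved phrase.
let controller = TypedSpeechController()
controller.speak("Nice to hear from you.")
controller.speak(controller.savedPhrases[0])
```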

The most impressive new feature is Personal Voice. This new tool addresses the needs of individuals at risk of losing their ability to speak, such as those diagnosed with ALS (amyotrophic lateral sclerosis) or other conditions that impact speech.

It works by creating a personalised synthetic voice that closely resembles their own. By reading along with a set of randomised text prompts and recording 15 minutes of audio on their iPhone or iPad, individuals can generate a unique and authentic voice representation.

Personal Voice employs on-device machine learning to ensure the utmost privacy and seamlessly integrates with Live Speech, enabling users to communicate with their loved ones using a synthesised voice that preserves their identity.
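This announcement does not detail the developer-facing side of Personal Voice, but a rough sketch of how an app might adopt such a voice through AVFoundation could look like the following. The personal-voice authorization call and the .isPersonalVoice trait used here are based on the AVFoundation additions Apple introduced alongside the feature in iOS 17; treat the exact names as assumptions:

```swift
import AVFoundation

// Sketch of speaking with a user's Personal Voice via AVFoundation.
// Assumes the iOS 17+ personal voice authorization and voice trait APIs.
func speakWithPersonalVoice(_ text: String, using synthesizer: AVSpeechSynthesizer) {
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else { return }

        // Personal voices appear alongside system voices; filter by trait.
        let personalVoice = AVSpeechSynthesisVoice.speechVoices()
            .first { $0.voiceTraits.contains(.isPersonalVoice) }

        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = personalVoice // falls back to the default voice if nil
        synthesizer.speak(utterance)
    }
}
```

Passing the synthesizer in, rather than creating one inside the function, keeps it alive for the duration of playback.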

Apple’s commitment to addressing the needs of individuals with vision disabilities is also evident through the introduction of Detection Mode in Magnifier, which includes the innovative Point and Speak functionality.

This feature empowers individuals who are blind or have low vision to interact with physical objects that contain multiple text labels. By combining input from the Camera app, the LiDAR Scanner, and on-device machine learning, Point and Speak identifies and reads aloud the text as users navigate across buttons or keypads.

This intuitive and intelligent solution enhances independence and facilitates seamless interaction with everyday objects like household appliances.
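Point and Speak is likewise a built-in Magnifier mode rather than an API, but its core step, recognising text on a physical object entirely on device, is something developers can already reproduce with the Vision framework. A minimal sketch, with the recognised labels ready to be handed to a speech synthesizer:

```swift
import CoreGraphics
import Vision

// Minimal sketch of on-device text recognition, the core step behind
// Point and Speak, using the Vision framework. Not Apple's implementation.
func recognizeText(in image: CGImage, completion: @escaping ([String]) -> Void) {
    let request = VNRecognizeTextRequest { request, error in
        guard error == nil,
              let observations = request.results as? [VNRecognizedTextObservation] else {
            completion([])
            return
        }
        // Keep the top candidate string for each detected text region,
        // e.g. the labels on a microwave keypad.
        let labels = observations.compactMap { $0.topCandidates(1).first?.string }
        completion(labels)
    }
    request.recognitionLevel = .accurate // runs entirely on device

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```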

There is currently no official launch date for these features, but users can expect them to become available later this year.