Using advancements across hardware, software, and machine learning, people who are blind or low vision can use their iPhone and iPad to navigate the last few feet to their destination with Door Detection; users with physical and motor disabilities who may rely on assistive features like Voice Control and Switch Control can fully control Apple Watch from their iPhone with Apple Watch Mirroring; and the Deaf and hard of hearing community can follow Live Captions on iPhone, iPad, and Mac. Apple is also expanding support for its industry-leading screen reader VoiceOver with over 20 new languages and locales. These features will be available later this year with software updates across Apple platforms.
The announcement is in honor of Global Accessibility Awareness Day; last year, the company made a similar announcement, previewing forthcoming features like AssistiveTouch for Apple Watch and Background Sounds for iOS 15.
These new features open up a lot of possibilities, but the one I'm most excited about is Live Captions. Apple has had a version of this technology in its Clips app for some years (and clearly uses similar functionality for Siri's language processing and dictation). But on Apple's platforms, you previously needed to turn to a third-party app for something like captioning a FaceTime call for Deaf or hard of hearing users.
As someone whose parents both have difficulty hearing, I expect this to be a big help. I'm curious to see how well the feature actually works, and how it handles a big FaceTime call with a lot of participants; Apple says it will attribute dialogue to specific speakers. Live Captions is also supposedly available for any audio content, which means other video conferencing apps may be able to take advantage of it as well, though it's unclear whether that means through an opt-in API or just by default.
In addition to these major feature announcements, Apple's press release mentions a number of other improvements: new Apple Books themes to make text easier to read, Siri Pause Time to let users specify how long Siri will wait before responding to a request, and an update to Sound Recognition that lets you train it to listen for a specific version of a sound (e.g., your particular doorbell), among others.
—Linked by Dan Moren