Ahead of its WWDC event in June, Apple on Tuesday previewed a set of accessibility features coming "later this year" in its next big iPhone update.
The new "Personal Voice" feature, expected as part of iOS 17, will allow iPhones and iPads to generate a digital reproduction of a user's voice for in-person conversations, phone calls, FaceTime, and other audio calls.
Apple said Personal Voice will create a synthesized voice that sounds like the user and can be used to connect with family and friends. The feature is intended for people with conditions that may affect their ability to speak over time.
Users can create their Personal Voice by recording 15 minutes of audio on their device. Apple said the feature will use on-device machine learning to maximize privacy.
It's part of a larger set of accessibility improvements for iOS devices, including a new Assistive Access feature that helps users with cognitive disabilities, and their caregivers, more easily take advantage of iOS devices.
Apple also announced another machine learning-backed technology, augmenting its existing Magnifier feature with a new "Point and Speak" detection mode. The new functionality combines camera input, LiDAR input, and machine learning to read aloud text that users point their device at.
Apple typically releases its software in beta at WWDC, meaning the features are first available to developers and to members of the public who opt in. Those features typically remain in beta over the summer and are released to the public in the fall, when new iPhones hit the market.
Apple's WWDC 2023 conference begins on June 5. The company is expected to unveil its first virtual reality headset, among other software and hardware announcements.