Apple’s next big iPhone update will let you create a digital voice that sounds like you
Ahead of its June WWDC event, Apple on Tuesday previewed a suite of accessibility features that will arrive “later this year” in its next big iPhone update.
With the new “Personal Voice” feature, expected to ship in iOS 17, iPhones and iPads will be able to generate a digital reproduction of a user’s voice for in-person conversations and for phone, FaceTime, and other audio calls.
According to Apple, Personal Voice will generate a synthetic voice that sounds like the user’s own and can be used to communicate with friends and family. The feature is aimed at users with conditions that can affect their ability to speak over time.
Users create their Personal Voice by recording 15 minutes of audio on their device. According to Apple, the feature protects privacy by running its machine learning entirely on-device.
It’s part of a larger suite of accessibility enhancements for iOS devices, including a new Assistive Access feature that helps users with cognitive disabilities, and their caretakers, use iOS devices more easily.
Apple also announced a new machine-learning-based Detection Mode for its existing Magnifier feature that lets users point at text to have it read aloud. The new functionality combines camera input, LiDAR input, and on-device machine learning to recognize the text a user is pointing at.
When Apple announces software at WWDC, the features are typically made available first to developers and to members of the public who opt in to beta testing. These features usually remain in beta through the summer and reach the general public in the fall, when new iPhones are released.
Apple’s WWDC 2023 conference kicks off on June 5. Among other software and hardware announcements, the company is expected to introduce its first virtual reality headset.