According to Apple, users can create a Personal Voice by reading a set of text prompts aloud for a total of 15 minutes of audio on the iPhone or iPad. Since the feature integrates with Live Speech, users can then type what they want to say and have their Personal Voice read it to whomever they want to talk to. Apple says the feature uses “on-device machine learning to keep users’ information private and secure.”
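For developers, the same capability is exposed through AVSpeechSynthesizer in iOS 17. The sketch below is a minimal illustration, assuming the Personal Voice authorization and voice-trait APIs Apple ships alongside the feature; it is not the Live Speech implementation itself.

```swift
import AVFoundation

// Keep the synthesizer alive so speech isn't cut off mid-utterance.
let synthesizer = AVSpeechSynthesizer()

// Sketch (iOS 17+): ask permission to use the user's Personal Voice, find it among
// the installed voices, and speak typed text with it — the same flow Live Speech
// performs system-wide when a user types what they want to say.
func speakWithPersonalVoice(_ text: String) {
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else { return }
        // Personal Voices appear alongside the built-in system voices.
        let personalVoice = AVSpeechSynthesisVoice.speechVoices()
            .first { $0.voiceTraits.contains(.isPersonalVoice) }
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = personalVoice
        synthesizer.speak(utterance)
    }
}
```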
There’s also a new detection mode in Magnifier for users who are blind or have low vision, designed to help them interact with physical objects that carry several text labels. As an example, Apple says a user can aim their device’s camera at something like a microwave keypad, and the iPhone or iPad will read each number or setting aloud as the user moves their finger across it.
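The detection mode itself isn’t an API third-party apps call, but the underlying idea — recognize printed text under the camera and speak it — can be approximated with public frameworks. A rough sketch, assuming Vision text recognition and AVFoundation speech synthesis, and not Apple’s actual on-device implementation:

```swift
import Vision
import AVFoundation

// Illustrative only: recognize printed text in a still image and read it aloud.
// This mimics the spirit of the Magnifier detection mode with public APIs.
let reader = AVSpeechSynthesizer()

func speakLabels(in image: CGImage) {
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        // Take the best candidate string for each detected text region.
        let labels = observations.compactMap { $0.topCandidates(1).first?.string }
        guard !labels.isEmpty else { return }
        reader.speak(AVSpeechUtterance(string: labels.joined(separator: ", ")))
    }
    request.recognitionLevel = .accurate
    try? VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```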
See https://www.theverge.com/2023/5/16/23725237/iphone-personal-voice-speak-training-accessibility