Apple Training Siri to Better Understand People With Atypical Speech
Apple is researching how to improve Siri to better understand people who talk with a stutter, according to new details shared by The Wall Street Journal in a piece on how companies train voice assistants to handle atypical speech.
Apple has built a bank of 28,000 audio clips from podcasts featuring people who stutter, which could be used to train Siri. An Apple spokesperson said the data Apple has collected will improve its voice recognition systems for atypical speech patterns.
Along with improving how Siri understands people with atypical speech patterns, Apple has also implemented a Hold to Talk feature for Siri that allows users to control how long they want Siri to listen for. This prevents Siri from interrupting users with a stutter before they're finished speaking.
Siri can also be used without voice altogether, through a Type to Siri feature that was first introduced in iOS 11.
Apple plans to outline its work to improve Siri in a research paper set to be published this week, which will provide more details on the company's efforts.
Google and Amazon are also working to train Google Assistant and Alexa to better understand all users, including those who have trouble using their voices. Google is collecting atypical speech data, and in December, Amazon's Alexa Fund backed technology that lets people who have speech impairments train an algorithm to recognize their unique vocal patterns.