Apple's 'Hey Siri' Feature in iOS 9 Uses Individualized Voice Recognition
Following the release of the first public beta of iOS 9.1 yesterday, along with the GM version on Wednesday, a few testers have come across a new feature introduced in the update. It appears that Apple has quietly added a set-up process in the Settings app for the new "Hey Siri" feature coming to the iPhone 6s and iPhone 6s Plus, whose built-in M9 motion coprocessor enables the phones' always-on functionality.
Although unconfirmed by Apple, the discovery in iOS 9.1 suggests that Siri will be able to detect specific users' voices and determine whether the owner of the iPhone in question is speaking to her. Much as Apple designed Touch ID to work better the more an iPhone was unlocked with the fingerprint scanning sensor, the set-up process appears to guide users through stating words or phrases to better acquaint Siri with each iPhone owner.
Found in General > Siri > Allow 'Hey Siri', the always-on feature is the next step in the technology for Apple, allowing users to ask Siri questions or make changes within the iPhone's apps simply by saying "Hey Siri" near the iPhone. The set-up process discovered today could also just be a way for Siri to improve its voice detection in general, rather than being specific to each user. With the iPhone 6s and iPhone 6s Plus launching in just two weeks, it won't be long until everyone can find out for themselves.
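Apple has not documented how this enrollment would work under the hood, but the general idea — repeating a phrase a few times so the device can recognize your voice later — resembles textbook speaker verification: enrollment utterances are reduced to feature vectors, averaged into a "voiceprint," and new utterances are accepted only if they land close enough to it. The sketch below is purely illustrative; the feature vectors, threshold, and function names are assumptions, not Apple's implementation.

```python
import math

def average_vector(vectors):
    """Average equal-length feature vectors into one enrolled 'voiceprint'."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_owner(voiceprint, utterance, threshold=0.9):
    """Accept an utterance only if it is close enough to the voiceprint."""
    return cosine_similarity(voiceprint, utterance) >= threshold

# Enrollment: a few "Hey Siri" repetitions, already converted to
# (hypothetical) acoustic feature vectors during set-up.
enrolled = [[0.9, 0.1, 0.4], [1.0, 0.2, 0.5], [0.95, 0.15, 0.45]]
voiceprint = average_vector(enrolled)

print(is_owner(voiceprint, [0.92, 0.18, 0.47]))  # owner-like voice -> True
print(is_owner(voiceprint, [0.1, 0.9, 0.2]))     # different voice  -> False
```

In practice the feature vectors would come from acoustic analysis of the audio, and the threshold would trade off false accepts against false rejects — but the enroll-then-compare flow is the same.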
Thanks Alan and Daniel!
Top Rated Comments
If I have two devices with Hey Siri activated in the same area, both react... A possible solution would be that if two devices (on the same iCloud account) get activated by the voice command, each one would share that information with the other devices before Siri reacts, and then determine the nearest device by the voice level each device receives. Then only the nearest could respond.
Another option, instead of voice-level detection, would be to let Siri ask on each device simultaneously which one was meant, by asking for the device type (iPad, iPhone, etc...): "On what device do you want to ask me something?" - "iPad"
A last idea would be to let Siri ask first from the nearest device, "Did you mean me?" If the user answers "yes," they could go on with further commands on that device; if they answer "no," the next device would ask the same question, and so on ...
Just a thought, but maybe I am the only one with this "problem" :)
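The voice-level arbitration idea in the comment above can be sketched in a few lines: each device reports how loudly it heard the trigger phrase, and only the device with the strongest signal (presumably the nearest) responds. This is the commenter's proposal, not a known Apple mechanism, and the device names and levels below are made up for illustration.

```python
def choose_responder(levels):
    """
    Pick which device should answer a 'Hey Siri' heard by several devices.
    `levels` maps device name -> measured voice level (louder = nearer);
    only the device that heard the command loudest gets to respond.
    """
    return max(levels, key=levels.get)

# Hypothetical levels reported by three devices on the same iCloud account.
heard = {"iPhone": 0.62, "iPad": 0.81, "Mac": 0.30}
print(choose_responder(heard))  # prints "iPad" - it heard the user loudest
```

A real system would also need a timeout for devices that never report in, and a tie-breaker when levels are nearly equal — which is where the commenter's "Did you mean me?" fallback would come in.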