Apple's Voice-Powered Virtual "Assistant" Coming to iOS
Apple had been rumored to be planning voice-related features for iOS, drawing on its acquisition of personal assistant company Siri and a partnership with speech-to-text specialist Nuance. Apple, however, made no mention of any such features when it first demonstrated iOS 5 at WWDC in June.
This new "Assistant" feature in iOS 5 is said to combine voice input with other user-specific information, such as location and contacts, to provide a powerful service to the user:
We can imagine a user asking their iPhone "Assistant" to set up a movie with one of their friends. The user might say "set up movie with Mark," and based on Mark's contact info and the user's location data, the Assistant will be able to offer tickets to a local theater and send Mark the information.

9to5Mac notes that development of the feature is ongoing and may not be finished in time for iOS 5, but did find mention of 'Assistant' buried in the iOS SDK:
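The flow in that example — transcribe a spoken request, pull out an intent, then combine contact and location data into a suggestion — can be sketched roughly as follows. Every function and data structure here is hypothetical; Apple has published no Assistant API, and this only illustrates the kind of pipeline being described:

```python
# Hypothetical sketch of the "set up movie with Mark" flow. None of these
# names come from Apple's SDK; they only illustrate how a voice assistant
# might combine transcribed text, contacts, and location data.

def parse_request(transcript):
    """Extract an intent and a contact name from a transcribed command."""
    words = transcript.lower().split()
    if "movie" in words and "with" in words:
        name = words[words.index("with") + 1]
        return {"intent": "plan_movie", "contact": name.capitalize()}
    return {"intent": "unknown"}

def plan_movie(request, contacts, nearby_theaters):
    """Combine contact info and location-based results into a suggestion."""
    contact = contacts.get(request.get("contact"))
    if contact is None or not nearby_theaters:
        return None
    return {
        "theater": nearby_theaters[0],   # closest theater to the user
        "notify": contact["phone"],      # where to send Mark the details
    }

# Stand-in data for the user's address book and a location-based lookup.
contacts = {"Mark": {"phone": "555-0100"}}
theaters = ["Downtown Cinema"]

request = parse_request("Setup movie with Mark")
plan = plan_movie(request, contacts, theaters)
```

The interesting part is less the string matching than the combination step: the assistant resolves "Mark" against the address book and the theater against the user's location, which is exactly the kind of user-specific data the SDK warning string below says would be sent to Apple.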
"ASSISTANT_ENABLE_WARNING" = "Assistant uses your voice input and other information like your contact names, song names, and location to understand your requests. This data will be sent to Apple to process your request and to improve Apple products and services."
The feature sounds much like what Siri had been working on prior to its acquisition; the company's iOS app, Siri Assistant, remains available for download on the App Store. Siri focused on a concept called "Virtual Personal Assistants" (VPAs) that would accomplish tasks for the user:
Virtual Personal Assistants (VPAs) represent the next generation interaction paradigm for the Internet. In today's paradigm, we follow links on search results. With a VPA, we interact by having a conversation. We tell the assistant what we want to do, and it applies multiple services and information sources to help accomplish our task. Like a real assistant, a VPA is personal; it uses information about an individual's preferences and interaction history to help solve specific tasks, and it gets better with experience.

Siri's implementation used Nuance's voice recognition engine to translate spoken requests into text, which would then be processed. Apple has been rumored to be working closely with Nuance to provide the same transcription service to iOS users.