Apple's New Data Center Focus of Nuance Voice Recognition Partnership?


Late last week, we reported on claims that Apple is in discussions with voice recognition firm Nuance Communications regarding some sort of partnership, presumably linked to rumors that Apple is integrating significant voice capabilities into iOS 5.

TechCrunch now follows up to report that the deal seems to revolve around Apple using Nuance's technology in its new North Carolina data center to drive centrally-hosted voice services. The partnership is reportedly likely to be introduced at Apple's Worldwide Developers Conference (WWDC) early next month.

In digging into the information about the relationship between the two companies, we had heard that Apple might actually already be using Nuance technology in their new (but yet to be officially opened) massive data center in North Carolina. Since then, we've gotten multiple independent confirmations that this is indeed the case. And yes, this is said to be the keystone of a partnership that Apple is likely to announce with Nuance at WWDC next month.

More specifically, we're hearing that Apple is running Nuance software - and possibly some of their hardware - in this new data center. Why? A few reasons. First, Apple will be able to process this voice information for iOS users faster. Second, it will prevent this data from going through third-party servers. And third, by running it on their own stack, Apple can build on top of the technology, and improve upon it as they see fit.
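For a sense of what centrally-hosted voice processing could look like from the device's side, here is a minimal sketch of the round trip: record a short clip, send it to a server-hosted recognizer, and get text back. The endpoint, payload format, and response shape below are purely illustrative assumptions; neither Apple nor Nuance has confirmed any such interface.

```swift
import Foundation

// Hypothetical sketch only: a client uploads a short voice clip to a
// centrally hosted recognition service and receives a transcript back.
// The URL, headers, and JSON shape are invented for illustration.

struct RecognitionResult: Codable {
    let transcript: String   // e.g. "play Beat It"
    let confidence: Double
}

func recognize(audioFileURL: URL,
               completion: @escaping (Result<RecognitionResult, Error>) -> Void) {
    // Stand-in endpoint for a data-center-hosted recognizer (not a real service).
    var request = URLRequest(url: URL(string: "https://speech.example.com/v1/recognize")!)
    request.httpMethod = "POST"
    request.setValue("audio/wav", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? Data(contentsOf: audioFileURL)

    URLSession.shared.dataTask(with: request) { data, _, error in
        if let error = error {
            completion(.failure(error))
            return
        }
        do {
            let result = try JSONDecoder().decode(RecognitionResult.self,
                                                  from: data ?? Data())
            completion(.success(result))
        } catch {
            completion(.failure(error))
        }
    }.resume()
}

// Example use (assuming "clip.wav" is a short mono recording):
// recognize(audioFileURL: URL(fileURLWithPath: "clip.wav")) { print($0) }
```

Keeping that round trip on Apple-run servers is what the second and third points above are about: the audio never passes through a third party, and Apple controls the stack it wants to build on and improve.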

As was reported previously, Nuance is a leader in voice recognition technology and holds a number of key patents in the field. That makes Apple's interest in a partnership a natural fit, given the company's acquisition of Siri last year and rumors that it is working to incorporate voice recognition and artificial intelligence into its systems.


Top Rated Comments

ipedro
169 months ago
iOS5's killer feature: Real time conversational voice recognition

voice recognition is so useless and dumb, can u imagine all those ppl talking to their gadget on the tram

Short-sighted people are always amusing to point at once everybody has adopted the new technology they called useless and dumb just months before it became a hit.

Voice recognition hasn't taken off for a couple of reasons:

1 - It's not quite accurate enough so you have to speak at an unnatural speed and tone.

2 - Current technology is mostly one-way, so you still have to view the screen to communicate properly, because you can't yet trust that the system will be right often enough.

Apple's acquisition of Siri solves both of these.

Imagine having a conversation with your iPhone without thinking in pre-determined commands and being confident that it understands you and will act appropriately.

Instead of:

- Click and hold a button
- Wait for prompt
- "Play Song 'Beat it' "
- Verify that it got it right

- Begin next command...

You'll be able to do this:

"iPhone, play 'Beat it', make an appointment for lunch with Wendy on Tuesday at noon. Oh, also I need to get groceries today. I need potatoes, pasta, steaks and apples and send this to my wife too."

iPhone replies: Playing 'Beat It' by Michael Jackson. I've created an iCal appointment for 12pm this Tuesday for 'Lunch with Wendy'. I've set up a Task named "Groceries" and listed "potatoes, pasta, steaks and apples" and have shared it with Debbie.

This kind of real-time interpretation requires a lot of power and constant refinement, which isn't possible on a battery-powered mobile device. Instead, a small file containing a mono audio snippet can be sent to Apple's servers, interpreted, and the commands sent back within a few seconds, making this possible.
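(As an illustration of the "commands sent back" idea, here is a small sketch of how already-interpreted instructions might be represented and acted on once they reach the device. The command names and structure are invented for this example and are not an actual Apple or Nuance format.)

```swift
import Foundation

// Invented command format: the server does the heavy interpretation and the
// device only acts on small, already-structured instructions.
enum Command {
    case playSong(title: String)
    case createAppointment(title: String, date: Date)
    case createTaskList(name: String, items: [String], shareWith: String?)
}

func perform(_ command: Command) {
    switch command {
    case .playSong(let title):
        print("Playing \(title)")
    case .createAppointment(let title, let date):
        print("Creating appointment '\(title)' at \(date)")
    case .createTaskList(let name, let items, let shareWith):
        var message = "Creating list '\(name)' with \(items)"
        if let person = shareWith {
            message += ", shared with \(person)"
        }
        print(message)
    }
}

// The single spoken request above, broken apart server-side into discrete actions:
let commands: [Command] = [
    .playSong(title: "Beat It"),
    .createAppointment(title: "Lunch with Wendy", date: Date()),
    .createTaskList(name: "Groceries",
                    items: ["potatoes", "pasta", "steaks", "apples"],
                    shareWith: "Debbie")
]
commands.forEach(perform)
```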

Look to iOS5 as the beginning of a true personal assistant that you can speak to, but that is also able to speak with you in natural conversation.

Apple was indeed years ahead of the competition with the introduction of the iPhone and I doubt they've been sitting on their hands waiting for Android to catch up.

They've been working on the next killer feature that will once again push the boundaries and propel Apple way ahead of the pack. This type of conversational speech recognition definitely has the ability to do this.
Score: 13 Votes
mr.steevo
169 months ago
Hello, Computer?

Score: 9 Votes
dethmaShine
169 months ago
If this helps in Phone Sex, I'm in bitch.
Score: 7 Votes
DavidLeblond
169 months ago
Based on all the rumors, what DOESN'T the NC data center do?
Score: 7 Votes
Glideslope
169 months ago
Kirk: "Mr. Spock, full sensor scan of object".

Spock: "Computer, structural analysis, and make up of object to port of Enterprise".

Computer: "Computing".

Computer: "Object appears to contain early 21st century remains of Humanoids."
"There appears to be a digital recording encoded on silicone." "Mossberg, our friend, is no longer writing good things about us."

Spock: Looks at Kirk, and raises 1 eyebrow. :apple:
Score: 6 Votes
juicedropsdeuce
169 months ago
Yeah, I couldn't even imagine people talking to their PHONE in public.... :rolleyes:
:D

Hopefully my phone will only respond to my voice, or if I walk past ipedro on the street my phone might start playing Michael Jackson's Beat It.
Score: 5 Votes
