Apple Updates Machine Learning Journal With Three Articles on Siri Technology

Back in July, Apple introduced the "Apple Machine Learning Journal," a blog detailing Apple's work on machine learning, AI, and other related topics. The blog is written entirely by Apple's engineers, and gives them a way to share their progress and interact with other researchers and engineers.

Apple today published three new articles to the Machine Learning Journal, covering topics that are based on papers Apple will share this week at Interspeech 2017 in Stockholm, Sweden.

The first article may be the most interesting to casual readers, as it explores the deep learning technology behind the Siri voice improvements introduced in iOS 11. The other two articles cover the technology behind the way dates, times, and other numbers are displayed, and the work that goes into introducing Siri in additional languages.

Apple is notoriously secretive and kept its work under wraps for many years, but over the past few months the company has become more open about sharing some of its machine learning advancements. The blog, along with published research papers, lets Apple engineers participate in the wider AI community and may help the company retain employees who do not want to keep their progress a secret.

Top Rated Comments

26 weeks ago
I'd like them to publish an article titled: Making Siri work on-device and off-line for the multitude of tasks that utilize on-device data only and there's no reason to call the mothership every time someone wants to set a timer or play a song that's already on their device.

Too long of a title?
Rating: 11 Votes
26 weeks ago

(quoting the comment above)

Switch to Samsung!

Bixby works offline! /s

You can publish that on your S-Journal of S-Cience with your S-Pen!
Rating: 4 Votes
26 weeks ago

(quoting the comment above)

Yep, the title should be "Make Siri work in the real world." Remember: those who can, do; those who can't, write articles.
Rating: 2 Votes
26 weeks ago

(quoting the comment above)

The issue is that end-user devices are currently underpowered for these kinds of tasks.

Firstly, you have to work out what the person has said. You have many different accents to take into account, as well as a huge number of local dialects. These both affect the way that words are said and the way in which sentences flow.

Once you know what the person has said, you must then match it to an intent. Again, there are countless ways a person might say something, and you can't assume the speaker is a native speaker of the language, so they may phrase things in 'weird' ways.

Assuming you have a 'neutral' accent with completely accurate grammar, and you know the exact phrase that will activate a specific function, then it's feasible to carry out the activity entirely on the device. Right now though, that functionality is limited to 'Hey Siri', with all of the complex stuff offloaded to much more powerful servers.
Rating: 2 Votes