'machine learning' Articles

Apple Details Improvements to Siri's Ability to Recognize Names of Local Businesses and Destinations

In a new entry in its Machine Learning Journal, Apple has detailed how it approached the challenge of improving Siri's ability to recognize names of local points of interest, such as small businesses and restaurants. In short, Apple says it has built customized language models that incorporate knowledge of the user's geolocation, known as Geo-LMs, improving the accuracy of Siri's automatic speech recognition system. These models enable Siri to better estimate the user's intended sequence of words. Apple says it built one Geo-LM for each of the 169 Combined Statistical Areas in the United States, as defined by the U.S. Census Bureau, which together encompass 80 percent of the country's population, plus a single global Geo-LM to cover all areas worldwide not defined by CSAs. When a user queries Siri, the system is customized with a Geo-LM based on the user's current location. If the user is outside of a CSA, or if Siri doesn't have access to Location Services, the system defaults to the global Geo-LM. Apple's journal entry is highly technical and quite exhaustive, but the upshot is that Siri should better understand the names of local points of interest, and should be better able to distinguish between, say, a Tom's Restaurant in Iowa and one in Kansas based on the user's geolocation. In its testing, Apple found that the customized language models reduced Siri's error rate by between 41.9 and 48.4 percent in eight major U.S. metropolitan regions: Boston, Chicago, Los Angeles, Minneapolis, New York, Philadelphia, Seattle, and San Francisco.
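Apple hasn't published the selection logic itself, but the behavior described above amounts to a location-keyed model lookup with a global fallback. Here is a minimal sketch of that flow; the model registry, the bounding-box stub, and all names are hypothetical stand-ins, not Apple's implementation:

```python
# Sketch of Geo-LM selection as the journal entry describes it: one
# language model per U.S. Combined Statistical Area (CSA), plus a single
# global fallback. Everything here is illustrative.
from typing import Optional

GEO_LMS = {
    "seattle": "geo_lm_seattle",   # one entry per CSA, 169 in total
    "boston": "geo_lm_boston",
}
GLOBAL_LM = "geo_lm_global"        # covers everywhere outside a CSA

def csa_for_location(lat: float, lon: float) -> Optional[str]:
    # Hypothetical stub: a real version would test the point against
    # Census Bureau CSA boundary polygons.
    if 47.2 <= lat <= 48.3 and -122.5 <= lon <= -121.5:
        return "seattle"           # rough Seattle-area bounding box
    return None

def select_language_model(lat: Optional[float], lon: Optional[float]) -> str:
    # No Location Services access -> default to the global model
    if lat is None or lon is None:
        return GLOBAL_LM
    csa = csa_for_location(lat, lon)
    # Outside every CSA -> global model as well
    return GEO_LMS.get(csa, GLOBAL_LM) if csa else GLOBAL_LM

print(select_language_model(47.6, -122.3))  # geo_lm_seattle
print(select_language_model(None, None))    # geo_lm_global
```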

Apple Updates Leadership Page to Include New AI Chief John Giannandrea

Apple today updated its Apple Leadership page to include John Giannandrea, who now serves as Apple's Chief of Machine Learning and AI Strategy. Apple hired Giannandrea back in April, stealing him away from Google, where he ran the search and artificial intelligence unit. Giannandrea is leading Apple's AI and machine learning teams, reporting directly to Apple CEO Tim Cook. He has taken over leadership of Siri, which was previously overseen by software engineering chief Craig Federighi. Apple told TechCrunch that it is combining its Core ML and Siri teams under Giannandrea. The structure of the two teams will remain intact, but both will now answer to Giannandrea. Under his leadership, Apple will continue to build out its AI/ML teams, says TechCrunch, focusing on general computation in the cloud alongside data-sensitive on-device computations. Giannandrea spent eight years at Google before joining Apple, and before that, he founded Tellme Networks and Metaweb Technologies. Apple's hiring of Giannandrea in April came amid ongoing criticism of Siri, which many have claimed has serious shortcomings compared to AI offerings from companies like Microsoft, Amazon, and Google due to Apple's focus on privacy. In 2018, Apple is improving Siri through a new Siri Shortcuts feature coming in iOS 12, which is designed to let users create multi-step tasks, using both first- and third-party apps, that can be activated through custom Siri voice commands.

Apple's Latest Machine Learning Journal Entry Focuses on 'Hey Siri' Trigger Phrase

Apple's latest entry in its online Machine Learning Journal focuses on the personalization process users go through when activating "Hey Siri" features on iOS devices. Across all Apple products, "Hey Siri" invokes the company's AI assistant, and can be followed up by requests like "How is the weather?" or "Message Dad I'm on my way." "Hey Siri" was introduced in iOS 8 on the iPhone 6, and at that time it could only be used while the iPhone was charging. Later, the trigger phrase could be used at all times, thanks to a low-power, always-on processor that gave the iPhone and iPad the ability to continuously listen for "Hey Siri." In the new Machine Learning Journal entry, Apple's Siri team breaks down its technical approach to the development of a "speaker recognition system." The team created deep neural networks and "set the stage for improvements" in future iterations of Siri, all motivated by the goal of creating "on-device personalization" for users. Apple's team says that "Hey Siri" as a phrase was chosen because of its "natural" phrasing, and described three scenarios where unintended activations prove troubling for "Hey Siri" functionality: when the primary user says a similar phrase, when other users say "Hey Siri," and when other users say a similar phrase. According to the team, the last scenario is "the most annoying false activation of all." To lessen these accidental activations of Siri, Apple leverages techniques from the field of speaker recognition, which focuses on "who is speaking" rather than what was said.
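Speaker recognition systems of this kind typically compare a vector summarizing the incoming utterance against reference vectors captured from the enrolled user, accepting the trigger only when they are close enough. A minimal sketch of that comparison using cosine similarity; the vector dimension and threshold are illustrative assumptions, not Apple's values:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def accept_trigger(utterance_vec: np.ndarray,
                   enrolled_vecs: list,
                   threshold: float = 0.7) -> bool:
    # Accept "Hey Siri" only if the utterance's speaker vector is close
    # enough to at least one profile vector saved during enrollment.
    # The 0.7 threshold is illustrative.
    scores = [cosine_similarity(utterance_vec, ref) for ref in enrolled_vecs]
    return max(scores) >= threshold

# Toy usage with random 128-dimensional "speaker vectors"
rng = np.random.default_rng(0)
profile = [rng.standard_normal(128) for _ in range(5)]
print(accept_trigger(profile[0], profile))                # same speaker -> True
print(accept_trigger(rng.standard_normal(128), profile))  # stranger -> likely False
```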

Apple Shares Research into Self-Driving Car Software That Improves Obstacle Detection

Apple computer scientists working on autonomous vehicle technology have posted a research paper online describing how self-driving cars can spot cyclists and pedestrians using fewer sensors (via Reuters). The paper by Yin Zhou and Oncel Tuzel was submitted to the moderated scientific pre-print repository arXiv on November 17, in what appears to be Apple's first publicly disclosed research on autonomous vehicle technology. The paper is titled "End-to-End Learning for Point Cloud Based 3D Object Detection", and describes how new software developed by Apple scientists improves the ability of LiDAR (Light Detection and Ranging) systems to recognize objects including pedestrians and cyclists from a distance. Self-driving cars typically use a combination of standard cameras and depth-sensing LiDAR units to receive information about the world around them. Apple's research team said they were able to get "highly encouraging results" using LiDAR data alone to spot cyclists and pedestrians, and wrote that they were also able to beat other approaches for detecting 3D objects that rely solely on LiDAR tech. The experiments were limited to computer simulations and did not advance to road tests. Apple famously has a secretive research policy and has kept its work under wraps for many years, but over the last 12 months, the company has shared some of its research advancements with other researchers and the wider public, particularly in the area of machine learning. In December 2016, Apple said that it would start allowing its AI and machine learning researchers to publish their findings.
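The "end-to-end" approach in the paper consumes raw LiDAR point clouds directly. A common first step in such pipelines, and a useful way to picture the data, is grouping the unordered points into a sparse voxel grid. Here is a minimal sketch; the grid resolution and random test cloud are illustrative, not taken from the paper:

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.2) -> dict:
    # Group raw LiDAR points (an N x 3 array of x, y, z coordinates)
    # into a sparse voxel grid, the typical first step of point-cloud-
    # based detection pipelines. The 0.2 m resolution is illustrative.
    indices = np.floor(points / voxel_size).astype(int)
    voxels: dict = {}
    for idx, point in zip(map(tuple, indices), points):
        voxels.setdefault(idx, []).append(point)
    return voxels  # voxel coordinate -> list of points inside it

# Example: 1,000 random points scattered through a 10 m cube
cloud = np.random.rand(1000, 3) * 10.0
grid = voxelize(cloud)
print(f"{len(grid)} occupied voxels")
```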

Deep Neural Networks for Face Detection Explained on Apple's Machine Learning Journal

Apple today published a new entry in its online Machine Learning Journal, this time covering an on-device deep neural network for face detection, aka the technology that powers the facial recognition feature used in Photos and other apps. Facial detection features were first introduced as part of iOS 10 in the Core Image framework, and the technology was used on-device to detect faces in photos so people could view their images by person in the Photos app. Implementing this technology was no small feat, says Apple, as it required "orders of magnitude more memory, much more disk storage, and more computational resources."

Apple's iCloud Photo Library is a cloud-based solution for photo and video storage. However, due to Apple's strong commitment to user privacy, we couldn't use iCloud servers for computer vision computations. Every photo and video sent to iCloud Photo Library is encrypted on the device before it is sent to cloud storage, and can only be decrypted by devices that are registered with the iCloud account. Therefore, to bring deep learning based computer vision solutions to our customers, we had to address directly the challenges of getting deep learning algorithms running on iPhone.

Apple's Machine Learning Journal entry describes how Apple overcame these challenges by leveraging the GPU and CPU in iOS devices, developing memory optimizations for network inference, image loading, and caching, and implementing the network in a way that did not interfere with the other tasks expected of iPhone. The new entry is well worth reading if you're interested in the technical details.
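As a rough illustration of one standard ingredient of image-based face detectors, the sketch below builds an image pyramid so a network tuned to one face size can find faces at any size. The scale factor, minimum size, and file name are hypothetical, and this is a generic technique rather than Apple's pipeline:

```python
from PIL import Image

def image_pyramid(img: Image.Image, scale: float = 0.8, min_side: int = 32):
    # Yield progressively smaller copies of an image. A detector that
    # recognizes faces near one fixed size can scan every level, so a
    # face of any size appears near that size at some level.
    while min(img.size) >= min_side:
        yield img
        img = img.resize((int(img.width * scale), int(img.height * scale)))

# levels = list(image_pyramid(Image.open("photo.jpg")))  # hypothetical file
```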

Apple Says 'Hey Siri' Detection Briefly Becomes Extra Sensitive If Your First Try Doesn't Work

A new entry in Apple's Machine Learning Journal provides a closer look at how hardware, software, and internet services work together to power the hands-free "Hey Siri" feature on the latest iPhone and iPad Pro models. Specifically, a very small speech recognizer built into the embedded motion coprocessor runs all the time and listens for "Hey Siri." When just those two words are detected, Siri parses any subsequent speech as a command or query. The detector uses a Deep Neural Network to convert the acoustic pattern of a user's voice into a probability distribution. It then uses a temporal integration process to compute a confidence score that the phrase uttered was "Hey Siri." If the score is high enough, Siri wakes up and proceeds to complete the command or answer the query automatically. If the score exceeds Apple's lower threshold but not the upper threshold, however, the device enters a more sensitive state for a few seconds, so that Siri is much more likely to be invoked if the user repeats the phrase, even without extra effort. "This second-chance mechanism improves the usability of the system significantly, without increasing the false alarm rate too much because it is only in this extra-sensitive state for a short time," said Apple. To reduce false triggers from strangers, Apple invites users to complete a short enrollment session in which they say five phrases that each begin with "Hey Siri." The examples are saved on the device.

We compare the distances to the reference patterns created during enrollment with another threshold to decide whether the sound that triggered the detector is likely to be "Hey Siri" spoken by the enrolled user.
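The two-threshold "second chance" behavior is easy to picture in code. A minimal sketch follows, with illustrative threshold values and window length (Apple doesn't publish its numbers):

```python
import time
from typing import Optional

class TriggerDetector:
    # Sketch of the two-threshold "second chance" logic described above.
    UPPER = 0.9    # wake immediately at or above this confidence score
    LOWER = 0.6    # a near miss at or above this arms the sensitive state
    WINDOW = 5.0   # seconds the extra-sensitive state lasts

    def __init__(self) -> None:
        self.sensitive_until = 0.0

    def on_score(self, score: float, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        # Inside the sensitive window, the lower threshold is enough.
        threshold = self.LOWER if now < self.sensitive_until else self.UPPER
        if score >= threshold:
            self.sensitive_until = 0.0
            return True            # wake Siri
        if score >= self.LOWER:
            # Near miss: briefly lower the bar for a repeated attempt.
            self.sensitive_until = now + self.WINDOW
        return False

detector = TriggerDetector()
print(detector.on_score(0.7, now=0.0))  # near miss -> False, arms the window
print(detector.on_score(0.7, now=2.0))  # repeat within 5 s -> True
```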

Apple Updates Machine Learning Journal With Three Articles on Siri Technology

Back in July, Apple introduced the "Apple Machine Learning Journal," a blog detailing Apple's work on machine learning, AI, and other related topics. The blog is written entirely by Apple's engineers, and gives them a way to share their progress and interact with other researchers and engineers. Apple today published three new articles to the Machine Learning Journal, covering topics that are based on papers Apple will share this week at Interspeech 2017 in Stockholm, Sweden. The first article may be the most interesting to casual readers, as it explores the deep learning technology behind the Siri voice improvements introduced in iOS 11. The other two articles cover the technology behind the way dates, times, and other numbers are displayed, and the work that goes into introducing Siri in additional languages. Links to all three articles are below:

- Deep Learning for Siri's Voice: On-device Deep Mixture Density Networks for Hybrid Unit Selection Synthesis
- Inverse Text Normalization as a Labeling Problem
- Improving Neural Network Acoustic Models by Cross-bandwidth and Cross-lingual Initialization

Apple is notoriously secretive and has kept its work under wraps for many years, but over the course of the last few months, the company has been open to sharing some of its machine learning advancements. The blog, along with research papers, allows Apple engineers to participate in the wider AI community and may help the company retain employees who do not want to keep their progress a secret.
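Of the three topics, inverse text normalization is the easiest to illustrate: it rewrites Siri's spoken-form output ("ten thirty") into display form ("10:30"). Apple's article casts this as a labeling problem for a trained model; the toy rule-based sketch below only shows the task itself, with illustrative lookup tables:

```python
# Toy illustration of inverse text normalization. A trained labeling
# model would make these rewrite decisions from context; the table and
# the time rule here are purely illustrative.
SPOKEN_TO_WRITTEN = {
    "one": "1", "two": "2", "seven": "7", "ten": "10", "thirty": "30",
}

def inverse_normalize(spoken: str) -> str:
    tokens = [SPOKEN_TO_WRITTEN.get(t, t) for t in spoken.lower().split()]
    text = " ".join(tokens)
    # Joining "10 30" into the time "10:30" is the kind of contextual
    # decision the labeling model learns to make.
    return text.replace("10 30", "10:30")

print(inverse_normalize("set an alarm for ten thirty"))
# -> set an alarm for 10:30
```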

Apple Launches New Blog to Share Details on Machine Learning Research

Apple today debuted a new blog called the "Apple Machine Learning Journal," with a welcome message for readers and an in-depth look at the blog's first topic: "Improving the Realism of Synthetic Images." Apple describes the Machine Learning Journal as a place where users can read posts written by the company's engineers, covering the work and progress they've made on technologies in Apple's products. In the welcome message, Apple encourages those interested in machine learning to contact the company at an email address for its new blog, machine-learning@apple.com.

Welcome to the Apple Machine Learning Journal. Here, you can read posts written by Apple engineers about their work using machine learning technologies to help build innovative products for millions of people around the world. If you’re a machine learning researcher or student, an engineer or developer, we’d love to hear your questions and feedback. Write us at machine-learning@apple.com.

In the first post (described as Vol. 1, Issue 1), Apple's engineers delve into machine learning related to neural nets that can intelligently refine synthetic images in order to make them more realistic. Using synthetic images reduces cost, Apple's engineers pointed out, but they "may not be realistic enough" and could result in "poor generalization" on real test images. Because of this, Apple set out to find a way to enhance synthetic images using machine learning. Most successful examples of neural nets today are trained with supervision; however, to achieve high accuracy, the training sets need to be large, diverse, and accurately annotated, which is costly.
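The underlying research trains a refiner network against a discriminator: refined images should fool the discriminator into judging them real, while staying close enough to the synthetic input that its annotations remain valid. A minimal sketch of that style of objective, assuming hypothetical PyTorch refiner and discriminator modules and an illustrative weight:

```python
import torch

def refiner_loss(refined: torch.Tensor,
                 synthetic: torch.Tensor,
                 discriminator,
                 lam: float = 0.1) -> torch.Tensor:
    # Adversarial term: push the discriminator (assumed to output a
    # probability in (0, 1) that an image is real) toward judging the
    # refined images real.
    adv = -torch.log(discriminator(refined) + 1e-8).mean()
    # Self-regularization term: keep the refined image close to the
    # synthetic input so its original annotations stay valid.
    reg = torch.abs(refined - synthetic).mean()
    # lam balances realism against faithfulness; 0.1 is illustrative.
    return adv + lam * reg

# Toy check with a stand-in discriminator and a random image batch
disc = lambda x: torch.sigmoid(x.mean(dim=(1, 2, 3)))
imgs = torch.rand(4, 1, 35, 55)
print(refiner_loss(imgs + 0.01 * torch.randn_like(imgs), imgs, disc))
```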

Apple Expanding Seattle Hub Working on AI and Machine Learning

Apple will expand its presence in downtown Seattle, where it has a growing team working on artificial intelligence and machine learning technologies, according to GeekWire. The report claims Apple will expand into additional floors in Two Union Square, which will allow its Turi team to move into the building and will provide space for future employees.

“We’re trying to find the best people who are excited about AI and machine learning — excited about research and thinking long term but also bringing those ideas into products that impact and delight our customers,” said Carlos Guestrin, Apple's director of machine learning. “The bar is high, but we’re going to be hiring as quickly as we can find people that meet our high bar, which is exciting.”

Guestrin, who founded Turi and is a University of Washington professor, said the Seattle team collaborates "extensively" with groups at Apple's headquarters in Cupertino, including working on new AI features for upcoming Apple products and services. Guestrin said AI, for example, will enable the iPhone to be more understanding and predictive in the future:

“But what’s going to make a major difference in the future, in addition to those things, for me to be emotionally connected to this device, is the intelligence that it has — how much it understands me, how much it can predict what I need and what I want, and how valuable it is at being a companion to me,” he said. “AI is going to be at the core of that, and we’re going to be some of the people who help with that.”

Apple Hires Carnegie Mellon Researcher to Lead AI Team

Carnegie Mellon University professor Russ Salakhutdinov has been hired by Apple to lead a team focused on artificial intelligence, according to a tweet Salakhutdinov sent out this morning. He will continue to teach at Carnegie Mellon, but will also serve as "Director of AI Research" at Apple. In his tweet, Salakhutdinov says he is seeking additional research scientists with machine learning expertise to join his team. An included job posting asks that candidates have experience with Deep Learning, Computer Vision, Machine Learning, Reinforcement Learning, Optimization, and/or Data Mining. Salakhutdinov specializes in statistical machine learning and has authored many papers on neural networks, deep kernel learning, reinforcement learning, and other related topics. His expertise may be used to improve services like Siri, which has been in the spotlight recently after journalist Walt Mossberg wrote a piece calling the personal assistant "limited," "unreliable," and "dumb." Siri is powered by a neural network and uses machine learning techniques to improve over time, as do other Apple features like Spotlight, QuickType, Photos, autocorrect, Maps, and more. Salakhutdinov's hiring comes as rumors suggest Apple is aiming to improve Siri as part of an effort to build the personal assistant into an Amazon Echo-like smart home product that would be able to do things like control smart home accessories. Apple is also on the verge of finishing an R&D center in Yokohama, Japan, which will focus on "deep engineering" and developing Apple's artificial intelligence technology.

Apple Hiring for New Machine Learning Division Following Turi Acquisition

Following its recent acquisition of Turi, a Seattle-based machine learning and artificial intelligence startup, a pair of new job listings reveal that Apple has turned the company into its new machine learning division. Apple is looking to hire data scientists and advanced app developers, based in Seattle, who together will help build proof-of-concept apps for multiple Apple products to deliver new and improved user experiences where possible.

Turi is the new machine learning division at Apple. We build tools that enable teams across Apple to develop machine learning solutions to power amazingly intelligent user experiences. We are looking for new energetic members to join our ML Applications team to collaborate with product teams on a variety of proof-of-concept projects.

Many of the improvements driven by machine learning were highlighted in a recent profile of Apple's artificial intelligence efforts, including improved Siri accuracy, app suggestions, and several other examples:

You see it when the phone identifies a caller who isn’t in your contact list (but did email you recently). Or when you swipe on your screen to get a shortlist of the apps that you are most likely to open next. Or when you get a reminder of an appointment that you never got around to putting into your calendar. Or when a map location pops up for the hotel you’ve reserved, before you type it in. Or when the phone points you to where you parked your car, even though you never asked it to. These are all techniques either made possible or greatly enhanced by Apple’s adoption of deep learning and neural nets.

Apple's Machine Learning Has Cut Siri's Error Rate by a Factor of Two

Steven Levy has published an in-depth article about Apple's artificial intelligence and machine learning efforts, after meeting with senior executives Craig Federighi, Eddy Cue, and Phil Schiller, as well as two Siri scientists, at the company's headquarters. Apple provided Levy with a closer look at how machine learning is deeply integrated into Apple software and services, led by Siri, which the article reveals has been powered by a neural-net based system since 2014. Apple said the backend change greatly improved the personal assistant's accuracy.

"This was one of those things where the jump was so significant that you do the test again to make sure that somebody didn’t drop a decimal place," says Eddy Cue, Apple’s senior vice president of internet software and services.

Alex Acero, who leads the Siri speech team at Apple, said Siri's error rate has been lowered by more than a factor of two in many cases.

“The error rate has been cut by a factor of two in all the languages, more than a factor of two in many cases,” says Acero. “That’s mostly due to deep learning and the way we have optimized it — not just the algorithm itself but in the context of the whole end-to-end product.”

Acero told Levy he was able to work directly with Apple's silicon design team and the engineers who write the firmware for iOS devices to maximize performance of the neural network, and Federighi added that Apple building both hardware and software gives it an "incredible advantage" in the space.

"It's not just the silicon," adds Federighi. "It's how many microphones we put on the device, where we place the microphones."

Apple Making Big Hiring Push in Artificial Intelligence and Machine Learning

Apple is stepping up its efforts to recruit employees focused on artificial intelligence and machine learning, reports Reuters. The report suggests Apple is looking to challenge Google's lead in features such as Google Now that learn to anticipate smartphone users' needs, something Apple is starting to address in iOS 9 with its new "Proactive" feature.

As part of its push, the company is currently trying to hire at least 86 more employees with expertise in the branch of artificial intelligence known as machine learning, according to a recent analysis of Apple job postings. The company has also stepped up its courtship of machine-learning PhDs, joining Google, Amazon, Facebook and others in a fierce contest, leading academics say.

Apple's machine learning efforts are in large part built with Siri in mind, and Siri should play an important role in this Wednesday's media event, as indicated by the event invitation's tagline.

Many of the currently posted positions are slated for software efforts, from building on Siri’s smarts to the burgeoning search features in iOS. The company is also hiring machine learning experts for divisions such as product marketing and retail, suggesting a broad-based effort to capitalize on data.

Reuters notes that Apple faces a challenge with machine learning due to its focus on privacy and reluctance to tap into all possible data sources. For example, with the Proactive features of iOS 9, Apple is primarily keeping all of the data and analysis on the user's phone, enhancing privacy but limiting some of what can be learned from data passed to the cloud.