Facial detection features were first introduced as part of iOS 10 in the Core Image framework, where they were used on-device to detect faces in photos so people could browse their images by person in the Photos app.
Implementing this technology was no small feat, says Apple, as it required "orders of magnitude more memory, much more disk storage, and more computational resources."
Apple explains: "Apple's iCloud Photo Library is a cloud-based solution for photo and video storage. However, due to Apple's strong commitment to user privacy, we couldn't use iCloud servers for computer vision computations. Every photo and video sent to iCloud Photo Library is encrypted on the device before it is sent to cloud storage, and can only be decrypted by devices that are registered with the iCloud account. Therefore, to bring deep learning based computer vision solutions to our customers, we had to address directly the challenges of getting deep learning algorithms running on iPhone."

Apple's Machine Learning Journal entry describes how the company overcame these challenges by leveraging the GPU and CPU in iOS devices, developing memory optimizations for network inference, image loading, and caching, and implementing the network in a way that did not interfere with the other tasks expected of an iPhone.
The new entry is well worth reading if you're interested in the specifics of how Apple overcame these challenges to successfully implement the feature. The technical details are dense but understandable, and the entry offers some interesting insight into how facial recognition works.
With its Machine Learning Journal, Apple aims to share the complex concepts behind its technology so the users of its products can get a look behind the curtain. It also serves as a way for Apple's engineers to participate in the AI community.
Apple has previously shared several articles on Siri, including how "Hey Siri" works, and a piece on using machine learning and neural networks to refine synthetic images.