The patent was published today by the U.S. Patent and Trademark Office under the title "Visual-based inertial navigation", and describes a system that allows a consumer device to position itself in three-dimensional space using data from cameras and sensors.
The system combines images from an onboard camera with readings from a gyroscope, accelerometers and other sensors to build a picture of the device's real-time position in physical space.
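To make the fusion idea concrete, here is a minimal toy sketch, not Apple's implementation: the inertial sensors dead-reckon position at a high rate, while periodic camera-derived position fixes correct the drift that accumulates from sensor bias. The bias value, update rates, and the simple blending gain are all illustrative assumptions.

```python
def imu_predict(pos, vel, accel, dt):
    """Dead-reckon position and velocity from one accelerometer sample."""
    vel = vel + accel * dt
    pos = pos + vel * dt
    return pos, vel

def vision_correct(pos, camera_pos, gain=0.5):
    """Blend in a camera-derived position fix to cancel accumulated drift."""
    return pos + gain * (camera_pos - pos)

dt = 0.01        # 100 Hz IMU updates
bias = 0.2       # hypothetical accelerometer bias (m/s^2); true acceleration is 0
pos_drift, vel_drift = 0.0, 1.0   # IMU-only dead reckoning
pos_fused, vel_fused = 0.0, 1.0   # IMU plus periodic camera fixes

for step in range(1, 101):        # one second of motion at a true 1 m/s
    pos_drift, vel_drift = imu_predict(pos_drift, vel_drift, bias, dt)
    pos_fused, vel_fused = imu_predict(pos_fused, vel_fused, bias, dt)
    if step % 10 == 0:            # camera position fix at 10 Hz
        pos_fused = vision_correct(pos_fused, step * dt * 1.0)

# The fused estimate stays far closer to the true position (1.0 m)
# than raw inertial integration does.
print(abs(pos_drift - 1.0), abs(pos_fused - 1.0))
```

Real visual-inertial systems replace the crude blend above with a probabilistic filter over full 3D pose, but the division of labor is the same: inertial sensors for rate, vision for drift correction.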
The patent notes that visual-based inertial navigation systems can achieve positional awareness down to the centimeter scale without the need for GPS or cellular network signals. However, the technology has been unsuitable for implementation in typical mobile devices because of the processing demands involved in viable real-time location tracking.
To overcome this limitation, Apple's invention uses something called a sliding window inverse filter (SWF), which minimizes computational load by using predictive coding to map the orientation of objects relative to the device.
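The patent's filter details are beyond the article's scope, but the "sliding window" idea itself can be illustrated with a toy sketch (this is not the patented SWF): the estimator refines only the most recent handful of pose estimates and discards older ones, so each update touches a fixed number of states no matter how long the device has been tracking. The window size and the trivial averaging step are assumptions for illustration.

```python
from collections import deque

class SlidingWindowEstimator:
    """Toy estimator that bounds per-update work with a fixed-size window."""

    def __init__(self, window_size=5):
        # Older states fall out of the deque automatically, standing in for
        # the marginalization step a real sliding-window filter performs.
        self.window = deque(maxlen=window_size)

    def add_measurement(self, noisy_pose):
        self.window.append(noisy_pose)
        # Refine only the states currently in the window: constant work
        # per update, regardless of total trajectory length.
        return sum(self.window) / len(self.window)

est = SlidingWindowEstimator(window_size=5)
for t in range(1000):
    smoothed = est.add_measurement(float(t))

# After 1000 measurements, only 5 states are retained.
print(len(est.window))
```

The design point is the bound itself: a full batch estimator re-solves over every past pose and slows down over time, while a sliding window keeps the cost of each update constant, which is what makes real-time tracking on mobile hardware plausible.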
The system could be used in a navigational AR device that overlays an output image with location-based information. One scenario describes how the technology could be used to pinpoint items in a retail store as a user walks among the aisles. Another describes the use of depth sensors to generate a 3D map of a given environment.
Whether or not Apple uses the patent in an upcoming product is obviously unknown at this time, but the company has been relatively open about its interest in innovating in the virtual reality and AR space. Apple is said to have a large team experimenting with headsets and other technologies and is believed to have been working in the area since at least early 2015.
The patent was filed in 2013 and credits former Flyby Media employees Alex Flint, Oleg Naroditsky, Christopher P. Broaddus, Andriy Grygorenko and Oriel Bergig, as well as University of Michigan professor Stergios Roumeliotis, as its inventors.