
Saturday, 18 November 2017

An Inside Look at the New iPhone X’s Face Recognition Software

The introduction of face recognition to the iOS operating system is one of the big stories to come with the release of the iPhone X. While the phone is also known for its native AR app capabilities, it is this new security feature that shares the spotlight in the latest release. In this post, we take a closer look at how this form of biometric authentication went from a niche offering to a consumer technology now available on one of the world's most popular smartphones.

Sensors That Work in Any Lighting

The iPhone X uses sensors to build a 3D template of the user's face and then compares it against templates stored on the device to authenticate the user. This means that one of the first hurdles for reliable face recognition is giving the phone the ability to accurately measure depth for each pixel in the image.
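Apple's actual matching algorithm is proprietary, but the enroll-and-compare idea can be sketched in a few lines. In this hypothetical illustration, a face "template" is a fixed-length feature vector derived from the 3D scan, and the threshold value is an assumption, not Apple's:

```python
import numpy as np

# Hypothetical sketch: Apple's real Face ID matching is proprietary.
# A "template" here is a fixed-length feature vector derived from the
# 3D scan; authentication succeeds when a new scan is close enough
# (in Euclidean distance) to a template enrolled on the device.

THRESHOLD = 0.6  # assumed acceptance threshold, for illustration only

def matches(new_scan: np.ndarray, enrolled: list) -> bool:
    """Compare a new scan against every enrolled template."""
    return any(np.linalg.norm(new_scan - t) < THRESHOLD for t in enrolled)

enrolled = [np.array([0.1, 0.9, 0.4])]
print(matches(np.array([0.12, 0.88, 0.41]), enrolled))  # similar scan -> True
print(matches(np.array([0.9, 0.1, 0.9]), enrolled))     # different face -> False
```

Keeping the templates on the device, as the post notes, means this comparison never has to leave the phone.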
Compounding this challenge, face recognition needs to work in adverse light conditions. The phone cannot simply rely on RGB values for each pixel. In low light, pixel information can be lost, and in brightly lit environments, the pixels get flooded with excess illumination.
The iPhone 7 Plus used stereo cameras to determine depth by creating a disparity map, but the approach still struggled in poor lighting. To further compound the issue, calibrating the two cameras to accurately determine depth proved difficult.
To overcome these shortcomings, the iPhone X uses a structured-light system: a projector casts a pattern of infrared dots onto the face, and an infrared camera measures how that pattern deforms when it reflects back from the surface. Because the geometry of the projected pattern is known, the phone can triangulate depth reliably, even in poor lighting.
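The triangulation behind structured-light (and stereo) depth sensing reduces to one relation: depth z = f · b / d, where f is the focal length, b the baseline between projector and camera, and d the disparity, i.e. how far a dot lands from where it would appear at infinite distance. The numbers below are illustrative, not Apple's hardware parameters:

```python
# Sketch of the triangulation principle behind structured-light depth.
# The projector and infrared camera sit a known baseline apart; each
# projected dot lands on the sensor at an offset (disparity) that
# shrinks as the surface moves farther away.

def depth_from_disparity(focal_length_px: float,
                         baseline_m: float,
                         disparity_px: float) -> float:
    """Depth z = f * b / d: nearer surfaces produce larger disparities."""
    return focal_length_px * baseline_m / disparity_px

# A dot shifted 60 px, with a 600 px focal length and a 4 cm baseline:
print(depth_from_disparity(600, 0.04, 60))  # 0.4 m from the sensor
```

Because the phone supplies its own infrared illumination, this measurement does not depend on ambient light the way an RGB disparity map does.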

Neural Networks Improve Recognition Abilities

Neural networks are not a new concept, but they have recently seen more interest from researchers. This renewed attention is largely due to advances in technology, the availability of large volumes of data on the internet, and improvements in the training techniques that are used on neural networks.
With the publication of the AlexNet architecture in 2012, we started to see significant advances in the ability of neural networks to classify images. Using Convolutional Neural Networks, machines can classify images with near-human reliability, and this ability has been shown to carry over to applications like face recognition.
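The core operation these networks stack is the convolution: a small kernel slides across the image and responds to local patterns. A minimal sketch (the kernel here is hand-picked to detect vertical edges; in a trained CNN, layers of learned kernels build up from edges to face-level features):

```python
import numpy as np

# A single 2D convolution, the building block of a Convolutional
# Neural Network. The kernel slides over the image and produces a
# strong response wherever its pattern appears.

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[-1.0, 1.0]])  # responds to vertical edges
print(conv2d(image, edge_kernel))      # peaks where dark meets bright
```

AlexNet and its successors learn thousands of such kernels from data rather than designing them by hand, which is why large datasets and fast hardware mattered so much to the field's resurgence.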

Hardware Balances Power and Functionality

Convolutional Neural Networks are the engines that power applications like deep-learning AI, augmented reality, and face recognition. Understanding this, companies like Intel, NVIDIA, and AMD are all competing to develop new hardware that will power the future of deep neural networks.
To meet the needs of this new facial recognition technology, Apple built a custom GPU for the iPhone X, designed to deliver the necessary processing power while remaining small and efficient enough for a smartphone.

Continued improvements to sensors, hardware, and algorithms will make products and services that use deep neural networks ever more available to consumers. With technologies like face recognition and object classification coming into their own on the iPhone X, we can expect to see them applied to more apps, services, and devices in the near future.

Written by:
Serena Garner from Y Media Labs.
