Bottom line: Biometric security is widely used on mobile devices for convenience's sake. This method of identity validation is generally considered “secure enough” for most applications, even though researchers have demonstrated time and again that such measures can be circumvented.
Most methods of thwarting biometric security involve replicating a specific user's identity, such as reproducing fingerprints from photographs or using pictures to fool facial-recognition systems. Recently, however, researchers from New York University and Michigan State University demonstrated a far more startling tactic: using neural networks to generate entirely synthetic fingerprints.
The researchers trained their system on thousands of images of real fingerprints. A “generator” network produces synthetic prints, which are then fed to a second network, a “discriminator,” that classifies each print as real or generated. Through this adversarial trial and error, the generator's prints become progressively more convincing.
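The generator-versus-discriminator loop described above is the core of a generative adversarial network (GAN). The toy sketch below is not fingerprint generation: it trains a two-parameter linear "generator" against a logistic "discriminator" so that generated samples come to mimic a 1-D Gaussian standing in for real prints. All names, hyperparameters, and the hand-derived gradient updates are illustrative assumptions, not the researchers' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)


def real_batch(n):
    # Stand-in for "real fingerprints": samples from N(4, 1)
    return rng.normal(4.0, 1.0, n)


def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))


# Generator g(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0   # generator starts producing N(0, 1)
w, c = 0.1, 0.0
lr, n = 0.01, 64

for _ in range(3000):
    # Discriminator step: push D(real) toward 1, D(fake) toward 0
    xr = real_batch(n)
    z = rng.standard_normal(n)
    xf = a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w += lr * np.mean((1 - dr) * xr - df * xf)
    c += lr * np.mean((1 - dr) - df)

    # Generator step: push D(fake) toward 1 (non-saturating loss)
    z = rng.standard_normal(n)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

print(f"generator mean after training: {b:.2f} (real data mean: 4.0)")
```

The key dynamic is the same one the researchers exploit at scale: the discriminator's feedback steers the generator's output distribution toward the real one, here dragging the generated mean `b` toward the real mean of 4.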
The resulting DeepMasterPrints (named after master keys that can open many different locks) can be used in dictionary-style attacks against fingerprint verification systems, with varying degrees of success depending on the security strength of the target system.
In the capacitive-sensor tests, a DeepMasterPrint fooled the system 76.67 percent of the time at the lowest security setting. The middle security tier was fooled 22.50 percent of the time, while the top-tier setting was tricked only 1.11 percent of the time.
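A dictionary-style attack of this kind succeeds when a single master print's matcher score clears the acceptance threshold for many different enrolled subjects. The sketch below shows how such a success rate is tallied; the function name, the uniform score distribution, and the three tier thresholds are made-up illustrations, not the paper's evaluation protocol.

```python
import random


def attack_success_rate(scores, threshold):
    """Fraction of enrolled subjects whose matcher score for the
    master print meets or exceeds the acceptance threshold."""
    return sum(s >= threshold for s in scores) / len(scores)


random.seed(0)
# Hypothetical matcher scores (0-1) for one master print
# compared against 1,000 enrolled subjects
scores = [random.random() for _ in range(1000)]

# Stricter thresholds model tighter false-match-rate settings
for tier, threshold in [("loose", 0.25), ("medium", 0.80), ("strict", 0.99)]:
    rate = attack_success_rate(scores, threshold)
    print(f"{tier}: {rate:.2%} of subjects matched")
```

Raising the threshold (i.e., demanding a lower false-match rate) shrinks the fraction of subjects a single print can impersonate, which mirrors why the stricter tiers reported above were so much harder to fool.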