Fingerprint authentication systems can be fooled by generic fingerprints created by artificial intelligence, according to researchers at the NYU Tandon School of Engineering.
“These experiments demonstrate the need for multi-factor authentication and should be a wake-up call for device manufacturers about the potential for artificial fingerprint attacks,” said researcher Philip Bontrager. “Fingerprint-based authentication is still a strong way to protect a device or a system, but at this point, most systems don’t verify whether a fingerprint or other biometric is coming from a real person or a replica.”
The work builds on earlier research, also at NYU Tandon, which described how fingerprint-based systems use partial fingerprints, rather than full ones, to confirm identity.
“Devices typically allow users to enrol several different finger images, and a match for any saved partial print is enough to confirm identity,” said the university.
Partial fingerprints are less likely to be unique than full prints, and the earlier work demonstrated that enough similarities exist between partial prints to create synthetic prints capable of matching many stored partials in a database.
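The match-any enrolment policy described above can be sketched as follows. This is a hypothetical illustration, not any real device's API: the function names, the string-based matcher and the threshold are all assumptions. The point is that access is granted if a candidate print matches any one of the stored partials, so every extra enrolled image widens the attack surface.

```python
def authenticates(candidate_print, enrolled_partials, match_score, threshold=0.9):
    """Grant access if the candidate matches ANY stored partial print.

    Each additional enrolled partial gives an attacker another chance
    to score above the threshold, which is what makes synthetic
    'MasterPrints' statistically effective.
    """
    return any(match_score(candidate_print, p) >= threshold
               for p in enrolled_partials)


# Toy matcher for demonstration only: exact equality scores 1.0.
def match_score(a, b):
    return 1.0 if a == b else 0.0


print(authenticates("print-x", ["print-a", "print-x"], match_score))  # True
print(authenticates("print-y", ["print-a", "print-x"], match_score))  # False
```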
Taking that work further, the researchers trained a machine learning algorithm to generate synthetic fingerprints that are more likely to fool an ID system. They have dubbed these DeepMasterPrints, to distinguish them from the MasterPrints coined in the earlier work.
“The proposed method, referred to as ‘latent variable evolution’, is based on training a ‘generative adversarial network’ on a set of real fingerprint images. Stochastic search in the form of the ‘covariance matrix adaptation evolution strategy’ is then used to search for latent input variables to the generator network that can maximise the number of impostor matches as assessed by a fingerprint recogniser,” according to the paper ‘DeepMasterPrints: Generating masterprints for dictionary attacks via latent variable evolution’, which described this work at the IEEE ‘International conference on biometrics: Theory, applications and systems’, where it won best paper.
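The search loop the paper describes can be sketched in miniature. In this hedged toy version, a fixed random projection stands in for the trained GAN generator, a cosine-similarity threshold stands in for the fingerprint recogniser, and a simple hill-climbing evolution strategy stands in for full CMA-ES; all names, dimensions and thresholds below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8    # size of the generator's latent input (assumed)
PRINT_DIM = 16    # size of a synthetic "print" feature vector (assumed)
N_TEMPLATES = 50  # enrolled templates in the toy database (assumed)

# Stand-in for the trained GAN generator: a fixed random linear map.
W = rng.standard_normal((LATENT_DIM, PRINT_DIM))
# Stand-in for a database of enrolled fingerprint feature vectors.
templates = rng.standard_normal((N_TEMPLATES, PRINT_DIM))


def generate(z):
    """Map a latent vector to a synthetic 'print' (toy generator)."""
    return np.tanh(z @ W)


def impostor_matches(z, threshold=0.1):
    """Fitness: how many enrolled templates the synthetic print matches."""
    p = generate(z)
    sims = templates @ p / (
        np.linalg.norm(templates, axis=1) * np.linalg.norm(p) + 1e-9
    )
    return int((sims > threshold).sum())


def evolve_masterprint(generations=200, pop=20, sigma=0.5):
    """Search latent space for a print that fools as many templates as possible.

    Simplified (1+lambda) evolution strategy; the paper itself uses CMA-ES,
    which additionally adapts the sampling covariance each generation.
    """
    best = rng.standard_normal(LATENT_DIM)
    best_score = impostor_matches(best)
    for _ in range(generations):
        candidates = best + sigma * rng.standard_normal((pop, LATENT_DIM))
        scores = [impostor_matches(c) for c in candidates]
        i = int(np.argmax(scores))
        if scores[i] >= best_score:
            best, best_score = candidates[i], scores[i]
    return best, best_score


z_star, matches = evolve_masterprint()
print(matches)  # number of toy templates the evolved latent vector matches
```

The key design choice mirrors the quoted description: the search never touches pixel space directly, it only tunes the latent input to the generator, so every candidate stays a plausible fingerprint image while the evolution strategy drives the impostor-match count upward.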
According to the university, the work is “yet another step toward assessing the viability of MasterPrints against real devices, which the researchers have yet to test”. Because these images replicate the quality of fingerprint images stored in fingerprint-accessible systems, they could potentially be used to launch a brute-force attack against a secure cache of such images. The full paper can be downloaded free of charge.