“Hacking” Voiceprints? Little to Fear if You Take Precautions

Stop me if you’ve heard this one before. It’s the third Thursday of the month, which makes it ripe for a tech story that calls every implementation of biometrics for authentication and security into question. In this case, an Israeli watchdog called vpnMentor reported that Korean-headquartered security specialist Suprema was the victim of a security breach that exposed “biometric data” for roughly one million end-users of its BioStar 2 services.

For those of you not following the adoption of biometric factors like fingerprints or facial recognition for “access management” – meaning the unlocking of doors in office buildings and other business establishments – the story may have gone unnoticed. But with a headline like this in The Guardian:

“Major Breach Found in Biometrics System Used by Banks, UK Police and Defence Firms”

Your curiosity may be rightfully piqued.

Press coverage was brutal, claiming that vpnMentor’s researchers had gained access to over 27.8 million records that included fingerprint data, facial recognition data, face photos of users, unencrypted usernames and passwords, logs of facility access, security levels and clearances, and personal details of staff.

Scary stuff, right?

Scary, sobering and cautionary, but also amusing to some. As you might imagine, the Twitterati had some especially caustic things to say on the topic. My personal favorite was from an individual who invoked the image of a “Biometric Manager,” analogous to password managers like LastPass or Dashlane, that would generate a “random, high-entropy face & fingerprint for every site” or online resource that an individual might visit.

The absurdity of such a resource seems self-evident; yet the implication of the humor is clear. Biometrics like faceprints and voiceprints are having success as replacements for passwords and other forms of knowledge-based authentication. In doing so, they have given rise to their own set of problems. Yet, contrary to the conclusions one might draw from the treatment in The Guardian, The Verge and elsewhere, breaches should not make it easy for criminals to match stored voiceprints or facial images with the digital templates that could be associated with them for the purpose of authentication.

How Enrollment and Handling of Voiceprints Limit the Damage of a Breach

In the absurdist treatment of the BioStar 2 breach, much is made of the irrevocability of biometric data. The implication is that, because one cannot change one’s voice when a breach happens, the voice is no longer useful as an authentication factor. But that just isn’t true. A voiceprint is not a voice recording; rather, it is a binary representation of the unique aspects of a person’s vocal tract (physical characteristics) as well as his or her way of talking (behavioral traits). Because it is a binary file, it can’t be played back or reverse-engineered into a voice. It can be used only with the engine that created it, because it is signed and hashed with that engine’s stamp. The same is true of facial characteristics and images.
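To make the engine-binding point concrete, here is a minimal, hypothetical sketch – not Suprema’s or any vendor’s actual scheme – of how a matching engine could seal a template blob with a keyed hash so that a blob lifted from one system is rejected by every other engine. The names and the key handling are assumptions for illustration.

```python
import hashlib
import hmac
import os

# Secret held only by this matching engine (illustrative; a real system
# would keep such a key in an HSM or key-management service).
ENGINE_KEY = os.urandom(32)

def seal_template(template_bytes: bytes) -> bytes:
    """Append an engine-specific HMAC tag to the opaque template blob."""
    tag = hmac.new(ENGINE_KEY, template_bytes, hashlib.sha256).digest()
    return template_bytes + tag

def load_template(sealed: bytes) -> bytes:
    """Return the template only if the tag matches this engine's key."""
    blob, tag = sealed[:-32], sealed[-32:]
    expected = hmac.new(ENGINE_KEY, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("template was not produced by this engine")
    return blob
```

A template sealed this way is just bytes plus a tag; another engine, holding a different key, cannot verify it, let alone turn it back into speech.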

Enrolling your voice with different companies or services does not jeopardize you “across the board.” Because the generation and storage of templates differ from vendor to vendor, enrollees are safe: a voiceprint created for one service cannot be used for a different service or across different systems. The voiceprint, or template, is not considered PII (personally identifiable information) because, as a set of bits that is often encrypted, it contains no information at all, personal or otherwise. It is just a pattern that awaits a match with captured utterances from an enrolled client or customer. When templates are stored according to “secure by design” specifications or recommendations from vendors like NICE, they cannot be associated with the sorts of personal data – usernames and passwords, activity logs and the like – that made the Suprema breach so frightening.
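As a rough illustration of that separation, the sketch below stores an encrypted template under an opaque, random identifier with no username, password or other personal data beside it. It assumes the third-party cryptography package; the layout and names are mine, not NICE’s or any other vendor’s schema.

```python
import uuid
from cryptography.fernet import Fernet

STORAGE_KEY = Fernet.generate_key()     # in practice held in an HSM/KMS
fernet = Fernet(STORAGE_KEY)
template_store: dict[str, bytes] = {}   # opaque ID -> encrypted template

def store_template(template_bytes: bytes) -> str:
    """Encrypt the template and file it under a random, meaningless ID."""
    opaque_id = uuid.uuid4().hex
    template_store[opaque_id] = fernet.encrypt(template_bytes)
    return opaque_id   # the account system keeps only this ID

def fetch_template(opaque_id: str) -> bytes:
    """Decrypt the template for matching against a live utterance."""
    return fernet.decrypt(template_store[opaque_id])
```

If such a store leaked on its own, an attacker would hold ciphertext indexed by meaningless IDs, with none of the usernames, passwords or activity logs that made the Suprema exposure so damaging.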

Finally: A Voice-based Identifier is Neither Irrevocable Nor Static

The last bit of wisdom to impart in this post is that, while you can’t change or manage your voice or an existing stored voiceprint in the way you might change a password, it is quite possible to re-enroll or create an updated, and ideally more accurate, voiceprint. Solution providers have long known that an individual’s voice changes with age, and that the stored voiceprint must be kept “dynamic” if it is to maintain accuracy automatically over time.
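One common way to keep a stored voiceprint “dynamic” is to blend features from fresh, high-confidence verifications into the existing template so that it tracks gradual changes in a speaker’s voice. The sketch below illustrates the idea; the feature-vector representation, threshold and learning rate are assumptions, not any vendor’s documented algorithm.

```python
import numpy as np

def update_voiceprint(stored: np.ndarray,
                      new_sample: np.ndarray,
                      match_score: float,
                      learning_rate: float = 0.1) -> np.ndarray:
    """Blend a verified sample's features into the stored template."""
    if match_score < 0.9:      # adapt only after a confident match
        return stored          # otherwise leave the template untouched
    return (1.0 - learning_rate) * stored + learning_rate * new_sample
```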

While the Suprema breach appears to call into question the usefulness of any biometric once a major platform has been compromised, the truth is very different. The unique characteristics of each individual’s voice, face, fingerprints, irises and, soon, veins and neural patterns are gaining importance as firms establish high levels of trust across multiple devices and media. Behavioral characteristics are gaining in importance as well. Today’s solutions, especially when they are implemented in a way that is secure by design, are resilient enough to survive the inevitable hacks and to support unprecedented levels of personalization and trust.

Categories: Intelligent Authentication

2 replies

  1. Readers of this blog may be interested in the “secure by design” whitepaper we published on developing compliant voice biometrics solutions – https://aurayasystems.com/wp-content/uploads/2019/06/ArmorVox-Secure-By-Design.pdf This discusses many of these issues and outlines design solutions to the problem of securing voice biometric information.

  2. Thanks Clive. “Secure by Design” should be baked into all IAuth Solutions.
