Shortcomings in voice presentation attack detection are making it surprisingly easy to defeat biometric defenses.
A pair of Canadian computer scientists set out to get around voice authentication and found it surprisingly easy, reporting a success rate of up to 99 percent within six attempts against the weakest biometric security system they tested. The University of Waterloo researchers have published their findings in the IEEE Computer Society's digital library.
"Our attack," their report states, "targets common points of failure that all spoofing countermeasures share, making it real-time, model-agnostic, and completely blackbox." In other words, the attack required no prior knowledge of a target system to generate its attack samples.
"The key message from our work is that (countermeasures) mistakenly learn to distinguish between spoofed and bona fide audio based on cues that are easily identifiable and forgeable."
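The failure mode the researchers describe can be illustrated with a toy example. This is not the authors' actual attack; it is a hypothetical sketch in which a countermeasure flags spoofed audio using a single superficial statistical cue (here, an unnaturally high fraction of dead-silent samples, a fingerprint some synthesizers leave), which an attacker can erase without audibly changing the speech.

```python
import random

# Toy "countermeasure": flags audio as spoofed when a superficial cue
# (the fraction of near-silent samples) exceeds a threshold.
SILENCE_THRESHOLD = 0.02  # amplitudes below this count as "silent"
CUE_LIMIT = 0.30          # reject if more than 30% of samples are silent

def superficial_cue(samples):
    """Fraction of samples with near-zero amplitude."""
    return sum(1 for s in samples if abs(s) < SILENCE_THRESHOLD) / len(samples)

def countermeasure_accepts(samples):
    """True if the audio passes the (naive) spoofing check."""
    return superficial_cue(samples) <= CUE_LIMIT

def forge_cue(samples):
    """Black-box 'attack': replace near-silent samples with faint noise
    just above the threshold, erasing the cue the detector relies on
    while leaving the audible content essentially untouched."""
    return [s if abs(s) >= SILENCE_THRESHOLD
            else random.uniform(0.021, 0.03)
            for s in samples]

random.seed(0)
# Synthetic "spoofed" clip: half dead silence, half speech-like noise.
spoofed = [0.0] * 500 + [random.uniform(-1, 1) for _ in range(500)]

print(countermeasure_accepts(spoofed))             # False: cue present, flagged
print(countermeasure_accepts(forge_cue(spoofed)))  # True: cue erased, accepted
```

Because the detector keys on a single forgeable artifact rather than anything intrinsic to synthetic speech, defeating it needs no access to the model itself, which mirrors the black-box property the paper claims.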
The researchers say the perturbations are subtle enough to "guarantee" that their adversarial samples defeat automatic speaker verification while leaving the spoken content intact.
According to reporting by the trade publication Tech Xplore, the attack against Amazon's Connect voice authentication software succeeded 10 percent of the time within four seconds, and 40 percent of the time in under 30 seconds.
