Voice copying algorithms found able to dupe voice recognition devices

Alexa. Credit: Pixabay/CC0 Public Domain

A team of researchers at the University of Chicago has found that voice-copying algorithms have advanced to the point that they are now capable of fooling voice recognition devices and, in many cases, people listening to them. The group has posted a paper on the arXiv preprint server describing its tests of two well-known voice copying algorithms.

Deepfake videos are well known; many examples of what only appear to be celebrities can be seen regularly on YouTube. But while such videos have grown lifelike and convincing, one area where they fail is in reproducing a person's voice. In this new effort, the team at the University of Chicago found evidence that the technology has advanced. They tested two of the most well-known voice copying algorithms against both human listeners and voice recognition devices and found that the algorithms have improved to the point that they are now able to fool both.

The two algorithms, SV2TTS and AutoVC, were tested by obtaining samples of voice recordings from publicly available databases. Both systems were trained using 90 five-minute voice snippets of people talking. The researchers also enlisted the assistance of 14 volunteers who provided voice samples and access to their voice recognition devices. They then tested the two systems using the open-source tool Resemblyzer, which listens to and compares voice recordings and gives a rating based on how similar two samples are. They also tested the algorithms by using them to attempt to access services on voice recognition devices.
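As a rough illustration of the kind of comparison Resemblyzer performs, the sketch below embeds two recordings with its pretrained speaker encoder and scores their similarity. The file names are placeholders, and this is only a minimal example of the library's public API, not the researchers' actual evaluation pipeline.

from pathlib import Path
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

# Hypothetical input files: a genuine recording and a synthesized clone of the same speaker.
real_wav = preprocess_wav(Path("real_speaker.wav"))
clone_wav = preprocess_wav(Path("cloned_speaker.wav"))

# Load the pretrained speaker encoder and compute one embedding per utterance.
encoder = VoiceEncoder()
real_embed = encoder.embed_utterance(real_wav)
clone_embed = encoder.embed_utterance(clone_wav)

# Embeddings are L2-normalized, so the inner product is their cosine similarity;
# higher values mean Resemblyzer judges the two voices to be more alike.
similarity = np.inner(real_embed, clone_embed)
print(f"Speaker similarity: {similarity:.3f}")

A cloned voice "fools" this kind of check when its similarity score to the target speaker crosses whatever acceptance threshold the verifier uses.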

The researchers found the algorithms were able to fool Resemblyzer about half of the time. They also found that they were able to fool Azure (Microsoft's cloud computing service) about 30 percent of the time, and Amazon's Alexa voice recognition system about 62% of the time.

Two hundred volunteers also listened to pairs of recordings and tried to determine if the voices were from the same person. The results were mixed, but overall, the algorithms were able to fool the volunteers more often than not, and especially so when the samples were of famous people.

More information: Emily Wenger et al, "Hello, It's Me": Deep Learning-based Speech Synthesis Attacks in the Real World. arXiv:2109.09598v1 [cs.CR], arxiv.org/abs/2109.09598

© 2021 Science X Network

Citation: Voice copying algorithms found able to dupe voice recognition devices (2021, October 13) retrieved 13 October 2021 from https://techxplore.com/news/2021-10-voice-algorithms-dupe-recognition-devices.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
