Researchers from the University of Eastern Finland claim that it is relatively easy for hackers to crack supposedly top-of-the-line voice recognition systems due to poor security.
Skillful voice impersonators can fool the systems because they aren't effective at recognising voice modifications, according to the study, published today.
Most mobile devices come with voice recognition and command technologies built-in, but many of these systems lack appropriate security mechanisms and can be compromised by hackers.
These services are used to dictate messages, translate phrases and conduct search queries, and increasing adoption presents a potential opportunity for cyber crooks.
The study shows that cyber criminals are using different technologies to attack speaker recognition software, such as voice conversion, speech synthesis and replay attacks.
Although experts are coming up with techniques and countermeasures to fight back against such attacks, voice modifications produced by humans can't be detected easily.
Voice impersonation, the University said, is common in the entertainment industry. Professionals and amateurs alike commonly copy the voice characteristics of other speakers, notably public figures.
There's also the issue of "voice disguise", where speakers change the way they speak to avoid being recognised. The latter is common in situations that don't require face-to-face communication.
As a result, people can blackmail others or make threatening calls. Because of this, voice recognition systems need to be made more robust, so that they aren't exposed to human-induced voice modifications.
In the study, researchers analysed the speech of two professional impersonators mimicking eight Finnish public figures. They also examined acted speech from 60 Finnish speakers who participated in recording sessions.
The speakers were asked to fake their age by changing their voices to sound older or younger, and an overwhelming majority of them were able to trick the speech systems.
Tom Harwood, chief product officer and co-founder at Aeriandi, said: "Biometrics technology has been shown to significantly reduce fraud, especially in the financial sector - but it's not the whole solution.
"Earlier this year, twins tricked the HSBC voice biometrics security system, and this instance showed that no security technology is 100% fool-proof.
"Technology advances have also shown that it is now possible to cheat voice recognition systems. Voice synthesiser technology is a great example.
"It makes it possible to take an audio recording and alter it to include words and phrases the original speaker never spoke, thus making voice biometric authentication insecure.
He added: "The good news is that there is a way to protect against phone fraud beyond biometrics - and that's fraud detection technology. Fraud detection on voice looks at more than the voice print of the user; it considers a whole host of other parameters.
"For example, is the phone number being used legitimate? Where is the caller located? Increasingly phone fraud attacks on UK banks come from overseas. Voice Fraud technology has been proven to protect against this as well as domestic threats."
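The multi-parameter approach Harwood describes can be sketched as a simple rule-based risk score that weighs the voiceprint match alongside call metadata. Everything below is an illustrative assumption — the `CallContext` fields, weights, and thresholds are invented for this sketch and do not reflect Aeriandi's actual product.

```python
from dataclasses import dataclass

@dataclass
class CallContext:
    voiceprint_score: float    # 0.0-1.0 similarity to the enrolled voice (assumed scale)
    number_known: bool         # is the calling number a known, legitimate one?
    caller_country: str        # ISO country code inferred for the caller
    expected_country: str      # country the account holder usually calls from

def fraud_risk(call: CallContext) -> float:
    """Combine voiceprint similarity with call metadata into one risk score.

    Weights and thresholds here are illustrative, not taken from any
    real fraud-detection system.
    """
    risk = 0.0
    # A weak voiceprint match alone raises risk.
    if call.voiceprint_score < 0.8:
        risk += 0.4
    # Unknown or unverified numbers add risk.
    if not call.number_known:
        risk += 0.3
    # Calls from an unexpected country add risk
    # (e.g. overseas phone-fraud attacks on UK banks).
    if call.caller_country != call.expected_country:
        risk += 0.3
    return round(risk, 2)

# Strong voice match, known number, expected location: low risk.
print(fraud_risk(CallContext(0.95, True, "GB", "GB")))   # 0.0
# Strong voice match, but an unknown number calling from overseas:
# the metadata checks still flag it, which is the point of looking
# beyond the voiceprint alone.
print(fraud_risk(CallContext(0.95, False, "RU", "GB")))  # 0.6
```

The second call illustrates Harwood's argument: a synthesised voice might pass the biometric check, but the surrounding call parameters can still expose the fraud.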