Microsoft has demonstrated an experimental system which can deliver spoken translations on the fly.
The company said that the technology, which is still in its development phase, would allow for speech to be processed by a device and translated into another language while maintaining the voice characteristics of the speaker.
Microsoft chief research officer Rick Rashid demonstrated the technology earlier this year at an event in China, offering the audience a showcase in which his own presentation was translated from English to Chinese in a voice closely matching his own.
Rashid said that unlike previous speech recognition systems, which relied on the analysis of sound waves and statistical speech models to recognise commands, the new system leverages "neural networks", a technique designed to function in a manner similar to that of the human brain.
The system is better able to recognise speech and to translate not just individual words but also word order and phrasing across multiple languages.
The demonstration system relied on a three-stage process: Rashid's words were first transcribed from speech to English text, then translated into Chinese, and finally played back as synthesised speech. In the last stage, the system analysed the frequency characteristics of Rashid's own voice so that the Chinese output sounded similar to his own.
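The three stages described above can be sketched as a simple pipeline. The sketch below is illustrative only: the function names and toy stand-ins are hypothetical, and the real system used deep-neural-network models at each stage rather than the trivial stubs shown here.

```python
# Hypothetical sketch of the demo's speech-to-speech pipeline.
# Each stage is a toy stand-in; Microsoft's system used DNN-based
# acoustic models, machine translation, and voice-matched synthesis.

def recognise_speech(audio: bytes) -> str:
    """Stage 1 (stub): speech-to-text in the source language."""
    # A real recogniser would decode audio with an acoustic model;
    # here we simply treat the bytes as UTF-8 text.
    return audio.decode("utf-8")

def translate(text: str) -> str:
    """Stage 2 (stub): translate English text into Chinese."""
    # A real translator reorders words for the target grammar;
    # this toy dictionary only maps two words for illustration.
    toy_dict = {"hello": "你好", "world": "世界"}
    return " ".join(toy_dict.get(word, word) for word in text.lower().split())

def synthesise(text: str, voice_profile: dict) -> str:
    """Stage 3 (stub): text-to-speech shaped by the speaker's voice."""
    # The demo modelled the speaker's frequency characteristics so the
    # output sounded like him; here we just tag the text with the profile.
    return f"[{voice_profile['name']}'s voice] {text}"

def speech_to_speech(audio: bytes, voice_profile: dict) -> str:
    text = recognise_speech(audio)                 # English text
    translated = translate(text)                   # Chinese text
    return synthesise(translated, voice_profile)   # voice-matched output

print(speech_to_speech(b"hello world", {"name": "Rashid"}))
```

The key design point the demo highlighted is that the speaker's voice profile is carried through to the final synthesis stage, so the translated speech retains the original speaker's vocal characteristics.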
While Rashid said that the system still contains a number of bugs and errors and Microsoft gave no target release date, the company believes that the system could one day offer a dramatic upgrade to both speech recognition and translation technology.
"We may not have to wait until the 22nd century for a usable equivalent of Star Trek’s universal translator, and we can also hope that as barriers to understanding language are removed, barriers to understanding each other might also be removed," Rashid said in a blog post.
"The cheers from the crowd of 2,000 mostly Chinese students, and the commentary that's grown on China's social media forums ever since, suggests a growing community of budding computer scientists who feel the same way."