Literary detective Sherlock Holmes was famously smart – clever enough indeed to finish his sidekick Watson's sentences before the good doctor had the chance to complete them.
But could computers ever show such Holmes-like cunning and know what we're going to say before we've said it? A team of computer scientists from Microsoft Research in Redmond, along with colleagues at Cornell University and the University of California, Irvine, have been trying to find out, with interesting results.
The researchers wanted to understand more about how well computers can understand the semantic complexity of language. They devised a series of machine learning algorithms, which aimed to analyse and understand the sentences they were presented with.
The team then set about pitting their algorithms against a series of comprehension tests, including secondary school exam questions and tests based on passages from five Sherlock Holmes novels by Sir Arthur Conan Doyle.
The Scholastic Aptitude Test (SAT) questions presented a sentence with one or two blanks to be filled in, with five candidate words given for each blank.
“These questions are intriguing because they probe the ability to distinguish semantically coherent sentences from incoherent ones, and yet involve no more context than the single sentence,” the researchers explain in their presentation paper [PDF].
The algorithms use a technique called Good-Turing frequency estimation, which is based on the work of British computer science pioneer Alan Turing. By training the algorithms on a large corpus of words, they can then estimate the probability that each of the candidate words makes sense when used to fill in the blank.
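The paper describes a full n-gram language model; as a rough illustration of the Good-Turing idea alone, the sketch below (a simplified toy, not the researchers' actual system) re-estimates word frequencies from a corpus and reserves probability mass for unseen words. The fallback to the raw count when no higher frequency bucket exists is a common simplification of our own, not something from the paper.

```python
from collections import Counter

def good_turing_probs(counts):
    """Good-Turing smoothed probabilities for observed items, plus the
    total probability mass reserved for unseen items.

    counts: dict mapping word -> observed frequency r.
    """
    total = sum(counts.values())
    # N_r: how many distinct words were seen exactly r times
    freq_of_freq = Counter(counts.values())

    def adjusted(r):
        # Good-Turing adjusted count: r* = (r + 1) * N_{r+1} / N_r.
        # Fall back to the raw count r when no word was seen r+1 times
        # (a simplification; real systems smooth the N_r values too).
        if freq_of_freq.get(r + 1):
            return (r + 1) * freq_of_freq[r + 1] / freq_of_freq[r]
        return r

    probs = {w: adjusted(r) / total for w, r in counts.items()}
    # Mass set aside for words never seen in training: N_1 / total
    p_unseen = freq_of_freq.get(1, 0) / total
    return probs, p_unseen

# Toy corpus: "the" appears 3 times, "cat" twice, the rest once
corpus = "the cat sat on the mat the cat".split()
probs, p_unseen = good_turing_probs(Counter(corpus))
```

With probabilities like these for each word in context, a blank-filling system can score each of the five candidate words and pick the most probable completion.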
That corpus of words was plucked from the archive of all material published by the LA Times between 1985 and 2002 – around 1.1 billion words in total.
When pitted against the SAT exams, the team's algorithms were able to successfully fill in the blanks 53 per cent of the time. With the Sherlock Holmes text, the system got a 52 per cent success rate – not perfect, but better than would be expected by pure chance.
What's more, the team were able to identify areas where they could improve.
“Encouragingly, one third of the errors involve single-word questions which test the dictionary definition of a word,” they noted. With a few tweaks, it should be possible to reduce those mistakes, they predicted.
About 40 per cent of the errors look more complicated to solve, however – these were questions that required some level of general knowledge.
For example, in the sentence, “Many fear that the [blank] of more lenient tobacco advertising could be detrimental to public health”, the algorithm plumped for the answer “withdrawal” rather than the correct option of “ratification”.
Solving that problem should mean we have a few more years before computers are able to finish our sentences for us.
The work will be officially presented at the 50th Annual Meeting of the Association for Computational Linguistics in Jeju Island, Korea.