Next year marks the 60th anniversary of the publication of Alan Turing's paper that led to the famous Turing Test.
Back in 1950, Turing proposed that a machine could demonstrate intelligence through a simple test of conversation, in which a human user converses blindly with two entities: a machine and another human. If the user cannot reliably tell the two apart, the machine is considered 'intelligent'.
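The protocol can be sketched in a few lines of code. This is a minimal illustration only, not anyone's actual test harness; the function and label names (`imitation_game`, `"X"`, `"Y"`) are my own assumptions.

```python
import random

def imitation_game(judge, human_fn, machine_fn, questions):
    """Minimal sketch of Turing's imitation game: the judge sees only
    labelled transcripts from two anonymous respondents (one human,
    one machine) and must name the label it believes is the machine."""
    # Hide which respondent sits behind each label, as in Turing's setup.
    pair = [("human", human_fn), ("machine", machine_fn)]
    random.shuffle(pair)
    labels = {"X": pair[0], "Y": pair[1]}
    # The judge never sees the entities themselves, only their answers.
    transcript = {lbl: [(q, fn(q)) for q in questions]
                  for lbl, (_, fn) in labels.items()}
    guess = judge(transcript)
    return guess, labels[guess][0] == "machine"

# Hypothetical stand-ins, purely for illustration.
human = lambda q: "I'd have to think about that one."
machine = lambda q: "Processing query: " + q
# A judge that spots the tell-tale machine phrasing.
judge = lambda t: next(lbl for lbl, qa in t.items()
                       if any("Processing" in ans for _, ans in qa))

guess, unmasked = imitation_game(judge, human, machine,
                                 ["How is the weather outside?"])
```

The point of the random shuffle is that the judge's only evidence is the conversation itself; everything the article discusses below turns on what that conversation can and cannot reveal.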
The test served as a benchmark of artificial intelligence (AI) for years, but new advances in cognitive sciences and consciousness studies compel us to revisit it.
During a debate at the conference of the Society for the Study of Artificial Intelligence and Simulation of Behaviour, it became clear that, while the Turing Test has served us well in driving AI research forward, it does not serve as a full test of intelligence.
A simple example is that the test does not consider learning, which is an important element of intelligence demonstrated by both humans and animals.
In my opinion, there is a fundamental misconception in the test. It requires a human examiner to have a conversation with two unseen entities. One of these entities is a human while the other is the machine to be tested.
One conversation topic with which we can trap a machine is the weather. Suppose we asked 'How is the weather outside?' and received one of the following answers:
'It is 21 degrees with northerly wind speed of five knots.'
'It is lovely weather today [do you not like it sunny?]' (The actual weather outside is heavy rain.)
'I do not like the weather in England. How do you cope with it?'
Now, which one do we think is a human answer and which one is a machine's? The first answer gives an impression of a machine with good weather sensors, but could it not be a human with a weather station stating simple facts?
The last one could be a person who is not from England, but could it not also be a machine programmed with preset answers to divert topics in a direction in which it can converse?
The second answer is the interesting one. The first part of the answer gives the impression of a machine, but when the optional part is added, it gives the sense of cynicism that one is likely to associate with a human rather than a pattern-matching machine. These cases show the flaws in the Turing Test argument, and return us to the question of what it actually tests.
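The 'preset answers to divert topics' strategy mentioned above is essentially ELIZA-style pattern matching. A minimal sketch, with rules and phrasings invented purely for illustration:

```python
import re

# Preset pattern -> canned reply. Each rule steers the conversation
# toward a topic the machine can sustain; the rules here are
# illustrative only, echoing the third answer above.
RULES = [
    (re.compile(r"weather", re.I),
     "I do not like the weather in England. How do you cope with it?"),
    (re.compile(r"football", re.I),
     "Did you watch the match last night? What a game!"),
]
FALLBACK = "That is interesting. Tell me more."

def reply(utterance: str) -> str:
    """Return the first canned reply whose pattern matches,
    otherwise deflect with a generic prompt."""
    for pattern, canned in RULES:
        if pattern.search(utterance):
            return canned
    return FALLBACK
```

Note that such a machine never consults the actual weather; it only matches surface patterns in the question, which is exactly why the second answer's misplaced cheerfulness about a rainy day reads as either a sensor failure or a very human joke.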
For an intelligent machine to pass the test, it has to be able to pretend to be human. This requires that the machine is conscious that it is a machine, conscious that the test requires it to come across as human, conscious of the test's time and visibility constraints, and finally conscious of what makes a human come across as human, i.e. human quirkiness. After all, we would be much quicker to accept a robot as intelligent if it could hold a light-hearted conversation about football.
In my opinion, the Turing Test does not test intelligence, or at least not solely so. It tests consciousness, self-awareness and the ability to lie. The last is the most important, because the ability to lie is a distinctively human characteristic associated with our ability to create from imagination.
A conscious, creative machine with imagination is a very interesting machine, but are these prerequisites of intelligence?
A symposium to be held at the 2010 Artificial Intelligence and Simulation of Behaviour conference aims to answer this question, among others, in the search for a modern alternative to the Turing Test.
Dr Aladdin Ayesh is a senior lecturer in the Informatics Department at De Montfort University and a member of the Centre for Computational Intelligence.