Many pundits pinpoint 1956 as the birth of artificial intelligence (AI), as it was the year of the Dartmouth Conference, whose published title included the first known use of the phrase 'artificial intelligence'. I agree with the year but not the reason, as 1956 also saw the release of the film Forbidden Planet, featuring Robbie the Robot. So, as far as I am concerned, Robbie still sets the gold standard for AI.
The field of AI is easy to define: it is the science of mimicking human mental faculties in a computer. Of course, it is much more challenging to achieve AI than to define it, and AI remains one of the 'grand challenges' for computing technology.
There is a huge gulf between what we can achieve with our brains and what we can achieve with computers. The human mind is a wondrous thing, not just in its conscious thoughts but in the things we do without even realising. Reading this article involves a mixture of conscious thought ('What is this guy talking about?') and unconscious thought, as you recognise and interpret the shapes of the letters, words, phrases and sentences on the page or screen.
To understand these different types of intelligent behaviour, we can draw up a spectrum of intelligence, from automatic behaviour (reaction and co-ordination), through sub-conscious behaviour (perception and interaction), to specialist expertise (planning and evaluating).
The spectrum doesn't characterise the degree of intelligence but the type of intelligence. So, the good news is that we all enjoy intelligent behaviour across the whole spectrum. But it is the breadth of this spectrum that is the nub of the problem for AI. Traditional approaches have tended to pick off one behaviour or another, but have failed to tackle the whole spectrum in the way that Robbie does.
Engineers and computer scientists have been good at tackling the automatic behaviours for many years, as evidenced by the manufacturing robots that dominate modern car plants. Early AI research, on the other hand, focused on problems at the other end of the spectrum, involving specialist expertise. Two early triumphs, the Dendral and Mycin expert systems, used rules to reason about the interpretation of mass spectrograms and the diagnosis of bacterial infections respectively. Surely, some of us reasoned, if a computer can deal with problems too difficult for most ordinary people, then more modest human reasoning should be straightforward.
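The rule-based reasoning behind those early expert systems can be sketched with a minimal forward-chaining engine. The rules and facts below are toy illustrations invented for this sketch, not real medical knowledge or the actual content of any historical system:

```python
# Each rule pairs a set of conditions with a conclusion to assert
# once all the conditions are known to hold.
rules = [
    ({"gram_negative", "rod_shaped"}, "suspect_e_coli"),
    ({"suspect_e_coli", "urinary_symptoms"}, "likely_uti"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all satisfied,
    until no rule can add a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"gram_negative", "rod_shaped", "urinary_symptoms"}, rules)
print("likely_uti" in derived)  # True
```

Chaining conclusions of one rule into the conditions of another is what let such systems tackle narrow problems that defeat most ordinary people, while knowing nothing at all outside their speciality.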
Unfortunately, this has not proved to be the case. Human behaviour in the middle of the spectrum, which we perform with barely a conscious thought, has proved to be the most difficult for computer scientists to emulate. For example, computers still cannot reliably recognise objects in a visual image.
Tony Blair's latest mission is to bring all the world's faiths together in his Faith Foundation. I have just such a vision for AI. If we are ever going to build a real Robbie, we will need to bring together various AI approaches within a hybrid system that might include rules, frames, objects, model-based reasoning, case-based reasoning, Bayesian updating, fuzzy logic, multi-agent systems, swarm intelligence, evolutionary algorithms and neural networks.
The blackboard model has re-emerged from its 1970s origins to achieve such an approach, with diverse software agents working together like a team of boffins that share their ideas by writing them with chalk on a blackboard. Although such an approach remains a long way short of Robbie, it has achieved some notable practical successes, from processing medical images to managing and controlling complex manufacturing processes. These are still specialist applications, but ones that involve a range of intelligent behaviours.
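The blackboard architecture can be sketched very simply: a shared data store, plus specialist agents that act only when the partial results they need have appeared. The agents and keys below are invented for illustration and stand in for the far richer knowledge sources a real system would use:

```python
class Blackboard:
    """Shared workspace that all agents read from and write to."""
    def __init__(self):
        self.data = {}

    def read(self, key):
        return self.data.get(key)

    def write(self, key, value):
        self.data[key] = value

# Each agent is one 'boffin': it contributes only when the blackboard
# holds the inputs it needs and its own result is still missing.
class EdgeDetector:
    def can_contribute(self, bb):
        return bb.read("image") is not None and bb.read("edges") is None

    def contribute(self, bb):
        bb.write("edges", f"edges({bb.read('image')})")

class RegionLabeller:
    def can_contribute(self, bb):
        return bb.read("edges") is not None and bb.read("labels") is None

    def contribute(self, bb):
        bb.write("labels", f"labels({bb.read('edges')})")

def run(bb, agents):
    """Control loop: keep letting any agent that can act, act,
    until the blackboard stops changing."""
    progress = True
    while progress:
        progress = False
        for agent in agents:
            if agent.can_contribute(bb):
                agent.contribute(bb)
                progress = True

bb = Blackboard()
bb.write("image", "scan01")
run(bb, [RegionLabeller(), EdgeDetector()])
print(bb.read("labels"))  # labels(edges(scan01))
```

Note that the agents never call each other; each reacts only to what is on the blackboard, which is why techniques as different as neural networks and rule-based reasoners can co-operate in one hybrid system.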
AI has made significant advances from both ends of the intelligence spectrum, and is closing in towards the middle. The gaps relate to behaviours that we tend to take for granted, such as language, perception and common sense, by which I mean making sensible decisions in unfamiliar situations.
Honda's Asimo robots are probably the highest-profile attempt to emulate Robbie. They are marvellously engineered and do some things very well, but I still wouldn't invite one to my dinner party.
Fifty-three years have passed since Robbie first put in an appearance, and I suspect we will have to wait as long again before we have a really convincing artificial human mind.
Professor Adrian A. Hopgood is Dean of the Faculty of Technology at De Montfort University.