At the Intel Developer Forum in San Francisco last month, Intel outlined its vision for multi-core processors, which it believes will dominate the future of computing.
The chip giant predicts that by 2015, parallel processors are going to be the norm because such architectures offer the only way to increase the capability and performance of next-generation systems.
In an interview with vnunet.com, Justin Rattner, director of the company's corporate technology group, asserted that the world is "on the leading edge of probably the most important architectural transition in the history of the microprocessor".
Why do we need faster and faster processors?
If we want systems that are more human-like, that take more natural forms of input and produce more natural forms of output, they are going to require several orders of magnitude more processing capability than we have today.
These are the kind of systems that everybody wants. Nobody wants systems that are unfriendly and hard to use and seem to need more support than they give.
And it's your job to figure out how we get these systems?
Processors have such a long lead time that you find yourself thinking 10 years into the future. When we think about a product in the mature part of its life cycle, we're thinking about what people are going to do with these things a decade ahead.
It's like sitting in 1985 and saying that the internet would be enormous. Hyper-threading and multi-core were central to us four or five years ago. Now the product guys are doing a fine job getting from two to four and eight cores.
Intel's Corporate Technology Group tends to think beyond eight cores, about what starts to happen if we have tens of cores per die and potentially hundreds of software threads running in a single processor.
Dual-core is really just a toe in the water that won't require most people to address the programming challenges head on. But we're not more than five years away from having to confront the issues with parallel computing and deliver the tools and technology that make it practical.
We are on the leading edge of probably the most important architectural transition in the history of the microprocessor.
What kind of challenges will multi-core and multi-threading present?
It is generally accepted that parallel programming is non-trivial. Current programming techniques are subject to all sorts of errors that are particularly difficult to track down, mostly time-dependent errors.
We were looking at that recently as part of an internal study. The failure of the power grid [in the north-east of the US in August 2003] was traced to one of these time-dependent errors.
One program or thread was holding a lock, another high-priority thread couldn't get to the lock, and the thing just froze. The system stopped right there because it was a deadly embrace between those two threads. It ran perfectly for years and then, in this one instance, it locked.
Nasa had one of these same problems where again a couple of threads were contending for some shared piece of data.
All these data threads inside the processor get backed up into some kind of data gridlock?
Things can go wrong if I need to lock two things. Because taking both locks isn't a single, exclusive step, I get this lock but it turns out that somebody else gets the other lock. I'm waiting for him to release his and he is waiting for me to release mine.
It is the computing equivalent of gridlock. I can't get through because he's in the intersection and the guy behind can't get through, so we just come to a stop.
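The gridlock Rattner describes is the textbook two-lock deadlock. As a rough sketch (illustrative Python, not code from Intel or the interview), two threads that each hold one lock while waiting for the other freeze in exactly this way:

```python
import threading
import time

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker_one():
    with lock_a:              # take the first lock
        time.sleep(0.1)       # widen the timing window so the two threads collide
        with lock_b:          # blocks forever if worker_two already holds lock_b
            pass

def worker_two():
    with lock_b:              # take the locks in the opposite order
        time.sleep(0.1)
        with lock_a:          # blocks forever if worker_one already holds lock_a
            pass

t1 = threading.Thread(target=worker_one)
t2 = threading.Thread(target=worker_two)
t1.start(); t2.start()
t1.join(); t2.join()          # never returns: each thread waits on the other's lock
```

Run as written, the program simply hangs, which is the point: neither thread has crashed, but neither can move.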
And there is the opposite form, which is called a livelock. That is like an endless loop where we chase each other around the barn but never make forward progress.
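Livelock looks different in code: both threads stay busy, repeatedly backing off and retrying, yet neither finishes. A hypothetical Python sketch (again only an illustration; in practice scheduling jitter can eventually break the cycle) might look like this:

```python
import threading
import time

lock_a = threading.Lock()
lock_b = threading.Lock()

def polite_worker(first, second, name):
    # Grab one lock, politely give it back if the other is busy, and retry.
    # When both threads do this in lockstep, they chase each other indefinitely.
    while True:
        first.acquire()
        time.sleep(0.05)                    # both threads pause in step
        if second.acquire(blocking=False):  # try the other lock without waiting
            print(name, "made progress")    # not reached while the threads stay in step
            second.release()
            first.release()
            return
        first.release()                     # back off...
        time.sleep(0.05)                    # ...and retry, mirroring the other thread

threading.Thread(target=polite_worker, args=(lock_a, lock_b, "one")).start()
threading.Thread(target=polite_worker, args=(lock_b, lock_a, "two")).start()
```

Unlike the deadlock, the processor stays busy the whole time, which makes this kind of failure even harder to spot in a running system.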
Errors like that just don't arise in traditional sequential programs. There, each step follows the one before, and you are done. The assumption is that nothing else is going on. There is no parallelism, so life is good.
One of the big challenges will be coming up with new approaches that eliminate very large classes of these time-dependent errors and present the programmer with a much simplified view of how the system behaves.
That is going to change programming languages, runtime environments, operating systems, instruction set interfaces, literally down to the micro-architecture and internal interfaces in the system.