Intel expects to extend Moore's Law beyond the 10nm process node, and has detailed how it is tweaking circuit designs to make its system-on-chip (SoC) products more resilient.
The chipmaker is delivering a number of presentations at the International Solid-State Circuits Conference (ISSCC) in San Francisco this week, including one on the challenges and implications of moving beyond 10nm.
Intel is slated to introduce its first 10nm chips sometime in 2016, according to its current roadmaps.
"Scaling does continue to provide lower cost per transistor, and it is Intel's view that cost reduction is needed to justify new generations of process technology," said Mark Bohr, Intel senior fellow for logic technology development.
Silicon wafer manufacturing costs have been increasing as Intel has scaled down feature sizes, but the firm managed to deliver a greater-than-expected feature density with the move to 14nm technology, Bohr said.
"As a result of that, we have been able to continue to reduce costs per transistor," he added.
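Bohr's argument comes down to simple arithmetic: even though wafers get more expensive at each new node, cost per transistor still falls as long as transistor density grows faster than wafer cost. The sketch below illustrates the calculation; the wafer cost and transistor-count figures are invented for demonstration and are not Intel's.

```python
# Toy illustration of the cost-per-transistor argument. The wafer cost
# and transistors-per-wafer figures are invented, not Intel's numbers.

def cost_per_transistor(wafer_cost, transistors_per_wafer):
    """Cost of a single transistor given total wafer cost and yield."""
    return wafer_cost / transistors_per_wafer

# Hypothetical node transition: wafer cost rises 30 percent, but the
# denser process roughly doubles the transistors fitting on a wafer.
old_node = cost_per_transistor(wafer_cost=5_000, transistors_per_wafer=2e12)
new_node = cost_per_transistor(wafer_cost=6_500, transistors_per_wafer=4e12)

print(f"old node: {old_node:.2e} per transistor")
print(f"new node: {new_node:.2e} per transistor")
print(f"reduction: {1 - new_node / old_node:.0%}")
```

With these invented figures the density gain outpaces the wafer-cost growth, so cost per transistor still drops node to node, which is the justification Bohr cites for continuing to scale.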
Looking beyond 10nm, Intel had been expected to introduce extreme ultraviolet (EUV) lithography to enable it to manufacture such increasingly tiny components, but the firm has lately been rowing back on this.
"I still believe we can do 7nm without EUV and deliver improved cost per transistor. Exactly how we're going to do that I'm not ready to disclose," Bohr said.
Meanwhile, Bohr also said that scaling is increasingly likely to involve more than one silicon die, either by stacking dies in 3D or by mounting them side by side inside a chip package.
"Going forward, heterogeneous integration will become increasingly important, but we may not be able to do it all on one chip, so you will see more use of SoC solutions such as 2.5D integration, where two [dies] are mounted side by side on a substrate, or full 3D integration, stacking chips on top of each other, each one tuned for a different [manufacturing] process to perform different functions," he explained.
Intel will also discuss at ISSCC how it can improve the performance of SoC designs by adding circuitry that monitors the on-chip clock signals used to synchronise operations for timing variations and voltage 'droop', and adjusts them dynamically.
"Normally we use a guard band to cope with worst case conditions to ensure error-free operation, but the worst case conditions are not always present, so the guard bands can be reduced by sensing changes in operating conditions at runtime," said Vivek De, an Intel fellow with Intel Labs.
This means that the chip can sense that a timing violation is about to occur and respond just in time by slowing down the clock frequency or increasing the voltage to compensate, he explained. The guard bands, which allow for a margin of error in timings, can therefore be reduced to boost performance.
"Also we can detect timing errors on the fly and recover by repeating the same operation at lower frequency or higher voltage," De said.
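The scheme De describes can be sketched behaviourally: a static design clocks every cycle at the worst-case period, while the adaptive design senses the actual path delay, runs with only a small margin, and falls back to the safe worst-case period when a timing error slips through. The delays, margins and sensing model below are invented for illustration; real designs use on-die timing and droop monitors rather than this simplified model.

```python
# Behavioural sketch of adaptive guard-band reduction with on-the-fly
# error recovery. All numbers and the sensing model are illustrative.
import random

NOMINAL = 1.0            # typical critical-path delay (arbitrary units)
WORST_CASE_MARGIN = 0.3  # static guard band sized for worst case
ADAPTIVE_MARGIN = 0.05   # small margin kept when sensing at runtime

random.seed(0)
static_period = NOMINAL + WORST_CASE_MARGIN
static_time = adaptive_time = 0.0
ops = 10_000

for _ in range(ops):
    # The actual path delay this cycle varies with runtime conditions.
    actual = NOMINAL + random.uniform(0.0, WORST_CASE_MARGIN)
    # The monitor estimates the delay; occasionally it undershoots badly.
    sensed = actual - 0.2 if random.random() < 0.01 else actual
    period = sensed + ADAPTIVE_MARGIN

    if period >= actual:
        adaptive_time += period
    else:
        # Timing error detected on the fly: repeat the operation at the
        # safe worst-case period ("lower frequency"), paying a penalty.
        adaptive_time += period + static_period

    static_time += static_period  # fixed worst-case clock for comparison

print(f"static clock  : {static_time / ops:.3f} per operation")
print(f"adaptive clock: {adaptive_time / ops:.3f} per operation")
```

Because worst-case conditions are rare, the occasional replay penalty is far outweighed by running most cycles with the smaller margin, which is the trade-off behind the guard-band reduction Intel describes.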
Intel said that measurements from 22nm tri-gate test chips showed that implementing this technology can deliver a 21 percent improvement in performance or 67 percent higher energy efficiency.
In a similar vein, Intel has implemented a test graphics execution core with integrated digital voltage regulators. These can detect the onset of a voltage droop and respond by ramping up the current injection to the graphics core to offset the droop.
Measurements of this chip show that it is capable of up to 82 percent energy reduction or 75 percent higher frequency when operating in Turbo mode, according to Intel.