Meditations on Moore’s Law

As part of my study on parallel programming I’ve been thinking a lot about how processor architectures have evolved and are going to evolve in the not-too-distant future. No discussion of processor architectures is complete without some talk about Moore’s Law. The original formulation of Moore’s Law is:

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year… Certainly over the short term this rate can be expected to continue, if not to increase.

The original statement is a bit more complicated than the generic "the number of transistors on a chip doubles every two years" that is the common phrasing. Firstly, the original states that the doubling occurs every year. That's a bit too optimistic; historically the doubling has taken between 1.5 and 2 years. But the more subtle (and generally missed) point Moore made is that it is density at minimum cost per transistor that increases. It's not just the density of transistors, but the density at which the cost per transistor is lowest. We can put more transistors on a chip, but as we do so, the chance that a defect will stop the chip from working properly increases.
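To get a feel for how much the doubling period matters, here's a quick back-of-the-envelope calculation (my own illustration, not something from Moore's paper): compounding a one-year doubling over a decade gives a very different result than a two-year doubling.

```python
# Toy calculation: how transistor counts compound under different
# doubling periods. The numbers are illustrative, not historical data.
def transistors(n0, years, doubling_period):
    """Count after `years` if the count doubles every `doubling_period` years."""
    return n0 * 2 ** (years / doubling_period)

for period in (1.0, 1.5, 2.0):
    print(f"Doubling every {period} years -> {transistors(1, 10, period):.0f}x in a decade")
# Doubling every 1.0 years -> 1024x in a decade
# Doubling every 1.5 years -> 102x in a decade
# Doubling every 2.0 years -> 32x in a decade
```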

There are a number of corollaries and by-laws that go along with Moore's Law. For one, the cost per transistor decreases over time, but the manufacturing cost per area of silicon increases as more components are crammed onto a chip (along with the materials, energy and supporting technology required to create it). But the consequence that has hit the chip industry hardest in recent years is that the power consumption of chips also increases, roughly doubling with each process generation. And that is something that directly affects the bottom line: no one is going to buy a 6 GHz processor if it needs an industrial-strength cooling solution. And even though transistor densities (and as a result processor speeds) have progressed steadily over the past few decades, memory speeds simply haven't kept up. The maximum speed at which a modern processor can operate is far higher than the speed at which it can pull in the data it needs to operate on.

These two factors, increased power consumption and lagging memory bandwidth, have prompted a significant course change for chip manufacturers. We have the technology to pack a lot of processing power onto a single chip, but we quickly hit diminishing returns if we keep aiming for raw speed. So instead of making the fastest, meanest processors, the industry is turning to leaner, slower, parallel processors. Computing power is still increasing, but it's increasing in width, not depth. Multi-core CPUs and their cousins, the GPUs, exploit Moore's Law by bundling multiple processing units (cores) onto a single piece of silicon. Top-of-the-line GPUs boast hundreds of individual cores capable of running thousands of concurrent execution threads. Intel's newest Core i7-980X Extreme has clock speeds of up to 3.6 GHz, but also boasts 6 cores capable of running 12 threads. Parallel computation is here to stay and it's only going to keep increasing. Moore's Law should be good for another decade at least (and maybe more, until we hit the limits imposed by quantum mechanics), and it's a safe bet that all those extra transistors are going to find their place in more cores.
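As a concrete (and admittedly simplified) picture of what "computing in width" means for software, here is a small sketch that spreads an embarrassingly parallel job across however many cores the machine reports; the task itself is a made-up stand-in, not anything specific from this post.

```python
# A minimal sketch of using width rather than depth: fan a CPU-bound
# job out across all available cores instead of waiting on one fast core.
from multiprocessing import Pool, cpu_count

def work(n):
    # Stand-in for some CPU-bound computation.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    cores = cpu_count()
    with Pool(processes=cores) as pool:
        # One chunk of work per core; the pool schedules them in parallel.
        results = pool.map(work, [1_000_000] * cores)
    print(f"{cores} cores, {len(results)} results")
```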

Having dozens or hundreds of cores is one thing, but knowing how to use them is quite another. Software makers still don't really know how to use all the computational width that Moore's Law will continue to deliver. There are a number of different ideas on how to run programs across multiple cores (including shared-state threads and message passing), but there doesn't seem to be a consensus on how best to go about it. Then there is the problem that we have billions of lines of serial code that will probably not benefit from multi-core hardware unless they are rebuilt to exploit parallelism. Anyone who's tried to re-engineer an existing piece of software knows that is not an easy task. It's also expensive, which is not a good state of affairs for a multi-trillion-dollar industry.
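For readers unfamiliar with the two models mentioned above, here is a compact sketch of both, using plain Python threads purely as an illustration (the coordination styles, not this particular code, are what the debate is about):

```python
import threading
from queue import Queue

# 1. Shared state: every thread updates the same counter, so a lock is
#    needed to keep the increments from stepping on each other.
counter = 0
lock = threading.Lock()

def add_shared(times):
    global counter
    for _ in range(times):
        with lock:                      # without this, updates can be lost
            counter += 1

# 2. Message passing: workers never touch shared variables; they hand
#    their results to a queue and someone else combines them.
def add_messages(times, out):
    out.put(times)                      # the only communication is this message

if __name__ == "__main__":
    threads = [threading.Thread(target=add_shared, args=(1000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    results = Queue()
    workers = [threading.Thread(target=add_messages, args=(1000, results)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    total = sum(results.get() for _ in range(4))

    print(counter, total)               # both approaches arrive at 4000
```

The trade-off is roughly the one the paragraph above alludes to: shared state is familiar but easy to get wrong, while message passing avoids those races at the cost of restructuring how the program communicates.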

Luckily there are a lot of really smart people working on the matter from a number of different angles. The next few years are going to be an exciting time as operating systems, programming languages, compilers, network technologies, and all the people working on them try to answer the question of what to do with all the cheap computing cores that are lying around. The downside is that software won't run any faster for a while as clock speeds stagnate and we figure out how to work around that. For the next two months I'll continue reading up and thinking about the different ways in which we can keep using Moore's Law to our benefit. The free lunch may be over, but that doesn't mean that we shouldn't eat well.
