For almost half a century, computer chips have doubled in power every 18 months. But this may not hold true for much longer, says Eugene Fiume
What is Moore’s Law?
In a paper published in 1965, Gordon Moore, the co-founder of Intel, predicted that the number of transistors that could be placed inexpensively on an integrated circuit – a rough proxy for raw computing power – would double roughly every 18 months. Remarkably, he’s been right so far.
How important is Moore’s Law to the computing industry?
Moore’s Law has resulted in exponential growth in computer memory and computation speeds. But this doesn’t mean the consumer’s “experience” of computing has been improving at the same rate. If you buy a computer that’s three generations newer than your previous model, it will have about eight times the computing speed. But your applications won’t run at eight times the speed. Optimum performance depends less on how fast the central processing unit (CPU) operates and more on how well the CPU is interconnected with the memory and other components, how efficient the software is and how fast information travels through your network.
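The gap between raw speed and experienced speed can be sketched numerically. The snippet below is an illustration of my own, not something from the interview: the `raw_speedup` figure comes from the doubling-per-generation assumption, while `experienced_speedup` uses an Amdahl's-law-style estimate in which a fixed fraction of the work (memory access, software overhead, network waits) does not get any faster. The 30% "serial fraction" is an arbitrary example value.

```python
# Toy illustration (not from the interview): three generations of
# doubling give 2**3 = 8x the raw computing speed.
def raw_speedup(generations: int) -> int:
    """Raw computing power after `generations` of transistor doubling."""
    return 2 ** generations

def experienced_speedup(raw: float, serial_fraction: float) -> float:
    """Amdahl's-law-style estimate: only the accelerated part of the
    workload benefits; the rest (memory, software, network) stays fixed."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / raw)

print(raw_speedup(3))                                # 8
print(round(experienced_speedup(8, 0.3), 2))         # 2.58
```

With an 8× faster CPU but 30% of the job gated by other components, the user sees roughly a 2.6× improvement – which is the point being made above.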
Do you think the end of Moore’s Law is near?
It’s coming up. We’re going to get a few more generations out of current technology – perhaps another six to 10 years – but we are nearing the limit to how small features on a chip can get. These features are now being fabricated at the scale of 30 to 40 nanometres – or about 1/200th the size of a red blood cell. New technologies may be able to shrink this even further, but there is a limit. Below a certain size, you start to get unpredictable effects. That’s one reason people are thinking of new ways to lay out computers. There’s talk about moving away from flat distribution of transistors toward layered configurations. That’s also why we’re seeing multi-core systems. For a while, they will allow us to achieve a doubling of performance without an actual doubling of transistor density.
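As a quick scale check on the figures quoted above (the 7-micrometre red-blood-cell diameter is a typical textbook value I am supplying, not one from the interview):

```python
# A mid-range 35 nm feature compared with a ~7 micrometre red blood cell.
feature_nm = 35            # middle of the 30-40 nm range quoted
red_blood_cell_nm = 7000   # typical red blood cell diameter, ~7 micrometres

ratio = red_blood_cell_nm / feature_nm
print(ratio)  # 200.0 -> about 1/200th the size of a red blood cell
```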
How will the increasing popularity of “cloud computing” affect computing performance?
“Cloud computing” makes the computation go away – it travels from your computer over the Internet to somewhere else, such as one of those huge computation farms that Amazon and Google have built for their own purposes. In principle, you can use these companies’ vast computation resources to do the work you need and have the results sent back to you. The bottleneck is the network. Your computation time will be determined by the responsiveness of your network. And that has nothing to do with Moore’s Law.
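The network-bottleneck argument can be made concrete with a toy model of my own (the link speed, data size, and compute times below are invented example numbers, not the interviewee's): total job time is upload plus remote compute plus download, and when the link is slow, the transfer terms dominate no matter how fast the computation farm is.

```python
# Toy model: time to offload a job over a network link.
def cloud_job_time(data_mb: float, link_mbps: float, compute_s: float) -> float:
    """Seconds to ship `data_mb` up and back over `link_mbps`, plus compute."""
    transfer_s = 2 * (data_mb * 8) / link_mbps  # upload + download, in seconds
    return transfer_s + compute_s

# 100 MB over a 10 Mbit/s link costs 160 s in transfer either way:
slow_farm = cloud_job_time(100, 10, compute_s=20)  # 160 + 20 = 180 s
fast_farm = cloud_job_time(100, 10, compute_s=2)   # 160 + 2  = 162 s
print(slow_farm, fast_farm)
```

A tenfold faster farm trims the job from 180 s to 162 s – about a 10% improvement – because the network, not the CPU, sets the pace.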
The computing industry has used Moore’s Law to sell us a new computer every few years. What will happen when those raw power improvements can no longer be achieved?
In the future, I think we are going to see a proliferation of inexpensive devices. I tend to use my laptop for everything, but you can get better purpose-built devices – e-readers with reflective displays, for example, that are easier on the eyes and can be read outdoors. As the market for tablet and hand-held devices grows, the laptop market will shrink. The netbook will evolve and likely merge with hand-helds. Many people currently use a laptop for just email. A good hand-held does that.
Computers are still poor at “seeing” and interpreting spoken language. Won’t these developments require massive increases in raw computing power?
Absolutely. We will require massive increases in computer power – and huge advances in algorithms to deal both with vast amounts of data and, more importantly, with many different modes of interacting with computers. But almost all of the problems that are truly interesting to people are incredibly complex to solve, and raw computing power alone won’t solve them.
Eugene Fiume is a professor in the computer science department.