[wellylug] OT: CPU Design
Enkidu
enkidu at cliffp.com
Wed Mar 31 18:17:52 NZST 2004
Rob, do you have a reference for that? It's very interesting!
Cheers,
Cliff
On Wed, 31 Mar 2004 17:44:43 +1200, you wrote:
>This is off topic but for all of you wanting your linux machines to run
>faster here is an interesting article which
>explains the challenges that chip designers are coming across. And why
>megahertz madness is coming to an end.
>
>>>>
>
>Thirty years ago, (the then) Captain Grace Murray Hopper (USN) was out on
>the lecture circuit handing out pieces of wire just under a foot long to
>evince that the speed of light was a fundamental limit on computer
>technology. She called the wires "nanoseconds" as they were cut to the
>length that light would travel in a nanosecond.
>
>Physicists have defined the speed of light in a vacuum as the value
>299,792,458 meters per second. A "nanosecond" then translates to 29.979
>centimeters, or about 11.8 inches.
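The arithmetic behind Hopper's wire can be checked with a short Python sketch (the only input is the speed-of-light constant quoted above):

```python
# Grace Hopper's "nanosecond": how far light travels in one nanosecond.
C = 299_792_458               # speed of light in a vacuum, m/s
length_m = C * 1e-9           # distance covered in 1 ns, in metres
length_cm = length_m * 100    # ~29.979 cm
length_in = length_m / 0.0254 # metres to inches, ~11.8 in

print(f"{length_cm:.3f} cm ~= {length_in:.1f} in")
```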
>
>Until recently, processor chips, and even systems, did not have to be
>concerned with physical restrictions because it was easy to get parts
>and to work within the length limits imposed by the clock speeds in use.
>For example, in the late 1960s, the Control Data 6600, a synchronous
>machine, had all of its internal "bus" cables sized during installation
>to 64 feet. With microsecond clocks this did not matter very much. Even
>10 years ago, a 100MHz clock meant that the CPU cycle was 10
>nanoseconds. It was quite easy to get memory that would respond within
>10 nanoseconds, and the system clock could propagate almost 10 feet in
>that time.
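The 100MHz figures work out the same way; a minimal sketch, assuming free-space propagation as in the text:

```python
C = 299_792_458             # speed of light in a vacuum, m/s
clock_hz = 100e6            # a circa-1994 100MHz system clock
cycle_s = 1 / clock_hz      # cycle time: 1e-8 s, i.e. 10 ns
reach_m = C * cycle_s       # distance a signal could cover in one cycle
reach_ft = reach_m / 0.3048 # metres to feet: just under 10 feet

print(f"cycle = {cycle_s * 1e9:.0f} ns, reach ~ {reach_ft:.1f} ft")
```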
>
>Gigahertz clocks have changed all of that. Memory speeds, unlike memory
>sizes, have not been increasing at anywhere near the level that Moore's
>Law has been driving processors. The best that external (main) memory
>can do at the moment is double data rate at 200 to 400MHz, so at best
>still more than a nanosecond per datum, far slower than the CPU clocks.
>It has become necessary to carefully plan the positioning of components,
>especially caches that are processor clock dependent, so that functions
>can be properly synchronized with and within a clock cycle. As the clock
>speed increases, the distance that can be reached within a single clock
>has been shrinking dramatically. The current state of the art for
>semiconductor fabrication is the 300mm wafer, with a typical chip being
>a square of about 15mm per side (and hence a diagonal of 21.2mm).
>Given the speed of light figure above, the clock signal requires ~70.7
>picoseconds to propagate across that diagonal, which translates to a
>clock frequency of ~14GHz.
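The diagonal-crossing limit can be reproduced in a few lines, using the 21.2mm diagonal and vacuum speed of light from the text:

```python
C = 299_792_458      # speed of light in a vacuum, m/s
diag_m = 21.2e-3     # chip diagonal: a 15mm square gives 15 * sqrt(2) ~ 21.2mm
t = diag_m / C       # time for a signal to cross the diagonal, ~70.7 ps
f = 1 / t            # max clock rate if one cycle must span the chip, ~14 GHz

print(f"t ~ {t * 1e12:.1f} ps, f ~ {f / 1e9:.1f} GHz")
```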
>
>This of course also assumes that the speed of light in a vacuum closely
>approximates the signal propagation speed in a silicon chip. It does
>not. Signal propagation in the chip is slowed by the material (silicon
>as opposed to a vacuum), various dopants and other impurities in the
>silicon, and various switching that gates signals. Actual propagation
>speeds range from a best case of about 265,000,000 meters per second
>down to 165,000,000 meters per second or even lower. For a distance of
>21.2mm as in the example of the previous paragraph, the former leads to
>a clock period of ~80 picoseconds (~12.5GHz), the latter to a period of
>~128 picoseconds (~7.78GHz). Bigger chips would require a slower clock
>rate to cover the longer distance.
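Repeating the same calculation with the slower in-silicon propagation speeds quoted above recovers both figures:

```python
diag_m = 21.2e-3                         # the same ~21.2mm chip diagonal
freqs_ghz = []
for v in (265_000_000, 165_000_000):     # in-silicon propagation speeds, m/s
    t = diag_m / v                       # time to cross the diagonal
    freqs_ghz.append(1 / t / 1e9)        # implied max clock, in GHz
    print(f"v = {v} m/s: {t * 1e12:.0f} ps, ~{1 / t / 1e9:.2f} GHz")
```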
>
>It should be noted that the fast integer ALUs on a 3GHz Pentium 4 are
>clocked at 6GHz ("double-pumped") in an attempt to speed up its (quite
>complex) pipeline, so 7 and even 12GHz is really not that far off in chip
>design. What this means is that the current status quo of clock
>increases for better speed is coming to an end. Architectural changes
>will be required in the very near future to improve performance.