
Cell microprocessor implementations


Cell microprocessors are multi-core processors that use a cellular architecture for high-performance distributed computing. The first commercial Cell microprocessor, the Cell BE, was designed for the Sony PlayStation 3. IBM designed the PowerXCell 8i for use in the Roadrunner supercomputer.[1]

Implementation


First edition Cell on 90 nm CMOS

Known Cell variants in 90 nm process
Designation | Die area | First disclosed        | Enhancement
DD1         | 221 mm²  | ISSCC 2005             |
DD2         | 235 mm²  | Cool Chips, April 2005 | Enhanced PPE core

IBM has published information concerning two different versions of Cell in this process: an early engineering sample designated DD1, and an enhanced version, designated DD2, intended for production.

The main enhancement in DD2 was a small lengthening of the die to accommodate a larger PPE core, which is reported to "contain more SIMD/vector execution resources".[1] Some preliminary information released by IBM refers to the DD1 variant; as a result, some early journalistic accounts of the Cell's capabilities differ from the production hardware.

Cell floorplan

Cell function units and footprint
Cell function unit | Area  | Description
XDR interface      | 05.7% | Interface to Rambus system memory
memory controller  | 04.4% | Manages external memory and L2 cache
512 KiB L2 cache   | 10.3% | Cache memory for the PPE
PPE core           | 11.1% | PowerPC processor
test               | 02.0% | Unspecified "test and decode logic"
EIB                | 03.1% | Element interconnect bus linking processors
SPE (each) × 8     | 06.2% | Synergistic coprocessing element
I/O controller     | 06.6% | External I/O logic
Rambus FlexIO      | 05.7% | External signalling for I/O pins

PowerPoint material accompanying an STI presentation given by Dr Peter Hofstee includes a photograph of the DD2 Cell die overdrawn with functional unit boundaries, captioned by name, which reveals the breakdown of silicon area by function unit shown in the table above.

SPE floorplan

SPU function units and footprint
SPU function unit | Area   | Description                               | Pipe
single precision  | 10.0%  | single precision FP execution unit        | even
double precision  | 04.4%  | double precision FP execution unit        | even
simple fixed      | 03.25% | fixed point execution unit                | even
issue control     | 02.5%  | feeds execution units                     |
forward macro     | 03.75% | feeds execution units                     |
GPR               | 06.25% | general purpose register file             |
permute           | 03.25% | permute execution unit                    | odd
branch            | 02.5%  | branch execution unit                     | odd
channel           | 06.75% | channel interface (three discrete blocks) | odd
LS0–LS3           | 30.0%  | four 64 KiB blocks of local store         | odd
MMU               | 04.75% | memory management unit                    |
DMA               | 07.5%  | direct memory access unit                 |
BIU               | 09.0%  | bus interface unit                        |
RTB               | 02.5%  | array built-in test block (ABIST)         |
ATO               | 01.6%  | atomic unit for atomic DMA updates        |
HB                | 00.5%  | obscure                                   |

Additional details concerning the internal SPE implementation have been disclosed by IBM engineers, including Peter Hofstee, IBM's chief architect of the synergistic processing element, in a scholarly IEEE publication.[2]

This document includes a photograph of the 2.54 mm × 5.81 mm SPE as implemented in 90 nm SOI. In this technology the SPE contains 21 million transistors, of which 14 million are in arrays (a term presumably designating the register files and the local store) and 7 million are logic. The photograph is overdrawn with functional unit boundaries, captioned by name, which reveals the breakdown of silicon area by function unit shown in the table above.
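As a rough cross-check on these transistor counts, the sketch below estimates how much of the 14 million "array" transistors the local store alone can account for. It is illustrative arithmetic only: the six-transistor SRAM cell and the treatment of the 128-entry, 128-bit register file are assumptions made for this sketch, not figures taken from the cited paper.

```c
#include <stdio.h>

/* Rough estimate of the SPE's "array" transistor count.
 * Assumptions (not from the cited paper): conventional 6T SRAM cells
 * for the local store, and a 128-entry x 128-bit register file.      */
int main(void)
{
    const long long ls_bits  = 4LL * 64 * 1024 * 8;   /* LS0-LS3: 256 KiB   */
    const long long gpr_bits = 128LL * 128;           /* assumed GPR size   */
    const long long ls_transistors  = ls_bits  * 6;   /* 6T cell assumption */
    const long long gpr_transistors = gpr_bits * 6;

    printf("local store: %.1f M transistors\n", ls_transistors  / 1e6);
    printf("GPR file:    %.2f M transistors\n", gpr_transistors / 1e6);
    /* ~12.7 M of the reported 14 M "array" transistors; the remainder
     * plausibly covers parity/ECC bits, decoders and smaller buffers.  */
    return 0;
}
```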

Understanding the dispatch pipes is important for writing efficient code. In the SPU architecture, two instructions can be dispatched (started) in each clock cycle using dispatch pipes designated even and odd. The two pipes provide different execution units, as shown in the table above. As IBM partitioned them, most of the arithmetic instructions execute on the even pipe, while most of the memory instructions execute on the odd pipe. The permute unit is closely associated with the memory instructions, as it serves to pack and unpack data structures located in memory into the SIMD multiple-operand format on which the SPU computes most efficiently.

Unlike other processor designs that provide distinct execution pipes, the SPU can dispatch each instruction only on its one designated pipe. In competing designs, more than one pipe might be built to handle extremely common instructions such as add, permitting two or more of these instructions to be executed concurrently, which can increase efficiency on unbalanced workloads. In keeping with the SPU's extremely Spartan design philosophy, no execution unit is provisioned more than once.

Understanding the limitations of this restrictive two-pipeline design is one of the key concepts a programmer must grasp to write efficient SPU code at the lowest level of abstraction. For programmers working at higher levels of abstraction, a good compiler will automatically balance pipeline concurrency where possible.
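To illustrate the even/odd split in practice, the following sketch pairs an even-pipe arithmetic intrinsic with an odd-pipe permute intrinsic so that consecutive operations can dual-issue. It assumes IBM's SPU C-language extensions (spu_intrinsics.h with spu_madd and spu_shuffle) and is a minimal illustration rather than tuned production code; the particular shuffle pattern is an arbitrary example.

```c
#include <spu_intrinsics.h>

/* Sketch: interleaving even-pipe arithmetic with odd-pipe data movement.
 * spu_madd (fused multiply-add) issues on the even pipe; spu_shuffle
 * (the permute unit) issues on the odd pipe, so consecutive iterations
 * can dual-issue one instruction from each pipe per cycle.              */

/* Byte-selection pattern that swaps the two halves of a quadword
 * (an arbitrary example pattern).                                       */
static const vector unsigned char swap_halves = {
     8,  9, 10, 11, 12, 13, 14, 15,
     0,  1,  2,  3,  4,  5,  6,  7
};

void scale_and_swizzle(vector float *dst, const vector float *src,
                       vector float scale, vector float bias, int n)
{
    int i;
    for (i = 0; i < n; i++) {
        /* even pipe: 4-wide single-precision multiply-add             */
        vector float v = spu_madd(src[i], scale, bias);
        /* odd pipe: permute the result into the layout the consumer
         * expects (here simply swapping the two 64-bit halves)         */
        dst[i] = spu_shuffle(v, v, swap_halves);
    }
}
```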

SPE power and performance

Relationship of speed to temperature
Voltage | Frequency | Power | Die Temp.
0.9 V   | 2.0 GHz   | 01 W  | 25 °C
0.9 V   | 3.0 GHz   | 02 W  | 27 °C
1.0 V   | 3.8 GHz   | 03 W  | 31 °C
1.1 V   | 4.0 GHz   | 04 W  | 38 °C
1.2 V   | 4.4 GHz   | 07 W  | 47 °C
1.3 V   | 5.0 GHz   | 11 W  | 63 °C

As tested by IBM under a heavy transformation and lighting workload (average IPC of 1.4), the performance profile of this implementation for a single SPU is summarized in the table above.

The entry for 2.0 GHz operation at 0.9 V represents a low-power configuration. The other entries show the peak stable operating frequency achieved at each voltage increment. As a general rule in CMOS circuits, power dissipation rises in rough proportion to V²F, the square of the voltage times the operating frequency.
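To make the V²F rule concrete, the following sketch normalizes each table entry against the 0.9 V / 2.0 GHz point and compares the predicted relative power with the reported wattages. This is illustrative arithmetic only; the published figures are coarse, and static leakage, which the V²F rule ignores, grows quickly with die temperature.

```c
#include <stdio.h>

/* Compare the reported SPU power figures with the rough CMOS
 * dynamic-power rule P ~ V^2 * F, normalized to the 0.9 V / 2.0 GHz
 * low-power entry (1 W).                                              */
int main(void)
{
    const double volts[] = { 0.9, 0.9, 1.0, 1.1, 1.2, 1.3 };
    const double ghz[]   = { 2.0, 3.0, 3.8, 4.0, 4.4, 5.0 };
    const double watts[] = { 1,   2,   3,   4,   7,   11  };
    int i;

    const double base = volts[0] * volts[0] * ghz[0];
    for (i = 0; i < 6; i++) {
        double predicted = watts[0] * (volts[i] * volts[i] * ghz[i]) / base;
        printf("%.1f V @ %.1f GHz: V^2*F predicts %4.1f W, reported %4.0f W\n",
               volts[i], ghz[i], predicted, watts[i]);
    }
    /* The reported figures climb faster than the V^2*F prediction at the
     * high-voltage end, consistent with leakage rising with temperature. */
    return 0;
}
```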

Though the power measurements provided by the IBM authors lack precision, they convey a good sense of the overall trend. These figures show that the part is capable of running above 5 GHz under test-lab conditions, though at a die temperature too hot for standard commercial configurations. The first Cell processors made commercially available were rated by IBM to run at 3.2 GHz, an operating speed at which this table suggests an SPU die temperature in the comfortable vicinity of 30 °C.

Note that a single SPU represents only about 6% of the Cell processor's die area, so the power figures given in the table above represent just a small portion of the chip's overall power budget.

IBM has publicly announced its intention to implement Cell on a future process technology below the 90 nm node to improve power consumption. Reduced power consumption could potentially allow the existing design to be boosted to 5 GHz or above without exceeding the thermal constraints of existing products.

Cell at 65 nm


The first shrink of Cell was to the 65 nm node, which reduced the existing 230 mm² die of the 90 nm process to about half its size, roughly 120 mm², greatly reducing IBM's manufacturing cost as well.
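The "half its size" figure is roughly what ideal geometric scaling predicts. The sketch below simply applies the (new node / old node)² rule to the quoted 230 mm² die; real shrinks rarely scale this cleanly, so it is an illustration rather than a statement about the actual 65 nm layout.

```c
#include <stdio.h>

/* Ideal-scaling estimate for the 90 nm -> 65 nm die shrink.
 * Linear dimensions scale by 65/90, so area scales by (65/90)^2. */
int main(void)
{
    const double old_area_mm2 = 230.0;               /* 90 nm Cell die    */
    const double scale        = (65.0 / 90.0) * (65.0 / 90.0);
    printf("scale factor: %.2f -> about %.0f mm2\n",
           scale, old_area_mm2 * scale);             /* ~0.52 -> ~120 mm2 */
    return 0;
}
```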

On 12 March 2007, IBM announced that it had started producing 65 nm Cells in its East Fishkill fab. The chips produced there were apparently only for IBM's own Cell blade servers, which were the first to receive the 65 nm Cells. Sony introduced the third generation of the PS3 in November 2007, the 40 GB model without PS2 compatibility, which was confirmed to use the 65 nm Cell. Thanks to the shrunk Cell, the console's power consumption was reduced from 200 W to 135 W.

At first it was only known that the 65 nm Cells clock up to 6 GHz and run on a 1.3 V core voltage, as demonstrated at ISSCC 2007. This would have given the chip a theoretical peak performance of 384 GFLOPS in single precision (48 GFLOPS in double precision), a significant improvement over the 204.8 GFLOPS single-precision peak (25.6 GFLOPS double precision) that a 90 nm 3.2 GHz Cell could provide with 8 active SPUs. IBM further announced that it had implemented new power-saving features and a dual power supply for the SRAM array.

This version was not yet the long-rumoured "Cell+" with enhanced double-precision floating-point performance, which first saw the light of day in mid-2008 in the Roadrunner supercomputer in the form of QS22 PowerXCell blades. Although IBM had talked about and even shown higher-clocked Cells before, the clock speed has remained constant at 3.2 GHz, even for the double-precision-enabled "Cell+" of Roadrunner. By keeping the clock speed constant, IBM has instead opted to reduce power consumption. PowerXCell clusters even surpass IBM's Blue Gene clusters (371 MFLOPS/watt) in power efficiency, and those are already far more power-efficient than clusters built from conventional CPUs (265 MFLOPS/watt and lower).
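The single-precision peak figures quoted above follow directly from clock speed, SPE count, and per-cycle throughput. The sketch below reproduces them under the commonly cited assumption of 8 single-precision FLOPs per SPE per cycle (a 4-wide fused multiply-add), counting only the SPEs and not the PPE's own vector unit.

```c
#include <stdio.h>

/* Peak single-precision throughput of the SPE array:
 * GFLOPS = clock (GHz) x active SPEs x FLOPs per SPE per cycle.
 * Assumes 8 FLOPs/cycle per SPE (4-wide single-precision FMA).   */
static double spe_peak_gflops(double ghz, int spes)
{
    return ghz * spes * 8.0;
}

int main(void)
{
    printf("3.2 GHz, 8 SPEs: %.1f GFLOPS\n", spe_peak_gflops(3.2, 8)); /* 204.8 */
    printf("6.0 GHz, 8 SPEs: %.1f GFLOPS\n", spe_peak_gflops(6.0, 8)); /* 384.0 */
    return 0;
}
```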

Future editions in CMOS


Prospects at 45 nm


At ISSCC 2008, IBM announced Cell at the 45 nm node. IBM said it would consume 40 percent less power than its 65 nm predecessor at the same clock speed, and that the die area would shrink by 34 percent. The 45 nm Cell requires less cooling and allows for cheaper production, in part through the use of a much smaller heatsink. Mass production was initially slated to begin in late 2008 but was moved to early 2009.

Prospects beyond 45 nm


In January 2006, Sony, IBM and Toshiba announced that they would begin work on a Cell as small as 32 nm, but since process shrinks in fabs usually happen at a fab-wide rather than an individual-chip scale, this was merely a public commitment to take Cell to 32 nm.

References

  1. ^ Kevin J. Barker, Kei Davis, Adolfy Hoisie, Darren J. Kerbyson, Mike Lang, Scott Pakin, Jose C. Sancho. "Entering the Petaflop Era: The Architecture and Performance of Roadrunner".