
Very-large-scale integration


Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining millions or billions of metal–oxide–semiconductor (MOS) transistors onto a single chip. VLSI began in the 1970s when MOS integrated circuit chips were developed and then widely adopted, enabling complex semiconductor and telecommunications technologies. Microprocessors and memory chips are VLSI devices.

Before the introduction of VLSI technology, most ICs had a limited set of functions they could perform. An electronic circuit might consist of a CPU, ROM, RAM and other glue logic. VLSI allows IC designers to combine all of these into one chip.

A VLSI integrated-circuit die

History


Background


The history of the transistor dates to the 1920s when several inventors attempted devices that were intended to control current in solid-state diodes and convert them into triodes. Success came after World War II, when the use of silicon and germanium crystals as radar detectors led to improvements in fabrication and theory. Scientists who had worked on radar returned to solid-state device development. With the invention of the first transistor at Bell Labs in 1947, the field of electronics shifted from vacuum tubes to solid-state devices.[1]

With the small transistor at hand, electrical engineers of the 1950s saw the possibilities of constructing far more advanced circuits. However, as the complexity of circuits grew, problems arose.[2] One problem was the size of the circuit. A complex circuit like a computer depended on speed. If the components were large, the wires interconnecting them had to be long, and the electric signals took time to travel through the circuit, slowing the computer.[2]

The invention of the integrated circuit by Jack Kilby and Robert Noyce solved this problem by making all the components and the chip out of the same block (monolith) of semiconductor material.[3] The circuits could be made smaller, and the manufacturing process could be automated. This led to the idea of integrating all components on a single-crystal silicon wafer, which led to small-scale integration (SSI) in the early 1960s, and then medium-scale integration (MSI) in the late 1960s.[4]

VLSI


General Microelectronics introduced the first commercial MOS integrated circuit in 1964.[5] In the early 1970s, MOS integrated circuit technology allowed the integration of more than 10,000 transistors in a single chip.[6] This paved the way for VLSI in the 1970s and 1980s, with tens of thousands of MOS transistors on a single chip (later hundreds of thousands, then millions, and now billions).

The first semiconductor chips held two transistors each. Subsequent advances added more transistors, and as a consequence, more individual functions or systems were integrated over time. The first integrated circuits held only a few devices, perhaps as many as ten diodes, transistors, resistors and capacitors, making it possible to fabricate one or more logic gates on a single device. Now known retrospectively as small-scale integration (SSI), improvements in technique led to devices with hundreds of logic gates, known as medium-scale integration (MSI). Further improvements led to large-scale integration (LSI), i.e. systems with at least a thousand logic gates. Current technology has moved far past this mark and today's microprocessors have many millions of gates and billions of individual transistors.

At one time, there was an effort to name and calibrate various levels of large-scale integration above VLSI. Terms like ultra-large-scale integration (ULSI) were used. But the huge number of gates and transistors available on common devices has rendered such fine distinctions moot. Terms suggesting greater than VLSI levels of integration are no longer in widespread use.

In 2008, billion-transistor processors became commercially available. This became more commonplace as semiconductor fabrication advanced from the then-current generation of 65 nm processors. Current designs, unlike the earliest devices, use extensive design automation and automated logic synthesis to lay out the transistors, enabling higher levels of complexity in the resulting logic functionality. Certain high-performance logic blocks, like the SRAM (static random-access memory) cell, are still designed by hand to ensure the highest efficiency.[citation needed]

Structured design


Structured VLSI design is a modular methodology originated by Carver Mead and Lynn Conway for saving microchip area by minimizing the interconnect fabric area. This is obtained by the repetitive arrangement of rectangular macro blocks which can be interconnected using wiring by abutment. An example is partitioning the layout of an adder into a row of identical bit-slice cells. In complex designs this structuring may be achieved by hierarchical nesting.[7]
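The bit-slice idea can be sketched in software. The following Python toy model (purely illustrative, not tied to any particular hardware description language or design flow) builds an n-bit ripple-carry adder by repeating one identical full-adder cell, with the carry chain standing in for wiring by abutment:

```python
# Toy model of the bit-slice idea behind structured VLSI design: an n-bit
# adder is built by repeating one identical full-adder "cell", and the
# carry chain between neighbouring cells stands in for wiring by abutment.

def full_adder_cell(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """One reusable macro block: a single-bit full adder."""
    total = a + b + carry_in
    return total & 1, total >> 1          # (sum bit, carry out)

def ripple_carry_adder(a_bits: list[int], b_bits: list[int]) -> list[int]:
    """Compose identical slices; slice i's carry-out feeds slice i+1's carry-in."""
    carry = 0
    sum_bits = []
    for a, b in zip(a_bits, b_bits):      # operands given least-significant bit first
        s, carry = full_adder_cell(a, b, carry)
        sum_bits.append(s)
    return sum_bits + [carry]             # final carry becomes the top bit

# Example: 6 + 7 = 13 with 4-bit operands (LSB first)
print(ripple_carry_adder([0, 1, 1, 0], [1, 1, 1, 0]))   # -> [1, 0, 1, 1, 0]
```

In an actual layout, each slice would be a fixed-size rectangle so that the carry terminals of neighbouring cells line up when the slices are placed side by side, removing the need for extra routing between them.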

Structured VLSI design was popular in the early 1980s, but later lost its popularity[citation needed] with the advent of placement and routing tools, which waste a great deal of area on routing; the waste is tolerated because of the progress of Moore's law. When introducing the hardware description language KARL in the mid-1970s, Reiner Hartenstein coined the term "structured VLSI design" (originally "structured LSI design"), echoing Edsger Dijkstra's structured programming approach, which uses procedure nesting to avoid chaotic, spaghetti-structured programs.

Difficulties


As microprocessors become more complex due to technology scaling, microprocessor designers have encountered several challenges which force them to think beyond the design plane, and look ahead to post-silicon:

  • Process variation – As photolithography techniques get closer to the fundamental laws of optics, achieving high accuracy in doping concentrations and etched wires is becoming more difficult and more prone to variation-induced errors. Designers must now simulate across multiple fabrication process corners before a chip is certified ready for production, or use system-level techniques for dealing with the effects of variation (a corner-sweep sketch follows this list).[8][9]
  • Stricter design rules – Due to lithography and etch issues with scaling, design rule checking for layout has become increasingly stringent. Designers must keep in mind an ever-increasing list of rules when laying out custom circuits. The overhead of custom design is now reaching a tipping point, with many design houses opting to switch to electronic design automation (EDA) tools to automate their design process.[10]
  • Timing/design closure – As clock frequencies scale up, designers find it more difficult to distribute and maintain low clock skew between these high-frequency clocks across the entire chip. This has led to a rising interest in multicore and multiprocessor architectures, since an overall speedup can be obtained even at a lower clock frequency by using the computational power of all the cores (see the speedup arithmetic after this list).[11]
  • First-pass success – As die sizes shrink (due to scaling) and wafer sizes go up (due to lower manufacturing costs), the number of dies per wafer increases, and the complexity of making suitable photomasks goes up rapidly. A mask set for a modern technology can cost several million dollars. This non-recurring expense discourages the old iterative philosophy of several "spin-cycles" to find errors in silicon, and encourages first-pass silicon success (a dies-per-wafer estimate follows this list). Several design philosophies have been developed to aid this new design flow, including design for manufacturing (DFM), design for test (DFT), and design for X.[12]
  • Electromigration
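To make the process-variation point concrete, here is a minimal Python sketch of a corner sweep, assuming a simple multiplicative delay model and made-up corner factors; real sign-off relies on characterized cell libraries and static timing analysis tools rather than anything this crude:

```python
# Illustrative corner-based timing check: evaluate one logic path's delay under
# slow/typical/fast process corners combined with voltage and temperature
# extremes, and compare every combination against the clock-period budget.
# The multiplicative delay model and all numbers are made up for illustration.

from itertools import product

BASE_GATE_DELAY_PS = 12.0                                # assumed nominal gate delay
PROCESS = {"SS": 1.30, "TT": 1.00, "FF": 0.80}           # slow / typical / fast silicon
VOLTAGE = {"0.90V": 1.15, "1.00V": 1.00, "1.10V": 0.90}
TEMPERATURE = {"-40C": 0.95, "25C": 1.00, "125C": 1.10}

def path_delay_ps(n_gates: int, p: float, v: float, t: float) -> float:
    """First-order model: path delay scales multiplicatively with each factor."""
    return n_gates * BASE_GATE_DELAY_PS * p * v * t

CLOCK_PERIOD_PS = 450.0
LOGIC_DEPTH = 25                                         # gates on the critical path

for (pn, p), (vn, v), (tn, t) in product(PROCESS.items(), VOLTAGE.items(), TEMPERATURE.items()):
    delay = path_delay_ps(LOGIC_DEPTH, p, v, t)
    status = "OK  " if delay <= CLOCK_PERIOD_PS else "FAIL"
    print(f"{status} {pn}/{vn}/{tn}: {delay:6.1f} ps vs {CLOCK_PERIOD_PS} ps budget")
```

A design is only certified when every corner meets the budget; a failing corner means either redesigning the path or accepting reduced yield.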
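For the timing-closure point, a short Amdahl's-law calculation (illustrative numbers only) shows how several cores at a lower clock can still outrun one faster core, provided enough of the workload is parallel:

```python
# Illustrative arithmetic: several slower cores can beat one faster core.
# Amdahl's law with a clock-frequency scale factor; all numbers are made up.

def relative_speedup(parallel_fraction: float, cores: int, freq_ratio: float) -> float:
    """Throughput of `cores` cores running at `freq_ratio` x the baseline clock,
    relative to a single core at the baseline clock."""
    serial = 1.0 - parallel_fraction
    return freq_ratio / (serial + parallel_fraction / cores)

print(relative_speedup(0.9, 1, 1.0))   # baseline single core: 1.0
print(relative_speedup(0.9, 4, 0.7))   # four cores at 70% clock: about 2.15x
```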
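Finally, the economics behind first-pass success can be quantified with a widely used first-order dies-per-wafer approximation (edge losses modelled crudely; the die and wafer sizes below are only examples):

```python
# First-order dies-per-wafer estimate: gross wafer area divided by die area,
# minus a correction for partial dies lost along the circular wafer edge.
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Shrinking the die and enlarging the wafer both raise the count per wafer,
# which magnifies the payoff of getting the photomask set right the first time.
print(dies_per_wafer(200, 150))   # ~173 dies: 200 mm wafer, 150 mm^2 die
print(dies_per_wafer(300, 80))    # ~809 dies: 300 mm wafer,  80 mm^2 die
```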


References

  1. ^ Zorpette, Glenn (20 November 2022). "How the First Transistor Worked". IEEE Spectrum.
  2. ^ a b "The History of the Integrated Circuit". Nobelprize.org. Archived from the original on 29 June 2018. Retrieved 21 April 2012.
  3. ^ "BBC - History - Historic Figures: Kilby and Noyce (1923-2005)". www.bbc.co.uk. Retrieved 10 August 2024.
  4. ^ O’Regan, Gerard (2016), O'Regan, Gerard (ed.), "The Invention of the Integrated Circuit and the Birth of Silicon Valley", Introduction to the History of Computing: A Computing History Primer, Undergraduate Topics in Computer Science, Cham: Springer International Publishing, pp. 93–100, doi:10.1007/978-3-319-33138-6_7, ISBN 978-3-319-33138-6, retrieved 10 August 2024
  5. ^ "1964: First Commercial MOS IC Introduced". Computer History Museum.
  6. ^ Hittinger, William C. (1973). "Metal-Oxide-Semiconductor Technology". Scientific American. 229 (2): 48–59. Bibcode:1973SciAm.229b..48H. doi:10.1038/scientificamerican0873-48. ISSN 0036-8733. JSTOR 24923169.
  7. ^ Jain, B. K. (August 2009). Digital Electronics - A Modern Approach by B K Jain. Global Vision Publishing House. ISBN 9788182202153. Retrieved 2 May 2017.
  8. ^ Wu, Qiang; Li, Yanli; Yang, Yushu; Chen, Shoumian; Zhao, Yuhang (26 June 2020). "The Law that Guides the Development of Photolithography Technology and the Methodology in the Design of Photolithographic Process". 2020 China Semiconductor Technology International Conference (CSTIC). IEEE. pp. 1–6. doi:10.1109/CSTIC49141.2020.9282436. ISBN 978-1-7281-6558-5.
  9. ^ "Exploring the Challenges of VLSI Design: Navigating Complexity for Success". InSemi Tech. Retrieved 10 August 2024.
  10. ^ Wang, Laung-Terng; Chang, Yao-Wen; Cheng, Kwang-Ting (Tim) (February 2009). Electronic Design Automation: Synthesis, Verification, and Test. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc. ISBN 978-0-08-092200-3.
  11. ^ "Clock Skew in STA". 23 June 2024. Retrieved 10 August 2024.
  12. ^ Rieger, Michael L. (26 November 2019). "Retrospective on VLSI value scaling and lithography". Journal of Micro/Nanolithography, MEMS, and MOEMS. 18 (4): 040902. Bibcode:2019JMM&M..18d0902R. doi:10.1117/1.JMM.18.4.040902. ISSN 1932-5150.
