NASA Advanced Supercomputing Division
Agency overview | |
---|---|
Formed | 1982 |
Preceding agencies | Numerical Aerodynamic Simulation (NAS) Division; Numerical Aerospace Simulation (NAS) Division |
Headquarters | NASA Ames Research Center, Moffett Field, California (37°25′16″N 122°03′53″W) |
Parent department | Ames Research Center Exploration Technology Directorate |
Parent agency | National Aeronautics and Space Administration (NASA) |
Current Supercomputing Systems | |
Pleiades | SGI/HPE ICE X supercluster |
Aitken[1] | HPE E-Cell system |
Electra[2] | SGI/HPE ICE X & HPE E-Cell system |
Endeavour | SGI UV shared-memory system |
Merope[3] | SGI Altix supercluster |
The NASA Advanced Supercomputing (NAS) Division is located at NASA Ames Research Center, Moffett Field, in the heart of Silicon Valley in Mountain View, California. For almost forty years it has been the major supercomputing, modeling, and simulation resource for NASA missions in aerodynamics, space exploration, studies of weather patterns and ocean currents, and space shuttle and aircraft design and development.
The facility currently houses the petascale Pleiades, Aitken, and Electra supercomputers, as well as the terascale Endeavour supercomputer. The systems are based on SGI and HPE architecture with Intel processors. The main building also houses disk and archival tape storage systems with a capacity of over an exabyte of data, the hyperwall visualization system, and one of the largest InfiniBand network fabrics in the world.[4] The NAS Division is part of NASA's Exploration Technology Directorate and operates NASA's High-End Computing Capability (HECC) Project.[5]
History
Founding
In the mid-1970s, a group of aerospace engineers at Ames Research Center began to look into transferring aerospace research and development from costly and time-consuming wind tunnel testing to simulation-based design and engineering using computational fluid dynamics (CFD) models on supercomputers more powerful than those commercially available at the time. This endeavor was later named the Numerical Aerodynamic Simulator (NAS) Project and the first computer was installed at the Central Computing Facility at Ames Research Center in 1984.
Ground was broken on a state-of-the-art supercomputing facility on March 14, 1985, to construct a building where CFD experts, computer scientists, visualization specialists, and network and storage engineers could work under one roof in a collaborative environment. In 1986, NAS transitioned into a full-fledged NASA division and in 1987, NAS staff and equipment, including a second supercomputer, a Cray-2 named Navier, were relocated to the new facility, which was dedicated on March 9, 1987.[6]
In 1995, NAS changed its name to the Numerical Aerospace Simulation Division, and in 2001 to the name it has today.
Industry-leading innovations
NAS has been one of the leading innovators in the supercomputing world, developing many tools and processes that became widely used in commercial supercomputing. Some of these firsts include:[7]
- Installed Cray's first UNIX-based supercomputer[8]
- Implemented a client/server model linking the supercomputers and workstations together to distribute computation and visualization
- Developed and implemented a high-speed wide area network (WAN) connecting supercomputing resources to remote users (AEROnet)
- Co-developed NASA's first method for dynamic distribution of production loads across supercomputing resources in geographically distant locations (NASA Metacenter)
- Implemented TCP/IP networking in a supercomputing environment
- Developed a batch-queuing system for supercomputers (NQS; a minimal sketch of the batch-queuing idea appears after this list)
- Developed a UNIX-based hierarchical mass storage system (NAStore)
- Co-developed (with SGI) the first IRIX single-system image 256-, 512-, and 1,024-processor supercomputers
- Co-developed (with SGI) the first Linux-based single-system image 512- and 1,024-processor supercomputers
- Deployed a 2,048-processor shared-memory environment
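To illustrate the batch-queuing idea behind NQS (and, later, PBS), the sketch below queues submitted jobs and dispatches them to a small pool of worker "nodes." It is a minimal Python analogy under stated assumptions, not NASA's implementation; the submit and compute_node helpers are hypothetical names introduced here.

```python
# Minimal, illustrative sketch of the batch-queuing concept behind systems
# such as NQS and PBS -- not NASA's actual code. Jobs enter a FIFO queue and
# are dispatched to a fixed pool of "compute node" workers.
import queue
import threading
import time

job_queue = queue.Queue()

def submit(job_id, work_fn):
    """Enqueue a job; in a real batch system this is the submission step."""
    job_queue.put((job_id, work_fn))
    print(f"queued job {job_id}")

def compute_node(node_name):
    """Worker loop: take the next queued job and run it to completion."""
    while True:
        job_id, work_fn = job_queue.get()
        if job_id is None:              # sentinel: shut the node down
            job_queue.task_done()
            return
        print(f"{node_name} running job {job_id}")
        work_fn()
        print(f"{node_name} finished job {job_id}")
        job_queue.task_done()

if __name__ == "__main__":
    nodes = [threading.Thread(target=compute_node, args=(f"node{i}",), daemon=True)
             for i in range(2)]
    for n in nodes:
        n.start()
    for j in range(4):
        submit(j, lambda: time.sleep(0.1))  # stand-in for real work
    job_queue.join()                        # wait until all jobs complete
    for _ in nodes:
        job_queue.put((None, None))         # stop the workers
```

A production batch system layers scheduling policy, resource requests, priorities, and accounting on top of this basic queue-and-dispatch loop.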
Software development
NAS develops and adapts software in order to "complement and enhance the work performed on its supercomputers, including software for systems support, monitoring systems, security, and scientific visualization," and often provides this software to its users through the NASA Open Source Agreement (NOSA).[9]
A few of the important software developments from NAS include:
- NAS Parallel Benchmarks (NPB) were developed to evaluate highly parallel supercomputers and mimic the characteristics of large-scale CFD applications.
- Portable Batch System (PBS) was the first batch queuing software for parallel and distributed systems. It was released commercially in 1998 and is still widely used in the industry.
- PLOT3D was created in 1982 and is a computer graphics program still used today to visualize the grids and solutions of structured CFD datasets. The PLOT3D team was awarded the fourth largest prize ever given by the NASA Space Act Program for the development of their software, which revolutionized scientific visualization and analysis of 3D CFD solutions.[6]
- FAST (Flow Analysis Software Toolkit) is a software environment based on PLOT3D and used to analyze data from numerical simulations which, though tailored to CFD visualization, can be used to visualize almost any scalar and vector data. It was awarded the NASA Software of the Year Award in 1995.[10]
- INS2D and INS3D are codes developed by NAS engineers to solve incompressible Navier-Stokes equations in two- and three-dimensional generalized coordinates, respectively, for steady-state and time-varying flow (the standard form of these equations appears after this list). In 1994, INS3D won the NASA Software of the Year Award.[6]
- Cart3D is a high-fidelity analysis package for aerodynamic design which allows users to perform automated CFD simulations on complex forms. It is still used at NASA and other government agencies to test conceptual and preliminary air- and spacecraft designs.[11] The Cart3D team won the NASA Software of the Year award in 2002.
- OVERFLOW (Overset grid flow solver) is a software package developed to simulate fluid flow around solid bodies using the Reynolds-averaged Navier-Stokes (RANS) CFD equations. It was the first general-purpose NASA CFD code for overset (Chimera) grid systems and was released outside of NASA in 1992.
- Chimera Grid Tools (CGT) is a software package containing a variety of tools for the Chimera overset grid approach to solving CFD problems, including surface and volume grid generation as well as grid manipulation, smoothing, and projection.
- HiMAP is a three-level parallel (intra-discipline, inter-discipline, and multi-case) high-fidelity multidisciplinary analysis process for coupled fluids, structures, and controls simulations.[12][13]
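For context on the flow solvers listed above (INS2D/INS3D and OVERFLOW), the incompressible Navier-Stokes equations can be written in a standard textbook form as follows; this is the general system those codes discretize, not NAS's specific generalized-coordinate formulation.

```latex
% Incompressible Navier-Stokes equations (momentum balance and continuity).
% u: velocity field, p: pressure, rho: constant density,
% nu: kinematic viscosity, f: body force per unit mass.
\begin{align}
  \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
    &= -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f}, \\
  \nabla\cdot\mathbf{u} &= 0 .
\end{align}
```

OVERFLOW works with the Reynolds-averaged form of these equations, in which the flow variables are split into mean and fluctuating parts and the resulting Reynolds stresses are closed with a turbulence model.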
Supercomputing history
Since its construction in 1987, the NASA Advanced Supercomputing Facility has housed and operated some of the most powerful supercomputers in the world. Many of these computers were testbed systems built to test new architectures, hardware, or networking setups that might later be utilized on a larger scale.[6][8] Peak performance is given in floating-point operations per second (FLOPS).
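The peak figures in the table come from the cited sources; as a rough illustration of how theoretical peaks of this kind are commonly estimated (cores × clock rate × floating-point operations per core per cycle), consider the sketch below. The numbers used are invented for the example and do not correspond to any system in the table.

```python
def theoretical_peak_flops(cores, clock_hz, flops_per_cycle):
    """Back-of-the-envelope estimate of peak performance:
    cores x clock rate x floating-point operations per core per cycle."""
    return cores * clock_hz * flops_per_cycle

# Hypothetical example (illustrative numbers only):
# 1,000 cores at 2.5 GHz with 8 double-precision FLOPs per cycle per core.
peak = theoretical_peak_flops(cores=1_000, clock_hz=2.5e9, flops_per_cycle=8)
print(f"{peak / 1e12:.1f} teraflops")   # -> 20.0 teraflops
```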
Computer Name | Architecture | Peak Performance | Number of CPUs | Installation Date |
---|---|---|---|---|
| Cray XMP-12 | 210.53 megaflops | 1 | 1984
Navier | Cray 2 | 1.95 gigaflops | 4 | 1985 |
Chuck | Convex 3820 | 1.9 gigaflops | 8 | 1987 |
Pierre | Thinking Machines CM2 | 14.34 gigaflops | 16,000 | 1987 |
| | 43 gigaflops | 48,000 | 1991
Stokes | Cray 2 | 1.95 gigaflops | 4 | 1988 |
Piper | CDC/ETA-10Q | 840 megaflops | 4 | 1988 |
Reynolds | Cray Y-MP | 2.54 gigaflops | 8 | 1988 |
| | 2.67 gigaflops | 88 | 1988
Lagrange | Intel iPSC/860 | 7.88 gigaflops | 128 | 1990 |
Gamma | Intel iPSC/860 | 7.68 gigaflops | 128 | 1990 |
von Karman | Convex 3240 | 200 megaflops | 4 | 1991 |
Boltzmann | Thinking Machines CM5 | 16.38 gigaflops | 128 | 1993 |
Sigma | Intel Paragon | 15.60 gigaflops | 208 | 1993 |
von Neumann | Cray C90 | 15.36 gigaflops | 16 | 1993 |
Eagle | Cray C90 | 7.68 gigaflops | 8 | 1993 |
Grace | Intel Paragon | 15.6 gigaflops | 209 | 1993 |
Babbage | IBM SP-2 | 34.05 gigaflops | 128 | 1994 |
| | 42.56 gigaflops | 160 | 1994
da Vinci | SGI Power Challenge | | 16 | 1994
| SGI Power Challenge XL | 11.52 gigaflops | 32 | 1995
Newton | Cray J90 | 7.2 gigaflops | 36 | 1996 |
Piglet | SGI Origin 2000/250 MHz | 4 gigaflops | 8 | 1997 |
Turing | SGI Origin 2000/195 MHz | 9.36 gigaflops | 24 | 1997 |
| | 25 gigaflops | 64 | 1997
Fermi | SGI Origin 2000/195 MHz | 3.12 gigaflops | 8 | 1997 |
Hopper | SGI Origin 2000/250 MHz | 32 gigaflops | 64 | 1997 |
Evelyn | SGI Origin 2000/250 MHz | 4 gigaflops | 8 | 1997 |
Steger | SGI Origin 2000/250 MHz | 64 gigaflops | 128 | 1997 |
| | 128 gigaflops | 256 | 1998
Lomax | SGI Origin 2800/300 MHz | 307.2 gigaflops | 512 | 1999 |
| | 409.6 gigaflops | 512 | 2000
Lou | SGI Origin 2000/250 MHz | 4.68 gigaflops | 12 | 1999 |
Ariel | SGI Origin 2000/250 MHz | 4 gigaflops | 8 | 2000 |
Sebastian | SGI Origin 2000/250 MHz | 4 gigaflops | 8 | 2000 |
SN1-512 | SGI Origin 3000/400 MHz | 409.6 gigaflops | 512 | 2001 |
Bright | Cray SVe1/500 MHz | 64 gigaflops | 32 | 2001 |
Chapman | SGI Origin 3800/400 MHz | 819.2 gigaflops | 1,024 | 2001 |
| | 1.23 teraflops | 1,024 | 2002
Lomax II | SGI Origin 3800/400 MHz | 409.6 gigaflops | 512 | 2002 |
Kalpana[14] | SGI Altix 3000[15] | 2.66 teraflops | 512 | 2003 |
| Cray X1[16] | 204.8 gigaflops | | 2004
Columbia | SGI Altix 3000[17] | 63 teraflops | 10,240 | 2004 |
| SGI Altix 4700 | | 10,296 | 2006
| | 85.8 teraflops[18] | 13,824 | 2007
Schirra | IBM POWER5+[19] | 4.8 teraflops | 640 | 2007 |
RT Jones | SGI ICE 8200, Intel Xeon "Harpertown" Processors | 43.5 teraflops | 4,096 | 2007 |
Pleiades | SGI ICE 8200, Intel Xeon "Harpertown" Processors[20] | 487 teraflops | 51,200 | 2008 |
| | 544 teraflops[21] | 56,320 | 2009
| SGI ICE 8200, Intel Xeon "Harpertown"/"Nehalem" Processors[22] | 773 teraflops | 81,920 | 2010
| SGI ICE 8200/8400, Intel Xeon "Harpertown"/"Nehalem"/"Westmere" Processors[23] | 1.09 petaflops | 111,104 | 2011
| SGI ICE 8200/8400/X, Intel Xeon "Harpertown"/"Nehalem"/"Westmere"/"Sandy Bridge" Processors[24] | 1.24 petaflops | 125,980 | 2012
| SGI ICE 8200/8400/X, Intel Xeon "Nehalem"/"Westmere"/"Sandy Bridge"/"Ivy Bridge" Processors[25] | 2.87 petaflops | 162,496 | 2013
| | 3.59 petaflops | 184,800 | 2014
| SGI ICE 8400/X, Intel Xeon "Westmere"/"Sandy Bridge"/"Ivy Bridge"/"Haswell" Processors[26] | 4.49 petaflops | 198,432 | 2014
| | 5.35 petaflops[27] | 210,336 | 2015
| SGI ICE X, Intel Xeon "Sandy Bridge"/"Ivy Bridge"/"Haswell"/"Broadwell" Processors[28] | 7.25 petaflops | 246,048 | 2016
Endeavour | SGI UV 2000, Intel Xeon "Sandy Bridge" Processors[29] | 32 teraflops | 1,536 | 2013 |
Merope | SGI ICE 8200, Intel Xeon "Harpertown" Processors[25] | 61 teraflops | 5,120 | 2013 |
| SGI ICE 8400, Intel Xeon "Nehalem"/"Westmere" Processors[26] | 141 teraflops | 1,152 | 2014
Electra | SGI ICE X, Intel Xeon "Broadwell" Processors[30] | 1.9 petaflops | 1,152 | 2016 |
| SGI ICE X/HPE SGI 8600 E-Cell, Intel Xeon "Broadwell"/"Skylake" Processors[31] | 4.79 petaflops | 2,304 | 2017
| | 8.32 petaflops[32] | 3,456 | 2018
Aitken | HPE SGI 8600 E-Cell, Intel Xeon "Cascade Lake" Processors[33] | 3.69 petaflops | 1,150 | 2019 |
Storage resources
Disk storage
In 1987, NAS partnered with the Defense Advanced Research Projects Agency (DARPA) and the University of California, Berkeley in the Redundant Array of Inexpensive Disks (RAID) project, which sought to create a storage technology that combined multiple disk drive components into one logical unit. Completed in 1992, the RAID project led to the distributed data storage technology used today.[6]
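As a minimal sketch of the core RAID idea, assuming simple block striping with a single XOR parity block (and not drawn from the Berkeley/NAS project's code), the fragment below builds a parity block from several data blocks and then reconstructs a "failed" block from the survivors.

```python
# Illustrative sketch of striping with XOR parity: with N data blocks and one
# parity block, any single lost block can be rebuilt from the others.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

data_blocks = [b"AAAA", b"BBBB", b"CCCC"]   # blocks striped across three disks
parity = xor_blocks(data_blocks)            # stored on a fourth disk

# Simulate losing one disk and rebuilding its block from survivors + parity.
lost_index = 1
survivors = [blk for i, blk in enumerate(data_blocks) if i != lost_index]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == data_blocks[lost_index]
```

Production RAID implementations perform this in hardware or in the storage layer, rotate the parity block across drives (as in RAID 5), and use stronger codes to survive multiple failures.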
The NAS facility currently houses disk mass storage on an SGI parallel DMF cluster with high-availability software consisting of four 32-processor front-end systems, which are connected to the supercomputers and the archival tape storage system. The system has 192 GB of memory per front-end[34] and 7.6 petabytes (PB) of disk cache.[4] Data stored on disk is regularly migrated to the tape archival storage systems at the facility to free up space for other user projects being run on the supercomputers.
Archive and storage systems
In 1987, NAS developed the first UNIX-based hierarchical mass storage system, named NAStore. It contained two StorageTek 4400 cartridge tape robots, each with a storage capacity of approximately 1.1 terabytes, cutting tape retrieval time from 4 minutes to 15 seconds.[6]
With the installation of the Pleiades supercomputer in 2008, the StorageTek systems that NAS had been using for 20 years were unable to meet the needs of the greater number of users and increasing file sizes of each project's datasets.[35] In 2009, NAS brought in Spectra Logic T950 robotic tape systems which increased the maximum capacity at the facility to 16 petabytes of space available for users to archive their data from the supercomputers.[36] As of March 2019, the NAS facility increased the total archival storage capacity of the Spectra Logic tape libraries to 1,048 petabytes (or 1 exabyte) with 35% compression.[34] SGI's Data Migration Facility (DMF) and OpenVault manage disk-to-tape data migration and tape-to-disk de-migration for the NAS facility.
As of March 2019, more than 110 petabytes of unique data are stored in the NAS archival storage system.[34]
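The disk-to-tape migration that DMF and OpenVault handle follows the general hierarchical-storage pattern sketched below: files left untouched beyond a policy threshold are moved to the tape tier to free disk for active projects. The 90-day threshold and the migrate_to_tape helper are illustrative assumptions, not NAS's actual policy or tooling.

```python
# Minimal sketch of the hierarchical-storage idea behind tools like DMF:
# cold files on disk are flagged for migration to the tape tier.
import os
import time

AGE_THRESHOLD_SECONDS = 90 * 24 * 3600   # assumed policy: untouched for 90 days

def migrate_to_tape(path):
    """Placeholder for the real archive step (copy to tape, leave a stub)."""
    print(f"would migrate {path} to the tape archive")

def sweep(directory):
    """Walk a directory tree and flag cold files for migration."""
    now = time.time()
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            if now - os.path.getatime(path) > AGE_THRESHOLD_SECONDS:
                migrate_to_tape(path)

if __name__ == "__main__":
    sweep("/tmp")   # example target directory
```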
Data visualization systems
In 1984, NAS purchased 25 SGI IRIS 1000 graphics terminals, the beginning of their long partnership with the Silicon Valley–based company, which made a significant impact on post-processing and visualization of CFD results run on the supercomputers at the facility.[6] Visualization became a key process in the analysis of simulation data run on the supercomputers, allowing engineers and scientists to view their results spatially and in ways that allowed for a greater understanding of the CFD forces at work in their designs.
The hyperwall
In 2002, NAS visualization experts developed a visualization system called the "hyperwall" which included 49 linked LCD panels that allowed scientists to view complex datasets on a large, dynamic seven-by-seven screen array. Each screen had its own processing power, allowing each one to display, process, and share datasets so that a single image could be displayed across all screens or configured so that data could be displayed in "cells" like a giant visual spreadsheet.[37]
The second-generation "hyperwall-2" was developed in 2008 by NAS in partnership with Colfax International and is made up of 128 LCD screens arranged in an 8x16 grid 23 feet wide by 10 feet tall. It is capable of rendering a quarter of a billion pixels, making it at the time the highest-resolution scientific visualization system in the world.[38] It contains 128 nodes, each with two quad-core AMD Opteron (Barcelona) processors and an Nvidia GeForce GTX 480 graphics processing unit (GPU), for a dedicated peak processing power of 128 teraflops across the entire system, 100 times more powerful than the original hyperwall.[39] The hyperwall-2 is directly connected to the Pleiades supercomputer's filesystem over an InfiniBand network, which allows the system to read data directly from the filesystem without needing to copy files onto the hyperwall-2's memory.
In 2014, the hyperwall was upgraded with new hardware: 256 Intel Xeon "Ivy Bridge" processors and 128 NVIDIA GeForce GTX 780 Ti GPUs. The upgrade increased the system's peak processing power from 9 teraflops to 57 teraflops and gave it nearly 400 gigabytes of graphics memory.[40]
In 2020, the hyperwall was further upgraded with new hardware: 256 Intel Xeon Platinum 8268 (Cascade Lake) processors and 128 NVIDIA Quadro RTX 6000 GPUs with a total of 3.1 terabytes of graphics memory. The upgrade increased the system's peak processing power from 57 teraflops to 512 teraflops.[41]
Concurrent visualization
An important feature of the hyperwall technology developed at NAS is that it allows for "concurrent visualization" of data, which enables scientists and engineers to analyze and interpret data while the calculations are running on the supercomputers. Not only does this show the current state of the calculation for runtime monitoring, steering, and termination, but it also "allows higher temporal resolution visualization compared to post-processing because I/O and storage space requirements are largely obviated... [and] may show features in a simulation that would otherwise not be visible."[42]
The NAS visualization team developed a configurable concurrent pipeline for use with a massively parallel forecast model run on the Columbia supercomputer in 2005 to help predict the Atlantic hurricane season for the National Hurricane Center. Because each forecast had to be submitted by a deadline, it was important that the visualization process not significantly impede the simulation or cause it to fail.
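Conceptually, such a concurrent pipeline is a producer/consumer arrangement: the solver hands off each completed timestep for rendering while it continues computing the next one, rather than writing everything to disk for post-processing. The sketch below is a minimal single-machine Python analogy, not the NAS pipeline; the simulate and visualize functions are hypothetical stand-ins.

```python
# Minimal sketch of concurrent visualization: render each timestep while the
# simulation is still producing the next one, instead of post-processing.
import queue
import threading

timesteps = queue.Queue(maxsize=4)   # small buffer between solver and renderer

def simulate(n_steps):
    for step in range(n_steps):
        field = [step * 0.1] * 8          # stand-in for a computed flow field
        timesteps.put((step, field))      # hand the timestep to the visualizer
    timesteps.put(None)                   # signal that the run is complete

def visualize():
    while True:
        item = timesteps.get()
        if item is None:
            break
        step, field = item
        print(f"rendering timestep {step}: max value {max(field):.1f}")

if __name__ == "__main__":
    viz = threading.Thread(target=visualize)
    viz.start()
    simulate(10)
    viz.join()
```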
References
[edit]- ^ "Aitken Supercomputer homepage". NAS.
- ^ "Electra Supercomputer homepage". NAS.
- ^ "Merope Supercomputer homepage". NAS.
- ^ a b "NASA Advanced Supercomputing Division: Advanced Computing" (PDF). NAS. 2019.
- ^ "NAS Homepage - About the NAS Division". NAS.
- ^ a b c d e f g "NASA Advanced Supercomputing Division 25th Anniversary Brochure" (PDF). NAS. Archived from the original (PDF) on 2013-03-02.
- ^ "NAS homepage: Division History". NAS.
- ^ a b "NAS High-Performance Computer History". Gridpoints: 1A–12A. Spring 2002.
- ^ "NAS Software and Datasets". NAS.
- ^ "NASA Flow Analysis Software Toolkit". NASA.
- ^ "NASA Cart3D Homepage". Archived from the original on 2002-06-02.
- ^ "NASA.gov". Archived from the original on 2023-01-17. Retrieved 2024-05-21.
- ^ "NASA.gov" (PDF).
- ^ "NASA to Name Supercomputer After Columbia Astronaut". NAS. May 2005. Archived from the original on 2013-03-17. Retrieved 2014-03-07.
- ^ "NASA Ames Installs World's First Alitx 512-Processor Supercomputer". NAS. November 2003. Archived from the original on 2013-03-17. Retrieved 2014-03-07.
- ^ "New Cray X1 System Arrives at NAS". NAS. April 2004.
- ^ "NASA Unveils Its Newest, Most Powerful Supercomputer". NASA. October 2004. Archived from the original on 2004-10-28. Retrieved 2014-03-07.
- ^ "Columbia Supercomputer Legacy homepage". NASA.
- ^ "NASA Selects IBM for Next-Generation Supercomputing Applications". NASA. June 2007.
- ^ "NASA Supercomputer Ranks Among World's Fastest – November 2008". NASA. November 2008. Archived from the original on 2019-08-25. Retrieved 2014-03-07.
- ^ "'Live' Integration of Pleiades Rack Saves 2 Million Hours". NAS. February 2010. Archived from the original on 2013-03-16. Retrieved 2014-03-07.
- ^ "NASA Supercomputer Doubles Capacity, Increases Efficiency". NASA. June 2010. Archived from the original on 2019-08-25. Retrieved 2014-03-07.
- ^ "NASA's Pleiades Supercomputer Ranks Among World's Fastest". NASA. June 2011. Archived from the original on 2011-10-21. Retrieved 2014-03-07.
- ^ "Pleiades Supercomputer Gets a Little More Oomph". NASA. June 2012.
- ^ a b "NASA's Pleiades Supercomputer Upgraded, Harpertown Nodes Repurposed". NAS. August 2013. Archived from the original on 2019-08-25. Retrieved 2014-03-07.
- ^ a b "NASA's Pleiades Supercomputer Upgraded, Gets One Petaflops Boost". NAS. October 2014. Archived from the original on 2019-08-25. Retrieved 2014-12-29.
- ^ "Pleiades Supercomputer Performance Leaps to 5.35 Petaflops with Latest Expansion". NAS. January 2015.
- ^ "Pleiades Supercomputer Peak Performance Increased, Long-Term Storage Capacity Tripled". NAS. July 2016. Archived from the original on 2019-06-19. Retrieved 2020-03-05.
- ^ "Endeavour Supercomputer Resource homepage". NAS.
- ^ "NASA Ames Kicks off Pathfinding Modular Supercomputing Facility". NAS. February 2017.
- ^ "Recently Expanded, NASA's First Modular Supercomputer Ranks 15th in the U.S. on TOP500 List". NAS. November 2017.
- ^ "NASA's Electra Supercomputer Rises to 12th Place in the U.S. on the TOP500 List". NAS. November 2018.
- ^ "NASA Advanced Supercomputing Division: Modular Supercomputing" (PDF). NAS. 2019.
- ^ a b c "HECC Archival Storage System Resource homepage". NAS.
- ^ "NAS Silo, Tape Drive, and Storage Upgrades - SC09" (PDF). NAS. November 2009.
- ^ "New NAS Data Archive System Installation Completed". NAS. 2009.
- ^ "Mars Flyer Debuts on Hyperwall". NAS. September 2003.
- ^ "NASA Develops World's Highest Resolution Visualization System". NAS. June 2008.
- ^ "NAS Visualization Systems Overview". NAS.
- ^ "NAS hyperwall Visualization System Upgraded with Ivy Bridge Nodes". NAS. October 2014.
- ^ "NAS Visualization Systems: hyperwall". NAS. December 2020.
- ^ Ellsworth, David; Bryan Green; Chris Henze; Patrick Moran; Timothy Sandstrom (September–October 2006). "Concurrent Visualization in a Production Supercomputing Environment" (PDF). IEEE Transactions on Visualization and Computer Graphics. 12 (5): 997–1004. doi:10.1109/TVCG.2006.128. PMID 17080827. S2CID 14037933.
External links
NASA Advanced Supercomputing Resources
- NASA Advanced Supercomputing (NAS) Division homepage
- NAS Computing Environment homepage
- NAS Pleiades Supercomputer homepage
- NAS Aitken Supercomputer homepage
- NAS Electra Supercomputer homepage
- NAS Archive and Storage Systems homepage
- NAS hyperwall-2 homepage