Derecho

 

NSF NCAR’s current flagship supercomputing system

The Derecho system installed at NWSC.

Installed in 2023, Derecho is helping scientists conduct research needed to better understand a range of phenomena that affect society, from the behavior of major wildfires to eruptions of solar storms that can threaten GPS and other sensitive technologies. 

University researchers and NSF NCAR scientists can use Derecho to pursue work in Earth system science and related fields. To access the system, as well as NSF NCAR’s high-end storage systems and other resources, researchers and educators can apply for allocations via the processes defined for each of the communities we support.

Derecho features 2,488 compute nodes, each with 128 AMD Milan cores, and 82 GPU nodes, each with four NVIDIA A100 GPUs. An HPE Cray EX cluster, Derecho has a peak performance of 19.87 petaflops and delivers about 3.5 times the scientific throughput of NSF NCAR’s prior Cheyenne system. 
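
As a rough sanity check on that headline number, the peak figure can be approximated from the hardware counts alone. The short Python sketch below is a back-of-the-envelope estimate, not an official derivation: the 2.45 GHz base clock, 16 double-precision FLOPs per core per cycle, and 19.5 teraflops FP64 Tensor Core rate per A100 are assumed values, not taken from this page.

    # Back-of-the-envelope peak estimate for Derecho (assumed clock and
    # per-device rates; not an official NSF NCAR calculation).
    cpu_nodes, cores_per_node = 2_488, 128
    gpu_nodes, gpus_per_node = 82, 4

    cpu_clock_hz = 2.45e9        # EPYC 7763 base clock (assumption)
    fp64_per_cycle = 16          # two 256-bit FMA units per core (assumption)
    a100_fp64_tflops = 19.5      # A100 FP64 Tensor Core peak (assumption)

    cpu_peak = cpu_nodes * cores_per_node * cpu_clock_hz * fp64_per_cycle
    gpu_peak = gpu_nodes * gpus_per_node * a100_fp64_tflops * 1e12

    print(f"CPU partition: {cpu_peak / 1e15:.2f} PFLOPS")            # ~12.5
    print(f"GPU partition: {gpu_peak / 1e15:.2f} PFLOPS")            # ~6.4
    print(f"Estimated total: {(cpu_peak + gpu_peak) / 1e15:.2f} PFLOPS")

The estimate of roughly 18.9 petaflops lands in the same range as the published 19.87-petaflops peak; the official figure presumably reflects slightly different clock or per-device assumptions.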

Summer Wasson demonstrates where the cold water goes to keep the Derecho supercomputer cool. (Matt Idler for Cowboy State Daily)

Read news stories related to Derecho and computing at NSF NCAR.

Derecho is the first system at NSF NCAR with a substantial GPU partition. Four emerging and expanding scientific use cases influenced the design of Derecho: data assimilation, GPU-based modeling, machine learning, and high-throughput computing. These growth areas have substantially different usage patterns and hardware requirements, which together shaped the final Derecho configuration.
 

Get Started on Derecho
 

Derecho Configuration

2,488 CPU-only compute nodes with 318,464 CPU cores total
- 64-core AMD EPYC 7763 Milan processors
- 128 cores and 256 GB DDR4 memory per node

82 GPU nodes with 328 GPUs total
- 4 NVIDIA A100 Tensor Core GPUs per node
- 40 GB HBM2 memory per GPU
- 600 GB/s NVIDIA NVLink GPU interconnect
- 64 AMD Milan cores and 512 GB memory per node

HPE Slingshot v11 interconnect, Dragonfly topology
- 200 Gbps per port per direction
- 1.7-2.6 µs MPI latency
- GPU nodes: four Slingshot injection ports
- CPU nodes: one Slingshot injection port
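
One practical reading of the interconnect figures: with 200 Gbps per port per direction, a CPU node's single injection port corresponds to roughly 25 GB/s per direction, and a GPU node's four ports to roughly 100 GB/s. The sketch below spells out that conversion; it ignores protocol overhead, so the numbers are upper bounds rather than measured rates.

    # Per-node injection bandwidth implied by the Slingshot figures above.
    gbps_per_port = 200                  # per direction
    ports_cpu_node, ports_gpu_node = 1, 4

    # Convert network Gbps (10^9 bits/s) to GB/s (10^9 bytes/s).
    cpu_node_gb_s = gbps_per_port * ports_cpu_node / 8    # 25 GB/s per direction
    gpu_node_gb_s = gbps_per_port * ports_gpu_node / 8    # 100 GB/s per direction

    print(f"CPU node injection: {cpu_node_gb_s:.0f} GB/s per direction")
    print(f"GPU node injection: {gpu_node_gb_s:.0f} GB/s per direction")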