Top500: ORNL’s Exaflop machine Frontier keeps top spot, new competitor Leonardo breaks into the Top 10

Hamartia Antidote

FRANKFURT, Germany; BERKELEY, Calif.; and KNOXVILLE, Tenn.— The 60th edition of the TOP500 reveals that the Frontier system is still the only true exascale machine on the list.

With an HPL score of 1.102 EFlop/s, the Frontier machine at Oak Ridge National Laboratory (ORNL) did not improve upon the score it reached on the June 2022 list. That said, Frontier’s HPL score is still roughly 2.5 times that of the second-place finisher, a major victory for computer science. On top of that, Frontier demonstrated a score of 7.94 EFlop/s on the HPL-MxP benchmark, which measures performance for mixed-precision calculations. Frontier is based on the HPE Cray EX235a architecture and relies on 3rd Gen AMD EPYC 64C 2GHz processors. The system has 8,730,112 cores, a power efficiency rating of 52.23 gigaflops/watt, and an HPE Slingshot interconnect for data transfer.
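
To put the efficiency figure in perspective, here is a minimal Python sketch (not affiliated with TOP500; it uses only the numbers quoted above) showing how a gigaflops-per-watt rating ties the HPL score to total power draw:

```python
# Back-of-the-envelope check using the figures quoted in the article.
# Efficiency = sustained HPL performance / power consumed, so the quoted
# numbers imply Frontier draws roughly 21 MW while running HPL.

RMAX_EFLOPS = 1.102                 # Frontier's HPL score, EFlop/s
EFFICIENCY_GFLOPS_PER_WATT = 52.23  # quoted power efficiency rating

rmax_gflops = RMAX_EFLOPS * 1e9     # 1 EFlop/s = 1e9 GFlop/s
power_watts = rmax_gflops / EFFICIENCY_GFLOPS_PER_WATT

print(f"Implied power draw: {power_watts / 1e6:.1f} MW")  # ~21.1 MW
```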

Frontier is the clear winner of the race to exascale, and it will require a lot of work and innovation to knock it from the top spot.

The Fugaku system at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan, held the top spot for two years in a row before being displaced by Frontier. With an HPL score of 0.442 EFlop/s, Fugaku has retained its No. 2 spot from the previous list.

The LUMI system, which found its way to the No. 3 spot on the last list, has retained its position. However, the system went through a major upgrade to keep it competitive: the upgrade doubled the machine’s size, which allowed it to achieve an HPL score of 0.309 EFlop/s.

The only new machine to grace the top of the list was the No. 4 Leonardo system at EuroHPC/CINECA in Bologna, Italy. The machine achieved an HPL score of 0.174 EFlop/s with 1,463,616 cores.

Here is a summary of the systems in the Top 10:

  • Frontier is the No. 1 system in the TOP500. This HPE Cray EX system is the first US system with a performance exceeding one EFlop/s. It is installed at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, where it is operated for the Department of Energy (DOE). It achieved 1.102 EFlop/s using 8,730,112 cores. The HPE Cray EX architecture combines 3rd Gen AMD EPYC™ CPUs optimized for HPC and AI with AMD Instinct™ MI250X accelerators and a Slingshot-11 interconnect.
  • Fugaku, now the No. 2 system, is installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan. It has 7,630,848 cores, which allowed it to achieve an HPL benchmark score of 442 Pflop/s.
  • The upgraded LUMI system, another HPE Cray EX system installed at the EuroHPC center at CSC in Finland, is No. 3 with a performance of 309.1 Pflop/s. The European High-Performance Computing Joint Undertaking (EuroHPC JU) is pooling European resources to develop top-of-the-range Exascale supercomputers for processing big data. One of the pan-European pre-Exascale supercomputers, LUMI, is located in CSC’s data center in Kajaani, Finland.
  • The new No. 4 system Leonardo is installed at a different EuroHPC site, at CINECA in Italy. It is an Atos BullSequana XH2000 system with Xeon Platinum 8358 32C 2.6GHz as main processors, NVIDIA A100 SXM4 40 GB as accelerators, and quad-rail NVIDIA HDR100 InfiniBand as interconnect. It achieved a Linpack performance of 174.7 Pflop/s.
  • Summit, an IBM-built system at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, is now listed at the No. 5 spot worldwide with a performance of 148.8 Pflop/s on the HPL benchmark, which is used to rank the TOP500 list. Summit has 4,356 nodes, each housing two 22-core POWER9 CPUs and six NVIDIA Tesla V100 GPUs, each with 80 streaming multiprocessors (SMs). The nodes are linked together with a Mellanox dual-rail EDR InfiniBand network.
  • Sierra, a system at the Lawrence Livermore National Laboratory, CA, USA, is at No. 6. Its architecture is very similar to that of the No. 5 system, Summit. It is built with 4,320 nodes, each with two POWER9 CPUs and four NVIDIA Tesla V100 GPUs. Sierra achieved 94.6 Pflop/s.
  • Sunway TaihuLight, a system developed by China’s National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, in China's Jiangsu province, is listed at the No. 7 position with 93 Pflop/s.
  • Perlmutter, at No. 8, is based on the HPE Cray “Shasta” platform and is a heterogeneous system with AMD EPYC-based nodes and 1,536 NVIDIA A100-accelerated nodes. Perlmutter achieved 64.6 Pflop/s.
  • Selene, now at No. 9, is an NVIDIA DGX A100 SuperPOD installed in-house at NVIDIA in the USA. The system is based on AMD EPYC processors with NVIDIA A100 accelerators and a Mellanox HDR InfiniBand network, and it achieved 63.4 Pflop/s.
  • Tianhe-2A (Milky Way-2A), a system developed by China’s National University of Defense Technology (NUDT) and deployed at the National Supercomputer Center in Guangzhou, China, is now listed as the No. 10 system with 61.4 Pflop/s.
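
As a quick way to quantify the gap the list describes, the following Python sketch tabulates the HPL figures from the Top 10 above, converted to Pflop/s. The values are only those quoted in this article, not pulled from the TOP500 database itself:

```python
# Top 10 HPL scores from the list above, in Pflop/s.
top10_pflops = {
    "Frontier": 1102.0, "Fugaku": 442.0, "LUMI": 309.1, "Leonardo": 174.7,
    "Summit": 148.8, "Sierra": 94.6, "Sunway TaihuLight": 93.0,
    "Perlmutter": 64.6, "Selene": 63.4, "Tianhe-2A": 61.4,
}

total = sum(top10_pflops.values())
lead = top10_pflops["Frontier"] / top10_pflops["Fugaku"]

print(f"Frontier vs. Fugaku: {lead:.2f}x")  # ~2.49x
print(f"Frontier's share of the Top 10 total: "
      f"{top10_pflops['Frontier'] / total:.0%}")  # ~43%
```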

Other TOP500 Highlights

The data from this list shows that AMD processors are still the preferred choice for HPC systems. Frontier uses 3rd Gen AMD EPYC processors optimized for HPC and AI, as does the No. 3 LUMI system. That said, Xeon chips were also prevalent on the list; the new Leonardo system uses Xeon Platinum processors.
 
LOL The list is useless.

"The missing Chinese machines affect the Top500 list and alter the historical information that the list conveys," said Jack Dongarra
 