What's new

China Dominates the World TOP500 Supercomputers

China unveils sample machine for new-generation exascale supercomputer
Xinhua | Updated: 2018-05-17 23:53

Photo taken on May 17, 2018 shows the prototype of Tianhe-3, a supercomputer capable of at least a billion billion calculations per second, during the 2nd World Intelligence Congress in North China's Tianjin. [Photo/Xinhua]

TIANJIN - The National Supercomputer Center in Tianjin unveiled a sample machine for the new-generation exascale supercomputer at the second World Intelligence Congress.

It was the first time the machine, which comprises three sets of equipment, each about two meters high, had been put on public view.

The new supercomputer Tianhe-3 will be 200 times faster and have 100 times more storage capacity than the Tianhe-1 supercomputer, China's first petaflop supercomputer launched in 2010, said Xia Zijun, deputy director of the research and development branch at the center.


The sample machine will serve as a test platform for the computing technology of the Tianhe-3 supercomputer, which is expected to be ready in 2020, he said. The sample machine will go into operation by the end of the year.

It will pave the way for the development of a supercomputer capable of a billion billion calculations per second.
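"A billion billion calculations per second" is the informal definition of an exaflop; a one-line sketch puts it next to the petaflop-class Tianhe-1 mentioned above:

```python
# "A billion billion calculations per second" is an exaflop: 10**18
# floating-point operations per second.
petaflop = 10**15   # Tianhe-1 class (2010): China's first petaflop system
exaflop = 10**18    # the Tianhe-3 target

print(exaflop // petaflop)  # an exaflop is 1,000 petaflops
```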

The center will explore the application of the computers in super computing, cloud computing, big data, artificial intelligence and internet of things, he said.

The supercomputer center in Tianjin began developing the exascale supercomputer with the National University of Defense Technology in 2016.
 
.
US Regains TOP500 Crown with Summit Supercomputer, Sierra Grabs Number Three Spot
TOP500 News Team | June 25, 2018 02:37 CEST

FRANKFURT, Germany; BERKELEY, Calif.; and KNOXVILLE, Tenn.—The TOP500 celebrates its 25th anniversary with a major shakeup at the top of the list. For the first time since November 2012, the US claims the most powerful supercomputer in the world, leading a significant turnover in which four of the five top systems were either new or substantially upgraded.


Summit supercomputer. Source: Oak Ridge National Laboratory

Summit, an IBM-built supercomputer now running at the Department of Energy’s (DOE) Oak Ridge National Laboratory (ORNL), captured the number one spot with a performance of 122.3 petaflops on High Performance Linpack (HPL), the benchmark used to rank the TOP500 list. Summit has 4,356 nodes, each one equipped with two 22-core Power9 CPUs, and six NVIDIA Tesla V100 GPUs. The nodes are linked together with a Mellanox dual-rail EDR InfiniBand network.
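As a back-of-the-envelope plausibility check on these numbers, the node counts above imply a GPU peak one can compare against the HPL score. The figure of roughly 7.8 double-precision teraflops per Tesla V100 is an assumption on my part, not a number from the article:

```python
# Rough estimate of Summit's aggregate GPU peak vs. its measured HPL score.
# The ~7.8 FP64 teraflops per Tesla V100 is an assumed figure.
nodes = 4356
gpus_per_node = 6
v100_fp64_tflops = 7.8        # assumed double-precision peak per GPU

gpu_peak_pf = nodes * gpus_per_node * v100_fp64_tflops / 1000  # petaflops
hpl_pf = 122.3                # measured HPL result quoted above

print(f"GPU peak ~ {gpu_peak_pf:.0f} PF")              # ~204 PF
print(f"HPL efficiency ~ {hpl_pf / gpu_peak_pf:.0%}")  # ~60% of GPU peak
```

Under that assumption, the HPL run captures roughly 60 percent of the machine's GPU peak, which is in the normal range for a GPU-heavy Linpack run.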

Sunway TaihuLight, a system developed by China’s National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, drops to number two after leading the list for the past two years. Its HPL mark of 93 petaflops has remained unchanged since it came online in June 2016.

Sierra, a new system at the DOE’s Lawrence Livermore National Laboratory took the number three spot, delivering 71.6 petaflops on HPL. Built by IBM, Sierra’s architecture is quite similar to that of Summit, with each of its 4,320 nodes powered by two Power9 CPUs plus four NVIDIA Tesla V100 GPUs and using the same Mellanox EDR InfiniBand as the system interconnect.

Tianhe-2A, also known as Milky Way-2A, moved down two notches into the number four spot, despite receiving a major upgrade that replaced its five-year-old Xeon Phi accelerators with custom-built Matrix-2000 coprocessors. The new hardware increased the system’s HPL performance from 33.9 petaflops to 61.4 petaflops, while bumping up its power consumption by less than four percent. Tianhe-2A was developed by China’s National University of Defense Technology (NUDT) and is installed at the National Supercomputer Center in Guangzhou, China.
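Taking the article's figures at face value, the upgrade's effect on performance per watt can be sketched as follows, treating "less than four percent" as a 4 percent upper bound on the power increase:

```python
# Performance-per-watt gain implied by the Tianhe-2A upgrade figures.
old_pf, new_pf = 33.9, 61.4   # HPL petaflops before and after the upgrade
power_factor = 1.04           # "less than four percent" as an upper bound

speedup = new_pf / old_pf
perf_per_watt = speedup / power_factor
print(f"speedup ~ {speedup:.2f}x, perf/watt ~ {perf_per_watt:.2f}x or better")
```

In other words, the Matrix-2000 swap delivered roughly a 1.8x speedup and at least a 1.7x improvement in energy efficiency.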

The new AI Bridging Cloud Infrastructure (ABCI) is the fifth-ranked system on the list, with an HPL mark of 19.9 petaflops. The Fujitsu-built supercomputer is powered by 20-core Xeon Gold processors along with NVIDIA Tesla V100 GPUs. It’s installed in Japan at the National Institute of Advanced Industrial Science and Technology (AIST).

Piz Daint (19.6 petaflops), Titan (17.6 petaflops), Sequoia (17.2 petaflops), Trinity (14.1 petaflops), and Cori (14.0 petaflops) move down to the number six through 10 spots, respectively.

General highlights

Despite the ascendance of the US at the top of the rankings, the country now claims only 124 systems on the list, a new low. Just six months ago, the US had 145 systems. Meanwhile, China improved its representation to 206 total systems, compared to 202 on the last list. However, thanks mainly to Summit and Sierra, the US did manage to take the lead back from China in the performance category. Systems installed in the US now contribute 38.2 percent of the aggregate installed performance, with China in second place with 29.1 percent. These numbers are a reversal compared to six months ago.

The next most prominent countries are Japan, with 36 systems, the United Kingdom, with 22 systems, Germany with 21 systems, and France, with 18 systems. These numbers are nearly the same as they were on the previous list.

For the first time, total performance of all 500 systems exceeds one exaflop, 1.22 exaflops to be exact. That’s up from 845 petaflops in the November 2017 list. As impressive as that sounds, the increase in installed performance is well below the previous long-term trend we had seen until 2013.

The overall increase in installed capacity is also reflected in the fact that there are now 273 systems with HPL performance greater than one petaflop, up from 181 systems on the previous list. The entry level to the list is now 716 teraflops, an increase of 168 teraflops.

Technology trends

Accelerators are used in 110 TOP500 systems, a slight increase from the 101 accelerated systems on the November 2017 list. NVIDIA GPUs are present in 96 of these systems, including five of the top 10: Summit, Sierra, ABCI, Piz Daint, and Titan. Seven systems are equipped with Xeon Phi coprocessors, while PEZY accelerators are used in four systems. An additional 20 systems now use Xeon Phi as the main processing unit.

Almost all the supercomputers on the list (97.8 percent) are powered by main processors with eight or more cores and more than half (53.2 percent) have over 16 cores.

Ethernet, 10G or faster, is now used in 247 systems, up from 228 six months ago. InfiniBand is found on 139 systems, down from 163 on the previous list. Intel’s Omni-Path technology is in 38 systems, slightly up from 35 six months ago.

Vendor highlights

For the first time, the leading HPC manufacturer of supercomputers on the list is not from the US. Chinese-based Lenovo took the lead with 23.8 percent (122 systems) of all installed machines, followed by HPE with 15.8 percent (79 systems), Inspur with 13.6 percent (68 systems), Cray with 11.2 percent (56 systems), and Sugon with 11 percent (55 systems). Of these, only Lenovo, Inspur, and Sugon captured additional system share compared to half a year ago.

Even though IBM has two of the top three supercomputers in Summit and Sierra, it claims just 19 systems on the entire list. However, thanks to those two machines, the company now contributes 19.9 percent of all TOP500 performance. Trailing IBM is Cray, with 16.5 percent of performance, Lenovo with 12.0 percent, and HPE with 9.9 percent.

Intel processors are used in 476 systems, marginally higher than the 471 systems on the last list. IBM Power processors are now in 13 systems, down from 14 in November 2017.

Green500 results

The top three positions in the Green500 are all taken by supercomputers installed in Japan that are based on the ZettaScaler-2.2 architecture using PEZY-SC2 accelerators, while all other systems in the top 10 use NVIDIA GPUs.

The most energy-efficient supercomputer is once again the Shoubu system B, a ZettaScaler-2.2 system installed at the Advanced Center for Computing and Communication, RIKEN, Japan. It was remeasured and achieved 18.4 gigaflops/watt during its 858 teraflops Linpack performance run. It is ranked number 362 in the TOP500 list.
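The efficiency figure implies a power draw during the Linpack run, which a short calculation recovers from the two numbers quoted above:

```python
# Power draw implied by Shoubu system B's Green500 figures
# (both input numbers are from the article).
gflops_per_watt = 18.4
linpack_tflops = 858

watts = linpack_tflops * 1000 / gflops_per_watt   # teraflops -> gigaflops
print(f"~ {watts / 1000:.1f} kW during the HPL run")
```

That works out to under 50 kW for an 858-teraflop run, which is what puts the ZettaScaler systems at the top of the efficiency ranking.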

The second-most energy-efficient system is Suiren2 system at the High Energy Accelerator Research Organization/KEK, Japan. This ZettaScaler-2.2 system achieved 16.8 gigaflops/watt and is listed at position 421 in the TOP500. Number three on the Green500 is the Sakura system, which is also installed at the High Energy Accelerator Research Organization/KEK. It achieved 16.7 gigaflops/watt and occupies position 388 on the TOP500 list.

They are followed by the DGX SaturnV Volta system in the US; Summit at ORNL; the TSUBAME 3.0 system, the AIST AI Cloud system, and the AI Bridging Cloud Infrastructure (ABCI) system, all from Japan; the new IBM MareNostrum P9 cluster in Spain; and Wilkes-2 in the UK. All of these systems use various NVIDIA GPUs.

The most energy-efficient supercomputer that doesn’t rely on accelerators of any kind is the Sunway TaihuLight, which is powered exclusively by ShenWei processors. Its 6.05 gigaflops/watt earned it 22nd place on the Green500 list.

HPCG Results

The TOP500 list has incorporated the High-Performance Conjugate Gradient (HPCG) benchmark results, which provide an alternative metric for assessing supercomputer performance and are meant to complement the HPL measurement.

The two new DOE systems, Summit at ORNL and Sierra at LLNL, captured the first two positions on the latest HPCG rankings. Summit achieved 2.93 HPCG-petaflops and Sierra delivered 1.80 HPCG-petaflops. They are followed by the previous leader, Fujitsu’s K computer, which attained 0.60 HPCG-petaflops. Trinity, a Cray XC40 system installed at Los Alamos National Lab, and Piz Daint, a Cray XC50 system installed at the Swiss National Supercomputing Centre (CSCS), round out the top five.



US Regains TOP500 Crown with Summit Supercomputer, Sierra Grabs Number Three Spot | TOP500 Supercomputer Sites
 
.
Lenovo Attains Status as Largest Global Provider of Top500 Supercomputers
June 25, 2018

FRANKFURT, Germany, June 25, 2018 – Today, at the International Supercomputing Conference (ISC) in Frankfurt, Lenovo Data Center Group continued its global momentum, becoming the world’s largest TOP500 supercomputing provider measured by the number of systems ranked on the TOP500 list. 117 of the 500 most powerful supercomputers included in the TOP500 are Lenovo installations, meaning nearly one out of every four systems (23.4 percent) on the prestigious list is a Lenovo solution.

“Last year, we set a goal to become the world’s largest provider of TOP500 computing systems by 2020. We have reached that goal two years ahead of our original plan,” said Kirk Skaugen, President of Lenovo Data Center Group. “This distinction is a testament to our commitment to prioritize customer satisfaction, deliver cutting edge innovation and performance and be the world’s most trusted data center partner. We are motivated every day by the scientists and their groundbreaking research as we work together to solve humanity’s greatest challenges.”

Lenovo’s high performance computing customer base is as diverse as it is wide, with 17 of the top 25 research universities and institutions across the globe now powering their research with Lenovo’s comprehensive HPC and AI solutions. Lenovo, with dual headquarters in Morrisville, NC, USA and Beijing, China, enables groundbreaking research in over 160 countries and in many fields, including cancer and brain research, astrophysics, climate science, chemistry, biology, artificial intelligence, automotive and aeronautics, to name a few.

Examples of Lenovo’s innovative supercomputer system designs and the research they enable include:

  • ITALY: CINECA – Largest computing center in Italy; The Marconi Supercomputer is among the world’s fastest energy efficient supercomputers; Research projects range from precision medicine to self-driving cars.
  • CANADA: SciNet – Home to Niagara, the most powerful supercomputer in Canada; First of its kind to leverage a dragonfly topology; Researchers have access to 3 petaflops of Lenovo processing power to help them understand the effect of climate change on ocean circulations.
  • GERMANY: Leibniz-Rechenzentrum (LRZ) – Supercomputing center in Munich, Germany; Lenovo’s Direct to Node warm water cooling technologies have reduced energy consumption at the facility by 40 percent; Scientists conduct earthquake and tsunami simulations to better predict future natural disasters.
  • SPAIN: Barcelona Supercomputing Center – Largest supercomputer in Spain; Voted “World’s Most Beautiful Data Center” by DatacenterDynamics; Scientists are using artificial intelligence models to improve the detection of retinal disease.
  • CHINA: Peking University – The first supercomputer in China to use Lenovo’s Direct to Node warm water cooling technology; Scientists are using Lenovo systems to conduct world leading life science and genetics research.
  • INDIA: The Liquid Propulsion System Centre (LPSC) – Research and development center functioning under the Indian Space Research Organization; Using Lenovo’s Direct to Node warm water cooling technology to develop next generation earth-to-orbit technologies.
  • DENMARK: VESTAS – The largest supercomputer in Denmark; Winner of HPCwire’s “Reader’s Choice for Best Use of High Performance Data Analytics”; Vestas is working to make wind energy production even more efficient by collecting and analyzing data to help customers pick the best sites for wind energy installations.
“Lenovo has an industry leading ability to bring deep innovations and a comprehensive approach to execute on the largest scale and highest performance, working with our customers to design supercomputing systems that meet their needs in terms of design and compute power,” said Madhu Matta, Vice President and General Manager of HPC and AI at Lenovo Data Center Group. “This flexibility and customer-first attitude positions us well for future growth in the high performance computing and artificial intelligence markets.”

To further enable customers to increase performance and simultaneously reduce electrical consumption, Lenovo also announced Neptune – its holistic, three-pronged approach to liquid cooling technologies – this week at ISC. Neptune encompasses the company’s entire suite of liquid cooling technologies including Lenovo’s Direct to Node (DTN) warm water cooling, rear door heat exchanger (RDHX) and hybrid Thermal Transfer Module (TTM) solutions, which combine both air and liquid cooling to deliver peak or high performance for HPC, AI and enterprise customers.


Lenovo Attains Status as Largest Global Provider of Top500 Supercomputers | HPCWire
 
. .
Hyperion: China Maintains Lead in Race to Exascale
By Chelsea Lang
June 28, 2018

Amidst a flurry of activity at ISC 2018 surrounding machine learning, quantum, and cloud, Tuesday’s Hyperion Research briefing reminded us that there is no escaping the race to exascale. And while Hyperion’s estimate that China will beat out the United States, Japan and Europe comes as no surprise to those keeping tabs, CEO Earl Joseph was quick to point out that the exascale panorama is in almost constant flux.

Source: Hyperion Research (compiled through a combination of publicly available data and Hyperion’s estimates)

Most recently that landscape was disrupted by the U.S. announcement of its intention to procure two or potentially three exascale systems under the CORAL-2 program, with an anticipated cost of up to $600 million per supercomputer. That’s in addition to the retooled Intel-Cray Aurora system that is projected to reach over one exaflops. And between all four of the global players, Hyperion reports that they’re seeing shake ups every four to six weeks.

But as it currently stands, Joseph says we should expect China to field both the first peak and the first sustained exascale systems, with peak exascale estimated to arrive in 2020 and sustained exascale in either 2021 or 2022. The U.S. is expected to trail China by six to nine months, crossing the same finish lines in 2021 and 2022-2023, respectively.

China’s proposed systems are expected to source hardware and processors from Chinese vendors (with a possibility of some U.S. processors included in the mix). The potential systems in contention are Sugon Exascale, Sunway Exascale, TianHe-3, and potentially a Wuxi system.

Meanwhile, Hyperion lists four potential systems in the U.S. lineup: ANL’s A21, ORNL’s Frontier (OLCF5), LLNL’s El Capitan (ATS-4), and NERSC-10. System deliveries are expected to begin in 2021, with roughly one year between installations and early operation expected one year after delivery for each system. Hyperion also included a placeholder for an NSF Exascale Phase 2 system, which is slated to achieve a 10x performance boost over the Phase 1 machines. The systems are expected to feature American hardware and processors, with the potential inclusion of Arm.


Source: Hyperion. (Missing from Hyperion’s slide is the potential second Argonne system that could be funded under CORAL-2 and delivered in the 2022-2023 timeframe, discussed here.)

But Europe and Japan have been a bit trickier to pin down, as Hyperion COO Steve Conway stepped in to explain, particularly as it related to Europe. With a growing emphasis on indigenous processors across the board, the European exascale effort is at a significant disadvantage. In addition to contending with red tape, selecting the host country, and budget concerns, a European plan to design and develop its own processors and accelerators has changed the schedule significantly, and is likely to cause the EU exascale timeline to slip by 2-4 years, according to the Hyperion analysis.

Conway noted that, to date, approximately 20 EU member states have committed to help fund the €1 billion effort. ETP4HPC and the CEA-RIKEN collaboration around ARMv8 are expected to be major contributors to the effort.

By comparison, Conway remarked that Japan was the most stable of the four contestants. Backed by a well-established processor vendor but facing budgetary concerns, Japan, Hyperion expects, is likely looking not to race to be first but to climb to the number-one spot after the race has ended, by leveraging a system with a traditional architecture and extreme bandwidth.

Looking at the numbers, investments in R&D among the group bore out Hyperion’s estimates, with U.S. commitments at $2 billion per year, China in the same ballpark, Europe planning €5-6 billion in total, and Japan potentially investing $1 billion over five years. And looking at historical investments in HPC, it is worth noting that European investment has gained significant momentum, jumping from a 25 percent share of global HPC spending to 29 percent; Conway added that funding is increasing dramatically across the board.


Hyperion: China Maintains Lead in Race to Exascale | HPCwire
 
. .
"The Sunway exascale prototype equipment has arrived at the National Supercomputing Jinan Center!" On the 20th, this reporter learned from the Computing Center of the Shandong Academy of Sciences (National Supercomputing Jinan Center) that the Sunway exascale prototype is currently being installed, and installation will take about a week to complete. Pan Jingshan, deputy director of the National Supercomputing Jinan Center, told the reporter that the Sunway exascale prototype is an exascale prototype machine developed entirely in China, with full independent intellectual property rights.

"We spent 60 million yuan to build the Sunway exascale prototype, which runs at 3-4 petaflops (1 petaflop = one quadrillion calculations per second)." Pan Jingshan noted that in 2011, Sunway Bluelight, which ran at 1 petaflop, cost 600 million yuan to install in Jinan. The investment required for exascale is even more staggering: an exascale computer plus exascale storage carries a total cost of around 4 billion yuan. With so much money at stake, Pan told the reporter, the Ministry of Science and Technology approved three prototype machines in 2016 for validation in order to reduce the risk of failure, and one of them, the Sunway prototype, was allocated to Jinan. "The prototypes are used to develop and verify the key technologies needed for the transition to exascale."

 
.

"Tianhe-3" Exascale Prototype Development and Deployment Completed - Xinhuanet
2018-07-26 11:35:34 Source: Tianjin Daily
Translation:
This reporter learned from the National Supercomputing Tianjin Center that the "Tianhe-3 exascale prototype system" development project, undertaken jointly by the National University of Defense Technology, the National Supercomputing Tianjin Center and other teams, has succeeded after more than two years of sustained work on key technologies. The prototype system has been deployed at the National Supercomputing Tianjin Center, and on July 22 it passed the project acceptance organized by the High Technology Center of the Ministry of Science and Technology. It will now gradually enter the open application phase.
 
.

So, US position may be short-lived.
 
.
China launches exascale supercomputer prototype
Source: Xinhua| 2018-08-06 00:28:30|Editor: Mu Xuequan


JINAN, Aug. 5 (Xinhua) -- China on Sunday put into operation a prototype exascale computing machine, the next-generation supercomputer, according to the developers.

The Sunway exascale computer prototype was developed by the National Research Center of Parallel Computer Engineering and Technology (NRCPC), the National Supercomputing Center in Jinan, east China's Shandong Province, and the Pilot National Laboratory for Marine Science and Technology (Qingdao).

The NRCPC led the team that developed Sunway TaihuLight, crowned the world's fastest computer two years in a row at both the 2016 and 2017 International Supercomputing Conferences held in Frankfurt, Germany.

"The Sunway exascale computer prototype is very much like a concept car that can run on the road," said Yang Meihong, director of the National Supercomputing Center in Jinan.

"We expect to build the exascale computer in the second half of 2020 or the first half of 2021," said Yang.

Another prototype exascale supercomputer, Tianhe-3, passed its acceptance tests on July 22. Its final version is expected to come out in 2020.

The two prototypes marked a further step towards China's successful development of the next-generation supercomputer.

Supercomputers are changing people's lives in fields such as weather forecasting, ocean current calculation, financial data analysis, high-end equipment manufacturing, and car collision simulation, said Pan Jingshan, deputy director of the National Supercomputing Center in Jinan.

Pan said the new-generation supercomputers will provide strong support to scientific research in more fields.

An exascale computer is able to execute a quintillion calculations per second. In China, prototypes are being developed by three teams led by the NRCPC, Dawning Information Industry Co. (Sugon), and the National University of Defense Technology (NUDT).

The United States and Japan are also speeding up development of exascale supercomputers, expecting to unveil them as early as 2021.

 
.
Prototypes of China’s Exascale Supercomputers Point to Some New Realities
Michael Feldman | August 6, 2018 17:11 CEST

Two prototypes of China’s initial batch of exascale supercomputers are now up and running according to local news reports. And neither of them appears to be based on x86 technology.

The first system, a prototype of Tianhe-3, was announced at the 2nd World Intelligence Congress on May 17, where the system was displayed by the National Supercomputing Center in Tianjin. In a report by the Xinhuanet news agency, pictures of the prototype show a six-rack system, with one of the open racks sporting 20 server blades. A subsequent report by the news agency on July 27 stated the prototype is “complete” and showed what appeared to be a larger system in a somewhat different rack enclosure. The Tianhe-3 exascale supercomputer is scheduled to boot up in 2020.

tianhe-3-prototype-800x440.jpeg
Prototype of Tianhe-3 at the National Supercomputer Center in Tianjin. Source: Xinhuanet news agency

No details were provided in any of these reports as far as the prototype’s internals or computational capabilities. Supposedly, the Tianjin exascale machine will be based on Chinese-designed Arm technology, likely some version of Phytium’s Xiaomi platform. In 2016, Phytium revealed it had developed a 64-core Arm CPU, known as the FT-2000/64, for high-end server work. At the time, the company claimed the FT-2000/64 had a peak performance of 512 gigaflops – not nearly powerful enough for a practical exascale machine, but certainly suitable for a prototype. Of course, the system could also be built from generic Arm processors or, for that matter, from any processor that can emulate an Arm instruction set.

As we’ve noted before, China is not the only country with designs on Arm technology for supercomputing. Japan is building its first exascale machine, Post-K, based on a Fujitsu-designed Arm SVE chip. The company recently revealed it had completed a prototype of the processor. Likewise, the EU’s European Processor Initiative (EPI) looks like it will rely on Arm technology to develop processors for Europe’s pre-exascale and exascale systems. Even the US is getting serious about Arm-based supercomputing. HPE recently announced that Sandia National Laboratories will soon be installing a 2.3-petaflop system, known as Astra, using Cavium ThunderX2 processors.

The second Chinese prototype system, which was announced on August 5, is the precursor to the Sunway exascale machine that is slated to be installed at the National Supercomputing Center in Jinan. The prototype was developed by center researchers, along with teams from the National Research Center of Parallel Computer Engineering and Technology (NRCPC) and the Pilot National Laboratory for Marine Science and Technology (Qingdao). According to Yang Meihong, director of the National Supercomputing Center in Jinan, they expect the actual exascale system to be built in “the second half of 2020 or the first half of 2021.”

A report by Sdchina (the Information Office of Shandong Provincial People’s Government) and attributed to China Daily, states that the performance of the Sunway prototype is triple that of the Sunway Bluelight supercomputer (1.0 peak petaflops/795.9 Linpack teraflops), which is currently ranked 420 on the TOP500 list. If that’s the case, we should expect to see the Sunway prototype show up on the next TOP500 list in November.
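If the "triple Sunway Bluelight" claim is taken literally, the implied prototype performance lines up with the three-to-four-petaflops figure reported from the Jinan center; a quick cross-check using the peak and Linpack values quoted above:

```python
# Cross-check on the "triple Sunway Bluelight" claim, using the
# Bluelight figures quoted in the article.
bluelight_peak_pf = 1.0        # 1.0 peak petaflops
bluelight_linpack_pf = 0.7959  # 795.9 Linpack teraflops

print(f"prototype peak ~ {3 * bluelight_peak_pf:.1f} PF")        # ~3.0 PF
print(f"prototype Linpack ~ {3 * bluelight_linpack_pf:.2f} PF")  # ~2.39 PF
```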

The Sdchina writeup goes on to say that the prototype has already run 35 applications, including ones in climate change, ocean simulation, biomedical simulation, big data processing and brain-like intelligence. Although once again no mention was made of the system internals, presumably the Sunway prototype is based on some version of the ShenWei processor. The original Bluelight machine, which is still cranking away at the Jinan center, is powered by 16-core ShenWei 1600 (SW1600) processors. The newer and much more powerful Sunway TaihuLight machine uses the 260-core ShenWei 26010 (SW26010) chips.

It’s a good bet that the prototype of China’s third exascale system is currently under development. This machine is slated to be built by Sugon and is expected to be based on home-grown x86 silicon, which China now has thanks to a licensing agreement between Hygon and AMD. And since Hygon announced last month that it is now producing such chips, the last major impediment to domestically produced x86-powered supercomputers has been removed.

At this point, Hygon can only implement processors using AMD’s first-generation Zen EPYC microarchitecture, so their ability to power an exascale machine on their own is rather limited. But again, for a prototype system, the Zen chips would probably suffice. And if Hygon follows up with Zen 2 and Zen 3 licensing agreements (or perhaps even deals to implement AMD Radeon GPU or APU designs), an x86-powered Chinese exascale machine would certainly be possible.

The roll-out of these prototypes suggests that the Tianhe-3 system will be China’s first exascale supercomputer, followed by the Sunway and Sugon machines. That implies a rather remarkable development, namely that of the four HPC superpowers – China, the EU, Japan, and the US – all of them, except for the US, could enter the exascale era with Arm technology rather than x86 hardware. Considering that there are currently no Arm-powered supercomputers that have even reached the petascale level yet (the Astra system has yet to come online), this is quite a show of confidence for an unproven HPC technology.

As we’ve alluded to before, the run-up to exascale appears to be fostering the end of x86 hegemony in HPC. Such a development is being driven by the need for more customized hardware for supercomputers and by national and regional desires to produce the most critical pieces of these systems domestically. It remains to be seen to what degree all of this will usher in a new HPC landscape, but as these prototypes roll out, we’re getting a much better idea of the shape of things to come.


Prototypes of China’s Exascale Supercomputers Point to Some New Realities | TOP500 Supercomputer Sites
 
.
So, the US position may be short-lived.

Umm... No. The earliest timeline for this exascale machine to be built is actually 2021. So until then, the US definitely remains in the lead.

Also, the US is developing its own exascale machine, due to be released around the same time.

The 2nd known "exascale" machine in development. :D

Umm.. No. Right now only 3 exascale prototypes have been confirmed. No mention of how many of them will be scaled up. (Also, I have an issue with the word prototype here, since these are more like technology demonstrators than prototypes. These are in themselves machines of a few petaflops.)
 
.
Dawning Information, CAS to Build Two New Supercomputers for Strategic Sectors
TANG SHIHUA
DATE: THU, 08/23/2018 - 14:38 / SOURCE:YICAI

(Yicai Global) Aug. 23 -- A unit under Dawning Information Industry has partnered with the Chinese Academy of Sciences to develop a safe and controllable supercomputer, and another that uses artificial intelligence, to better serve China’s strategic sectors.

Dawning Information Industry Beijing and the Institute of Computing Technology will work together to build the processors using CNY1.66 billion (USD242 million) in funding from CAS’ Bureau of Major R&D Programs, the computing institute said in a statement yesterday. The subsidiary will take the lead on the project, with the CAS institute, which is Dawning’s actual controller, expected to receive about CNY40 million (USD5.9 million) of the funding.

The supercomputers will be used to simulate complex turbulent flows, global ocean currents, astronomical occurrences and molecular behavior, among other processes in key disciplines.

The project will promote the cross-research of innovative applications in supercomputing, big data and AI, and facilitate Dawning’s competitiveness in high-end computing and other fields concerning information technology equipment, the statement added.

Advanced computing technology is an important symbol of a country’s national strength and level of tech innovation, and this project demonstrates that CAS recognizes Dawning’s technical and scientific research capabilities, it said.
 
.
Can China build a US$145 million superconducting computer that will change the world? | South China Morning Post
Chinese scientists are embarking on a one-billion yuan, high-risk, high-reward plan to build low-energy top-performance computing systems

PUBLISHED : Sunday, 26 August, 2018, 11:02pm
UPDATED : Monday, 27 August, 2018, 12:44pm


Stephen Chen

China is building a 1 billion yuan (US$145.4 million) “superconducting computer” – an unprecedented machine capable of developing new weapons, breaking codes, analysing intelligence and – according to official information and researchers involved in the project – helping stave off surging energy demand.

Computers are power-hungry, and increasingly so. According to an estimate by the Semiconductor Industry Association, they will need more electricity than the world can generate by 2040, unless the way they are designed is dramatically improved.

The superconducting computer is one of the most radical advances proposed by scientists to reduce the environmental footprint of machine calculation.

The concept rests on sending electric currents through supercooled circuits made of superconducting materials. The system results in almost zero resistance – in theory at least – and would require just a fraction of the energy of traditional computers, from one-fortieth to one-thousandth, according to some estimates.
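To put those fractions in concrete terms, here is an illustrative calculation. Pairing them with a 30-megawatt machine (roughly the full-capacity power draw cited below for today's fastest supercomputers) is my own juxtaposition, not the article's:

```python
# Illustrative only: apply the "one-fortieth to one-thousandth" energy
# estimates to a hypothetical 30 MW conventional supercomputer.
conventional_kw = 30_000          # 30 MW expressed in kilowatts
best_fraction = 1 / 1000          # most optimistic estimate
worst_fraction = 1 / 40           # most conservative estimate

best_case_kw = conventional_kw * best_fraction    # 30 kW
worst_case_kw = conventional_kw * worst_fraction  # 750 kW

print(best_case_kw, worst_case_kw)  # 30.0 750.0
```

Even at the conservative end, that would shrink a power-station-scale load down to something an ordinary industrial facility could supply.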

INTO THE SUPER LEAGUE

Chinese scientists have already made a number of breakthroughs in applying superconducting technology to computers. They have developed new integrated circuits with superconducting material in labs and tested an industrial process that would enable the production of relatively low cost, sophisticated superconducting chips at mass scale. They have also nearly finished designing the architecture for the computer’s systems.

Now the aim is to have a prototype of the machine up and running as early as 2022, according to a programme quietly launched by the Chinese Academy of Sciences (CAS) in November last year with a budget estimated to be as much as one billion yuan.

If these efforts are successful, the Chinese military would be able to accelerate research and development for new thermonuclear weapons, stealth jets and next-generation submarines with central processing units running at frequencies of 770 gigahertz or higher. By contrast, the fastest existing commercial processors run at just 5GHz.

The advance would give Chinese companies an upper hand in the global competition to make energy-saving data centres essential to processing the big data needed for artificial intelligence applications, according to Chinese researchers in supercomputer technology.

CAS president Bai Chunli said the technology could help China challenge the US’ dominance of computers and chips.

“The integrated circuit industry is the core of the information technology industry … that supports economic and social development and safeguards national security,” Bai said in May during a visit to the Shanghai Institute of Microsystem and Information Technology, a major facility for developing superconducting computers.

“Superconducting digital circuits and superconducting computers … will help China cut corners and overtake [other countries] in integrated circuit technology,” he was quoted as saying on the institute’s website.

But the project is high-risk. Critics have questioned whether it is wise to put so much money and resources into a theoretical computer design that is yet to be realised, given that similar attempts by other countries have ended in failure.

IN THE BEGINNING

The phenomenon of superconductivity was discovered by physicists more than a century ago. After the second world war, the United States, the former Soviet Union, Japan and some European countries tried to build large-scale, cryogenically cooled circuits with low electric resistance. In the US, the effort attracted the support of the government’s spy agency, the National Security Agency (NSA), and the defence department because of the technology’s potential military and intelligence applications.

But the physical properties of superconducting materials, such as niobium, were less well understood than those of silicon, which is used in traditional computers.

As a result, chip fabrication proved challenging, and precise control of the information system at low temperatures, sometimes close to absolute zero, or minus 273 degrees Celsius, was a headache. Though some prototypes were made, none could be scaled up.

Meanwhile, silicon-based computers advanced rapidly with increasing speed and efficiency, raising the bar for research and development for a superconducting computer.

But those big gains from silicon seem to have ended; high-end Intel Core i7 chips, for instance, have been on computer store shelves for nearly a decade.

And as supercomputers grow bigger, so too does their energy consumption. Today’s fastest computers, the Summit in the US and China’s Sunway TaihuLight, require 30 megawatts of power to run at full capacity, more power than a Los Angeles-class nuclear submarine can generate. And their successors, the exascale supercomputers, capable of 1,000 petaflops, or 1 million trillion floating-point operations per second, are likely to need a stand-alone power station.
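The unit arithmetic behind those figures is straightforward to verify:

```python
# One exaflops is 1,000 petaflops, i.e. 10**18 floating-point operations
# per second -- the "1 million trillion" in the text.
FLOPS_PER_PETAFLOPS = 10**15

exaflops_in_petaflops = 1000
exaflops_in_flops = exaflops_in_petaflops * FLOPS_PER_PETAFLOPS

print(exaflops_in_flops == 10**18)              # one quintillion flops
print(exaflops_in_flops == 1_000_000 * 10**12)  # "1 million trillion"
```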

Li Xiaowei, executive deputy director of the State Key Laboratory of Computer Architecture, who is well acquainted with the Chinese programme, said the main motivation to build a superconducting computer was to cut the energy demands of future high-performance computers.

“It will be a general-purpose computer capable of running different kinds of algorithms … from text processing to finding big prime numbers”, the latter an important method to decode encrypted messages, according to Li.

Li would not give technical details of the machine under construction but he confirmed it would not be a quantum computer.

“It is built and run on a classical structure,” he told the South China Morning Post.

Instead of encoding information in bits with a value of one or zero, quantum computers use qubits, which act more like switches and can be a one and a zero at the same time. Most types of quantum computers also require extremely cold environments to operate.

Quantum computers are believed to be faster than classical superconducting computers but are likely to be limited to specific jobs and take a lot longer to realise. Many technologies, though, can be shared and moved from one platform to another.

THE RACE IS ON

China is not the only country in the race. The NSA launched a similar project in 2014. The Cryogenic Computing Complexity programme under the Office of the Director of National Intelligence has awarded contracts to research teams at IBM, Raytheon-BBN and Northrop Grumman to build a superconducting computer.

“The power, space, and cooling requirements for current supercomputers based on complementary metal oxide semiconductor technology are becoming unmanageable,” programme manager Marc Manheimer said in a statement.

“Computers based on superconducting logic integrated with new kinds of cryogenic memory will allow expansion of current computing facilities while staying within space and energy budgets, and may enable supercomputer development beyond the exascale.”

During the initial phase of the programme, the researchers would develop the critical components for the memory and logic subsystems and plan the prototype computer. The goal was to later scale and integrate the components into a working computer and test its performance using a set of standard benchmarking programs, according to the NSA.

The deadline and budget of the US programme have not been disclosed.

Back in China, Xlichip, an electronics company based in Shenzhen, a growing technology hub in the country’s south, confirmed on Tuesday that it had been awarded a contract to supply test hardware for a superconducting computer programme at CAS’s Institute of Computing Technology in Beijing.

“The client has some special requirements but we have confidence to come up with the product,” a company spokeswoman said, without elaborating.

Fan Zhongchao, a researcher with CAS’s Institute of Semiconductors who reviewed the contract as part of an expert panel, said the hardware was a field-programmable gate array (FPGA), a reconfigurable chip that could be used to simulate and test the design of a large-scale, sophisticated integrated circuit.

“The overall design [of the FPGA testing phase] is close to complete,” he said.

There are signs that China is getting closer to its superconducting goal.

Last year, Chinese researchers realised mass production of computer chips with 10,000 superconducting junctions, according to the academy’s website. That compares to the more than 800,000 junctions a joint research team at Stony Brook University and MIT squeezed into a chip. But most fabrication work reported so far has been in small laboratory quantities, not scaled up for factory production.

Zheng Dongning, leader of the superconductor thin films and devices group in the National Laboratory for Superconductivity at the Institute of Physics in Beijing, said that if 10,000-junction chips could be mass produced with low defect rates, they could be used as building blocks for the construction of a superconducting computer.

CHIPPING AWAY

Zheng said China’s determination to develop the new technology would only be strengthened by the trade war with the United States. Many Chinese companies are reliant on US computing chips and the White House’s threats in May to ban chip exports to Chinese telecommunications giant ZTE almost sent the company to the wall.

“It is increasingly difficult to get certain chips from the US this year. The change is felt by many people,” he said.

But Zheng said China should not count on superconducting computer technology to challenge US dominance. The US and other countries such as Japan had invested many more years in this area of research than China, and although their investments were smaller, they were consistent, giving them a big edge in knowledge and experience.

“One billion yuan is a lot of money, but it cannot solve all the remaining problems. Some technical issues may need years to find a solution, however intensive the investment,” Zheng said.

“Year 2022 may be a bit of a rush.”

Wei Dongyuan, a researcher at the Chinese Academy of Science and Technology for Development, a government think tank on science policies, said China should be more transparent about the programme and give the public more information about its applications.

“It can be used in weather forecasts or to simulate the explosion of new nuclear weapons. One challenge is to develop a new operating system. Software development has always been China’s soft spot,” he said.

Chen Quan, a supercomputer scientist at Shanghai Jiao Tong University, said superconducting was often mentioned in academic discussions on the development of the next generation of high-performance computers.

China was building more than one exascale computer, and “it is possible that one will be superconductive”, he said.
 
.
But the project is high-risk. Critics have questioned whether it is wise to put so much money and resources into a theoretical computer design that is yet to be realised, given that similar attempts by other countries have ended in failure.
Do people in HK have this type of loser mentality?
 
.