
Nvidia announces powerful H100 chip, Grace processors and new DGX supercomputers


During its GTC 2022 event, Nvidia presented many interesting technologies. The hardware was the highlight, led by the long-awaited H100 chip based on the Hopper architecture. It is a true performance beast, and one that may hint at what future graphics cards for gamers will look like.

Nvidia’s GTC 2022 presentation was packed with announcements. New artificial intelligence techniques, expanded Omniverse capabilities, and technologies for the automotive industry and smart factories were all unveiled. There was also plenty of new hardware, and the H100 was by far the most important piece of it: a powerful accelerator designed for AI computing platforms.

Nvidia made significant improvements in the new Hopper architecture, mostly to memory bandwidth and I/O capacity for machine learning. The flagship GH100 chip packs 80 billion transistors and is manufactured in TSMC’s 4N process, a refined version of 5 nm lithography. Compared to the A100, the core count and clock frequency have risen significantly, as, unfortunately, has power consumption (up to 700 W). The payoff is performance, which has more than tripled in some tasks.
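For those curious which generation of silicon they are actually running, the CUDA runtime makes the check easy. Below is a minimal sketch (host-side CUDA C++, built with nvcc) that queries standard device properties; it relies on the fact that Hopper-generation GPUs report compute capability 9.x.

```cuda
// Minimal sketch: query a GPU's properties and check whether it reports
// the Hopper generation (compute capability major version 9).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        std::fprintf(stderr, "No CUDA device found\n");
        return 1;
    }
    std::printf("Device: %s\n", prop.name);
    std::printf("Compute capability: %d.%d\n", prop.major, prop.minor);
    std::printf("SM count: %d\n", prop.multiProcessorCount);
    // Hopper-generation chips such as the H100 report major version 9.
    std::printf("Hopper-class GPU: %s\n", prop.major == 9 ? "yes" : "no");
    return 0;
}
```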

Nvidia notes that many other improvements have been made in the Hopper architecture, including confidential computing for sensitive data, a refined multi-instance GPU (MIG) environment, and a new generation of NVLink. Hopper is also the first GPU to support the PCIe 5.0 interface. H100 chips can be combined with one another on the same HGX baseboard.
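Whether two GPUs in a node can actually address each other’s memory directly is also something the runtime will report. The minimal sketch below uses the standard cudaDeviceCanAccessPeer call to list which device pairs support peer-to-peer access; on NVLink-equipped boards such as HGX that is the path the link serves, while on other systems peer access may run over PCIe.

```cuda
// Minimal sketch: check peer-to-peer access between every pair of GPUs
// visible in this node.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int a = 0; a < n; ++a) {
        for (int b = 0; b < n; ++b) {
            if (a == b) continue;
            int ok = 0;
            cudaDeviceCanAccessPeer(&ok, a, b);
            std::printf("GPU %d -> GPU %d: peer access %s\n",
                        a, b, ok ? "available" : "unavailable");
        }
    }
    return 0;
}
```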

Along with the Nvidia H100 accelerator, DGX H100 systems were also announced, each with eight such chips on board and a total of 640 GB of HBM3 memory. One such system delivers up to 32 PFLOPS of AI compute at the new FP8 precision. That is not the end, though, as customers can link systems together into a DGX SuperPOD platform with up to 256 accelerators. This required a new switch offering as many as 128 NVLink ports.
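The 640 GB figure is simply eight 80 GB HBM3 stacks added together, and the CUDA runtime can confirm the per-node total. A minimal sketch that enumerates the visible GPUs and sums their memory:

```cuda
// Minimal sketch: enumerate the GPUs in a node and total their memory,
// the same arithmetic behind the DGX H100 figure (8 x 80 GB HBM3 = 640 GB).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    double totalGiB = 0.0;
    for (int i = 0; i < n; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        double gib = prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0);
        totalGiB += gib;
        std::printf("GPU %d: %s, %.0f GiB\n", i, prop.name, gib);
    }
    std::printf("Total GPU memory in this node: %.0f GiB\n", totalGiB);
    return 0;
}
```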

Nor does it stop there, because Nvidia will also build the Eos supercomputer on the DGX SuperPOD platform. The machine will comprise as many as 576 DGX H100 systems, which means 4,608 H100 accelerators in total. Its AI computing performance is expected to reach 18.4 EFLOPS. The system should be launched at the end of this year.
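The headline number is easy to sanity-check: each H100 peaks at roughly 4 PFLOPS of FP8 compute (Nvidia’s quoted figure with sparsity), so 576 × 8 = 4,608 accelerators, and 4,608 × 4 PFLOPS ≈ 18.4 EFLOPS, matching the advertised total.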

In addition to graphics processors, Nvidia also unveiled its Grace CPU Superchip. It is a 144-core processor based on the Arm Neoverse architecture, built from two dies linked via NVLink-C2C technology. The interconnect provides 900 GB/s of bandwidth, and the chip supports LPDDR5X memory with ECC. Performance should be quite good: in SPECrate®2017_int_base, Grace scores about 740 points.

For AI and HPC workloads, Nvidia has prepared the Grace Hopper Superchip. It combines one Grace CPU and one Hopper GPU, which communicate over the same 900 GB/s NVLink-C2C interconnect, and the device is fully supported by Nvidia’s complete software stack, including Nvidia HPC, Nvidia AI and Nvidia Omniverse.
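The practical appeal of a coherent CPU-GPU link is that both sides can work on the same allocation. The sketch below uses CUDA managed memory, a mechanism available on any recent Nvidia GPU rather than anything specific to Grace Hopper, to illustrate the shared-pointer programming model that the 900 GB/s link is meant to accelerate.

```cuda
// Minimal sketch: CUDA managed memory gives the CPU and GPU one pointer to
// the same allocation. On a coherent CPU+GPU package such as Grace Hopper,
// this kind of shared access is what the NVLink-C2C link serves.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float* data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));    // visible to CPU and GPU

    for (int i = 0; i < n; ++i) data[i] = 1.0f;     // CPU writes
    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f); // GPU reads and writes
    cudaDeviceSynchronize();

    std::printf("data[0] = %.1f\n", data[0]);       // CPU reads the result: 2.0
    cudaFree(data);
    return 0;
}
```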