NVIDIA H100 GPU based on new NVIDIA Hopper architecture announced at GTC 2022

by Vyncent Chan, March 24, 2022

NVIDIA took the wraps off its next-gen NVIDIA Hopper architecture at the recent GTC 2022, introducing the NVIDIA H100 Tensor Core GPU. NVIDIA is reportedly reserving Hopper for its high-performance computing (HPC) products, while gaming GPUs will be based on the separate Ada Lovelace architecture, but that’s probably a discussion for another day.

NVIDIA Hopper: 80 billion transistors, up to 16896 FP32 CUDA cores per GPU

NVIDIA H100 GPU SXM

NVIDIA will be making the GPUs on TSMC’s 4nm node, versus the 7nm node used for the NVIDIA A100. We are looking at a much beefier GPU this time around: weighing in at 80 billion transistors, the NVIDIA H100 packs up to 16896 FP32 CUDA cores per GPU, along with up to 528 4th Gen Tensor Cores. The new Tensor Cores also support the new FP8 data format, doubling raw computational throughput over the NVIDIA A100.
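As a rough sanity check of those core counts, peak FP32 throughput can be estimated as cores × 2 FLOPs per cycle (one fused multiply-add) × clock speed. Note that the ~1.98GHz boost clock below is an assumption for illustration, not a figure from the announcement:

```python
# Rough estimate of peak FP32 throughput from core count and clock.
# The boost clock here is an illustrative assumption, not an
# official spec from the GTC 2022 announcement.
FP32_CORES = 16896       # CUDA cores on the full H100 GPU
FLOPS_PER_CYCLE = 2      # one fused multiply-add counts as 2 FLOPs
BOOST_CLOCK_HZ = 1.98e9  # assumed boost clock (~1.98 GHz)

peak_tflops = FP32_CORES * FLOPS_PER_CYCLE * BOOST_CLOCK_HZ / 1e12
print(f"Estimated peak FP32 throughput: {peak_tflops:.1f} TFLOPS")
```

With that assumed clock, the estimate lands in the high-60s TFLOPS range, which is the right ballpark for a roughly 3X jump over the A100's FP32 rate.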

To further take advantage of the new FP8-capable Tensor Cores, NVIDIA baked in a new Transformer Engine to manage and dynamically switch between FP8 and 16-bit calculations. New DPX instructions also accelerate dynamic programming algorithms by up to 7X over the last-gen NVIDIA A100. FP64 and FP32 processing rates on the H100 are tripled versus the A100, thanks to 2X faster clock-for-clock performance coupled with higher clocks, as well as higher core counts on the NVIDIA H100 GPU. Needless to say, the NVIDIA H100 is going to be a massive improvement over the last generation, with NVIDIA claiming around 6X the peak throughput of the NVIDIA A100.
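For context on what those DPX instructions target: dynamic programming algorithms build a solution from overlapping subproblems, typically with an inner loop of min/max-plus-add operations. A classic example of the pattern is the Levenshtein edit distance, sketched below in plain Python (genome aligners like Smith-Waterman, which NVIDIA cites as a DPX use case, follow the same shape):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming.

    Each cell dp[i][j] holds the minimum edits to turn a[:i] into
    b[:j]; the min-plus-add inner loop is the kind of pattern that
    DPX instructions are designed to accelerate in hardware.
    """
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j  # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # → 3
```

On the GPU, each anti-diagonal of the dp table can be computed in parallel, which is where hardware acceleration of the per-cell min/add work pays off.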

NVIDIA is also equipping the NVIDIA H100 with up to 50MB of L2 cache to reduce the number of trips to HBM3 memory. Speaking of which, the HBM3 memory delivers up to 3TB/s of bandwidth, nearly 2X what was available to the NVIDIA A100. Of course, we are only talking about a single GPU here; the NVIDIA H100 isn’t intended to be used as a single, standalone GPU, but rather in massive servers packing multiples of them.
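A quick back-of-the-envelope check of that "nearly 2X" claim. The A100 figure below (1.6TB/s, the 40GB SXM variant with HBM2e) is an assumption for illustration, as the article does not state it:

```python
# Sanity check of the "nearly 2X" memory bandwidth claim.
# The A100 figure (1.6 TB/s, 40GB SXM variant with HBM2e) is an
# assumption for illustration; the article does not state it.
H100_BW_TBPS = 3.0
A100_BW_TBPS = 1.6

ratio = H100_BW_TBPS / A100_BW_TBPS
print(f"H100 vs A100 memory bandwidth: {ratio:.2f}x")  # → 1.88x
```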

NVIDIA Hopper DGX family

To facilitate that, NVIDIA developed the 4th Gen NVLink with a new NVLink Switch system that can connect up to 256 GPUs to deliver up to 9X the bandwidth of previous-generation InfiniBand systems. NVIDIA will be shipping the NVIDIA H100-based systems in Q3 2022, with DGX H100 and DGX SuperPod servers, as well as servers from OEM partners featuring the GPUs. As the NVIDIA H100 supports PCIe 5.0, do expect to see next-gen CPUs in the NVIDIA DGX solutions.

Pokdepinion: Well, now I am just hoping to see what’s coming on the gaming end. Will we get these fancy new SMs and Tensor Cores?

About The Author
Vyncent Chan
Technology enthusiast, casual gamer, pharmacy graduate. Strongly opposes proprietary standards and is always on the lookout for incredible bang for buck.