The NVIDIA H100 Tensor Core GPU is a powerful, highly optimized processing platform for AI inference and other demanding GPU-accelerated applications. It supports floating-point precisions from FP64 down to FP8, as well as INT8 (integer) calculations, making it a single accelerator suitable for nearly every compute workload. With 94GB of high-bandwidth memory on a massive 6016-bit bus, it also outpaces consumer and even professional video cards by a wide margin in memory-intensive jobs.
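The trade-off between those precisions is accuracy versus throughput and memory footprint. A minimal, hardware-agnostic sketch of what lower precision costs in accuracy, using Python's standard `struct` module to round-trip a value through FP64, FP32, and FP16 storage (Python's `struct` has no FP8 format, so FP8 is omitted here):

```python
import struct

def round_trip(value, fmt):
    # Pack the value into the given IEEE 754 format, then unpack it,
    # simulating storage at that precision.
    return struct.unpack(fmt, struct.pack(fmt, value))[0]

pi = 3.141592653589793
for name, fmt in [("FP64", "d"), ("FP32", "f"), ("FP16", "e")]:
    stored = round_trip(pi, fmt)
    print(f"{name}: {stored!r} (abs error {abs(stored - pi):.2e})")
```

Each halving of precision roughly squares the relative rounding error, which is why inference workloads that tolerate it gain so much from FP8/INT8 while scientific codes still need FP64.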
This H100 NVL variant allows pairs of cards to be connected via three bridges so they can utilize NVLink: a very high-speed interconnect for improved multi-GPU operation. Each card is housed in a traditional 2-slot graphics card form factor and fits in a standard PCI-Express 5.0 x16 slot, but the fanless heatsink means these are only functional in purpose-built systems that have been designed to fit and cool such powerful GPUs.
Specifications
| Specification | Value |
| --- | --- |
| Chipset Manufacturer | NVIDIA |
| Product Category | Data Center |
| Motherboard Connection | PCI Express 5.0 x16 |
| Cooling Method | Passive Heatsink |
| **Core Specifications** | |
| Core Speed | 1,080 MHz |
| Boost Speed | 1,785 MHz |
| Processors (CUDA Cores) | 14,592 |
| **Memory Specifications** | |
| Onboard Memory | 94GB |
| Memory Type | HBM3 |
| Memory Speed | 2,619 MHz |
| Memory Bus Width | 6016-bit |
| Bandwidth | 3,938 GB/s |
| **Power Connectors** | |
| Plug 1 | 16-pin PCIe |
| **Dimensions** | |
| Length | 267 mm (10.5 in) |
| Height | 112 mm (4.4 in) |
| Width | 42 mm (1.7 in) |
| Net Weight | 1.28 kg (2.8 lbs) |
Utilizes 3x NVIDIA RTX A6000 NVLink Bridges per pair of GPUs
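As a sanity check, the quoted memory bandwidth follows directly from the bus width and memory clock in the table above. A rough back-of-the-envelope sketch, assuming HBM3's double-data-rate signaling (two transfers per clock):

```python
# Estimate memory bandwidth from the spec-table figures.
bus_width_bits = 6016        # Memory Bus Width
memory_clock_hz = 2_619e6    # Memory Speed: 2,619 MHz
transfers_per_clock = 2      # assumption: DDR signaling, as HBM3 uses

bytes_per_second = bus_width_bits / 8 * memory_clock_hz * transfers_per_clock
print(f"{bytes_per_second / 1e9:,.0f} GB/s")  # roughly matches the quoted 3,938 GB/s
```

The small discrepancy versus the listed 3,938 GB/s comes from rounding in the published clock figure.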