GPU Accelerator Specs: NVIDIA_H100_NVL
GPU and AI accelerator specifications for datacenter and HPC planning — compute performance (FP64/FP32/TF32/FP16/BF16/FP8/FP4/INT8 TFLOPS/TOPS), memory capacity and bandwidth, thermal design power, interconnect bandwidth (NVLink, PCIe, Infinity Fabric), and physical/process-node data. Covers NVIDIA A100, H100, H200, B200, B100, and AMD Instinct MI300X.
| GPU Model | Spec Category | Architecture | Form Factor | Interconnect Type | Memory Bandwidth (GB/s) | Memory (GB) | Memory Type | MIG Instances | NVLink Bandwidth (GB/s) | NVLink Generation | PCIe Bandwidth (GB/s) | PCIe Gen | Process Node (nm) | Transistors (billion) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| NVIDIA_H100_NVL | interconnect | — | — | PCIe | — | — | — | — | 600 | 4 | 128 | 5 | — | — |
| NVIDIA_H100_NVL | memory | — | — | — | 3,900 | 94 | HBM3 | — | — | — | — | — | — | — |
| NVIDIA_H100_NVL | physical | Hopper | PCIe_FHFL_dual_slot | — | — | — | — | 7 | — | — | — | — | 4 | 80 |
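Because the dataset splits each GPU's specs across per-category rows (interconnect, memory, physical), consumers typically collapse them into one flat record per model. The sketch below shows one way to do that in Python; the field names are illustrative, not the dataset's actual schema.

```python
# Sketch: merging the per-category spec rows above into a single flat record.
# Keys are illustrative snake_case versions of the table's column headers.

rows = [
    {"gpu_model": "NVIDIA_H100_NVL", "spec_category": "interconnect",
     "interconnect_type": "PCIe", "nvlink_bandwidth_gbs": 600,
     "nvlink_generation": 4, "pcie_bandwidth_gbs": 128, "pcie_gen": 5},
    {"gpu_model": "NVIDIA_H100_NVL", "spec_category": "memory",
     "memory_bandwidth_gbs": 3900, "memory_gb": 94, "memory_type": "HBM3"},
    {"gpu_model": "NVIDIA_H100_NVL", "spec_category": "physical",
     "architecture": "Hopper", "form_factor": "PCIe_FHFL_dual_slot",
     "mig_instances": 7, "process_node_nm": 4, "transistors_billion": 80},
]

def merge_specs(rows):
    """Collapse per-category rows for one GPU into a single flat record.

    Drops the spec_category discriminator; later rows win on key clashes.
    """
    merged = {}
    for row in rows:
        for key, value in row.items():
            if key == "spec_category":
                continue
            merged[key] = value
    return merged

spec = merge_specs(rows)
print(spec["memory_gb"], spec["memory_type"])  # 94 HBM3
```

With the merged record, derived planning figures (e.g. bandwidth per GB of HBM: 3900 / 94 ≈ 41.5 GB/s per GB) fall out of simple field arithmetic.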