GPU Accelerator Specs: NVIDIA_H100_SXM
GPU and AI accelerator specifications for datacenter and HPC planning — compute performance (FP64/FP32/TF32/FP16/BF16/FP8/FP4/INT8 TFLOPS/TOPS), memory capacity and bandwidth, thermal design power, interconnect bandwidth (NVLink, PCIe, Infinity Fabric), and physical/process node data. Covers NVIDIA A100, H100, H200, B200, B100 and AMD Instinct MI300X.
| GPU Model | Spec Category | BF16 Tensor Core (TFLOPS) | FP16 Tensor Core (TFLOPS) | FP32 (TFLOPS) | FP64 Tensor Core (TFLOPS) | FP64 (TFLOPS) | FP8 Tensor Core (TFLOPS) | INT8 Tensor Core (TOPS) | Interconnect Type | Memory Bandwidth (GB/s) | Memory (GB) | Memory Type | NVLink Bandwidth (GB/s) | NVLink Generation | PCIe Bandwidth (GB/s) | PCIe Gen | TF32 Tensor Core (TFLOPS) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| NVIDIA_H100_SXM | compute_performance | 1,979 | 1,979 | 67 | 67 | 34 | 3,958 | 3,958 | — | — | — | — | — | — | — | — | 989 |
| NVIDIA_H100_SXM | interconnect | — | — | — | — | — | — | — | NVLink+PCIe | — | — | — | 900 | 4 | 128 | 5 | — |
| NVIDIA_H100_SXM | memory | — | — | — | — | — | — | — | — | 3,350 | 80 | HBM3 | — | — | — | — | — |
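For datacenter planning, the compute and memory rows above are typically combined into a machine-balance (roofline ridge-point) figure: the number of FLOPs a kernel must perform per byte of HBM traffic to be compute-bound rather than bandwidth-bound. A minimal sketch using the H100 SXM values from the table (the dictionary layout and function names here are illustrative, not part of the dataset; note that NVIDIA's peak Tensor Core figures such as 1,979 BF16 TFLOPS assume structured sparsity, with dense rates at half these values):

```python
# H100 SXM figures copied from the table above.
# Tensor Core numbers are NVIDIA's peak (structured-sparsity) rates.
h100_sxm = {
    "fp8_tc_tflops": 3958,    # FP8 Tensor Core, TFLOPS
    "bf16_tc_tflops": 1979,   # BF16 Tensor Core, TFLOPS
    "memory_gb": 80,          # HBM3 capacity, GB
    "memory_bw_gbs": 3350,    # HBM3 bandwidth, GB/s
    "nvlink_bw_gbs": 900,     # NVLink 4 aggregate bandwidth, GB/s
}

def machine_balance(tflops: float, bw_gbs: float) -> float:
    """Peak FLOPs per byte of memory traffic (roofline ridge point)."""
    return (tflops * 1e12) / (bw_gbs * 1e9)

fp8_balance = machine_balance(h100_sxm["fp8_tc_tflops"],
                              h100_sxm["memory_bw_gbs"])
bf16_balance = machine_balance(h100_sxm["bf16_tc_tflops"],
                               h100_sxm["memory_bw_gbs"])
print(f"FP8 arithmetic intensity to be compute-bound: {fp8_balance:.0f} FLOPs/byte")
print(f"BF16 arithmetic intensity to be compute-bound: {bf16_balance:.0f} FLOPs/byte")
```

Kernels with lower arithmetic intensity than these thresholds (e.g. memory-bound decode-phase LLM inference) are limited by the 3,350 GB/s HBM3 bandwidth, not the Tensor Core throughput.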
This is a sample of the full GPU accelerator specs dataset.