GPU Accelerator Specs: NVIDIA_H100_NVL

GPU and AI accelerator specifications for datacenter and HPC planning — compute performance (FP64/FP32/TF32/FP16/BF16/FP8/FP4/INT8 TFLOPS/TOPS), memory capacity and bandwidth, thermal design power (TDP), interconnect bandwidth (NVLink, PCIe, Infinity Fabric), and physical/process-node data. Covers NVIDIA A100, H100, H200, B200, B100, and AMD Instinct MI300X.

Category: Electrical Engineering · gpu_model: NVIDIA_H100_NVL · 3 rows

The sample contains three rows (spec categories: interconnect, memory, physical), shown here with one field per line:

| spec_category | field                    | value               |
|---------------|--------------------------|---------------------|
| interconnect  | interconnect_type        | PCIe                |
| interconnect  | nvlink_bandwidth (GB/s)  | 600                 |
| interconnect  | nvlink_generation        | 4                   |
| interconnect  | pcie_bandwidth (GB/s)    | 128                 |
| interconnect  | pcie_gen                 | 5                   |
| memory        | memory_bandwidth (GB/s)  | 3,900               |
| memory        | memory (GB)              | 94                  |
| memory        | memory_type              | HBM2e               |
| physical      | architecture             | Hopper              |
| physical      | form_factor              | PCIe_FHFL_dual_slot |
| physical      | mig_instances            | 7                   |
| physical      | process_node (nm)        | 4                   |
| physical      | transistors (billion)    | 80                  |
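For planning scripts, the three per-category rows above can be merged into a single record per GPU. A minimal Python sketch, using the sample values; field names and the 60 TFLOPS workload figure are illustrative assumptions, not part of the dataset:

```python
# Merge the per-category sample rows into one spec record for the H100 NVL.
# Field names are illustrative; values are taken from the sample rows above.
H100_NVL = {
    "architecture": "Hopper",
    "form_factor": "PCIe_FHFL_dual_slot",
    "interconnect_type": "PCIe",
    "memory_gb": 94,
    "memory_type": "HBM2e",
    "memory_bandwidth_gb_s": 3900,
    "mig_instances": 7,
    "nvlink_bandwidth_gb_s": 600,
    "nvlink_generation": 4,
    "pcie_bandwidth_gb_s": 128,
    "pcie_gen": 5,
    "process_node_nm": 4,
    "transistors_billion": 80,
}

def bytes_per_flop_budget(mem_bw_gb_s: float, tflops: float) -> float:
    """Bytes of memory traffic available per FLOP at peak (roofline-style ratio)."""
    return mem_bw_gb_s / (tflops * 1000)

# Example: a hypothetical 60 TFLOPS workload against the 3,900 GB/s sample figure.
ratio = bytes_per_flop_budget(H100_NVL["memory_bandwidth_gb_s"], 60)
print(f"{ratio:.3f} bytes/FLOP")  # 0.065 bytes/FLOP
```

A ratio like this is a quick first check of whether a kernel will be compute-bound or bandwidth-bound on a given part.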

This is a sample of the full GPU accelerator specs dataset, which covers 350+ additional models.
