GPU Accelerator Specs: NVIDIA_A100_40GB_PCIe
GPU and AI accelerator specifications for datacenter and HPC planning — compute performance (FP64/FP32/TF32/FP16/BF16/FP8/FP4/INT8 TFLOPS/TOPS), memory capacity and bandwidth, thermal design power, interconnect bandwidth (NVLink, PCIe, Infinity Fabric), and physical/process node data. Covers NVIDIA A100, H100, H200, B200, B100 and AMD Instinct MI300X.
**Compute performance**

| FP32 (TFLOPS) | TF32 Tensor Core (TFLOPS) | FP64 (TFLOPS) | FP64 Tensor Core (TFLOPS) | INT8 Tensor Core (TOPS) |
|---|---|---|---|---|
| 19.5 | 156 | 9.7 | 19.5 | 624 |

**Memory**

| Capacity (GB) | Type | Bandwidth (GB/s) |
|---|---|---|
| 40 | HBM2 | 1,555 |

**Interconnect**

| Type | NVLink Gen | NVLink Bandwidth (GB/s) | PCIe Gen | PCIe Bandwidth (GB/s) |
|---|---|---|---|---|
| NVLink + PCIe | 3 | 600 | 4 | 64 |
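For HPC planning, one common use of these spec-sheet numbers is a simple roofline "ridge point": the arithmetic intensity (FLOP/byte) above which a kernel is compute-bound rather than bandwidth-bound. A minimal sketch using the peak figures from the table above (the function and variable names are illustrative, and peak datasheet numbers, not measured throughput, are assumed):

```python
# Roofline ridge-point sketch for the NVIDIA A100 40GB PCIe,
# using the spec-sheet peaks listed above.

PEAK_FP32_TFLOPS = 19.5      # FP32 (non-tensor-core) peak, TFLOPS
PEAK_TF32_TC_TFLOPS = 156.0  # TF32 tensor-core peak, TFLOPS
MEM_BW_GBS = 1555.0          # HBM2 bandwidth, GB/s

def ridge_point(peak_tflops: float, bw_gbs: float) -> float:
    """Arithmetic intensity (FLOP/byte) where peak compute and peak
    memory bandwidth balance under a simple roofline model."""
    return (peak_tflops * 1e12) / (bw_gbs * 1e9)

fp32_ridge = ridge_point(PEAK_FP32_TFLOPS, MEM_BW_GBS)    # ~12.5 FLOP/byte
tf32_ridge = ridge_point(PEAK_TF32_TC_TFLOPS, MEM_BW_GBS) # ~100 FLOP/byte
print(f"FP32 ridge point:    {fp32_ridge:.1f} FLOP/byte")
print(f"TF32 TC ridge point: {tf32_ridge:.1f} FLOP/byte")
```

Kernels with lower arithmetic intensity than the ridge point are limited by the 1,555 GB/s memory bandwidth rather than by compute, which is why tensor-core workloads need much higher data reuse to reach their peak.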