Description: NVIDIA A100 40GB HBM2 PCIe 4.0 GPU

This is an NVIDIA A100 40GB SXM4 GPU converted to a PCIe card: an NVIDIA Ampere GPU with 40 GB of HBM2 memory.

System Interface: PCI Express 4.0 x16
TDP: 250 W
Model: Tesla A100 P1001B
MPN: 699-21001-0200-400 / 900-21001-0000-000

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale to power the world's highest-performing elastic data centers for AI, data analytics, and HPC. Powered by the NVIDIA Ampere architecture, the A100 is the engine of the NVIDIA data center platform. The A100 provides up to 20X higher performance than the prior generation and can be partitioned into seven GPU instances to adjust dynamically to shifting demands.

Graphics Processor
GPU Name: GA100
Architecture: Ampere
Process Size: 7 nm

Clock Speeds
GPU Clock: 1095 MHz
Boost Clock: 1410 MHz
Memory Clock: 1215 MHz

Power connectors and headers: One CPU 8-pin auxiliary power connector

Memory
Memory Size: 40 GB
Memory Type: HBM2
Memory Bus: 5120-bit
Bandwidth: 1,555 GB/s

Board Design
Slot Width: IGP
TDP: 250 W
Outputs: No outputs

Render Config
Shading Units: 6912
TMUs: 432
ROPs: 160
SM Count: 108
Tensor Cores: 432

Theoretical Performance
FP64: 9.7 TFLOPS
FP64 Tensor Core: 19.5 TFLOPS
FP32: 19.5 TFLOPS
Tensor Float 32 (TF32): 156 TFLOPS | 312 TFLOPS*
BFLOAT16 Tensor Core: 312 TFLOPS | 624 TFLOPS*
FP16 Tensor Core: 312 TFLOPS | 624 TFLOPS*
INT8 Tensor Core: 624 TOPS | 1,248 TOPS*
* Second figure is peak throughput with structured sparsity

Please verify compatibility before placing an order.
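The headline FP32 and bandwidth figures follow directly from the component specs in the listing. As a minimal sketch (assuming the standard peak-throughput formulas: 2 FLOPs per core per cycle for fused multiply-add, and double data rate for HBM2), you can reproduce them:

```python
# Sanity-check the listed A100 40GB figures from its component specs.
# All input values are taken from the listing above.

SHADING_UNITS = 6912        # CUDA cores
BOOST_CLOCK_MHZ = 1410
MEMORY_CLOCK_MHZ = 1215     # HBM2 transfers twice per clock (DDR)
BUS_WIDTH_BITS = 5120

# Peak FP32: cores x 2 FLOPs (one fused multiply-add) x boost clock
fp32_tflops = SHADING_UNITS * 2 * BOOST_CLOCK_MHZ * 1e6 / 1e12
print(f"FP32: {fp32_tflops:.1f} TFLOPS")       # ~19.5 TFLOPS, as listed

# Bandwidth: memory clock x 2 (DDR) x bus width in bytes
bandwidth_gbs = MEMORY_CLOCK_MHZ * 1e6 * 2 * (BUS_WIDTH_BITS / 8) / 1e9
print(f"Bandwidth: {bandwidth_gbs:.0f} GB/s")  # ~1555 GB/s, as listed
```

The Tensor Core figures scale the same way from the 432 Tensor Cores, with the second number in each pair reflecting structured sparsity.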
Price: 4750 USD
Location: San Francisco, California
End Time: 2024-12-12T21:12:49.000Z
Shipping Cost: N/A
Item Specifics
Returns: Not accepted
Brand: NVIDIA
Chipset Manufacturer: NVIDIA
Memory Size: 40 GB
APIs: DirectX, OpenCL, CUDA, Vulkan
Power Cable Requirement: 8-Pin PCI-E
MPN: 699-21001-0200-400, 900-21001-0000-000
Compatible Slot: PCI Express 4.0 x16
Cooling Component Included: Heatsink only
Memory Type: HBM2
Chipset/GPU Model: A100