NVIDIA A100 40GB
10€ (excl. TAX)

P/N: 900-21001-0100-030
Delivery is made within 14-21 days
Warranty 1 year
Free 14 day returns
The prices are presented for both corporate clients and individuals.
In stock
NVIDIA A100 40GB. Ampere accelerator with 40GB HBM2e for AI inference & HPC. Direct import, 1–3y warranty, fast delivery, compliant docs, any payment option.
| Specification | Value |
|---|---|
| Weight | 1 kg |
| Dimensions (cm) | – |
| Country of manufacture | Taiwan |
| Manufacturer's warranty (years) | 1 |
| Model | NVIDIA A100 |
| Cache L2 (MB) | 40 |
| Process technology (nm) | 7 |
| Memory type | HBM2e |
| Graphics Processing Unit (Chip) | GA100 |
| Number of CUDA cores | 6912 |
| Number of Tensor cores | 432 |
| GPU Frequency (MHz) | 765 |
| GPU Boost Frequency (MHz) | 1410 |
| Video memory size (GB) | 40 |
| Memory frequency (MHz) | 1215 |
| Memory bus width (bits) | 5120 |
| Memory Bandwidth (GB/s) | 1555 |
| Connection interface (PCIe) | PCIe 4.0 x16 |
| FP16 Tensor Core performance (TFLOPS) | 312 |
| TF32 Tensor Core performance (TFLOPS) | 156 |
| FP64 performance (TFLOPS) | 9.7 (19.5 Tensor Core) |
| Cooling type | Passive (server module) |
| Number of occupied slots (pcs) | 2 |
| Length (cm) | – |
| Width (cm) | – |
| Weight (kg) | 1 |
| Temperature range (°C) | 0–85 |
| NVLink Throughput (GB/s) | 600 |
| Multi-GPU support | Yes, via NVLink |
| Virtualization/MIG support | MIG (up to 7 instances) |
NVIDIA A100 40GB PCIe OEM is a professional accelerator based on the Ampere architecture — the benchmark for modern data centers and enterprise AI solutions. Featuring 40 GB of high-bandwidth HBM2 memory, it delivers an exceptional balance of power and efficiency, allowing organizations to scale compute infrastructure with flexibility and confidence.
This GPU is widely used across various industries — from machine learning training and inference to complex scientific simulations and industrial modeling. Unlike consumer graphics cards, the A100 is purpose-built for professional workloads, where precision, memory bandwidth, and enterprise-grade reliability are essential.
NVIDIA A100 40GB PCIe OEM is a universal accelerator suited to machine learning training and inference, scientific and industrial simulation, big data analytics, and multi-tenant cloud deployments via MIG.
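To make the training use case concrete, here is a minimal PyTorch mixed-precision sketch of the kind of workload the card is built for: FP16 matrix math runs on the Tensor Cores via autocast, and remaining FP32 matmuls are routed to TF32. The model, data, and hyperparameters are illustrative placeholders, not part of this listing, and the code falls back to CPU if no GPU is present.

```python
# Minimal mixed-precision training sketch for an Ampere GPU such as the A100.
# Model, data, and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
torch.backends.cuda.matmul.allow_tf32 = True   # route FP32 matmuls to TF32 Tensor Cores

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))
loss_fn = nn.CrossEntropyLoss()

for step in range(10):                                   # toy loop with random data
    x = torch.randn(256, 1024, device=device)
    y = torch.randint(0, 10, (256,), device=device)
    optimizer.zero_grad(set_to_none=True)
    with torch.autocast(device_type=device, dtype=torch.float16, enabled=(device == "cuda")):
        loss = loss_fn(model(x), y)                      # FP16 Tensor Core matmuls
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```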
The A100 40GB marked the beginning of the Ampere generation, setting new standards for enterprise AI acceleration. It’s designed for organizations requiring high training throughput and the ability to scale compute resources efficiently.
Compared with the 80 GB version, the A100 40GB PCIe targets workloads where ultra-high memory capacity isn’t required but bandwidth and Tensor Core power remain critical. With MIG technology, the GPU can be split into up to seven independent instances — ideal for cloud providers and distributed compute environments.
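If you need to confirm MIG partitioning from software rather than from nvidia-smi, a minimal sketch using NVML through the nvidia-ml-py (pynvml) bindings is shown below; the package and the exact call names are assumptions based on the public NVML API, and creating or destroying instances is normally an administrative task performed with nvidia-smi.

```python
# Query MIG mode and enumerate populated MIG devices via NVML.
# Assumes the nvidia-ml-py package (imported as pynvml) and an A100 whose
# MIG mode has already been enabled by an administrator; adjust index 0 as needed.
import pynvml

pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
    current_mode, pending_mode = pynvml.nvmlDeviceGetMigMode(gpu)
    print("MIG enabled:", current_mode == pynvml.NVML_DEVICE_MIG_ENABLE)

    max_mig = pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)   # up to 7 on the A100
    for i in range(max_mig):
        try:
            mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, i)
        except pynvml.NVMLError:
            continue                                        # slot not populated
        mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
        print(f"MIG instance {i}: {mem.total / 2**30:.1f} GiB")
finally:
    pynvml.nvmlShutdown()
```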
Relative to the previous Tesla V100 generation, the A100 delivers up to a 20× performance uplift in select AI workloads, along with significantly improved energy efficiency.
NVIDIA A100 40GB PCIe OEM is a trusted enterprise-class accelerator that combines the Ampere architecture, high-speed HBM2 memory, and powerful Tensor Cores. It unlocks new possibilities in AI development, big data analytics, and scientific computing — the ideal choice for organizations that demand performance, scalability, and reliability.