Design
| Cooling type | Passive |
| Number of slots | 2 |
Processor
| Peak floating point performance (single precision) | 12 TFLOPS |
| CUDA | Yes |
| Graphics processor family | NVIDIA |
Power
| Supplementary power connectors | 1x 8-pin |
Memory
| Discrete graphics adapter memory | 24 GB |
| Graphics adapter memory type | GDDR5 |
| Memory bus | 384-bit |
| Memory bandwidth (max) | 346 GB/s |
Ports & interfaces
| Interface type | PCI Express 3.0 x16 |
Additionally
| Graphics controller | Tesla P40 |
NVIDIA Tesla P40, 3840 CUDA cores, 384-bit memory bus, 24 GB GDDR5, PCI Express 3.0 x16, 8-pin CPU power connector, 968 g
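The memory size, bus width, and bandwidth listed above can be checked against what the CUDA runtime reports. The following is a minimal device-query sketch (the file name and the conventional DDR bandwidth estimate are illustrative assumptions, not part of the datasheet); on a Tesla P40 it should report compute capability 6.1, roughly 24 GB of global memory, and a 384-bit memory interface.

```cuda
// probe_device.cu -- minimal CUDA device-query sketch (hypothetical file name).
// Prints the device properties that correspond to the datasheet entries above.
// Build (assuming a CUDA toolkit is installed): nvcc -arch=sm_61 probe_device.cu -o probe_device
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable device found.\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // A Tesla P40 (Pascal GP102) should report compute capability 6.1,
        // ~24 GB of global memory and a 384-bit memory interface.
        std::printf("Device %d: %s (compute %d.%d)\n", dev, prop.name, prop.major, prop.minor);
        std::printf("  Global memory      : %.1f GB\n", prop.totalGlobalMem / 1.0e9);
        std::printf("  Memory bus width   : %d-bit\n", prop.memoryBusWidth);
        // Conventional DDR estimate: 2 transfers/clock * memory clock (kHz) * bus width in bytes.
        std::printf("  Peak mem bandwidth : %.0f GB/s\n",
                    2.0 * prop.memoryClockRate * (prop.memoryBusWidth / 8) / 1.0e6);
        std::printf("  Multiprocessors    : %d\n", prop.multiProcessorCount);
    }
    return 0;
}
```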
<b>INFERENCING ACCELERATOR</b>
In the new era of AI and intelligent machines, deep learning is shaping our world like no other computing model in history. GPUs powered by the revolutionary NVIDIA Pascal™ architecture provide the computational engine for the new era of artificial intelligence, enabling amazing user experiences by accelerating deep learning applications at scale.
The NVIDIA Tesla P40 is purpose-built to deliver maximum throughput for deep learning deployment. With 47 TOPS (Tera-Operations Per Second) of inference performance with INT8 operations per GPU, a single server with 8 Tesla P40s delivers the performance of over 140 CPU servers. As models increase in accuracy and complexity, CPUs are no longer capable of delivering an interactive user experience. The Tesla P40 delivers over 30X lower latency than a CPU for real-time responsiveness in even the most complex models.
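The INT8 throughput referred to above comes from Pascal's packed 8-bit integer instructions, exposed in CUDA C++ through the __dp4a intrinsic on compute capability 6.1 devices such as the P40; in deployment this path is normally driven by an inference library rather than hand-written kernels. The sketch below is only an illustration of that primitive, with hypothetical file and kernel names.

```cuda
// int8_dot.cu -- hedged sketch of the INT8 path the P40 accelerates.
// __dp4a (compute capability 6.1+) computes a dot product of four packed
// 8-bit integers and adds it to a 32-bit accumulator in one instruction.
// Build: nvcc -arch=sm_61 int8_dot.cu -o int8_dot
#include <cstdio>
#include <cuda_runtime.h>

// Each thread handles one group of 4 int8 values from a and b;
// partial sums are combined with atomicAdd.
__global__ void dot_int8(const int* a, const int* b, int n_packed, int* out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n_packed) {
        int partial = __dp4a(a[i], b[i], 0);  // 4 int8 multiply-adds at once
        atomicAdd(out, partial);
    }
}

int main() {
    const int n_packed = 256;                 // 256 * 4 = 1024 int8 elements
    int h_a[n_packed], h_b[n_packed];
    for (int i = 0; i < n_packed; ++i) {
        // Pack four int8 values of 1 into each 32-bit word.
        h_a[i] = 0x01010101;
        h_b[i] = 0x01010101;
    }

    int *d_a, *d_b, *d_out;
    cudaMalloc(&d_a, sizeof(h_a));
    cudaMalloc(&d_b, sizeof(h_b));
    cudaMalloc(&d_out, sizeof(int));
    cudaMemcpy(d_a, h_a, sizeof(h_a), cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, sizeof(h_b), cudaMemcpyHostToDevice);
    cudaMemset(d_out, 0, sizeof(int));

    dot_int8<<<(n_packed + 127) / 128, 128>>>(d_a, d_b, n_packed, d_out);

    int result = 0;
    cudaMemcpy(&result, d_out, sizeof(int), cudaMemcpyDeviceToHost);
    std::printf("INT8 dot product = %d (expected 1024)\n", result);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_out);
    return 0;
}
```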
<b>REAL-TIME INFERENCE</b>
The Tesla P40 delivers up to 30X faster inference performance with INT8 operations, providing real-time responsiveness for even the most complex deep learning models.
PNY provides unsurpassed service and commitment to its professional graphics customers, offering a 3-year warranty, pre- and post-sales support, dedicated Quadro Field Application Engineers, and direct tech support hotlines.