Processor
Processor frequency | 1150 MHz
Graphics processor family | NVIDIA
CUDA | Y
Memory
Graphics adapter memory type | GDDR5
Memory bandwidth (max) | 144 GB/s
Data width | 384 bit
Memory clock speed | 1500 MHz
Discrete graphics adapter memory | 6 GB
Performance
DirectX version | 11
Highest shader model version | 5.0
PhysX | N
Integrated TV tuner | N
HDCP | Y
System requirements
Linux operating systems supported | Y
Minimum processor | Intel Pentium 4
Other
Graphics processor | Tesla C2070
Dual DVI (Digital Visual Interface) | Y
NVIDIA Tesla C2070 - PCIe x16 Gen2, 6GB GDDR5, 448 CUDA Cores, 225W
The NVIDIA Tesla™ C2050 and C2070 by PNY fuel the transition to parallel computing and bring the performance of a small cluster to the desktop. Based on the next-generation CUDA architecture, the 20-series family of Tesla GPUs supports many “must have” features for technical and enterprise computing, including C++ support, ECC memory for uncompromised accuracy and scalability, and a 7X increase in double-precision performance compared to Tesla 10-series GPUs.
Compared to the latest quad-core CPUs, the Tesla C2050 and C2070 by PNY deliver equivalent supercomputing performance at 1/10th the cost and 1/20th the power consumption.
FEATURES
GPUs powered by the Fermi-generation of the CUDA architecture
Delivers cluster performance at 1/10th the cost and 1/20th the power of CPU-only systems based on the latest quad core CPUs.
448 CUDA Cores
Delivers up to 515 Gigaflops of double-precision peak performance in each GPU, enabling a single workstation to deliver a Teraflop or more of performance. Single precision peak performance is over a Teraflop per GPU.
ECC Memory
Meets a critical requirement for computing accuracy and reliability for workstations. Offers protection of data in memory to enhance data integrity and reliability for applications. Register files, L1/L2 caches, shared memory, and DRAM all are ECC protected.
Desktop Cluster Performance
Solves large-scale problems faster than a small server cluster on a single workstation with multiple GPUs.
6GB of GDDR5 memory per GPU
Maximizes performance and reduces data transfers by keeping larger data sets in local memory that is attached directly to the GPU.
NVIDIA Parallel DataCache™
Accelerates algorithms such as physics solvers, ray-tracing, and sparse matrix multiplication where data addresses are not known beforehand. This includes a configurable L1 cache per Streaming Multiprocessor block and a unified L2 cache for all of the processor cores.
NVIDIA GigaThread™ Engine
Maximizes throughput with context switching that is 10X faster than the previous architecture, concurrent kernel execution, and improved thread block scheduling.
Asynchronous Transfer
Turbocharges system performance by transferring data over the PCIe bus while the computing cores are crunching other data. Even applications with heavy data-transfer requirements, such as seismic processing, can maximize the computing efficiency by transferring data to local memory before it is needed.
CUDA programming environment with broad support of programming languages and APIs
Choose C, C++, OpenCL, DirectCompute, or Fortran to express application parallelism and take advantage of the “Fermi” GPU’s innovative architecture. NVIDIA Parallel Nsight™ tool is available for Microsoft Visual Studio developers.
High-Speed PCIe Gen 2.0 Data Transfer
Maximizes bandwidth between the host system and the Tesla processors. Enables Tesla systems to work with virtually any PCIe-compliant host system with an open PCIe x16 slot.