
Roughly seven months ago, Nvidia launched the Tesla V100, a $10,000 Volta GV100 GPU for the supercomputing and HPC markets. This massive card was intended for specialized markets thanks to its enormous die size (815 mm²) and massive transistor count (21.1B). In return, it offered specialized tensor cores, 16GB of HBM2, and theoretical performance in certain workloads far above anything Nvidia had shipped before.

Today, at the Conference on Neural Information Processing Systems (NIPS), Jen-Hsun Huang surprise-launched the same GV100 architecture in a traditional GPU form factor. Just as the GTX 1080 Ti is a trimmed-down version of the Nvidia Titan Xp, this new Titan V slims down in some spots compared with the full-fat Tesla V100. Memory clocks are very slightly lower (1.7Gbps transfer rate, down from 1.75Gbps), and the GPU uses three HBM2 stacks on a 3,072-bit memory bus, rather than the 4,096-bit interface the Tesla V100 offers. It also carries just 12GB of HBM2, rather than the 16GB on the Tesla V100.
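For a sense of what the narrower bus means in practice, here's a quick back-of-the-envelope sketch of peak memory bandwidth using the transfer rates quoted above (actual figures depend on Nvidia's final shipping clocks):

```python
# Back-of-the-envelope HBM2 bandwidth comparison, using the transfer rates quoted above.
def hbm2_bandwidth_gb_s(transfer_rate_gbps, bus_width_bits):
    """Peak bandwidth in GB/s: per-pin rate (Gbps) * bus width (bits) / 8 bits per byte."""
    return transfer_rate_gbps * bus_width_bits / 8

titan_v = hbm2_bandwidth_gb_s(1.70, 3072)      # three 1,024-bit HBM2 stacks
tesla_v100 = hbm2_bandwidth_gb_s(1.75, 4096)   # four 1,024-bit HBM2 stacks

print(f"Titan V:    ~{titan_v:.1f} GB/s")      # ~652.8 GB/s
print(f"Tesla V100: ~{tesla_v100:.1f} GB/s")   # ~896.0 GB/s
```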

Nvidia is trumpeting the Titan V as offering 110 TFLOPS of horsepower, "9x that of its predecessor." We don't doubt that's literally true, but it's not a comparison to the single-precision or double-precision math we've typically referenced when discussing GPU FLOPS performance. It's a reference to Volta's performance improvement in deep learning tasks over Pascal, and it's derived by comparing Volta's tensor performance (with its specialized tensor cores) against Pascal's 32-bit single-precision throughput. That doesn't mean the comparison is invalid, since Volta has specialized tensor cores for training neural networks and Pascal doesn't, but it's a little like comparing AES encryption performance on a CPU with specialized hardware for that workload against another CPU that lacks it. Is the comparison fair? Absolutely. But it's fair only for the specific metric being measured, as opposed to being a generalizable test case for the rate of improvement one GPU offers over the other.
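To see where a figure like "9x" plausibly comes from, here's a minimal sketch that divides the Titan V's claimed tensor throughput by the FP32 throughput of a Pascal-based Titan Xp; the ~12.1 TFLOPS baseline is our assumption, since Nvidia doesn't spell out which predecessor or metric it's using:

```python
# Rough sanity check on the "9x" claim, assuming the Pascal-based Titan Xp as the baseline.
titan_v_tensor_tflops = 110.0   # Nvidia's quoted tensor-core throughput for the Titan V
titan_xp_fp32_tflops = 12.1     # approximate FP32 throughput of the Titan Xp (assumption)

speedup = titan_v_tensor_tflops / titan_xp_fp32_tflops
print(f"Tensor TFLOPS vs. Pascal FP32 TFLOPS: ~{speedup:.1f}x")   # ~9.1x
```

The point stands either way: the numerator and denominator measure different kinds of math, which is exactly why the ratio isn't a general performance multiplier.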

Nvidia's stated goal with the Titan V is to offer researchers who don't have access to supercomputers or big iron HPC installations the same access to cutting-edge hardware performance that their compatriots enjoy. While the GPU is priced at an eye-popping $3,000 (relative to the regular PC market), that's not very much compared with the typical cost of an HPC server.

"Our vision for Volta was to push the outer limits of high performance computing and AI. We broke new ground with its new processor architecture, instructions, numerical formats, retentiveness architecture and processor links," said Nvidia CEO Jen-Hsun Huang. "With TITAN V, we are putting Volta into the hands of researchers and scientists all over the earth. I can't look to see their breakthrough discoveries."

You can purchase a Titan V at the Nvidia store right now, but we can't honestly say we'd recommend one for anyone not working in these fields. Despite the "Titan" brand having originally debuted as a high-end consumer card with some specialized scientific compute capabilities, this GPU family has been moving back towards its scientific computing research roots for a number of years. While Nvidia will obviously support the GPU with a unified driver model, I wouldn't hold my breath waiting for fine-tuned gaming support from a GPU family that so few customers will ever have access to.