http://www.nvidia.com/object/deep-learning-system.html
GPUs : 8x Tesla P100 (GP100 GPU)
TFLOPS (GPU FP16 / CPU FP32) : 170/3
GPU Memory : 16 GB per GPU
CPU : Dual 16-core Intel Xeon E5-2698 v3 2.3 GHz
NVIDIA CUDA® Cores : 28672
System Memory : 512 GB 2133 MHz DDR4 LRDIMM
Storage : 4x 1.92 TB SSD RAID 0
Network : Dual 10 GbE, 4x InfiniBand EDR
Software : Ubuntu Server Linux OS, DGX-1 Recommended GPU Driver
System Weight : 134 lbs
System Dimensions : 866 D x 444 W x 131 H (mm)
Packing Dimensions : 1180 D x 730 W x 284 H (mm)
Maximum Power Requirements : 3200W
Operating Temperature Range : 10 - 35 °C
Note : 134 pounds is about 61 kg.
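As a quick sanity check on the spec sheet's arithmetic, here is a minimal sketch (the conversion factor and per-GPU figures are standard arithmetic, not from the listing itself):

```python
# Sanity-check a few derived numbers from the DGX-1 spec sheet above.
LB_TO_KG = 0.45359237  # exact definition of the pound in kilograms

gpus = 8
total_cuda_cores = 28672
cores_per_gpu = total_cuda_cores // gpus  # 3584, the Tesla P100's core count

weight_kg = 134 * LB_TO_KG  # ~60.8 kg, so "about 61 kg" checks out

fp16_tflops_per_gpu = 170 / gpus  # ~21.25 TFLOPS half precision per GPU

print(cores_per_gpu, round(weight_kg, 1), fp16_tflops_per_gpu)
```

The numbers are internally consistent: 28672 cores divided by 8 GPUs gives exactly the P100's 3584 CUDA cores, and 170 FP16 TFLOPS across 8 GPUs matches the P100's rated half-precision peak.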
3200W power requirement ... *whistles*.
My Seasonic PSU definitely won't be able to power this.
There might be applications for this in GPU mining, depending on the price of the hardware.
This kind of machine is only good for scientific computing with double-precision operations. It is too expensive for mining.