The Tesla P40 is powered by the Pascal architecture and delivers over 47 TOPS of deep learning inference performance. A single server with 8 Tesla P40s can replace up to 140 CPU-only servers for deep learning workloads, delivering substantially higher throughput at lower acquisition cost. TensorRT, included with the NVIDIA Deep Learning SDK, and the DeepStream SDK help customers seamlessly leverage inference capabilities such as INT8 operations and video transcoding.
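As a rough illustration of how INT8 inference is enabled, the sketch below uses the current TensorRT Python API (which differs from the SDK versions that originally shipped alongside the P40) to build an INT8 engine from an ONNX model. The file names "model.onnx" and "model_int8.plan" are placeholders, and a real deployment would also supply an INT8 calibrator or per-tensor dynamic ranges; this is a minimal outline, not a complete recipe.

```python
import tensorrt as trt

ONNX_PATH = "model.onnx"  # placeholder: any network exported to ONNX

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# Explicit-batch network definition, populated by parsing the ONNX file.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)
with open(ONNX_PATH, "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError("Failed to parse the ONNX model")

# Request INT8 kernels; TensorRT falls back to higher precision where needed.
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)
# config.int8_calibrator = MyCalibrator(...)  # required in practice for accuracy

# Build and serialize the optimized engine for deployment.
engine_bytes = builder.build_serialized_network(network, config)
with open("model_int8.plan", "wb") as f:
    f.write(engine_bytes)
```

The serialized plan can then be loaded by a TensorRT runtime (or by DeepStream for video pipelines) to run low-precision inference on the GPU.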