Steve Scott, SVP & CTO

In 1976, Seymour Cray, recognized as the “Father of Supercomputing” and the founder of Cray (NASDAQ: CRAY), designed the signature Cray-1 vector supercomputer, which was built from integrated circuits (ICs) rather than transistors and boasted a world-record processing speed of 170 megaflops. Four decades later, in 2016, Cray unveiled the Cray XC50, the company’s fastest supercomputer ever, with a peak performance of one petaflop in a single cabinet. Over that span the processing speeds of Cray’s supercomputers have changed dramatically, but the company’s impact on computing remains just as profound.
Today, with more than four decades of experience, Cray designs, develops, and services the world’s most advanced supercomputers. As one of the giants of the HPC (High Performance Computing) industry, the company delivers unrivaled performance, efficiency, and scalability to customers through its comprehensive portfolio of supercomputers and big data storage solutions. Cray’s “Adaptive Supercomputing” vision is focused on delivering innovative next-generation products that integrate diverse state-of-the-art processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for improved performance.
The Cray XC series of products runs on the company’s most powerful technology to date. Designed for extreme scalability and application performance, the XC series can model large datasets and simulate massive systems. Cray’s XC series supercomputers are engineered to handle the most challenging workloads requiring sustained multi-petaflop performance. They incorporate the Cray Aries high performance network interconnect for low latency and scalable global bandwidth, as well as the latest Intel Xeon processors, Intel Xeon Phi coprocessors, and NVIDIA Tesla GPU accelerators. The XC series embodies Cray’s commitment to performance supercomputing with an architecture that delivers extreme scalability and sustained performance.
Furthermore, to meet the need for nimble, reliable, and cost-effective cluster systems, Cray developed its Cray CS cluster supercomputer series. These cluster systems are built on industry standards, highly customizable, and expressly designed to handle the broadest range of medium- to large-scale simulation and data analytics workloads.
All CS components have been carefully selected, optimized, and integrated to create a powerful computing environment. Flexible node configurations, featuring the latest processor and interconnect technologies, allow organizations to tailor a system to their specific needs—from an all-purpose cluster system to one suited for shared-memory, large-memory, or accelerator-based tasks.
Recently, at the 2016 Supercomputing Conference in Salt Lake City, UT, Cray unveiled new deep learning capabilities across its line of supercomputing and cluster systems. With validated deep learning toolkits and the most scalable supercomputing systems in the industry, Cray customers can now run deep learning workloads to their fullest potential—at scale on a Cray supercomputer. Cray has validated several deep learning toolkits on Cray XC and Cray CS-Storm systems to simplify the transition to running deep learning workloads at scale. These toolkits include the Microsoft Cognitive Toolkit (previously CNTK), TensorFlow, NVIDIA DIGITS (Deep Learning GPU Training System), Caffe, Torch, and MXNet.
“The convergence of supercomputing and big data analytics is happening now, and the rise of deep learning algorithms is the evidence of how customers are increasingly using HPC techniques to accelerate analytics applications,” says Steve Scott, SVP and CTO, Cray. “We believe that with our Cray Programming Environment, validated toolkits, and the latest processing technologies, we have the right combination of hardware and software expertise. And, this expertise can help our customers efficiently execute deep learning workloads now and in the future.”