Intel Pushes Back at its Competition with Cascade Lake Xeons

NEW-PRODUCT ANALYSIS: The company’s Second Generation Xeon Scalable Processors address the processing demands of data-centric workloads such as AI, machine learning and 5G.


Intel’s rollout of the latest generation of its Xeon server processors comes at a time when its dominant position in the data center is being challenged like it hasn’t been in more than a decade.

At an event April 2 in San Francisco, Intel officials announced the much-anticipated launch of its “Cascade Lake” processors, the first of two releases of 14-nanometer silicon due from the giant chip maker this year as it prepares for the arrival of its 10nm processors in 2020.

At the same time, the company unveiled a new family of field-programmable gate arrays (FPGAs). The Agilex FPGAs can be customized for a wide range of applications, from analytics at the edge to virtual network functions (VNFs) to acceleration in the data center, and through their software programmability can adapt to the rapidly changing nature of modern compute environments that are dealing with such emerging workloads as artificial intelligence (AI) and machine learning.

The new Second Generation Xeon Scalable Processors and Agilex FPGAs come at a time when the amount of data being generated not only is growing at a rapid pace but also is increasingly being created outside of the data center, including at the edge and in the cloud. This is driving demand for technology that can not only collect, store, process and analyze data at the edge but also address the processing needs of modern applications like AI, machine learning, 5G and analytics.

Intel's Data-Centric Infrastructure

“The data-centric era is upon us with data being generated at a pace of 1.7MB per second for every person on earth,” Lisa Spelman, vice president and general manager of Xeon products and data center marketing at Intel, wrote in a blog post. “This data represents a monumental opportunity for our customers to drive new societal insights, business opportunity, and redefine our world. Intel long ago recognized this opportunity and underwent a strategic shift in silicon innovation toward a data-centric infrastructure that will move, store and process data from core data centers to the intelligent edge, and everywhere in between.”

The market opportunity is vast, according to Intel officials. Not only do they put the market for data-driven computing in the area of $300 billion, but they also note that only about 2 percent of all the data being generated is being analyzed, which means there is an opportunity to help enterprises get more out of the data they’re creating.

The Cascade Lake processors, which offer up to 56 cores, come with a broad array of capabilities designed to address the challenges in this data-centric era of the cloud, the internet of things (IoT), AI and machine learning, and edge computing. Those features also will give Intel more ammunition to push back against such competitors as Advanced Micro Devices, Arm and Nvidia, which see the rapidly changing computing environment and the stumbles Intel has had in recent years with its manufacturing processes as an opportunity to chip away at the larger rival’s share of the server chip market, which is north of 95 percent.

The new chips, which are available now, include Intel’s Deep Learning Boost (Intel DL Boost), which is designed to drive AI inference workloads, such as image recognition, object detection and image segmentation in the data center and at the edge. The chip maker has worked with partners to optimize such AI frameworks as TensorFlow, PyTorch, Caffe and MXNet for DL Boost.
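At the heart of the inference workloads DL Boost accelerates is quantized int8 arithmetic: its AVX-512 VNNI instructions fuse the int8 multiply-accumulate steps into a single operation with a 32-bit accumulator. As a rough illustration of what that arithmetic looks like (a NumPy sketch of the math, not Intel's implementation; the scale values and vectors are made up for the example):

```python
import numpy as np

def quantize(x, scale):
    """Map float values to int8 with a simple symmetric scale (illustrative)."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

# Toy activation and weight vectors, as a framework would hold them in float32.
act = np.array([0.5, -1.0, 0.25, 0.75], dtype=np.float32)
wgt = np.array([1.0, 0.5, -0.5, 2.0], dtype=np.float32)

a_scale, w_scale = 0.01, 0.02          # hypothetical quantization scales
a_q = quantize(act, a_scale)           # int8 activations
w_q = quantize(wgt, w_scale)           # int8 weights

# VNNI-style multiply-accumulate: int8 products summed into an int32
# accumulator, then rescaled back to float.
acc = np.dot(a_q.astype(np.int32), w_q.astype(np.int32))
result = acc * (a_scale * w_scale)

print(result)  # ≈ np.dot(act, wgt) == 1.375
```

Frameworks optimized for DL Boost perform this kind of quantized math with hardware instructions rather than Python loops, trading a small amount of precision for substantially higher inference throughput.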

Support for Optane Persistent Memory

They also include support for Intel’s Optane DC persistent memory—which Spelman said “creates a persistent memory tier that allows users to affordably scale memory capacity, enables data persistence in main memory rather than disks, and unleashes in-memory software”—as well as hardware-enhanced security features.
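The programming model Spelman describes, persistence in main memory rather than on disks, is typically exposed to applications through memory-mapped files (Intel's PMDK libraries build on that model). A minimal sketch of the idea using an ordinary memory-mapped file, with the caveat that a real persistent-memory deployment would use a DAX-mounted pmem device and cache-flush instructions rather than a plain file:

```python
import mmap
import os

PATH = "counter.dat"   # hypothetical backing file; on real hardware this would
                       # live on a DAX-mounted persistent-memory device
SIZE = 8               # one 64-bit counter

# Create the backing file on first run.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(b"\x00" * SIZE)

with open(PATH, "r+b") as f:
    mem = mmap.mmap(f.fileno(), SIZE)
    # Read and update the counter directly through memory: no serialization
    # layer, which is the load/store model persistent memory offers.
    count = int.from_bytes(mem[:SIZE], "little")
    mem[:SIZE] = (count + 1).to_bytes(SIZE, "little")
    mem.flush()        # on real pmem, a cache flush plus fence plays this role
    mem.close()

print("counter is now", count + 1)
```

Run the script twice and the counter keeps climbing: the data persists across processes without ever being written through a file API, which is the behavior Optane DC persistent memory provides at memory-tier capacities and near-DRAM speeds.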

In addition, Intel’s Xeon product rollout included custom processors such as the 56-core, 12-memory-channel Platinum 9200, aimed at high-performance computing (HPC) workloads, AI applications and high-density infrastructure environments. Intel also built network-optimized Xeons in conjunction with communications service providers for network-function virtualization (NFV) infrastructure and upcoming 5G networks. The Xeon D-1600 system-on-a-chip (SoC) is aimed at dense environments where space and power are limited.

Cascade Lake is important to Intel for a number of reasons, according to Patrick Moorhead, principal analyst with Moor Insights and Strategy.

“Cascade Lake introduces for the first time storage-class memory and special machine-learning capabilities built right into the chips,” Moorhead told eWEEK. “These can add tremendous capabilities to enterprises and CSPs [cloud service providers] and for that matter, the channel. I am also interested in how granular Intel is getting in specific workloads, accelerating them many different ways, be it with storage, special accelerators and even software. This is key with its continued fight with AMD, Nvidia and Arm upstarts.”

AMD Improving its Market Share with Epyc

AMD has muscled its way back into the data center on the strength of its Epyc processors, which are based on the company’s Zen microarchitecture. The company’s ability to follow through on promises with the first generation of the Epyc chips puts it in a good position in the market as it readies the release of the 7nm second generation of its server processors this year.

Likewise, Arm—and its chip-making partners like Marvell (which bought Cavium for $6 billion last year) with its ThunderX processors and startup Ampere (founded by former Intel executives)—is looking to make inroads into the data center. Nvidia, which a decade ago pushed its GPUs as accelerators in servers, is expanding the capabilities of its GPUs beyond simply being accelerators.

However, Intel also has been expanding its portfolio, not only enhancing its CPUs but also building out its FPGAs and ASICs (via its acquisition of eASIC in 2018), and it plans to develop its own discrete GPUs for release next year. Its lineup also includes Nervana, a neural network chip for machine-learning inference.

According to a product roadmap Intel unveiled last year, the Cascade Lake processors will be followed later this year by 14nm Cooper Lake chips, which will come with more features for accelerating AI applications and deep learning training as well as innovations focused on Optane.

Next year the company will release its Ice Lake chips, the first of its 10nm server processors, which will share a common platform with Cooper Lake.