AMD: 96-Core EPYC ‘Genoa’ Due In 2022, 128-Core EPYC In 2023

AMD said it is on track to launch next-generation EPYC server CPUs next year that will feature up to 96 cores. The chipmaker will then ramp up the core count to 128 with a new slate of EPYC CPUs in 2023 that will be tailored for cloud customers.

The Santa Clara, Calif.-based chipmaker provided the update to its EPYC processor road map during its virtual AMD Data Center Premiere event Monday, where the company revealed the new EPYC Milan-X CPUs for technical computing and Instinct MI200 GPUs for high-performance computing and AI. It also announced new customer wins with Microsoft, SAP and Meta, the new name for Facebook’s parent company.

[Related: AMD: ‘Strong’ Enterprise Growth Led By Data Center, Laptop Sales]

The next generation of EPYC, code-named Genoa and set for production and launch next year, will be based on AMD’s new Zen 4 architecture and manufactured on an optimized version of TSMC’s 5-nanometer process, according to AMD CEO Lisa Su.

Most of those details were disclosed back at AMD’s Financial Analyst Day in March 2020, but Su revealed more at Monday’s event, saying that Genoa will feature up to 96 cores for general-purpose computing — 32 more than the current EPYC lineup. She also said that TSMC’s 5nm process will offer twice the density, twice the power efficiency and at least 25 percent better performance than the 7nm process used for AMD’s third-generation EPYC processors that launched earlier this year.

“When introduced, we expect Genoa will be the world’s highest-performance processor for general-purpose computing,” Su said. “It’s designed to excel across a broad range of data center workloads from enterprise to HPC to the public cloud.”

Su said Genoa will become AMD’s first server CPU to support DDR5 memory, PCIe 5.0 connectivity and CXL, an open standard for interconnect technology between CPUs, accelerators and memory that was introduced by Intel and has widespread industry support. Genoa, which is sampling with customers now, will also have “breakthrough memory expansion capabilities for data center applications,” she added.

Following Genoa, AMD will ship its first generation of EPYC processors that will be designed for cloud-native applications in the first half of 2023, Su revealed. Code-named Bergamo, the EPYC CPUs will feature up to 128 cores using a modified version of the Zen 4 architecture called Zen 4c. The processors will support the same features and instruction set as Genoa, and they will also be compatible with server platforms that are built for Genoa.

“It’s fully software compatible with Zen 4 with specific cloud enhancements, including a new density-optimized cache hierarchy to enable additional higher core count configurations for cloud-native workloads that benefit from maximum thread density,” Su said. “And it also includes significantly improved power efficiencies and breakthrough performance per socket.”

In a roundtable with journalists and analysts prior to AMD’s virtual event, top data center executive Forrest Norrod said the introduction of the Zen 4c architecture represents a “broadening” of the company’s portfolio that is necessitated by the increasing complexity of data center workloads. This means a greater need for “workload-specific products,” whether it’s the upcoming Milan X chips for technical computing or Bergamo for cloud computing.

“By doing so, [we] make sure that we can continue to offer leadership performance and leadership [total cost of ownership] in each one of those segments,” said Norrod, whose title is senior vice president and general manager of AMD’s Data Center and Embedded Solutions Group.

Mark Papermaster, AMD’s CTO, said the increased power efficiency of Bergamo’s Zen 4c architecture is what allowed the company to fit more cores in the same footprint as the general-purpose Genoa CPUs. He added that Bergamo will offer “breakthrough socket-level throughput performance” that will be higher than Genoa and any other offerings on the market when it comes out.

“We think that by making this easy to adopt for software, for platform partners and for customers, it gives an additional choice for customers that’s easy for them to deploy,” Papermaster said.

Dominic Daninger, vice president of engineering at Nor-Tech, a Burnsville, Minn.-based HPC system integrator, told CRN that his company appreciates AMD’s “workload-specific” approach with new products like Milan-X because it benefits workloads that are used by many customers.

“The expanded L3 cache [in Milan-X] could be of real interest to us in particular simulation modeling niches,” he said. “It could help AMD to make those kinds of modifications or alterations.”

Daninger said he has also seen an uptick in customer interest in AMD’s GPUs over Nvidia’s for certain applications, though that interest predates AMD’s new Instinct MI200 GPU, which the chipmaker says is faster than Nvidia’s A100 across various HPC and AI workloads.

“[AMD’s GPUs prior to the MI200 series] oftentimes do not provide quite the same performance that you can see from Nvidia, but their cost is low enough that we’re starting to see some traction for simulation and modeling,” he said.

But while Nor-Tech is seeing more demand for AMD-based HPC clusters, “the biggest problem is trying to get their parts,” Daninger said, which isn’t unique to AMD in the semiconductor industry.

“They’ve got a huge amount of back orders,” he said.
