Intel’s recent update on its 10nm Sapphire Rapids Xeon CPU lineup has also sparked some new rumors concerning the launch schedule, performance, and efficiency of the next-gen server portfolio.
Intel’s Next-Gen Sapphire Rapids-SP Xeon CPU Lineup Rumors Talk Performance, Power, Launch Schedule & Competitive Nature Against AMD’s EPYC
In the most recent update, Intel’s CVP & GM of Xeon & Memory Group, Lisa Spelman, announced that the next-generation Sapphire Rapids lineup will be headed for production in Q1 2022 & a Q2 2022 ramp. The CPUs will be officially launched for the public in the first half of 2022 but Intel will be delivering early engineering samples to its partners to validate and optimize their workloads in advance.
Demand for Sapphire Rapids continues to grow as customers learn more about the benefits of the platform.
Given the breadth of enhancements in Sapphire Rapids, we are incorporating additional validation time prior to the production release, which will streamline the deployment process for our customers and partners. Based on this, we now expect Sapphire Rapids to be in production in the first quarter of 2022, with ramp beginning in the second quarter of 2022. via Intel
Prior to this, Intel also announced that it will be integrating HBM within its Sapphire Rapids Xeon CPUs to tackle the huge memory bandwidth requirements of future HPC & datacenter workloads. We reported on that over here so do read for more details on how HBM comes into play on the next-generation chips. For this post, we will be taking a look at two stories (rumors as far as I can tell) from AdoredTV and Videocardz.
Intel Sapphire Rapids-SP Xeon CPU Roadmap – Non-HBM In 2022, HBM in 2023?
First up, let’s talk about the timeline and Intel’s current roadmap. We know that Intel had internally projected a 2021 launch for its Sapphire Rapids Xeon lineup. A 1H 2022 launch means that the CPU lineup has been pushed back by a bit. It was originally going to launch a few quarters after Ice Lake-SP, by the end of 2021, but that didn’t happen. Ice Lake-SP itself was immensely delayed, and from what we can tell, Intel had to use Cascade Lake-SP and Cooper Lake-SP as intermediate platforms to fill in the gaps.
The rumor from AdoredTV is that the initial Sapphire Rapids-SP chips are going to be non-HBM parts with up to 56 cores that can be configured in up to 8-socket (8S) platforms. The HBM2 parts might slip to 2023 and will be limited to dual-socket configurations. This will be about the same time Intel plans to launch its next-generation Emerald Rapids-SP Xeon CPU platform, which will be a slight refresh of Sapphire Rapids-SP with a slightly higher core count, faster clocks, process optimizations, & faster memory support.
The problem here is that while Intel will be on 10nm Enhanced SuperFin with up to 56/64 cores, AMD will be releasing its first Zen 4 server CPUs based on a 5nm process node with a far higher number of cores. For comparison, Genoa is AVX-512 compliant with 96 cores. AdoredTV has pointed to 128 cores, but there has not been much evidence to support this claim. Even 96 cores is 71% more than Sapphire Rapids-SP's 56, and we haven't even started on the performance rumors.
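The core-count gap quoted above is easy to verify; here is a quick sanity check using only the rumored figures from the article (none of these SKUs are confirmed):

```python
# Rumored top-SKU core counts (unconfirmed figures from the article)
sapphire_rapids_cores = 56   # rumored top Sapphire Rapids-SP part
genoa_cores = 96             # rumored AMD EPYC Genoa part

# Relative advantage of Genoa over Sapphire Rapids-SP
advantage = (genoa_cores - sapphire_rapids_cores) / sapphire_rapids_cores
print(f"Genoa core advantage: {advantage:.0%}")  # → 71%
```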
Intel Sapphire Rapids-SP Xeon CPU Performance – Perf Leadership vs Milan But Genoa Is Its Main Threat
So let’s start with the performance numbers, with the first rumored comparison pitting the Intel Sapphire Rapids-SP 56-core SKU against a 64-core AMD EPYC Milan SKU. It looks like Intel will claim performance leadership for a short period of time, as represented in the chart, specifically in AI, networking, and HPC workloads. The lineup will also see marginal gains in floating-point (FPU) and integer performance, but there’s another metric we should mention here, and that’s power draw. The 10nm 56-core chip has a rated TDP of 350W versus AMD’s 280W. That’s 25% more power, but in return, you are getting around 25-30% gains on average in the workloads seen below.
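The power-versus-performance trade-off above can be sketched with simple arithmetic. Note that the 25-30% gain is a rumored average, not a benchmark result, so the perf-per-watt figure below is purely illustrative:

```python
# Rumored TDPs: 56-core Sapphire Rapids-SP vs 64-core EPYC Milan
spr_tdp, milan_tdp = 350, 280

# How much extra power the Intel part draws relative to Milan
extra_power = spr_tdp / milan_tdp - 1
print(f"Extra power draw: {extra_power:.0%}")  # → 25%

# Midpoint of the rumored 25-30% average performance gain (assumption)
perf_gain = 0.275
perf_per_watt = (1 + perf_gain) / (1 + extra_power)
print(f"Relative perf-per-watt vs Milan: ~{perf_per_watt:.2f}x")
```

In other words, if the rumored numbers hold, the extra performance roughly cancels out the extra power, leaving perf-per-watt close to parity.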
Intel Sapphire Rapids Xeon CPUs versus AMD EPYC Milan & EPYC Genoa. (Source: AdoredTV)
However, as noted in the roadmap, the main competition of Intel will be Genoa by the time it’s out in the market. Here, the same 56 core 350W part is compared to an AMD Genoa chip with 96 cores and a 350W TDP. The non-HBM variant will retain its lead in AI and HPC workloads thanks to Intel’s advanced AVX suite while HBM variants will try to mitigate the floating-point difference. AMD EPYC Genoa will however mark a massive lead in almost all other categories and offer tremendous CPU multi-threaded performance with an advanced chiplet based design.
Sapphire Rapids SPR03-LC specifications
from an anonymous tip pic.twitter.com/uro8DDmB3s
— VideoCardz.com (@VideoCardz) June 30, 2021
Videocardz also managed to get its hands on an internal SPR03-LC node specification sheet for a dual-socket platform comprising two 26-core SKUs for a total of 52 cores. The node comes with 3.84 TB of NVM storage, a 100Gb/s networking switch, and 512 GB of DDR5-4800 memory. It looks like the specifications are derived from a very early sample given the low 2.70 GHz clocks; the power draw for the entire node under Linpack is measured at 799W.
Here’s Everything We Know About Intel’s 4th Gen Sapphire Rapids Xeon CPUs
The Sapphire Rapids-SP family will replace the Ice Lake-SP family and will be built entirely on the 10nm Enhanced SuperFin process node, which makes its formal debut later this year in the Alder Lake consumer family. From what we know so far, Intel’s Sapphire Rapids-SP lineup is expected to utilize the Golden Cove core architecture.
The Sapphire Rapids lineup will make use of 8-channel DDR5 memory with speeds of up to 4800 MHz and support PCIe Gen 5.0 on the Eagle Stream platform. The Eagle Stream platform will also introduce the LGA 4677 socket, replacing the LGA 4189 socket used by Intel’s Cedar Island & Whitley platforms, which house the Cooper Lake-SP and Ice Lake-SP processors, respectively. The Intel Sapphire Rapids-SP Xeon CPUs will also come with CXL 1.1 interconnect support, which will mark a huge milestone for the blue team in the server segment.
Coming to the configurations, the top part is slated to feature 56 cores with a TDP of 350W. What is interesting about this configuration is that it is listed as a low-bin split variant, which means that it will be using a tile or MCM design. The Sapphire Rapids-SP Xeon CPU will be composed of a 4-tile layout with each tile featuring 14 cores.
It looks like AMD will still hold the upper hand in the number of cores & threads offered per CPU, with its Genoa chips pushing up to 96 cores, whereas Intel’s Xeon chips would max out at 56 cores unless Intel makes SKUs with a higher number of tiles. Intel will, however, have a wider and more expandable platform that can support up to 8 CPUs at once, so unless Genoa offers more than 2P (dual-socket) configurations, Intel will have the lead in total cores per server, with an 8S configuration packing up to 448 cores and 896 threads.
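The per-system totals above follow directly from the socket and core counts; a small sketch of the math (socket limits are from the rumors, and 2 threads per core assumes SMT/Hyper-Threading on both parts):

```python
# Per-system core/thread totals from socket count and cores per CPU.
# threads_per_core=2 assumes SMT/Hyper-Threading is enabled.
def system_totals(sockets, cores_per_cpu, threads_per_core=2):
    cores = sockets * cores_per_cpu
    return cores, cores * threads_per_core

print(system_totals(8, 56))   # 8S Sapphire Rapids-SP → (448, 896)
print(system_totals(2, 96))   # 2P EPYC Genoa        → (192, 384)
```

So even with fewer cores per CPU, an 8S Intel box would more than double the cores of a 2P Genoa system, if Genoa really does stop at two sockets.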
The Intel Sapphire Rapids CPUs will carry 4 HBM2 stacks for a maximum of 64 GB of memory (16 GB per stack). The total bandwidth from these stacks will be 1 TB/s. According to leaked details from AdoredTV, HBM2 and DDR5 will be able to work together in flat, caching/2LM, and hybrid modes. Memory sitting so close to the die would do absolute wonders for certain workloads that require huge data sets, essentially acting as an L4 cache.
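The HBM2 figures quoted above are internally consistent; a quick check (the even per-stack bandwidth split is an assumption on my part, not something the leak specifies):

```python
# Leaked HBM2 configuration: 4 stacks, 16 GB each, ~1 TB/s combined
stacks = 4
gb_per_stack = 16

total_capacity_gb = stacks * gb_per_stack   # matches the quoted 64 GB
# Assuming bandwidth is split evenly across stacks (not confirmed)
per_stack_bandwidth_gbs = 1000 / stacks     # ~250 GB/s per stack

print(total_capacity_gb, per_stack_bandwidth_gbs)  # → 64 250.0
```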
AMD has been taking quite a few wins away from Intel, as seen in the recent Top500 charts from ISC ’21. Intel will really have to up its game over the next couple of years to fend off the AMD EPYC threat.
Intel Xeon SP Families:
|Family Branding|Skylake-SP|Cascade Lake-SP/AP|Cooper Lake-SP|Ice Lake-SP|Sapphire Rapids|Emerald Rapids|Granite Rapids|Diamond Rapids|
|---|---|---|---|---|---|---|---|---|
|Process Node|14nm+|14nm++|14nm++|10nm+|10nm Enhanced SuperFin?|10nm Enhanced SuperFin?|7nm?|sub-7nm?|
|Platform Name|Intel Purley|Intel Purley|Intel Cedar Island|Intel Whitley|Intel Eagle Stream|Intel Eagle Stream|Intel Mountain Stream / Intel Birch Stream|Intel Mountain Stream / Intel Birch Stream|
|MCP (Multi-Chip Package) SKUs|No|Yes|No|No|Yes|TBD|TBD (Possibly Yes)|TBD (Possibly Yes)|
|Socket|LGA 3647|LGA 3647|LGA 4189|LGA 4189|LGA 4677|LGA 4677|LGA 4677|TBD|
|Max Core Count|Up To 28|Up To 28|Up To 28|Up To 40|Up To 56?|TBD|TBD|TBD|
|Max Thread Count|Up To 56|Up To 56|Up To 56|Up To 80|Up To 112?|TBD|TBD|TBD|
|Max L3 Cache|38.5 MB L3|38.5 MB L3|38.5 MB L3|60 MB L3|TBD|TBD|TBD|TBD|
|Memory Support|6-Channel DDR4-2666|6-Channel DDR4-2933|Up To 6-Channel DDR4-3200|Up To 8-Channel DDR4-3200|Up To 8-Channel DDR5-4800|Up To 8-Channel DDR5-5200?|TBD|TBD|
|PCIe Gen Support|PCIe 3.0 (48 Lanes)|PCIe 3.0 (48 Lanes)|PCIe 3.0 (48 Lanes)|PCIe 4.0 (64 Lanes)|PCIe 5.0 (80 Lanes)|PCIe 5.0|PCIe 6.0?|PCIe 6.0?|
|TDP Range|140W-205W|165W-205W|150W-250W|105W-270W|Up To 350W?|TBD|TBD|TBD|
|3D XPoint Optane DIMM|N/A|Apache Pass|Barlow Pass|Barlow Pass|Crow Pass|Crow Pass?|Donahue Pass?|Donahue Pass?|
|Competition|AMD EPYC Naples 14nm|AMD EPYC Rome 7nm|AMD EPYC Rome 7nm|AMD EPYC Milan 7nm+|AMD EPYC Genoa ~5nm|AMD Next-Gen EPYC (Post Genoa)|AMD Next-Gen EPYC (Post Genoa)|AMD Next-Gen EPYC (Post Genoa)|