AMD Reaffirms EPYC Bergamo CPUs In 1H 2023, Instinct MI300 APUs In 2H 2023

AMD has reaffirmed the launch plans for its next-generation EPYC Bergamo CPUs and Instinct MI300 APUs, both of which launch this year.

AMD EPYC Bergamo CPUs & Instinct MI300 APUs To Power Next-Gen Data Centers This Year

AMD already beat Intel to market with its EPYC Genoa CPUs, which launched months ahead of the Xeon Sapphire Rapids chips. For 2023, AMD is planning four brand-new data-center products: Genoa-X, Bergamo, Siena, and Instinct MI300. During its recent Q4 2022 earnings call, the company once again confirmed that its EPYC Bergamo CPUs will launch in 1H 2023, followed by the Instinct MI300 APUs in 2H 2023.

AMD Instinct MI300 In 2H 2023 – Powering 2+ Exaflops El Capitan Supercomputer

The AMD Instinct MI300 will be a multi-chip and multi-IP Instinct accelerator that not only features the next-gen CDNA 3 GPU cores but is also equipped with the next-generation Zen 4 CPU cores.

The latest specifications unveiled for the AMD Instinct MI300 accelerator confirm that this exascale APU is going to be a monster of a chiplet design. The chip will encompass several 5nm 3D chiplet packages, all combining to house an insane 146 billion transistors spanning various core IPs, memory interfaces, interconnects, and much more. The CDNA 3 architecture is the fundamental DNA of the Instinct MI300, but the APU also comes with a total of 24 Zen 4 data-center CPU cores and 128 GB of next-generation HBM3 memory running on a truly mind-blowing 8192-bit wide bus.
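To put that 8192-bit bus in perspective, a back-of-the-envelope peak-bandwidth estimate can be sketched from the figure above. Note that the 5.2 Gbps per-pin data rate used here is an assumption drawn from the HBM3 specification's range, not a number AMD has confirmed for the MI300:

```python
# Rough peak-bandwidth estimate for the MI300's 8192-bit HBM3 interface.
# The per-pin data rate (5.2 Gbps) is an assumption based on the HBM3
# spec, not a figure AMD has confirmed for this product.
bus_width_bits = 8192        # stated MI300 memory bus width
pin_rate_gbps = 5.2          # assumed HBM3 per-pin transfer rate
peak_gb_per_s = bus_width_bits * pin_rate_gbps / 8  # bits -> bytes
print(f"Estimated peak bandwidth: {peak_gb_per_s / 1000:.2f} TB/s")
```

At those assumed figures, the interface would top out around 5.3 TB/s of raw bandwidth, shared between the CPU and GPU under the unified memory design.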

AMD will be utilizing both 5nm and 6nm process nodes for its Instinct MI300 'CDNA 3' APUs. The chip will be outfitted with the next generation of Infinity Cache and feature the 4th Gen Infinity architecture, which enables CXL 3.0 ecosystem support. The Instinct MI300 accelerator will rock a unified memory APU architecture and new math formats, allowing for a massive 5x performance-per-watt uplift over CDNA 2. AMD is also projecting over 8x the AI performance versus the CDNA 2-based Instinct MI250X accelerators. CDNA 3's UMAA connects the CPU and GPU to a single unified HBM memory pool, eliminating redundant memory copies while delivering low TCO.
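For a rough sense of scale, that 8x projection can be applied to the MI250X's published peak FP16 throughput of 383 TFLOPs. The multiplication below is purely illustrative, since AMD has not published MI300 throughput figures:

```python
# Illustrative scaling of AMD's ">8x AI performance" projection.
# 383 TFLOPs is the MI250X's published peak FP16 figure; the MI300
# number derived here is a projection, not a confirmed spec.
mi250x_fp16_tflops = 383
projected_mi300_tflops = mi250x_fp16_tflops * 8
print(f"Projected MI300 AI throughput: >{projected_mi300_tflops / 1000:.2f} PFLOPs FP16")
```

That would land the MI300 north of 3 PFLOPs of FP16 compute per package, though the real uplift will depend heavily on the new math formats and on which precision the 8x claim is measured at.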

In terms of when – we’ve talked before about sort of our Data Center GPU ambitions and the opportunity there. We see it as a large opportunity. As we go into the second half of the year and launch MI300, sort of the first user of MI300 will be the supercomputers or El Capitan, but we’re working closely with some large cloud vendors as well to qualify MI300 in AI workloads. And we should expect that to be more of a meaningful contributor in 2024. So lots of focus on just a huge opportunity, lots of investments in software as well to bring the ecosystem with us.

AMD CEO, Lisa Su (Q4 2022 Earnings Call)

AMD EPYC Bergamo In 1H 2023 – Topping Up The Core Count To 128 With Zen 4C

The AMD EPYC Bergamo chips will feature up to 128 cores and will take aim at Intel's HBM-powered Xeon chips as well as the high-core-count ARM-based server chips from the likes of Amazon and Google. Both Genoa and Bergamo utilize the same SP5 socket; the main difference is that Genoa is optimized for higher clocks while Bergamo is optimized for higher-throughput workloads.

Bergamo will launch in the first half of the year. We are on track for the Bergamo launch, and you’ll see that become a larger contributor in the second half. So as we think about the Zen 4 ramp and the crossover to our Zen 3 ramp, it should be towards the end of the year, sort of in the fourth quarter, that you would see a crossover of sort of Zen 4 versus Zen 3, if that helps you.

AMD CEO, Lisa Su (Q4 2022 Earnings Call)

AMD's EPYC Bergamo CPUs will arrive in the first half of 2023. They run the same software as Genoa and share the same Zen 4 instruction set, but the compacted Zen 4C core occupies roughly half the area of a standard Zen 4 core. The CPUs are specifically positioned to compete against the likes of AWS's Graviton CPUs and other ARM-based solutions, where peak frequency isn't a requirement but throughput across many cores is. One example workload for Bergamo would be Java, where the extra cores can definitely come in handy. Following Bergamo will be the TCO-optimized Siena lineup for the SP6 platform, which will play a crucial role in expanding AMD's TAM in the server segment.

AMD's EPYC & Instinct chips are expected to push the company's server market share to 30%, and possibly beyond, by the end of this year. The company has a strong roadmap laid out for the server market segment, and we can't wait to see how things evolve in the coming quarters.

AMD EPYC CPU Families:

| Family Name | AMD EPYC Venice | AMD EPYC Turin | AMD EPYC Siena | AMD EPYC Bergamo | AMD EPYC Genoa-X | AMD EPYC Genoa | AMD EPYC Milan-X | AMD EPYC Milan | AMD EPYC Rome | AMD EPYC Naples |
|---|---|---|---|---|---|---|---|---|---|---|
| Family Branding | EPYC 7007? | EPYC 7006? | EPYC 7004? | EPYC 7005? | EPYC 7004? | EPYC 7004? | EPYC 7003X? | EPYC 7003 | EPYC 7002 | EPYC 7001 |
| Family Launch | 2025+ | 2024-2025? | 2023 | 2023 | 2023 | 2022 | 2022 | 2021 | 2019 | 2017 |
| CPU Architecture | Zen 6? | Zen 5 | Zen 4 | Zen 4C | Zen 4 V-Cache | Zen 4 | Zen 3 | Zen 3 | Zen 2 | Zen 1 |
| Process Node | TBD | 3nm TSMC? | 5nm TSMC | 4nm TSMC | 5nm TSMC | 5nm TSMC | 7nm TSMC | 7nm TSMC | 7nm TSMC | 14nm GloFo |
| Platform Name | TBD | SP5 / SP6 | SP6 | SP5 | SP5 | SP5 | SP3 | SP3 | SP3 | SP3 |
| Socket | TBD | LGA 6096 (SP5) / LGA 4844 (SP6) | LGA 4844 | LGA 6096 | LGA 6096 | LGA 6096 | LGA 4094 | LGA 4094 | LGA 4094 | LGA 4094 |
| Max Core Count | 384? | 256 | 64 | 128 | 96 | 96 | 64 | 64 | 64 | 32 |
| Max Thread Count | 768? | 512 | 128 | 256 | 192 | 192 | 128 | 128 | 128 | 64 |
| Max L3 Cache | TBD | TBD | 256 MB? | TBD | 1152 MB? | 384 MB? | 768 MB? | 256 MB | 256 MB | 64 MB |
| Chiplet Design | TBD | TBD | 8 CCDs (1 CCX per CCD) + 1 IOD | 12 CCDs (1 CCX per CCD) + 1 IOD | 12 CCDs (1 CCX per CCD) + 1 IOD | 12 CCDs (1 CCX per CCD) + 1 IOD | 8 CCDs with 3D V-Cache (1 CCX per CCD) + 1 IOD | 8 CCDs (1 CCX per CCD) + 1 IOD | 8 CCDs (2 CCXs per CCD) + 1 IOD | 4 CCDs (2 CCXs per CCD) |
| Memory Support | TBD | DDR5-6000? | DDR5-5200 | DDR5-5600? | DDR5-5200 | DDR5-5200 | DDR4-3200 | DDR4-3200 | DDR4-3200 | DDR4-2666 |
| Memory Channels | TBD | 12-Channel (SP5) / 6-Channel (SP6) | 6-Channel | 12-Channel | 12-Channel | 12-Channel | 8-Channel | 8-Channel | 8-Channel | 8-Channel |
| PCIe Gen Support | TBD | TBD | 96 Gen 5 | 160 Gen 5 | 160 Gen 5 | 160 Gen 5 | 128 Gen 4 | 128 Gen 4 | 128 Gen 4 | 64 Gen 3 |
| TDP Range | TBD | 480W (cTDP 600W) | 70-225W | 320W (cTDP 400W) | 200W (cTDP 400W) | 200W (cTDP 400W) | 280W | 280W | 280W | 200W |
