Ever wanted a cheap entry into the world of EPYC? AMD is announcing its first DDR5 entry-level processors, based on the consumer AM5 platform but with suppor...
Not really. It’s just a normal Zen 4 CPU with some server features like ECC memory support.
The biggest downfall of these chips is that they have the same 28 PCIe lanes as any consumer-grade Zen 4 CPU. Quite the difference between that and the cheapest EPYC CPUs outside the 4000 series.
You’re going to run into some serious I/O shortages if you try to fit a 10GbE card, an HBA for storage, a graphics card or two, and some NVMe drives.
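To put rough numbers on that kind of build (the lane counts below are typical examples for each card class, not taken from any specific product), the shortfall is easy to tally:

```python
# Rough CPU lane budget for the build described above.
# Lane counts are typical per-device examples, not tied to
# any specific product.
cpu_lanes = 28  # usable PCIe lanes from an AM5 Zen 4 CPU

devices = {
    "GPU (x16 slot)": 16,
    "storage HBA (x8)": 8,
    "10GbE NIC (x4)": 4,
    "2x NVMe SSD (x4 each)": 8,
}

needed = sum(devices.values())
print(f"Lanes needed: {needed}, available: {cpu_lanes}, "
      f"shortfall: {needed - cpu_lanes}")
# -> Lanes needed: 36, available: 28, shortfall: 8
```

Even before counting chipset-attached devices, that hypothetical loadout is 8 lanes over budget, which is why boards end up bifurcating the x16 slot or hanging devices off the chipset.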
I’m pretty sure all the Zen CPUs have supported ECC memory, ever since the first generation of them.
A lot of the Zen-based APUs don’t support ECC. The next question is whether it supports registered or unbuffered modules - everything up to Threadripper uses unbuffered DIMMs (though I think some of the Pro parts take registered), while EPYCs use registered.
That makes a huge difference in how much RAM you can add, and how much you pay for it.
Not officially. Only Ryzen Pro has official (unbuffered) ECC support, and not many motherboards support it either. AFAIK Threadripper doesn’t officially support it either, but I could be wrong.
Many boards support ECC even when not mentioned. Most ASUS and ASRock boards do for example.
The newest Threadripper 7000 series not only support ECC, but require it to work. It only accepts DDR5 registered ECC RAM.
Consumer CPUs were lacking ECC reporting, so you never really knew if ECC was correcting errors or not.
No, even the earliest Ryzens support ECC reporting just fine, given the motherboard used supports it, which many boards do. Only the non-Pro APUs do not support ECC.
Probably best to look at it as a competitor to a Xeon D system, rather than any full-size server.
We use a few of the Dell XR4000 at work (https://www.dell.com/en-us/shop/ipovw/poweredge-xr4510c), as they’re small, low power, and able to be mounted in a 2-post comms rack.
Our CPU of choice there is the Xeon D-2776NT (https://www.intel.com/content/www/us/en/products/sku/226239/intel-xeon-d2776nt-processor-25m-cache-up-to-3-20-ghz/specifications.html), which features 16 cores @ 2.1GHz, 32 PCIe 4.0 lanes, and is rated 117W.
The ostensible top of this range, the 4584PX, also with 16 cores but at double the clock speed, 28 PCIe 5.0 lanes, and 120W, seems like it would be a perfectly fine drop-in replacement for that.
(I will note one significant difference: the Xeon comes with a built-in NIC, in this case the 4-port 25Gb “E823-C”, saving you space and PCIe lanes in your system.)
As more PCIe 5.0 expansion options land, I’d expect the need for large quantities of PCIe to diminish somewhat. A 100Gb NIC would only require a x4 port, and even a x8 HBA could push more than 15GB/s. Indeed, if you compare the total possible PCIe throughput of those CPUs, 32x 4.0 is ~63GB/s, while 28x 5.0 gets you ~110GB/s.
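Those aggregate figures check out under the simple per-lane model (raw GT/s with 128b/130b encoding overhead, ignoring packet/protocol overhead):

```python
# Sanity-check the aggregate PCIe bandwidth figures quoted above.
# Uses raw signaling rate and 128b/130b encoding only; real-world
# throughput is a bit lower due to protocol overhead.
def lane_gb_s(gt_per_s: float) -> float:
    """Usable GB/s per lane, per direction."""
    return gt_per_s * (128 / 130) / 8  # bits -> bytes

gen4_total = 32 * lane_gb_s(16.0)  # 32 lanes of PCIe 4.0 (16 GT/s)
gen5_total = 28 * lane_gb_s(32.0)  # 28 lanes of PCIe 5.0 (32 GT/s)
print(f"32x 4.0: ~{gen4_total:.0f} GB/s, 28x 5.0: ~{gen5_total:.0f} GB/s")
# -> 32x 4.0: ~63 GB/s, 28x 5.0: ~110 GB/s
```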
Unfortunately, we’re now at the mercy of what server designs these wind up in. I have to say though, I fully expect it is going to be smaller designs marketed as “edge” compute, like that Dell system.
We’ll see if they even make them. I can’t imagine there’s a huge customer base who really needs to cram all that I/O through only two or four lanes. Why make these ubiquitous cards more expensive if most of the customers buying them are not short on PCIe lanes? So far, most devices making use of 5.0 are graphics and storage. I’ve not seen any hint of someone making a SAS or 10GbE card that uses 5.0 and fewer lanes. Most cards for sale today still use 3.0, let alone 4.0.
I might as well just drop the cash on a real EPYC CPU with 128 lanes if I’m only going to be able to buy cutting-edge expansion cards that companies may or may not be motivated to make.
Agreed the PCIe layout is bad. My problem is the x16 slot.
I would prefer 8 slots/onboard devices with PCIe 5.0 x2 from the CPU, plus 2 slots of PCIe 4.0 x2 from the chipset. That would probably be adequate I/O. I’m aiming for 2x25 Gbit performance.
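Under the same simple 128b/130b model as above (ignoring protocol overhead), a single PCIe 5.0 x2 link does clear that 2x25 Gbit target with headroom:

```python
# Can a PCIe 5.0 x2 link feed a dual-port 25GbE NIC?
# Simple model: raw 32 GT/s per lane with 128b/130b encoding,
# protocol overhead ignored.
lane_gb_s = 32.0 * (128 / 130) / 8   # ~3.94 GB/s per 5.0 lane
link_gbit_s = 2 * lane_gb_s * 8      # x2 link, back in Gbit/s
needed_gbit_s = 2 * 25               # two 25GbE ports, line rate

print(f"x2 link: ~{link_gbit_s:.0f} Gbit/s, NIC needs {needed_gbit_s} Gbit/s")
# -> x2 link: ~63 Gbit/s, NIC needs 50 Gbit/s
```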