  • tygrus - Monday, August 2, 2021 - link

    There are not many apps/tasks that make good use of more than 64c/128t. Some of those tasks are better suited to a GPU, accelerators, or a cluster of networked systems. Some tasks just love having terabytes of RAM, while others will be limited by data I/O (storage drives, network). YMMV. Have fun testing it, but it will be interesting to find people with real use cases who can afford this.
  • questionlp - Monday, August 2, 2021 - link

    Being capable of handling more than 64c/128t across two sockets doesn't mean that everyone will drop more than that on this board. You can install a higher-clocked 32c/64t processor in each socket, have a shedload of RAM and I/O for in-memory databases, software-defined (insert service here), or virtualization (or a combination of those).

    Install lower core count, even higher clock speed CPUs and you have yourself an immensely capable platform for per-core-licensed enterprise database solutions.
  • niva - Wednesday, August 4, 2021 - link

    You can, but why would you when you can get a system where you can slot a single 64C CPU?

    This is a board for the cases where 64C is clearly not enough, and it really caters to server use. For cases where fewer cores but more power per core are needed, there are simply better options.
  • questionlp - Wednesday, August 4, 2021 - link

    The fastest 64c/128t EPYC CPU right now has a base clock of 2.45 GHz (7763), while you can get 2.8 GHz with a 32c/64t 7543. Slap two of those on this board and you'll get a lot more CPU power than a single 64c/128t, plus double the number of memory channels.

    Another consideration is licensing. IIRC, VMware per-CPU licensing maxes out at 32c per socket, so to cover a single 64c EPYC you would end up with the same license count as a dual 32c EPYC configuration (a rough sketch of that math is below). Some customers were grandfathered in back in 2020, but that's no longer the case for new licenses. Again, you can scale better with a 2-CPU configuration than with 1 CPU.

    It all depends on the targeted workload. What may work for enterprise virtualization won't work for VPC providers, etc.
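    A rough sketch of that license arithmetic, assuming the 32-core-per-socket license cap described above (the helper function is hypothetical, for illustration only):

    ```python
    import math

    def vmware_licenses_needed(sockets: int, cores_per_socket: int, cores_per_license: int = 32) -> int:
        """Per-CPU license count, assuming each license covers up to 32 cores on one socket."""
        return sockets * math.ceil(cores_per_socket / cores_per_license)

    # One 64c EPYC 7763 vs. two 32c EPYC 7543s: same license count either way.
    print(vmware_licenses_needed(sockets=1, cores_per_socket=64))  # 2
    print(vmware_licenses_needed(sockets=2, cores_per_socket=32))  # 2
    ```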
  • linuxgeex - Monday, August 2, 2021 - link

    The primary use case is in-memory databases and/or high-volume, low-latency transaction services. The secondary use case is rack unit aggregation, which is usually accomplished with virtualization, i.e. you can fit 3x as many 80-thread high-performance VPSes into this as you can into any comparably priced Intel 2U rack slot, so this has huge value in a datacenter for anyone selling such VPSes in volume.
  • logoffon - Monday, August 2, 2021 - link

    Was there a revision 2.0 of this board?
  • Googer - Tuesday, August 3, 2021 - link

    There is a revision 3.0 of this board.
  • MirrorMax - Friday, August 27, 2021 - link

    No, and more importantly this is exactly the same board as rev 1.0 but with a Rome/Milan BIOS, so you can basically BIOS-update rev 1.0 boards to rev 3.0. Odd that the review doesn't touch on this.
  • BikeDude - Monday, August 2, 2021 - link

    The Task Manager screenshot reminded me of Norton Speed Disk; we now have more CPUs than we had disk clusters back in the day. :P
  • WaltC - Monday, August 2, 2021 - link

    In one place you say it took 2.5 minutes to POST, in another that it took 2.5 minutes to cold boot into Win10 Pro. I noticed you apparently used a SATA 3 connection for your boot drive, and I was reminded of booting Win7 from a SATA 3 7200rpm platter drive taking me 90-120 seconds to cold boot; in Win7, the more crowded your system was with 3rd-party apps and games, the longer it took to boot...;) (That's not the case with Win10/11, I'm glad to say, as with TBs of installed programs I still cold boot in ~12 secs from an NVMe OS partition.) Basically, servers are not expected to do much cold booting, as uptime is what most customers are interested in. I doubt the SATA 3 drive had much to do with the 2.5-minute cold-boot time, though; an NVMe drive might have shaved a few seconds off, but that's about it, imo.

    Interesting read! Enjoyed it. Yes, the server market is far and away different from the consumer markets.
  • Grayswean - Monday, August 2, 2021 - link

    256 threads, 1024 bits of memory bus -- resembles a low-end GPU of ~5 years ago.
  • Oxford Guy - Tuesday, August 3, 2021 - link

    What ‘low-end’ GPUs came with more than a 128-bit memory bus?
  • bananaforscale - Friday, August 6, 2021 - link

    You need HBM, or compute cards, to get to 1024 bits. The low end is a 64- to 128-bit bus, and consumer cards don't hit 1024.
  • Oxford Guy - Sunday, August 15, 2021 - link

    Consumer cards did ship with HBM, in 4096-bit (Fury X) and 2048-bit (AMD's HBM2 cards) form, as I recall. However, none of those were priced for the low end.
  • Threska - Monday, August 2, 2021 - link

    "In terms of power, we measured a peak power draw at full load with dual 280 W processors of 782 W."

    Looks like a new PSU is in order. Adding in things like a GPU might push things over the edge.
  • Threska - Monday, August 2, 2021 - link

    " It does include a TPM 2.0 header for users wishing to run the Windows 11 operating system, but users will need to purchase an additional module to use this function as it doesn't come included in the packaging."

    I assume Windows 11 doesn't use any on-chip TPM.

    https://semiaccurate.com/2017/06/22/amds-epyc-majo...
  • Mikewind Dale - Monday, August 2, 2021 - link

    Why did you measure long idle differently? I agree it's interesting to measure power consumption while turned off. But why conflate that measurement with other systems that are turned on with idling OSes?

    And that DPC latency looks terrible. I see several other EPYC systems in the chart that don't have anywhere near that bad latency. In fact, the lowest latency in the chart is achieved by an ASRock EPYC.
  • watersb - Monday, August 2, 2021 - link

    2 x $7500 = $15,000 for two EPYC processors
    16 x $3600 = $57,600 for 4TB RAM

    $1000 each for power supply, motherboard

    Throw in an EATX chassis I have lying around

    $75,000 before sales tax or storage.

    I'd have to run a dedicated 15-Amp circuit to my main breaker box, well within a 1500 Watt spec for a standard residential receptacle.

    Probably want to upgrade the UPS.

    $100k ought to do it.
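    A quick back-of-the-envelope check of that tally, assuming the 4 TB is 16 × 256 GB modules at roughly $3,600 each (prices are the estimates above, not quotes):

    ```python
    # Rough parts tally from the comment above; all prices are the commenter's estimates.
    cpus = 2 * 7500           # two EPYC processors
    ram = 16 * 3600           # 16 x 256 GB modules for ~4 TB
    psu_and_board = 2 * 1000  # power supply + motherboard
    chassis = 0               # E-ATX chassis already on hand

    total = cpus + ram + psu_and_board + chassis
    print(total)  # 74600 -- roughly $75,000 before tax, storage, or the UPS upgrade
    ```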
  • Mikewind Dale - Tuesday, August 3, 2021 - link

    Just run a 20 amp circuit. Most of the cost is labor anyway, not the wire. The difference between the cost of a 15A wire and a 20A wire is trivial.
  • jhh - Tuesday, August 3, 2021 - link

    A 15A 120V circuit will not do it in the US, as that circuit only supports 1440W of continuous service: 15A x 120V x 80% derating for continuous loads is 1440W. On top of that, if the UPS is recharging after a power outage, the power diverted to the battery has to come out of the circuit as well. Perhaps a 240V 15A circuit would work better. Otherwise, you would need one of those strange 20A plugs that use the sideways prong position of a 20A receptacle.
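    A minimal sketch of that derating arithmetic, assuming the 80% continuous-load factor cited above:

    ```python
    def continuous_capacity_watts(amps: float, volts: float, derate: float = 0.80) -> float:
        """Usable continuous power on a branch circuit, applying the 80% continuous-load derating."""
        return amps * volts * derate

    print(continuous_capacity_watts(15, 120))  # 1440 W -- short of the ~1500 W target
    print(continuous_capacity_watts(20, 120))  # 1920 W
    print(continuous_capacity_watts(15, 240))  # 2880 W
    ```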
  • watersb - Tuesday, August 3, 2021 - link

    Awesome, I always learn something here!

    The 20A receptacles aren't all that unusual; a good commercial-grade 20A is in regular stock at my local hardware store... and I live in a remote small town.

    Mikewind Dale's suggestion is sound: run a 20A circuit if you're putting anything new in. Just be certain you don't string it behind an older 15A breaker! It should be a home run from your receptacle direct to the panel. I don't know if an isolated ground specifically makes a difference, but it would likewise be a trivial cost.

    Does anyone make a 20A ATX power supply? They are more common in the data center, and one of my home rack PDUs showed up in the 20A version. (Then I got a Raspberry Pi and replaced two servers with my MacBook Pro M1, and the power delivery system looked a bit embarrassed. So of course it's time to buy more silly gear...)
  • Foeketijn - Wednesday, August 4, 2021 - link

    I never understood why the US never changed its voltage system. The reason the US still uses 110V split-phase is that, after the rest of the world went to 220V three-phase, the government found out it saved loads of copper, and the copper industry depended on selling that much copper.
    But nowadays the Chinese make the copper anyway.
    3600W from a normal fuse; 11kW from a normal three-phase outlet, if your house has three-phase.
    Having to be wary about fuses just isn't a European/Asian thing, nor should it be an American one.
  • mnemotronic - Tuesday, August 3, 2021 - link

    Server board? Please tell me it supports ECC memory.
  • Mikewind Dale - Wednesday, August 4, 2021 - link

    It supports RDIMM and LRDIMM. Although that's not the same as ECC, it's pretty much 100% correlated with ECC. I've never heard of a server board that supports RDIMM and LRDIMM but not ECC.

    Heck, most ThreadRipper non-Pro boards support ECC, and many Ryzen boards do. It would be unthinkable for this board not to support ECC.
  • Mikewind Dale - Wednesday, August 4, 2021 - link

    And just for comparison, Gigabyte's ThreadRipper Pro WRX80-SU8-IPMI board says "Support for UDIMM (ECC), RDIMM, 3DS RDIMM and LRDIMM memory modules". Notice that "ECC" is a qualifier for "UDIMM". It appears that for Gigabyte, ECC is only a question for UDIMM; for RDIMM and LRDIMM, ECC goes without saying.
  • bananaforscale - Friday, August 6, 2021 - link

    "Size E-ATX"

    There's "larger than ATX" but there's no E-ATX standard.
  • Axel_K. - Tuesday, August 10, 2021 - link

    When will this motherboard be available at online retailers? By googling I find that it is widely available only in Russia. When will it be available in the US and other countries?
  • MirrorMax - Saturday, August 28, 2021 - link

    A few errors in the article: EPYC Rome already had 280W CPUs with the 7H12, which was supported on the rev 1.0 board. And there's nothing new on the rev 3.0 board except a Rome/Milan BIOS instead of the Naples/Rome BIOS, from what I can see; I assume they couldn't fit all 3 generations into one BIOS. Rev 1.0 boards are also flashable to the Rome/Milan BIOS according to Gigabyte support. They are not too happy about customers BIOS-flashing unless they have issues, so I assume that's why they released this as a separate revision and not just a BIOS update.
