BIOS Overview

As with most server motherboards, the BIOS interface is the old-style black/blue/white on grey: an Aptio utility built on an American Megatrends (AMI) base. There are a number of options here that would often be hidden on a regular consumer motherboard. We've picked out a few of the highlights for this review.

The main entry point is the Main screen, which states the BIOS version and build date, as well as the memory installed, but not a lot else. Typically we also prefer to see the installed CPUs listed here, if only for a quick visual check when entering the BIOS.
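
For admins who want to confirm the firmware level without rebooting into setup, the same details are exposed to the OS. A minimal sketch, assuming a Linux host with the standard sysfs DMI interface:

```python
from pathlib import Path

# DMI identifiers the kernel exports; assumes the standard Linux sysfs layout.
DMI = Path("/sys/class/dmi/id")

for field in ("board_vendor", "board_name", "bios_version", "bios_date"):
    node = DMI / field
    value = node.read_text().strip() if node.exists() else "unavailable"
    print(f"{field}: {value}")
```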

The screen with most of the functional options is Advanced, as shown here. There are sub-menus for most of the functional elements on the board, including Boot, Security Processor, IO, CPU/NorthBridge (DRAM), USB, SATA, Networking, and even a RAMDisk option.

The PSP menu shows the hierarchy and firmware versions for AMD's Platform Security Processor.

For the CPU configuration, we still haven't seen which CPUs are installed, but here users can enable or disable simultaneous multi-threading, set core performance (fixed frequency or fixed power), and adjust C-state control, Core Complex control, and other features like the hardware prefetchers (some software works better with these disabled, depending on how it is written).
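
Several of these toggles have OS-visible counterparts. As a point of reference, here is a minimal sketch of checking (and, with root, changing) SMT state on a Linux host, assuming a kernel recent enough (4.19+) to expose the sysfs SMT control interface:

```python
from pathlib import Path

SMT_CONTROL = Path("/sys/devices/system/cpu/smt/control")

# Reports "on", "off", "forceoff", or "notsupported".
print("SMT state:", SMT_CONTROL.read_text().strip())

# Writing "off" here (as root) is the runtime equivalent of the BIOS
# toggle; the BIOS setting is simply the persistent version.
# SMT_CONTROL.write_text("off")
```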

For the Core Control, users can select how many cores per CCX should be enabled. The full L3 of the chiplet remains available, so this can be used to optimize software that benefits from more L3 per core (if you didn't buy a cheaper EPYC to begin with).
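
To see the effect on a running system, a short sketch that walks the sysfs cache topology on Linux; we assume cache index3 corresponds to the L3, which holds on EPYC:

```python
from pathlib import Path

# Walk the sysfs cache topology; index3 is assumed to be the L3.
# With cores disabled per CCX in the BIOS, each L3 instance keeps
# its size but is shared by fewer cores.
seen = set()
for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*"),
                  key=lambda p: int(p.name[3:])):
    l3 = cpu / "cache" / "index3"
    if not l3.exists():
        continue
    sharers = (l3 / "shared_cpu_list").read_text().strip()
    if sharers not in seen:
        seen.add(sharers)
        size = (l3 / "size").read_text().strip()
        print(f"L3 of {size} shared by CPUs {sharers}")
```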

Here we finally get to the CPU information. Our EPYC 7F52 had SMT disabled, and was reported as running at 700 MHz while sitting in the BIOS. The chip has a nominal operating voltage of 1.1 volts.
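
The low clock is normal, as firmware setup runs with minimal power management. Once an OS is up, the live clocks can be read back per core; a sketch assuming a Linux host with a cpufreq driver loaded:

```python
from pathlib import Path

# cpufreq reports per-core clocks in kHz; values reflect the driver's
# most recent sample rather than an instantaneous hardware reading.
for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*"),
                  key=lambda p: int(p.name[3:])):
    freq = cpu / "cpufreq" / "scaling_cur_freq"
    if freq.exists():
        print(f"{cpu.name}: {int(freq.read_text()) // 1000} MHz")
```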

The North Bridge configuration is where we find some of the IO options as well as the memory settings. Included here is the determinism control (for when QoS is required), as well as cTDP options for processor models that support them.

Users looking to run high-end GPU compute will need to enable Above 4G Decoding in the PCIe sub-menu. How the PCIe devices and slots are handled, including the onboard video, can also be managed here.
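
One way to confirm the setting took effect is to look for PCIe BARs mapped above the 4 GB boundary once the OS is up. A minimal sketch for Linux, reading each device's sysfs resource file:

```python
from pathlib import Path

FOUR_GB = 1 << 32

# Each line of a device's "resource" file is "start end flags" in hex;
# a BAR starting at or above 4 GB implies 4G decoding is enabled.
for dev in Path("/sys/bus/pci/devices").iterdir():
    for line in (dev / "resource").read_text().splitlines():
        start, end, _flags = (int(v, 16) for v in line.split())
        if start >= FOUR_GB:
            print(f"{dev.name}: BAR at {start:#x}..{end:#x}")
```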

This board also supports RAMDisk operation; as with any RAMDisk, the data is lost when power is lost.
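
The BIOS feature carves the disk out of system memory before the OS loads; for comparison, a tmpfs mount is the rough software-side equivalent on Linux. A sketch, with /mnt/ramdisk as a hypothetical mount point (root required):

```python
import subprocess
from pathlib import Path

# Hypothetical mount point; requires root. Everything written here
# lives in RAM and vanishes on reboot or power loss.
mnt = Path("/mnt/ramdisk")
mnt.mkdir(parents=True, exist_ok=True)
subprocess.run(["mount", "-t", "tmpfs", "-o", "size=4G", "tmpfs", str(mnt)],
               check=True)
(mnt / "scratch.bin").write_bytes(b"\0" * (64 << 20))  # 64 MiB scratch file
```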

Boot options are extensive, with the board supporting boot from just about anything. Here we disabled Legacy boot due to some detection issues with our USB devices.
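
Whether a given OS install actually came up via UEFI or legacy/CSM can be verified after the fact; on Linux the kernel only populates /sys/firmware/efi when booted via UEFI:

```python
from pathlib import Path

# /sys/firmware/efi exists only for UEFI boots; its absence
# indicates a legacy (CSM) boot.
mode = "UEFI" if Path("/sys/firmware/efi").exists() else "Legacy/CSM"
print("Firmware boot mode:", mode)
```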

The ever-critical Boot Override is also present. Users will need to press F11 to get to the boot menu during POST, or they can enter the BIOS and select a device here.

Gallery: H11DSi BIOS

Comments

  • bryanlarsen - Wednesday, May 13, 2020 - link

    > the second CPU is underutilized.

    This is common in server boards. It means that if you don't populate the second CPU, most of your peripherals and slots are still usable.
  • The_Assimilator - Wednesday, May 13, 2020 - link

    It's almost like technology doesn't exist for the board to detect when a second CPU is present, and if so, switch some of the PCIe slots to use the lanes from that CPU instead. Since Supermicro apparently doesn't have access to this holy grail, they could have opted for a less advanced piece of manual technology known as "jumpers" and/or "DIP switches".

    This incredible lack of basic functionality on SM's part, coupled with the lack of PCIe 4, makes this board DOA. Yeah, it's the only option if you want dual-socket EPYC, but it's not a good option by any stretch.
  • jeremyshaw - Wednesday, May 13, 2020 - link

    For Epyc, the only gain of dual socket is more CPU threads/cores. If you wanted 128 PCIe 4.0 lanes, single socket Epyc can already deliver that.
  • npz - Wednesday, May 13, 2020 - link

    You do have other options with pcie 4.0 if you're willing to go for full systems or proprietary form factors.
    One of Supermicro's own dual EPYC with PCIE 4.0:
    https://www.supermicro.com/en/products/motherboard...

    all dual epyc w/ pcie 4.0 rackmount:
    https://www.pogolinux.com/products/amd-epyc-server...

    They must figure that the market is too small to justify investing in making EATX mobos. And I'm sure they're right that most people will be purchasing full systems and not individual EATX mobos.
  • Deicidium369 - Wednesday, May 13, 2020 - link

    Most of the purchases of Supermicro are not individual motherboards, but motherboard + case + redundant power supplies - and in some cases they will not sell without the CPUs and memory.
  • mark625 - Wednesday, May 13, 2020 - link

    That's great, but the cool thing is the server that the custom SuperMicro board goes in: https://www.supermicro.com/en/Aplus/system/2U/2124... . The chassis holds four of those boards in a 2U form-factor with redundant power supplies and 24x2.5in hot-swap drives, 6 per board. Fully populated, that would give you 8 CPUs (for up to 512 threads) and 16TB of RAM.

    However, its ability to play Crysis is still in question...
  • mark625 - Wednesday, May 13, 2020 - link

    Oops, that would be 512 cores and 1024 threads across 8 processors in 2U space, not 512.
  • Samus - Thursday, May 14, 2020 - link

    The complexity of using jumpers to reallocate entire PCIe lanes would be insane. You'd probably need a bridge chip to negotiate the transition, which would remove the need for jumpers anyway since it could be digitally enabled. But this would add latency - even if it wasn't in use since all lanes would need to be routed through it. Gone are the days of busmastering as everything is so complex now through serialization.
  • bryanlarsen - Friday, May 15, 2020 - link

    Jumpers and DIP switches turn into giant antennas at the 1GHz signalling rate of PCIe3.
  • kingpotnoodle - Monday, May 18, 2020 - link

    Have you got an example of a motherboard that implements your idea with PCIe? I've never seen it, and as bryanlarsen said, this type of layout where everything essential is connected to the 1st CPU is very standard in server and workstation boards. It allows the board to boot with just one CPU; adding the second CPU usually enables additional PCIe slots.
