At last week’s Intel Architecture Day, Intel’s chief architect, Raja Koduri, briefly held up the smallest member of the company’s forthcoming Xe-HP series of server GPUs, the one-tile configuration. Now, only a few days later, he has upped the ante by showing off the largest, four-tile configuration.

Designed to be a scalable chip architecture, Xe-HP is set to be available in one-, two-, or four-tile configurations. And while Intel has yet to disclose many details about the architecture, based on their packaging disclosures it looks like the company is using their EMIB technology to wire up the GPU tiles, as well as the GPU’s on-package HBM memory.

Assuming it makes it to market, a multi-tiled GPU – essentially multiple GPUs in a single package – would be a major accomplishment for Intel. GPUs are notoriously bandwidth-hungry due to the need to shovel data around between cores, caches, and command frontends, which makes them non-trivial to split up in a chiplet/tiled fashion. Even if Intel can only use this kind of multi-tile scalability for compute workloads, that would have a significant impact on what kind of performance a single GPU package can attain, and how future servers might be built.


  • close - Tuesday, August 18, 2020 - link

It's not a "fault", it's something that made it into their lingo. Now both companies use it to jab at each other. The problem is born-yesterday commenters who can't get a joke older than they are, and in their quest to sound smart they just ruin it.
  • close - Tuesday, August 18, 2020 - link

    @edzieba, no, there's no "Because of the difference between" on this Earth that makes a shred of difference for the purpose of this discussion.

    It's a "running joke" that's been going for a decade and a half. It's not as if they are *actually* glued so trying to explain it with tech is pretty dumb. It's as simple as "single package" vs "multiple packages" and egos.
  • psychobriggsy - Wednesday, August 19, 2020 - link

Although AMD was reasonably correct with the Pentium D - it was two dies on the same package, but there was nothing special about the die interconnect; both dies were connected to the same old Front Side Bus, with all the limitations a multi-drop bus brought. It saved motherboard sockets and space, at least.

    Whereas Epyc at least had dedicated chip links through a fabric. Obviously it had its own flaws due to the arrangement and decentralised I/O aspects that Epyc 2 fixed.

    Of course, this GPU has its own dedicated fabric interconnect as well, and indeed RDNA3 or 4 is supposed to go chiplet based as well. The glue term should go away.
  • dullard - Tuesday, August 18, 2020 - link

    https://www.anandtech.com/show/1665/2
  • dullard - Tuesday, August 18, 2020 - link

    Close, no, it was not Intel's terminology. It was AMD's terminology from 15 years ago. Comments? Questions?
  • Luminar - Wednesday, August 19, 2020 - link

    The actual terminology is Intel duct tapes while AMD innovates.
  • name99 - Thursday, August 20, 2020 - link

    Lisa Su doesn't need to kill anyone's dog. She just needs to contract with Siliconware (and likely already has):
    https://www.3dincites.com/2020/07/iftle-456-spil-f...

    EMIB is just the Intel brand for something that other people can also do.
  • Adonisds - Monday, August 17, 2020 - link

    How would more than 1 tile scale for gaming? Would it not scale well, like SLI?
  • Duraz0rz - Monday, August 17, 2020 - link

    I doubt they'll bring this type of tiling to the HPG line; these GPUs are destined for data centers and racks where density is king.
  • Valantar - Tuesday, August 18, 2020 - link

    "scale well, like SLI" - did you mean "scale well, UNlike SLI"? Otherwise, that statement is self-contradictory.
