One of the highlights of Hot Chips 2019 was the startup Cerebras showcasing its product: a large ‘wafer-scale’ AI chip that was literally the size of a wafer. The chip itself was rectangular, but it was cut from a single wafer and contained 400,000 cores and 1.2 trillion transistors across 46,225 mm² of silicon, built on TSMC’s 16 nm process.
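As a quick sanity check on those figures, the quoted numbers work out roughly as follows (this is just arithmetic on the specs above, not additional data from Cerebras):

```python
# Back-of-envelope arithmetic on the first-generation WSE figures quoted above.
cores = 400_000
transistors = 1.2e12
area_mm2 = 46_225            # total silicon area

edge_mm = area_mm2 ** 0.5    # the die is roughly square: ~215 mm on a side
density = transistors / area_mm2
per_core = transistors / cores

print(f"~{edge_mm:.0f} mm x {edge_mm:.0f} mm of silicon")
print(f"~{density / 1e6:.0f}M transistors per mm^2")
print(f"~{per_core / 1e6:.1f}M transistors per core (including each core's share of SRAM and fabric)")
```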

The whole thing created a very big buzz, and later that year the company showed off its first system, the CS-1: a 15U unit built to power the single chip. Power consumption is in the 15 kW range, and each unit costs a few million dollars; systems are already being deployed at research institutions.

Cerebras was initially set to announce its second-generation product here at Hot Chips this year, but the timescales didn’t quite align, so the company instead walked through its software flow. At the end of the slide deck, however, there was a special slide with some details about the next generation.

Obviously when doing wafer scale you can’t just add more die area, so the only way forward is to optimize the die area per core and take advantage of smaller process nodes. On TSMC’s 7 nm process, that means there are now 850,000 cores and 2.6 trillion transistors. Cerebras had to develop new technologies to deal with multi-reticle designs, but it succeeded with the first generation and has transferred those learnings to the new chip. We’re expecting more details about this new product later this year.
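For a rough sense of the generational jump, here’s the same sort of arithmetic applied to the figures quoted for both chips (again, just working from the numbers above):

```python
# Rough gen-over-gen comparison using only the figures quoted in the article.
gen1 = {"node": "16 nm", "cores": 400_000, "transistors": 1.2e12}
gen2 = {"node": "7 nm",  "cores": 850_000, "transistors": 2.6e12}

core_scaling = gen2["cores"] / gen1["cores"]                    # ~2.1x
transistor_scaling = gen2["transistors"] / gen1["transistors"]  # ~2.2x

for g in (gen1, gen2):
    print(f"{g['node']}: ~{g['transistors'] / g['cores'] / 1e6:.1f}M transistors per core")
print(f"{core_scaling:.2f}x the cores, {transistor_scaling:.2f}x the transistors")
```

In other words, the per-core transistor budget stays roughly flat at around 3 million, so essentially all of the gain comes from packing more cores into the same wafer-limited area on the denser node.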

Comments

  • Fozzie - Tuesday, August 18, 2020 - link

    Ian- Why are you not trying to take a bite out of it???
  • Ian Cutress - Thursday, August 20, 2020 - link

    I have that picture posted elsewhere. Mixing it up a bit :)
  • Arbie - Tuesday, August 18, 2020 - link

    Tech post in 2040: "Remember when computers this powerful were massive affairs, cost millions and required 15 kW of power?"
  • tygrus - Tuesday, August 18, 2020 - link

    Your observation is nothing new, "Arbie". For the last 35 years you could have said: remember 30 years ago, when computers this powerful were massive, cost millions, and required 100x the power? See the history of IBM et al. from the 1950s onward, and of the early Cray and other supercomputers.
  • Arbie - Tuesday, August 18, 2020 - link

    That was, ummm, the point "tygrus". Next time I'll put in a special note for you.
  • FunBunny2 - Tuesday, August 18, 2020 - link

    Mainframes have been downsized just as PCs have as process nodes have shrunk. IBM mostly does its own building, considering that the z ISA can't go away. DASD, however, has been commodity 'PC' drives emulating CKD for decades. These days most IBM 'mainframes' will fit inside the envelope of a generous CEO desk, and are air-cooled as well. They still cost a million or so Bongo Bucks.

    Supercomputers still cost a dear amount. The only real difference is that the problems supercomputers tend to work on, nuclear bombs and weather, are embarrassingly parallel, so they run just fine on thousands of Xeons in a cabinet. IBM never had much of a footprint in supers from the 360 on.
  • Spunjji - Wednesday, August 19, 2020 - link

    Noice
  • PatotoChaos - Wednesday, August 19, 2020 - link

    How does Cerebras feed the huge amounts of data from memory into this wafer of reconfigurable-connection compute engines? Do the benchmark comparisons come with any details of the platform (box) configuration, or do they assume all the data is already in each engine's local memory?
  • boeush - Friday, August 21, 2020 - link

    As to the 'how' - I'd guess that's what the rest of this box's internal bulk is mainly for (when subtracting out the space needed for supplying power to, and cooling of, 15 kW worth of compute...) Probably full of additional processors, network cards, storage, etc. - there just to manage and feed the beast.
  • twtech - Sunday, August 23, 2020 - link

    This sort of thing is the future of computing. Maybe it will be more like a computing cube than a wafer though, with heat transfer layers interspersed with the compute layers.
