Server Buying Decisions: Memory
by Johan De Gelas on December 19, 2013 10:00 AM EST
Posted in: IT Computing, Cloud Computing
We reviewed several types of server memory back in August 2012. You still have the same three choices—LRDIMMs, RDIMMs, and UDIMMs—but the situation has changed significantly since then. The introduction of Ivy Bridge EP is one of those changes: the latest Intel Xeon has better support for LRDIMMs and supports higher memory speeds (up to 1866 MHz).
But the biggest change is that the price difference between LRDIMMs and RDIMMs has shrunk considerably. Just a year ago, a 32GB LRDIMM cost $2000 or more, while a more "sensible" 16GB RDIMM cost around $300-$400. You paid about three times more per GB to get the highest capacity DIMMs in your servers. Many servers could benefit from more memory, but that kind of pricing made LRDIMMs an option only for IT projects where hardware costs were dwarfed by other costs like consulting and software licenses. Fifteen months in IT is like half a decade in other industries; just look at the table below.
[Table: DIMM pricing overview. (*) Quad rank, but the electrical load of a dual rank]
If you need a refresher on UDIMMs, RDIMMs, and LRDIMMs, check out our technical overview here. The price per GB of LRDIMMs is now only 60% higher than that of the best RDIMMs. Quad-rank 32GB RDIMMs used to be a lot cheaper than their load-reduced competition, but that difference is now negligible.
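The price-per-GB claims above are straightforward arithmetic. The sketch below works through them; the 2013 prices are hypothetical values chosen only to match the ~60% premium quoted in the article, not actual quotes.

```python
# Worked example of the price-per-GB comparison in the article.
# All prices are illustrative assumptions based on the figures
# quoted in the text, not current market data.

def price_per_gb(price_usd: float, capacity_gb: int) -> float:
    """Return cost in USD per gigabyte for a single DIMM."""
    return price_usd / capacity_gb

# Late-2012 situation: 32GB LRDIMM at ~$2000, 16GB RDIMM at ~$350
lrdimm_2012 = price_per_gb(2000, 32)   # 62.50 $/GB
rdimm_2012 = price_per_gb(350, 16)     # 21.88 $/GB
print(f"2012 premium: {lrdimm_2012 / rdimm_2012:.1f}x")  # ~2.9x, i.e. about three times per GB

# Late-2013 situation: the premium has shrunk to roughly 60%
# (prices below are hypothetical, picked to match that ratio)
lrdimm_2013 = price_per_gb(800, 32)    # 25.00 $/GB
rdimm_2013 = price_per_gb(250, 16)     # 15.63 $/GB
print(f"2013 premium: {lrdimm_2013 / rdimm_2013:.1f}x")  # ~1.6x
```

This is why the capacity calculus changed: at a 3x per-GB premium LRDIMMs only made sense when hardware was a rounding error in the project budget, while at 1.6x they compete on merit.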
JohanAnandtech - Friday, December 20, 2013 - link
First of all, if your workload is read intensive, more RAM will almost always be much faster than any flash cache. Secondly, it greatly depends on your storage vendor whether adding more flash can be done at "dramatically lower cost". The tier-one vendors still charge an arm and a leg for flash cache, while the server vendors are working at much more competitive prices. I would say that in general it is cheaper and more efficient to optimize RAM caching versus optimizing your storage (unless you are write limited).
blaktron - Friday, December 20, 2013 - link
Not only are you correct, but significantly so. Enterprise flash storage at decent densities is more costly PER GIG than DDR3. Not only that, but you need the 'cadillac' model SANs to support more than 2 SSDs. Not to mention fabric management is a lot more resource intensive and more prone to error.
Right now, the best bet (like always) to get performance is to stuff your servers with memory and distribute your workload, because it's poor network architecture that creates bottlenecks in any environment where you need to stuff more than 256GB of RAM into a single box.
hoboville - Friday, December 20, 2013 - link
Another thing about HPC: as long as a processor has enough RAM to process its dataset on the CPU/GPU before it needs more data, the quantity of RAM is sufficient. Saving on RAM can let you buy more nodes, which gives you more performance capacity.
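The RAM-vs-nodes trade-off in this comment can be made concrete with a little budget arithmetic. Everything below is a hypothetical sketch; the budget, node cost, and $/GB figures are invented for illustration only.

```python
# Hedged sketch of the HPC trade-off above: if the working set already
# fits in memory, extra RAM buys nothing, so spend the savings on more
# nodes instead. All prices and sizes are assumptions, not real quotes.

def nodes_for_budget(budget: float, base_node_cost: float,
                     ram_gb: int, cost_per_gb: float) -> int:
    """How many identical nodes fit in the budget at a given RAM size."""
    node_cost = base_node_cost + ram_gb * cost_per_gb
    return int(budget // node_cost)

BUDGET = 100_000.0   # hypothetical cluster budget, USD
BASE = 4_000.0       # hypothetical per-node cost excluding RAM
DDR3 = 10.0          # hypothetical USD per GB of registered DDR3

# 256GB per node vs 64GB per node (dataset assumed to fit in 64GB)
print(nodes_for_budget(BUDGET, BASE, 256, DDR3))  # 15 nodes
print(nodes_for_budget(BUDGET, BASE, 64, DDR3))   # 21 nodes
```

Under these made-up numbers, trimming each node from 256GB to 64GB buys six extra nodes, which is the point being made: size RAM to the dataset, then spend the remainder on compute.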
markhahn - Saturday, January 4, 2014 - link
The headline should have been: if you're serving static content, your main goal is to maximize RAM per node. Not exactly a shocker, eh? In the real world, at least the HPC corner of it, 1GB/core is pretty common, and 32GB/core is absurd. Hence, UDIMMs are actually a good choice sometimes.
mr map - Monday, January 20, 2014 - link
Very interesting article, Johan!
I would very much like to know which specific memory module (brand, model number) you are referring to for the 32GB LRDIMM—1866 option.
I have searched to no avail.
Johan? / Anyone?
Thank you in advance!
Gasaraki88 - Thursday, January 30, 2014 - link
A great article as always.
ShirleyBurnell - Tuesday, November 5, 2019 - link
I don't know why people are still going after server hardware. I mean, it's the 21st century. Now everything is on the cloud, where you have the ability to scale your server anytime you want to. Hosting provider companies like AWS, DigitalOcean, Vultr hosting https://www.cloudways.com/en/vultr-hosting.php, etc. have made it very easy to rent a server.