art with code

2018-04-23

Compute-NVMe

So there's this single-socket EPYC TYAN server with 24 NVMe hotswap bays. That's... a lot of NVMe. And they're basically PCIe x4 slots.

What if you turned those NVMe boxes into small computers? A beefy, well-cooled ARM SoC with 32 gigs of RAM and a terabyte of flash, the RAM wired to the SoC over a wide bus. You might get 200 GB/s of memory bandwidth and 10 GB/s of flash bandwidth. External connectivity would go through the PCIe 4.0 x4 link, at 8 GB/s or so.

The ARM chip would perform at around a sixth the perf of a 32-core EPYC, but it'd have a half-teraFLOP GPU on it too. With 24 of those in a 2U server, you'd get four 32-core EPYCs worth of CPU compute, and nearly a Tesla V100 of GPU compute. But. You'd also have aggregate 4.8 TB/s memory bandwidth and 240 GB/s storage bandwidth. In a 2U. Running at, what, 10 W per card?
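Spelling out the napkin math for the aggregates (the per-card figures are the assumptions above; the ~15 TFLOPS FP32 number for a V100 is my addition):

```python
# Back-of-envelope aggregates for a 2U with 24 compute-NVMe cards.
# Per-card assumptions from above: 1/6 of a 32-core EPYC of CPU perf,
# a 0.5 TFLOPS GPU, 200 GB/s RAM bandwidth, 10 GB/s flash bandwidth.
cards = 24

epyc_equivalents = cards / 6          # 4.0 -- four 32-core EPYCs of CPU compute
gpu_tflops = cards * 0.5              # 12 TFLOPS, ~80% of a V100 (~15 TFLOPS FP32)
mem_bw_tb_s = cards * 200 / 1000      # 4.8 TB/s aggregate memory bandwidth
flash_bw_gb_s = cards * 10            # 240 GB/s aggregate storage bandwidth
power_w = cards * 10                  # ~240 W for the cards themselves

print(epyc_equivalents, gpu_tflops, mem_bw_tb_s, flash_bw_gb_s, power_w)
```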

Price-wise, the storage and RAM would eclipse the price of the ARM SoC -- maybe $700 for the RAM and flash, then $50 for the SoC. Put two SoCs in a single box, double the compute?

Anyway, 768 GB of RAM, 24 TB of flash, 128 x86 cores of compute, plus 80% of a Tesla V100, for a price of $20k. Savings: $50k. Savings in energy consumption: 800 W.
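And the capacity and price totals, using the $700 RAM+flash and $50 SoC guesses from above (the ~$2k for chassis, host board and PSUs is a hypothetical filler to reach the $20k figure):

```python
# Totals for the 24-card box, per-card costs as guessed above.
cards = 24

ram_gb = cards * 32                 # 768 GB of RAM
flash_tb = cards * 1                # 24 TB of flash
x86_core_equiv = (cards // 6) * 32  # 128 x86 cores' worth of compute
card_cost = cards * (700 + 50)      # $18,000 in cards
system_cost = card_cost + 2000      # ~$20k with chassis etc. (assumed)

print(ram_gb, flash_tb, x86_core_equiv, card_cost, system_cost)
```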
