We’re Calling It: Flash Storage May Not Be a Bottomless Pit of Performance

In the world of storage, we’re used to seeing drives double in size every 12-18 months (call it Moore’s Law for storage). Recently, however, we saw a massive jump from 3.8TB to 15+TB per drive within a single year, followed by the promise of 64TB models in the near future. Either solid state drives (SSDs) are losing their touch or flash is undergoing its first evolution. What do we mean?

Co-written by Jeff Thompson

Okay, let’s geek out for a second.

Let’s use a calculation of IOPS per TB to provide some crucial insight into flash data storage. In the simplest terms, we’re comparing how many terabytes you can put on a drive against how many IOPS that drive can pump out.

For example, a conservative IOPS figure for a 10K serial attached SCSI (SAS) drive is 130-150 IOPS. The largest SAS drive readily available is a 1.8TB drive (we won’t be discussing RAID 5, 6, 10 or RAIN, because they’re irrelevant as long as we stick to a constant like raw capacity). If we run the calculation with 140 IOPS, we end up with about 78 IOPS per TB (140 / 1.8 = 77.8).

Dedupe on spinning drives is not uniformly adopted across the industry, so 78 IOPS per TB is a reasonably consistent number among vendors. And in case inline compression is performed in cache or memory, we can cut that number in half and call it 39.
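The arithmetic above is simple enough to capture in a few lines. Here’s a minimal sketch using the rough drive figures from this post (ballpark estimates, not vendor-certified specs):

```python
def iops_per_effective_tb(drive_iops, raw_tb, reduction_ratio=1.0):
    """IOPS per TB of effective capacity.

    Data reduction (compression and/or dedupe) multiplies the
    effective capacity, so IOPS per effective TB shrinks by the
    same factor.
    """
    effective_tb = raw_tb * reduction_ratio
    return drive_iops / effective_tb

# 10K SAS: ~140 IOPS on a 1.8TB drive
print(round(iops_per_effective_tb(140, 1.8)))       # raw: ~78 IOPS per TB
print(round(iops_per_effective_tb(140, 1.8, 2.0)))  # with 2:1 compression: ~39
```

The same function works for any drive; only the inputs change as densities grow.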

Okay, but what about all those new high-capacity SSD drives?

New drives are roughly 15.2TB, and if we do the same calculation using 4000 IOPS for one of these drives (which we think might be a tad aggressive, but we’re taking our vendors’ word for it), it comes out to 263 IOPS per TB.

Here’s the fun part (though we’ve been having a blast this entire time). Nearly all vendors perform inline deduplication and compression on these drives. So, if we apply rough estimates of 2:1 compression and 3:1 dedupe, we come up with an all-time low of 44 IOPS per TB.

MIND BLOWN. Did you see what we just did there?! Here’s the breakdown for the nonbelievers: 4000 IOPS / 15.2TB = 263 IOPS per TB; halve it for 2:1 compression = 131.5 IOPS per TB; cut it to a third for 3:1 dedupe = 43.8 IOPS per TB.

Quick recap for you…

SAS – 140 IOPS / 1.8TB = 78 IOPS per TB / 2:1 Compression = 39 IOPS per TB.

SSD – 4000 IOPS / 15.2TB = 263 IOPS per TB / 2:1 Compression / 3:1 Dedupe = 44 IOPS per TB.

That’s a difference of just 5 IOPS per TB, only about a 13% increase in performance. It’s not the significant performance boost we expected from SSDs. Now what happens when flash densities quadruple to 64TB? When you store 384 effective TB (a 6:1 reduction ratio) on a single drive, the figure drops to a shockingly low 10 IOPS per TB (4000 / 384 = 10.4)…Well, that can’t be right. Roughly 73% less performance than the spinning disk?
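The SSD side of the comparison, including the 64TB projection, can be sketched the same way (again using this post’s rough estimates: 4000 IOPS per drive and a combined 6:1 reduction from 2:1 compression and 3:1 dedupe):

```python
# Rough-estimate SSD math: 4000 IOPS per drive,
# 2:1 compression x 3:1 dedupe = 6:1 total data reduction.
drive_iops = 4000
reduction = 2.0 * 3.0

for raw_tb in (15.2, 64.0):
    effective_tb = raw_tb * reduction   # effective TB actually stored
    per_tb = drive_iops / effective_tb  # IOPS per effective TB
    print(f"{raw_tb}TB drive -> {per_tb:.1f} IOPS per effective TB")
```

Running this shows how sharply IOPS per effective TB falls as raw capacity grows while per-drive IOPS stays flat.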

We’re calling it. 

As flash densities increase, the performance paradigm shifts.

Now you might be thinking that this isn’t very realistic because you never access all of your data on any one drive at any one point, and you’d be right. But IOPS per TB is a realistic calculation, and if you multiply the number of drives by these numbers, you’ll see some disappointing results. The harsh reality is that the new large-capacity SSDs offer about the same or lower overall performance than a large-capacity SAS drive.

Of course, we understand that SSDs have other purposes, such as boosting speed, storage density and power efficiency at a unit cost close to (or less than) spinning media. But the driving force behind SSD adoption, i.e. cost reduction, is leading to a decrease in IOPS per TB. We’d call this a “be careful what you wish for” moment.

Okay, so what are the alternatives? Well, we can think of a few:

  1. Dedupe takes a backseat or isn’t even included as an option (a couple of vendors are already opting for this route).
  2. We move away from commodity form factors to specialized units (yes, some vendors are doing this already too), and the IOPS per TB calculation swings back into positive territory.
  3. We start to see SSD tiering, with small high-performance drives as one tier and large drives for general purpose (heck, if densities quadruple again to 64TB drives in the next 12 months, we might even see archive SSDs).
  4. Additional data services emerge that concatenate or combine blocks of data to increase throughput per individual IO.

Again, we’re calling it!

SSDs may not be a bottomless pit of performance as the sizes increase.

Are you experiencing difficulties on the data storage front? Razor Tech can help you determine which technologies and solutions are best suited to address your business needs and deliver the best return on IT investments. Contact us today.

