I have been in many meetings over the past year or so, and there are still many people who do not seem to know what NVMe is or what it can deliver.

This is powerful technology in the data center world that is going to enable great gains in storage performance.

In a software-defined data center discussion, it’s interesting how often the software guys talk about this amazing hardware innovation.  I keep hearing how hardware is just a commodity, but that is a topic for another day.

First, I think we need to level set and take a step back to look at where this technology fits in the data center.  NVMe is simply a new protocol for storage.  But to understand why it’s so powerful, we need to contrast it with the existing protocols and the storage media they were built to support.  All of the protocols before NVMe were, for the most part, small steps forward in carrying SCSI packets between a storage device and a storage controller.  For a long time, the limitation was the storage device, or storage medium, being so slow that the protocol didn’t need to provide much.  Above all else, the protocol needed stability and consistency more than performance.  When you have 7200 RPM or 10K/15K spinning hard drives, there are only so many operations each can handle before it’s overrun.  So protocols like FC, SAS, and NL-SAS didn’t need to provide a ton of performance, because the bottleneck was the storage device; flooding that device with IO would only create more problems.  Storage architects had to meticulously design solutions around the limitations of the storage medium and, by proxy, the storage protocol.

One area I would discuss with people a lot was how queueing worked in whatever device we were talking about.  A simple example: a 7200 RPM SATA drive could handle 2 outstanding operations, while a 15K FC drive could handle 8.  That’s not a lot, but notice that, even though the 15K drive is roughly twice as fast as the 7200 RPM drive, it can also queue 4x the operations.  So it was faster and far more efficient.
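To make the queueing point concrete, here is a back-of-the-envelope sketch using Little’s law (outstanding IOs = IOPS × average latency). The queue depths are the example figures above; the IOPS numbers are rough illustrative assumptions of mine, not measured or vendor values.

```python
# Back-of-the-envelope queueing math using Little's law:
#   outstanding IOs = IOPS * average latency (seconds)
# Queue depths come from the example above; the IOPS figures are
# rough illustrative assumptions, not measured or vendor numbers.

drives = {
    # name: (assumed IOPS, example queue depth)
    "7200 RPM SATA": (80, 2),
    "15K FC": (180, 8),
}

for name, (iops, depth) in drives.items():
    # Time to drain a completely full queue: depth / IOPS
    drain_ms = depth / iops * 1000
    print(f"{name}: ~{iops} IOPS, queue depth {depth}, "
          f"~{drain_ms:.0f} ms to drain a full queue")
```

The takeaway is that once the queue is full, every additional IO just waits, which is exactly why flooding a spinning drive made things worse rather than faster.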

When flash became the new medium in the data center, the industry was still relying on FC, SATA, SAS, etc. to carry the IO between that medium and the controller.  Those 7200 RPM drives were your Chevy pickup trucks, and the 15K drives were your Chevy Impalas with a V8 and a decent suspension package.  Then came flash drives.  The first iteration of flash in the data center was NAND, which is your Chevy Corvette.  Putting NAND on the SATA protocol was like trying to drive that Corvette through Atlanta at rush hour.

With NAND, our Corvette, we need a more efficient highway to drive it on.  And we don’t want to run it on 87-octane gasoline anymore; we want rocket fuel.  NAND changed the capabilities of storage, which in turn changed the expectations.

You need a new protocol developed specifically for this storage medium, one that is 100 times faster than the old protocols.  That is NVMe: your new, more efficient highway to carry data to NAND.  So instead of 8 lanes of traffic through downtown Atlanta, you have 64,000.  That’s a lot of Corvettes.
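The lane analogy maps to real protocol limits: SATA’s AHCI interface exposes a single command queue of 32 entries, while the NVMe specification allows roughly 64K I/O queues with roughly 64K commands each. A quick sketch of that gap (the queue counts come from the specifications; this is simple arithmetic, not a benchmark):

```python
# Queue limits per protocol: AHCI (SATA) versus NVMe.
# AHCI: one command queue, 32 entries.
# NVMe: up to ~64K I/O queues, each up to ~64K commands deep.

ahci_queues, ahci_depth = 1, 32
nvme_queues, nvme_depth = 64_000, 64_000

ahci_outstanding = ahci_queues * ahci_depth
nvme_outstanding = nvme_queues * nvme_depth

print(f"AHCI outstanding commands: {ahci_outstanding}")
print(f"NVMe outstanding commands: {nvme_outstanding:,}")
print(f"Ratio: {nvme_outstanding // ahci_outstanding:,}x")
```

In other words, the protocol stops being the choke point: NVMe can keep billions of commands in flight across queues that map naturally onto modern multi-core CPUs.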

Then replace the NAND with storage class memory, and the gains in storage performance go from huge to insane.  The first implementations of NVMe will sit between the storage controller and the storage medium.  In a hyperconverged world, you get those gains immediately.  But if you have an application that still needs external storage, that’s where the NVMe over Fabrics story starts to become clearer, and insanely powerful at more layers of your business.