
The NVMe Promise: Squeeze More Out of Existing CPUs

The ability to keep squeezing more performance out of the SATA interface has made most data centers slow to adopt NVMe technology. It's easier to add capacity or boost performance with higher IOPS or lower latency when you're starting from something as slow as SATA. Look at today's data centers and you'll find that most data center architects are focused on improving CPU utilization. Yet with a rack full of expensive CPUs (however many cores, and whatever the licensing costs), data centers rarely manage to utilize those CPUs at anywhere close to thirty percent of their maximum capacity.

Imagine paying for a server room full of Ferraris only to end up stuck driving them at 20 mph. This isn't a Ford vs. Ferrari moment, but rather an unleaded vs. high-octane moment.

NVMe is driving change on two fronts, transfer speeds and in-memory provisioning, allowing you to roughly double CPU utilization from thirty percent to almost sixty. On existing infrastructure, NVMe gets the CPUs running more efficiently, with lower latency and higher throughput. However, your platform must be able to accommodate NVMe. Limitations can include your existing backplanes or the inability to plug in and replace drives in your current form factor, at which point the move becomes a larger overhaul.

Kingston DC1000B Server SSDs

To move from a SAS-based system, the architecture of the server must change unless you use an adapter to put NVMe SSDs onto the PCIe bus; for most customers it amounts to a full platform change. Compared to SATA and SAS, which depend on hardware-based host controllers, the PCIe interface is software-defined and delivers higher efficiency to dedicated processes. The combination of NVMe's low latency and the CPU's ability to multithread is astounding.
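
To see how NVMe lines up with multithreaded CPUs, here is a minimal sketch, assuming a Linux host where the kernel's blk-mq layer exposes per-queue directories under /sys/block/<device>/mq. It lists each hardware submission queue of an NVMe namespace and the CPUs mapped to it; the device name nvme0n1 is a placeholder for whatever drive your system reports.

```python
#!/usr/bin/env python3
"""Sketch: show how an NVMe namespace spreads I/O across per-CPU hardware
queues on Linux (paths assume the blk-mq sysfs layout)."""

from pathlib import Path

DEV = "nvme0n1"  # placeholder device name; check /sys/block on your host
mq_dir = Path(f"/sys/block/{DEV}/mq")

if not mq_dir.is_dir():
    raise SystemExit(f"{mq_dir} not found -- is {DEV} present and using blk-mq?")

# Each numbered subdirectory is one hardware submission queue; its cpu_list
# file shows which CPUs feed that queue.
queues = sorted((p for p in mq_dir.iterdir() if p.name.isdigit()),
                key=lambda p: int(p.name))
for queue in queues:
    cpus = (queue / "cpu_list").read_text().strip()
    print(f"hardware queue {queue.name}: served by CPUs {cpus}")
```

On a typical server you will see one queue per CPU core, which is exactly why NVMe can keep many threads busy where a single-queue SATA or SAS controller could not.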

Now, the next question you might ask is, “What’s more important today? Upgrading the entire car, or just supercharging the engine?”

For most data center managers, the change is going to be gradual – starting with slight upgrades like the Kingston DC1500M and DC1000B.


NVMe Over Fabrics

NVMe-oF enables centralized, shared access to NVMe devices over a specialized network (Fibre Channel, RDMA or TCP), so a client server can use a network-attached drive as if it were local. The benefits of centralized NVMe storage include simplified management, better capacity utilization and easier elimination of single points of failure. The NVMe-oF specification defines three transports: Fibre Channel, RDMA and TCP. Fibre Channel Protocol (FCP) has been the leading enterprise storage transport since the mid-1990s, carrying SCSI packets over Fibre Channel networks, so it was pivotal for NVMe to define the new FC-NVMe protocol, which carries both SCSI and NVMe traffic over Fibre Channel and lets existing FCP users upgrade their SAN environments to FC-NVMe. RDMA (Remote Direct Memory Access) is another mainstream protocol that has existed for years on InfiniBand, RoCE (RDMA over Converged Ethernet) and iWARP fabrics, so building on RDMA let NVMe leverage those existing transport technologies. TCP/IP is the most widespread network transport of all, with solid design principles dating to the late 1970s, so it was natural for NVMe to define a way of carrying NVMe commands over existing TCP networks for lower deployment costs and faster setup.
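
To make the TCP option concrete, here is a minimal sketch that drives the standard Linux nvme-cli tool from Python to discover and connect to an NVMe/TCP target. The target address, port and subsystem NQN are placeholders, and the commands assume nvme-cli is installed and run with root privileges.

```python
#!/usr/bin/env python3
"""Sketch: attach an NVMe-oF (TCP) namespace using nvme-cli.
All target details below are placeholders for your own fabric."""

import subprocess

TARGET_ADDR = "192.0.2.10"   # placeholder target IP address
TARGET_PORT = "4420"         # conventional NVMe/TCP port
TARGET_NQN = "nqn.2014-08.com.example:demo-subsystem"  # placeholder subsystem NQN

def run(cmd):
    """Echo and execute one nvme-cli command, stopping on any failure."""
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Ask the target which NVMe subsystems it exports over TCP.
run(["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT])

# Attach one subsystem; its namespaces then appear locally as /dev/nvmeXnY.
run(["nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT, "-n", TARGET_NQN])

# Confirm the fabric-attached namespaces show up alongside local NVMe drives.
run(["nvme", "list"])
```

Once connected, the remote namespace is handled by the same NVMe driver stack as a local SSD, which is what lets applications treat fabric storage as if it were plugged into the server.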

The emergence of NVMe-oF also brings new challenges to IT infrastructure, since the bottleneck that existed with SCSI devices moves up the stack to the network controllers and interfaces. In response, many companies have innovated with switches and NICs that support higher network speeds and highly tunable QoS. All-flash array manufacturers have likewise innovated, offering end-to-end NVMe-oF implementations with an array of tools to tune for better QoS and eliminate noisy neighbors.

#KingstonIsWithYou
