
How to Test an Enterprise SSD
Part 1: Use Your Exact Environment with Real Data, Apps, and Hardware

It’s a fact: you get what you pay for, and the same goes for buying SSDs. But choosing the right one isn’t as simple as trusting a spec sheet; a spec sheet can’t tell you how a drive will really run in your stack. Sure, you could buy a batch of cheap drives from Amazon and make someone happy from a budgetary perspective. But when those drives fail under the stress of long-term 24/7 use, will everyone still be happy when you have to replace all of them?

Choosing the right enterprise-grade SSD means running real-world tests to see how well the drives will perform once they’re rolled out into production. These tests and comparisons stress the drives to push them toward their failure limits, check their endurance, and reveal whether performance changes over time.

This article, in two parts, dives into the challenges of building and running effective enterprise SSD tests and offers suggestions for doing it well: first from the hardware perspective, getting your testing rig configured, and then by examining how to compare benchmarks.

With that in mind, what are the hardware requirements for a test bed?

Real Tests Need Real Hardware


First, you can’t run adequate enterprise tests on a laptop, and anyone who says otherwise is misinformed. Your test bed must start with real-world data center hardware: a dedicated server with the same kind of RAID controllers found in your data center.

If you’re buying a new test system, we’d recommend a 2U server running current-generation chipsets. If you plan on testing NVMe, select a motherboard that supports PCIe 4.0; it’s backwards compatible with PCIe 3.0 drives and ready for the newest generation of products. Depending on the form factors you need to cover, populate it with a set number of SATA or NVMe drives configured behind your RAID controller. You may also want to add a high-bandwidth PCIe network card.
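Once the rig is assembled, it’s worth confirming that each NVMe drive actually negotiated the PCIe link you paid for. Below is a minimal Python sketch (not part of any official test suite) that reads the negotiated link speed and width from Linux sysfs; the paths and device naming are assumptions about a typical Linux install.

# pcie_link_check.py - a minimal sketch (Linux-only) that reports the negotiated
# PCIe link speed and width for each NVMe controller in the test server, so you
# can confirm a PCIe 4.0 slot didn't silently train down to 3.0.
# The sysfs paths below are assumptions about a typical Linux setup.
import glob
import os

def read_sysfs(path):
    """Return the stripped contents of a sysfs attribute, or None if unreadable."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    # /sys/class/nvme/nvmeX/device points at the PCI device directory, which
    # exposes the negotiated and maximum link speed/width.
    pci_dev = os.path.join(ctrl, "device")
    speed = read_sysfs(os.path.join(pci_dev, "current_link_speed"))
    width = read_sysfs(os.path.join(pci_dev, "current_link_width"))
    max_speed = read_sysfs(os.path.join(pci_dev, "max_link_speed"))
    print(f"{os.path.basename(ctrl)}: {speed} x{width} (max {max_speed})")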

Steady State or BUST


When testing, it’s important to know that a drive’s performance will change over time. That’s why preconditioning prior to sequential and random workloads is critical. Each workload type has its own preconditioning standard, and following them is essential if your results are to be measured against published specs.
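As an illustration, here is a minimal Python sketch of one common preconditioning flow built around the open-source fio tool: two sequential passes over the full drive, followed by sustained 4K random writes. The device path, queue depth, and runtime are assumptions you would tune to your own workload, and the job is destructive to any data on the drive.

# precondition.py - a minimal sketch of one common preconditioning flow using fio:
# fill the drive sequentially twice, then hammer it with 4K random writes so it
# settles toward steady state before you benchmark.
# WARNING: this writes directly to the raw device and destroys all data on it.
# The device path and runtime are assumptions - adjust them for your own rig.
import subprocess

DEVICE = "/dev/nvme0n1"  # assumed dedicated test drive

def run_fio(name, rw, bs, extra):
    """Run a single fio job against the raw test device."""
    cmd = [
        "fio",
        f"--name={name}",
        f"--filename={DEVICE}",
        f"--rw={rw}",
        f"--bs={bs}",
        "--ioengine=libaio",
        "--direct=1",
        "--iodepth=32",
    ] + extra
    subprocess.run(cmd, check=True)

# Step 1: two full sequential passes over the drive with 128K blocks.
run_fio("seq_fill", "write", "128k", ["--loops=2"])

# Step 2: sustained 4K random writes; several hours is typical before an
# enterprise drive settles (the 4-hour runtime here is an assumption).
run_fio("rand_precondition", "randwrite", "4k",
        ["--time_based", "--runtime=14400", "--numjobs=4"])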

Once the drive is fully burned in and filled, it will perform differently than a new drive fresh out of the box. That makes it all the more important to test a drive after it has been filled and is operating at steady state.
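How do you know the drive has actually settled? A simplified, SNIA-PTS-style check is to record IOPS after each preconditioning round and confirm that the last few rounds stay within a small excursion of their own average. The sketch below illustrates the idea; the window size and 20% threshold are borrowed loosely from the PTS approach, and the IOPS values are made up purely for illustration.

# steady_state_check.py - a minimal sketch of a simplified steady-state check:
# take the IOPS from the last five preconditioning rounds and confirm their
# spread stays within 20% of the window average before recording results.

def is_steady_state(iops_per_round, window=5, max_excursion=0.20):
    """Return True if the last `window` rounds vary by less than `max_excursion`
    of their own average - a simplified version of the SNIA PTS criterion."""
    if len(iops_per_round) < window:
        return False
    recent = iops_per_round[-window:]
    avg = sum(recent) / window
    return (max(recent) - min(recent)) <= max_excursion * avg

# Example: IOPS recorded after each preconditioning round (illustrative only).
rounds = [410_000, 350_000, 298_000, 262_000, 241_000, 236_000, 234_000, 233_000, 235_000]
print("Steady state reached:", is_steady_state(rounds))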

This may require developing specific scripts to burn in the drive with data sets that match your use case. For example, if you use a lot of MySQL, PHP, or Oracle, create scripts for those databases that fill the drive with an appropriate number of data tables before running your OLTP benchmark workloads. If you use drives for virtual computing, precondition the virtual hard drive to ensure it reaches steady-state performance.
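For the MySQL case, one common way to do this is to wrap a tool such as sysbench, whose "prepare" step creates and fills OLTP test tables. The sketch below shows that idea in Python; the host, credentials, database name, and table sizing are placeholders you would replace with values that mirror your production data set.

# oltp_prepare.py - a minimal sketch that wraps sysbench to populate a MySQL
# database with OLTP test tables before running benchmark workloads, so the
# drive holds a realistic data set rather than being tested empty.
# Host, credentials, database name, and table sizing are assumptions.
import subprocess

SYSBENCH_ARGS = [
    "sysbench",
    "oltp_read_write",        # built-in OLTP workload script
    "--db-driver=mysql",
    "--mysql-host=127.0.0.1",
    "--mysql-user=bench",
    "--mysql-password=bench",
    "--mysql-db=ssd_burnin",
    "--tables=32",            # number of test tables to create
    "--table-size=10000000",  # rows per table; size to match your real data set
]

# "prepare" creates and fills the tables; a later "run" executes the OLTP workload.
subprocess.run(SYSBENCH_ARGS + ["prepare"], check=True)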

When it comes to hardware considerations, remember that all tests must mimic real-world working conditions. From the steady state of the drive to the system and components it plugs into, the only way to set up your tests for accurate results is to start with the hardware configuration that best matches your data center environment.

#KingstonIsWithYou

In our next article, we’ll examine how to build the reports and tests that will provide accurate benchmarks to compare your SSDs.
