
Rip and Replace vs Predictability: Why SSDs with Predictable Latency Matter

Most people have heard stories of individuals who rose from “humble beginnings.” The same could be said for many of today’s applications and data centers: many of the world’s most successful applications started as internal products running on small, private servers.

Many companies and services begin by prototyping on internal servers built with off-the-shelf components. This keeps startup costs down and offers an inexpensive way to ramp up as they push from prototype to production. Often, it means using consumer-oriented SSDs, which are cheap to procure and replace.


This is a good model when applications are designed for intermittent use, but what happens when an app becomes popular enough to see full-time demand? Unless you’re outsourcing to a dedicated Tier 1 data center, the original design specs need to change to accommodate the increased load.

This is especially true when dealing with security or privacy information that you want kept inside the network, unavailable to public access. This model, called “on-premises” or “on-prem” for short, is one where companies continue to run a mix of internal servers hosting multiple internal applications (wikis, SharePoint, call center scripting) that can’t be put in the cloud for compliance (HIPAA, PCI-DSS, etc.) or other business reasons. Ripping and replacing internal resources then becomes a regular phenomenon as drives age, fail, or fall short of growing workforce demands.

While internal application architectures seem to be the biggest driver of drive rip-and-replace, we still hear about larger application and service providers that continue to use consumer drives in data centers at significant scale. When things go wrong, they simply rip out the old drives and replace them.

The number of data centers employing this model is shrinking as the cost of enterprise SSDs comes down, making them more affordable and attractive both to ultra-price-sensitive operations and to those that want predictable, stable performance from their servers.

Predictability is Key


For high-end data centers, predictable performance is a key design criterion. Many data centers now realize that there is value in buying the proper class of SSD for an application. Cloud hosting companies need to know that their enterprise SSDs will deliver consistent performance with minimal latency.

Imagine an e-commerce site where customers add items to their cart, but at checkout there is a lag while the order and payment are processed. Both the seller and the customer know this is a poor, unsatisfying experience, and over time it can have a real-world business impact. If e-commerce sites continue to experience latency and customers complain about the checkout process, they’ll likely start looking for another host, CDN, or application platform.


Beyond latency, the endurance rating of an SSD should also be weighed heavily. Across the board, data center SSDs carry higher-rated endurance specifications than client SSDs, making them safer to use from a reliability standpoint and better able to meet the product lifecycle requirements the data center sets.
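
To put the endurance gap in concrete terms, the short Python sketch below works through the standard TBW (terabytes written) arithmetic; the capacity, DWPD, and warranty figures are illustrative assumptions, not any vendor’s published specifications.

```python
# Illustrative endurance math only -- the DWPD figures below are
# hypothetical placeholders, not any vendor's published specs.

def tbw(capacity_gb: float, dwpd: float, warranty_years: float) -> float:
    """Terabytes written over the warranty period:
    capacity x drive writes per day x days under warranty."""
    return (capacity_gb / 1000) * dwpd * 365 * warranty_years

# Client drives are often rated well under 1 DWPD over ~3 years,
# while data center drives commonly carry 1+ DWPD over 5 years.
print(f"client-class 960 GB @ 0.3 DWPD, 3 yr: {tbw(960, 0.3, 3):,.0f} TBW")
print(f"dc-class     960 GB @ 1.0 DWPD, 5 yr: {tbw(960, 1.0, 5):,.0f} TBW")
```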

Many consumer drives today use unconventional write-cache schemes to save on cost by removing the high-speed DRAM component; instead, they carve a small write buffer out of the onboard NAND flash. In a typical client system, that buffer will likely never be completely filled during the lifetime of the machine, so no change in user experience is ever noticed. But put that same drive into a data center application running a 100-percent duty cycle (24/7 read/write operation) and slower performance will start to show up.
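
One way to see this effect for yourself is to time a long run of flushed writes and watch the tail latency climb as any buffer fills. The Python sketch below is a rough illustration only, assuming a scratch path you can safely write to; a proper benchmark would use a purpose-built tool such as fio.

```python
#!/usr/bin/env python3
"""Rough sketch (not a real benchmark): time a sustained run of flushed
writes and report latency percentiles. On a DRAM-less consumer SSD,
tail latency often climbs once the NAND write buffer fills."""
import os
import time

PATH = "/tmp/ssd_probe.bin"        # hypothetical scratch path -- adjust
BLOCK = b"\0" * (4 * 1024 * 1024)  # 4 MiB per write
TOTAL_BLOCKS = 256                 # ~1 GiB of sustained writes

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
latencies = []
try:
    for _ in range(TOTAL_BLOCKS):
        start = time.perf_counter()
        os.write(fd, BLOCK)
        os.fsync(fd)               # force each block out to the device
        latencies.append(time.perf_counter() - start)
finally:
    os.close(fd)
    os.unlink(PATH)

latencies.sort()
def pct(p):                        # nearest-rank percentile
    return latencies[min(len(latencies) - 1, int(p / 100 * len(latencies)))]

print(f"p50 {pct(50)*1e3:.2f} ms | p99 {pct(99)*1e3:.2f} ms | "
      f"max {latencies[-1]*1e3:.2f} ms")
```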

Predictable Uptime is more than IOPS


Another consideration is what happens when something goes wrong, repeatedly. If there is a technical issue with a consumer SSD installed in a server, you are unlikely to get good support or a fix from the manufacturer, since the drive is being used outside its intended use case. For operations with SLAs that require four- or five-nines service uptime, taking a risk on consumer-grade products just doesn’t cut the mustard.
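
For context, the arithmetic behind those “nines” is simple and unforgiving, as this short Python sketch shows:

```python
# The arithmetic behind "four or five nines" of uptime: how little
# downtime each SLA level allows in a year.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (0.999, 0.9999, 0.99999):
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime -> {downtime:7.2f} minutes "
          f"of downtime allowed per year")
```

At five nines, that budget is roughly five minutes a year, so a single unplanned drive swap can consume most of it.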

Enterprise SSDs come with support and service that you cannot find with consumer products. Enterprise-grade SSDs aren’t just off-the-shelf parts; they are highly tuned for read-intensive and mixed-use applications. In many instances, custom nuances are built into each product for a particular use case, and those use cases are supported to ensure uptime. If something goes wrong with a batch of enterprise SSDs, your support team is just a call away to replace or re-engineer a product based on your operational requirements.

Optimal SSD Performance

The best advice when starting a server operation is to source SSDs from reputable companies and to buy SSDs intended for server workloads, not client workloads. When you install a client SSD in a server, you create an untested hardware configuration and connect the drive to host controllers (such as RAID controllers) that behave differently from client host controllers.

You can start with enterprise-grade products and still retain the flexibility to ramp operations without sacrificing performance or long-term scalability. Moreover, enterprise SSDs bring greater stability and reliability to the overall server architecture.

Humble beginnings don’t have to hamstring operations.

#KingstonIsWithYou


Ask a Server SSD Expert

Planning the right solution requires an understanding of your project’s security goals. Let Kingston’s experts guide you.

