The Tech Behind All-Flash Data Centers

    Dec 17, 2021

    Building on my three previous posts about the all-flash DC, this fourth instalment in a six-part series focuses on the technologies powering not only the all-flash DC but also the digital transformation (DX) of enterprises.

    1. All-Flash Data Centers: Turning Your Data Green
    2. All-Flash Data Centers: Accelerating Your Digital Transformation
    3. All-Flash Data Centers: Realize the Value of Data

    One of the first things we associate with an all-flash DC is the storage media: SSDs. I briefly touched on SSDs in my first post, noting that they offer around 100 times faster data access, 100 times higher throughput, and more than 1,000 times higher per-disk IOPS than HDDs. Those numbers are impressive, but how do SSDs achieve them? To answer that, let’s look at how SSDs work.

    Solid-State Drives — Speedy Storage Devices

    SSDs store data in semiconductor cells (forming NAND flash memory), each of which holds between 1 and 4 bits of data. A single-level cell (SLC) holds 1 bit and is typically the most reliable, durable, fast, and expensive type of memory. A double-level cell (DLC) holds 2 bits (it’s also called “MLC”, but that nomenclature is confusing), a triple-level cell (TLC) holds 3 bits, and a quad-level cell (QLC) holds 4 bits; these three types are collectively known as multi-level cells (MLC). Now you can see why the term “MLC” is confusing. I’ll stick with using “DLC” for double-level cells to avoid confusion.
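
    One way to see why each extra bit per cell complicates things: the number of distinct charge states a cell must distinguish doubles with every added bit. A quick illustration of that arithmetic (a Python snippet added here for clarity, not from the original paper):

```python
# States per cell grow exponentially with bits per cell, which is why
# each extra bit makes reads harder and endurance lower.
for name, bits in [("SLC", 1), ("DLC/MLC", 2), ("TLC", 3), ("QLC", 4)]:
    print(f"{name}: {bits} bit(s) -> {2 ** bits} voltage states per cell")
```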

    In SLC memory, each cell has two possible states: a full electrical charge in the cell represents 0, whereas no charge represents 1. As such, an SLC can store only one bit of data. Although SLC memory offers higher write speeds, lower power consumption, and higher endurance, it costs more because more cells are needed to store the same amount of data as MLC memory.

    MLC storage represents different combinations of 1s and 0s by using more levels of electrical charge (voltage levels). For example, in DLC memory, a charge of 25% represents 11; 50% represents 01; 75% represents 00; and 100% represents 10. More voltage levels mean that cells can represent more combinations of 1s and 0s, but as the difference between adjacent voltage levels shrinks, the margin for telling them apart narrows. Consequently, it becomes more difficult to read the stored data precisely.
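
    To make this concrete, here is a minimal sketch of how a controller might translate a sensed cell voltage into the bit pairs from the example above. The threshold values and function names are hypothetical; real NAND adds tuned reference voltages plus error-correcting codes on top:

```python
# Illustrative only: map a sensed voltage (as a fraction of full
# charge) to the nearest nominal DLC level from the example above.
DLC_LEVELS = [
    (0.25, "11"),
    (0.50, "01"),
    (0.75, "00"),
    (1.00, "10"),
]

def read_dlc_cell(voltage_fraction: float) -> str:
    """Return the bit pair whose nominal level is closest to the sensed voltage."""
    _, bits = min(DLC_LEVELS, key=lambda lv: abs(lv[0] - voltage_fraction))
    return bits

print(read_dlc_cell(0.52))  # "01": clearly closest to the 50% level
print(read_dlc_cell(0.60))  # "01", but only 0.10 from 50% vs 0.15 from 75%;
                            # this shrinking margin is the read-uncertainty
                            # problem described above
```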

    Balancing these trade-offs between cost and performance, DLC NAND has become mainstream in enterprise SSDs, while QLC NAND is expected to be widely adopted in cloud, CDN, and tiered-storage solutions.

    The other key component of an SSD is the controller. Put simply, it is a processor that executes the firmware to perform functions such as read and write caching, error detection and correction, and wear levelling. The latter is an important concept in how SSDs work, as it directly affects their longevity. Repeatedly writing to and erasing the same cell eventually wears it out, meaning that the cell can no longer store data reliably. Wear levelling aims to distribute writes as evenly as possible across all cells. An SLC may have a lifetime of about 50,000 to 100,000 program/erase cycles, whereas a DLC typically endures about 1,000 to 10,000.
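
    As a rough illustration of the idea (a simplified sketch, not how any particular controller’s firmware works), the snippet below steers each write to the least-worn block so that erase cycles stay evenly distributed:

```python
# Toy wear-levelling sketch: always write to the least-worn block.
# Real firmware also handles static data, bad blocks, and spares.
class FlashPool:
    def __init__(self, num_blocks: int, endurance: int):
        self.erase_counts = [0] * num_blocks
        self.endurance = endurance  # max erase cycles per block

    def write(self) -> int:
        """Write to the least-worn healthy block; return its index."""
        block = min(range(len(self.erase_counts)),
                    key=lambda b: self.erase_counts[b])
        if self.erase_counts[block] >= self.endurance:
            raise RuntimeError("all blocks worn out")
        self.erase_counts[block] += 1
        return block

pool = FlashPool(num_blocks=4, endurance=3_000)  # DLC-like endurance
for _ in range(10):
    pool.write()
print(pool.erase_counts)  # [3, 3, 2, 2]: wear spread evenly
```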

    All of this electrical wizardry means that SSDs do not have any mechanical moving parts, eliminating the need to spin up platters and move read/write heads. This is one of the main reasons why SSDs are so much faster than mechanical HDDs.

    SSDs that use non-volatile memory express (NVMe) also achieve higher concurrency, higher throughput, and lower I/O response latency while reducing the storage capacity cost per GB.

    NVMe — Designed Specifically for SSDs

    The NVMe specification defines how host software communicates with SSDs across multiple transports such as peripheral component interconnect express (PCIe), remote direct memory access (RDMA), TCP/IP, and more. Designed to be more scalable and efficient while also offering higher performance and lower latency than legacy protocols such as SATA and SAS, NVMe allows host hardware and software to fully exploit the performance available with SSDs.

    NVMe is a high-performance, highly scalable, and feature-rich storage protocol, optimised for non-uniform memory access (NUMA), for SSDs connected directly to the CPU via the PCIe interface. PCIe 3.0 x4 NVMe SSDs are 6–7 times faster than SATA SSDs, while PCIe 4.0 SSDs are 12–13 times faster.
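
    Those multiples follow largely from interface bandwidth. A back-of-the-envelope check, using approximate usable link rates (real-world drive throughput varies):

```python
# Rough arithmetic behind the speed multiples quoted above.
SATA_GBPS = 0.6        # SATA III tops out around 600 MB/s
PCIE3_X4_GBPS = 3.9    # PCIe 3.0 x4, roughly 4 GB/s usable
PCIE4_X4_GBPS = 7.9    # PCIe 4.0 x4, roughly 8 GB/s usable

print(f"PCIe 3.0 x4 vs SATA: {PCIE3_X4_GBPS / SATA_GBPS:.1f}x")  # ~6.5x
print(f"PCIe 4.0 x4 vs SATA: {PCIE4_X4_GBPS / SATA_GBPS:.1f}x")  # ~13.2x
```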

    Parallel, low-latency data paths to SSDs are a key feature of the NVMe architecture, which supports up to 64K I/O queues, each holding up to 64K entries. Legacy SATA and SAS support only a single queue, with 32 and 254 entries respectively. The NVMe architecture allows applications to start, execute, and finish multiple I/O requests in parallel and use SSDs efficiently, maximising speed and minimising latency.
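
    A quick way to appreciate the gap is to compare the upper bound on commands each protocol can keep in flight at once (a toy calculation based on the queue limits above, not a driver model):

```python
# Upper bound on in-flight commands = queues x entries per queue.
NVME_QUEUES, NVME_DEPTH = 64 * 1024, 64 * 1024
SATA_QUEUES, SATA_DEPTH = 1, 32
SAS_QUEUES, SAS_DEPTH = 1, 254

print(f"NVMe: {NVME_QUEUES * NVME_DEPTH:,}")  # 4,294,967,296
print(f"SATA: {SATA_QUEUES * SATA_DEPTH:,}")  # 32
print(f"SAS:  {SAS_QUEUES * SAS_DEPTH:,}")    # 254
```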

    In addition to being a major performance boost in directly connecting SSDs to hosts (direct-attached storage, or DAS), NVMe can also be used as a networking protocol, referred to as NVMe over Fabrics (NoF). In a DC, storage media connected to one server might go underutilised while other servers are overloaded. NoF creates a high-performance storage network that allows storage media to be shared among servers while achieving latency comparable to that of DAS.

    NoF can run over Fibre Channel (FC), in which case it is called FC-NVMe, and is a great choice for those already invested in FC because most modern FC infrastructure already supports it. A more promising option is NVMe over RoCE (RDMA over Converged Ethernet), which enables servers on the same network to exchange data without involving either side’s processor, cache, or OS in the data path. As such, NVMe over RoCE is usually the fastest way to transmit data across a network with minimal overhead.

    While SSDs deliver blazing-fast speed and NVMe offers a low-latency, high-throughput interface between them and the CPU, we also need to address the latency gap between SSDs and memory. This is where storage-class memory (SCM) comes into play.

    SCM — Persistent Memory, Superior Performance

    Although SSDs deliver higher performance than HDDs, there is still a latency bottleneck between them and memory. To address this, SCM provides DRAM-like performance and larger storage capacity at a lower cost per GB than double data rate (DDR) synchronous dynamic random-access memory (SDRAM). It also improves performance density in other respects, such as requiring less back-end storage capacity, further reducing the cost of deploying an all-flash DC.

    SCM falls between the NAND storage used in SSDs and DRAM, sharing some features of both. Because its underlying media retains data without power, SCM is a type of persistent memory that won’t lose data if the power is shut off. This is a big advantage over DRAM, which is volatile memory, meaning it loses data when the power is cut.

    Accessing data from memory is significantly faster than accessing it from storage media. Because SCM is accessed in the same way as DRAM, it can read and write data up to 10 times faster than NAND storage.

    Like DDR SDRAM, SCM is byte-addressable and offers a comparable level of write durability. It is also non-volatile like SSDs and delivers performance between that of SSDs and DDR SDRAM. Using NAND as a backing store and DRAM as a cache for active data, SCM enables high effective memory capacities to be deployed at a lower cost than using DRAM alone, meeting the real-time, availability, and functionality needs of next-generation applications.
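
    The caching arrangement described above boils down to a tiered read path: check the fast tier first and fall back to the slower backing store on a miss. A minimal sketch, with all names and sizes purely illustrative:

```python
# Tiered read path: a small, fast cache (standing in for DRAM) in
# front of a large, slower backing store (standing in for NAND).
from collections import OrderedDict

class TieredStore:
    def __init__(self, cache_capacity: int):
        self.cache = OrderedDict()           # fast tier (DRAM-like)
        self.backing = {}                    # slow tier (NAND-like)
        self.cache_capacity = cache_capacity

    def write(self, key: str, value: bytes) -> None:
        self.backing[key] = value            # persist to backing store

    def read(self, key: str) -> bytes:
        if key in self.cache:                # hit: served at memory speed
            self.cache.move_to_end(key)
            return self.cache[key]
        value = self.backing[key]            # miss: slower media read
        self.cache[key] = value              # promote hot data
        if len(self.cache) > self.cache_capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return value

store = TieredStore(cache_capacity=2)
store.write("page42", b"hot data")
store.read("page42")  # first read misses and populates the cache
store.read("page42")  # second read is a cache hit
```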

    All-Flash DC — More than the Sum of Its Parts

    These are just a few examples of the numerous technologies involved in an all-flash DC; describing them all would exceed the scope of this blog. But what we can see is that an all-flash DC is more than a single technology: it is the combination of many. By integrating these technologies seamlessly, the all-flash DC delivers substantially more than the sum of its parts. From low-latency data access and high storage capacity to exceptionally green credentials, the all-flash DC is not only revolutionising DC storage but also becoming a key enabler of enterprise DX.

    In my next post, I’ll discuss some real-world success stories of the all-flash DC.

    This post is adapted from the Huawei-sponsored IDC White Paper Moving Towards an All-Flash Data Center Era to Accelerate Digital Transformation.


    Disclaimer: Any views and/or opinions expressed in this post by individual authors or contributors are their personal views and/or opinions and do not necessarily reflect the views and/or opinions of Huawei Technologies.
