What Are Hyper-Converged Networks?
In addition to hyper-converged infrastructures (HCI), we also have hyper-converged networks. Both systems focus on efficient operations and simple scaling.
HCI is a long-standing feature of the data center: networked server systems run software that virtualizes compute, network, and storage. To this end, standard servers can be equipped with a suitable software stack, or preconfigured appliances can be used. The main benefits are simple scaling and uniform operations.
Read more: Agile Architectures with Hyper-Converged
What are Hyper-Converged Networks?
Hyper-converged networks involve the uniform, software-based operation of all data center networks. These include the conventional service networks plus storage networks and high-performance networks. There are two reasons why we need this architecture.
First, the commonly available Ethernet-based technology has evolved. We now have bandwidths of up to 400 Gbit/s, with 800 Gbit/s already knocking on the door. In addition, lossless RDMA standards, such as RoCE, have matured and are widely used.
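RoCE only delivers on this promise when the fabric is effectively lossless, which in practice means enabling Priority Flow Control (PFC) and ECN-based congestion marking for the RDMA traffic class. As a minimal sketch of what automating that can look like, here is a Python snippet pushing such a QoS profile through a hypothetical REST-style controller API; the endpoint, payload fields, and thresholds are illustrative assumptions, not any specific product's interface:

```python
import requests

CONTROLLER = "https://fabric-controller.example.com/api/v1"  # hypothetical endpoint
ROCE_PRIORITY = 3  # 802.1p priority commonly reserved for RoCE traffic

def make_leaf_lossless(switch_id: str, session: requests.Session) -> None:
    """Enable PFC and ECN for the RoCE traffic class on one leaf switch.

    The payload layout is illustrative; a real controller has its own
    schema for QoS and congestion-control settings.
    """
    qos_config = {
        "pfc": {"enabled_priorities": [ROCE_PRIORITY]},   # pause only the RoCE class
        "ecn": {"priority": ROCE_PRIORITY,                # mark instead of drop
                "min_threshold_kb": 150,
                "max_threshold_kb": 1500},
    }
    resp = session.put(f"{CONTROLLER}/switches/{switch_id}/qos",
                       json=qos_config, timeout=10)
    resp.raise_for_status()

if __name__ == "__main__":
    with requests.Session() as s:
        for leaf in ["leaf-01", "leaf-02", "leaf-03"]:
            make_leaf_lossless(leaf, s)
```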
The second reason stems from the storage area, where NVMe and SCM drives are increasingly being used. New networks are needed so that the performance of these storage devices actually reaches the CPU. The conventional Fibre Channel SAN not only lags behind Ethernet in raw speed (currently 64 Gbit/s versus 400 Gbit/s), it also has disadvantages in terms of protocol overhead. As is typical of SAN, two passes are needed for each I/O: the client first signals the storage that it wants to read or write, and only then sends or receives the relevant data. With NVMe over Fabric (NoF), the Ethernet-based variant, a single command does all of this, saving one round trip per I/O.
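That saving is easy to quantify with a back-of-the-envelope calculation. The following sketch compares the per-I/O latency of a two-pass exchange with a single-pass NVMe over Fabrics exchange; the round-trip and media latencies are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope comparison of per-I/O latency: two-pass SAN-style
# exchange vs. single-exchange NVMe over Fabrics. All numbers are
# illustrative assumptions, not measurements.

NETWORK_RTT_US = 10.0    # assumed round-trip time across the fabric
MEDIA_LATENCY_US = 20.0  # assumed NVMe/SCM media access time

def io_latency_us(round_trips: int) -> float:
    """Per-I/O latency = protocol round trips + media access time."""
    return round_trips * NETWORK_RTT_US + MEDIA_LATENCY_US

san_latency = io_latency_us(round_trips=2)   # command setup, then data transfer
nof_latency = io_latency_us(round_trips=1)   # single command/data exchange

print(f"Two-pass SAN-style I/O : {san_latency:.1f} us")
print(f"NVMe over Fabrics I/O  : {nof_latency:.1f} us")
print(f"Saving per I/O         : {san_latency - nof_latency:.1f} us "
      f"({(1 - nof_latency / san_latency) * 100:.0f}%)")
```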
Anyone interested can also read one of my other articles “Storage Networks Are Undergoing an Upheaval! How Do You Choose the Right Solution?”
Just as with state-of-the-art storage, latencies and bandwidths are also relevant in high-performance computing (HPC). So far, InfiniBand (IB) and Omni-Path (OPA) have been the solutions of choice. Here too, the trend is toward Ethernet networks, which offer similar latencies and bandwidths and scale much further.
How can you operate these kinds of networks?
Data Center Networks (DCN) are already fairly complex: thousands of ports, dozens of switches, and complex devices such as routers, firewalls, and load balancers. And instead of a single network, there are many different networks (management, backup, heartbeat, service networks, and so on), all of which are subdivided with VLANs or overlays. What I find amazing is that most network administrators can control this from the command line. But more on that later.
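To get a feel for the bookkeeping this implies, here is a toy sketch of the segments a fabric has to keep consistent across every switch and port; the role names, VLAN allocation scheme, and VNI range are made up purely for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Segment:
    name: str      # role of the network (management, backup, heartbeat, ...)
    vlan_id: int   # 802.1Q VLAN ID on the access side
    vni: int       # VXLAN network identifier used in the overlay

# Toy inventory: every tenant or cluster brings its own copy of these roles,
# so the number of segments grows multiplicatively with tenants.
ROLES = ["management", "backup", "heartbeat", "service", "storage"]

def build_segments(tenants: int) -> list[Segment]:
    segments = []
    for t in range(tenants):
        for i, role in enumerate(ROLES):
            vlan = 100 + t * len(ROLES) + i          # illustrative allocation scheme
            segments.append(Segment(f"tenant{t:03d}-{role}", vlan, vni=10_000 + vlan))
    return segments

if __name__ == "__main__":
    segs = build_segments(tenants=50)
    print(f"{len(segs)} segments to keep consistent across the whole fabric")
```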
Our CloudFabric 3.0!
Our solution is called CloudFabric 3.0. It precisely maps the above requirements for a hyper-converged network: lossless Ethernet networking (not just for storage and HPC), automated end-to-end lifecycle management, and AI- and rule-based operations and maintenance processes.
The whole thing is of course available via APIs, which facilitates integration into fully automated service provisioning.
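To illustrate what that integration can look like, here is a hedged sketch of an orchestrator creating a tenant network end to end through a northbound REST API; the endpoint path, payload fields, and authentication scheme are assumptions for the sake of the example, not CloudFabric's actual interface:

```python
import requests

CONTROLLER = "https://fabric-controller.example.com/api/v1"  # hypothetical northbound API

def provision_tenant_network(name: str, vni: int, subnet: str, token: str) -> str:
    """Ask the fabric controller to roll out an overlay network end to end.

    One API call replaces per-switch CLI work: the controller derives the
    required VXLAN, routing, and access configuration for every device.
    Endpoint and payload are assumptions made for this example.
    """
    payload = {"name": name, "vni": vni, "subnet": subnet}
    resp = requests.post(
        f"{CONTROLLER}/logical-networks",
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]  # controller-assigned network ID

if __name__ == "__main__":
    net_id = provision_tenant_network("tenant042-service", vni=10_342,
                                      subnet="10.42.0.0/24", token="<api-token>")
    print(f"Provisioned logical network {net_id}")
```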
Finally, I’d like to mention three highlights:
- Common server virtualization platforms and container runtimes are supported.
- Public cloud networks can be integrated directly, using clear graphical interfaces.
- We’ve also thought about our network administrators, who can still perform administration tasks using their consoles.
Until next time!
Disclaimer: Any views and/or opinions expressed in this post by individual authors or contributors are their personal views and/or opinions and do not necessarily reflect the views and/or opinions of Huawei Technologies.