Did you know that a single cable inside a data center can serve multiple purposes? With help from data center bridging technology, it can.
Data center bridging allows Ethernet and storage traffic to be combined on the same physical cables. This opens the door to important benefits like reduced complexity, faster troubleshooting and enhanced physical safety inside data centers.
Let’s explore how data center bridging transforms traditional networking by introducing key capabilities.
What Is Data Center Bridging?
Data center bridging is an extension of Ethernet technology that allows Ethernet cables to carry traffic for both standard local area networks (LANs) and storage area networks (SANs).
This is significant because, traditionally, Ethernet has mainly been used for LAN traffic. SAN infrastructure typically relies on Fibre Channel cabling, and while Ethernet can carry SAN traffic as well, doing so has usually required an Ethernet cable dedicated to the SAN, separate from the LAN Ethernet connection.
With data center bridging, however, LAN and SAN traffic can converge, flowing over a single Ethernet cable.
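To make that sharing concrete, here is a minimal sketch in plain Python (no third-party libraries) of how LAN and storage frames can ride the same wire yet remain distinguishable: DCB relies on the 3-bit Priority Code Point (PCP) inside the 802.1Q VLAN tag to assign each frame to a traffic class. The MAC addresses, VLAN ID and priority values below are illustrative assumptions, not settings from this article.

```python
import struct

def dcb_frame_header(dst_mac: bytes, src_mac: bytes, priority: int,
                     vlan_id: int, ethertype: int) -> bytes:
    """Build an Ethernet header with an 802.1Q tag carrying the DCB priority."""
    tpid = 0x8100                                       # 802.1Q tag protocol identifier
    tci = ((priority & 0x7) << 13) | (vlan_id & 0xFFF)  # PCP (3 bits) + DEI (0) + VLAN ID (12 bits)
    return dst_mac + src_mac + struct.pack("!HHH", tpid, tci, ethertype)

# Hypothetical MAC addresses and VLAN, for illustration only.
dst = bytes.fromhex("aabbccddeeff")
src = bytes.fromhex("001122334455")

# Storage traffic (FCoE, EtherType 0x8906) is commonly mapped to priority 3,
# which flow control can treat as lossless; ordinary LAN traffic (IPv4) rides priority 0.
storage_hdr = dcb_frame_header(dst, src, priority=3, vlan_id=100, ethertype=0x8906)
lan_hdr = dcb_frame_header(dst, src, priority=0, vlan_id=100, ethertype=0x0800)

print("storage:", storage_hdr.hex())
print("lan:    ", lan_hdr.hex())
```

Both headers travel over the same physical port; only the priority bits differ, and that is what lets the switch treat each stream according to its own rules.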
Data center bridging is based primarily on a set of standards (including 802.1Qbb, 802.1Qaz, 802.1Qau and 802.1AB) defined by the Institute of Electrical and Electronics Engineers (IEEE), which promulgates technology standards, among other activities. The technology has existed for more than a decade, but adoption rates have varied, and many data centers have yet to fully take advantage of its benefits.
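As a quick reference, the sketch below summarizes what each of those standards contributes to data center bridging; the one-line descriptions are a plain-language summary, not text from the standards themselves.

```python
# A quick-reference mapping of the DCB-related IEEE standards to what they provide.
DCB_STANDARDS = {
    "802.1Qbb": "Priority-based Flow Control (PFC): per-priority pause to prevent packet loss",
    "802.1Qaz": "Enhanced Transmission Selection (ETS) and DCBX: bandwidth allocation and capability exchange",
    "802.1Qau": "Congestion Notification (QCN): end-to-end congestion signaling",
    "802.1AB":  "LLDP: the link-layer discovery protocol DCBX uses to exchange settings",
}

for standard, role in DCB_STANDARDS.items():
    print(f"{standard}: {role}")
```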
The Advantages of Data Center Bridging
At first glance, the ability to transmit multiple types of data on the same Ethernet cable might not seem like a huge deal. But in certain key respects, it is. Data center bridging enables opportunities such as reduced cable sprawl and complexity, faster troubleshooting, enhanced physical safety inside data centers and lossless Ethernet behavior for storage traffic.
Data Center Bridging Challenges
While data center bridging can solve some common problems, like cable sprawl and Ethernet packet loss, it also introduces challenges of its own.
The biggest is added complexity at the networking layer. Data center bridging requires a more involved configuration, and it creates more ways for connections to fail if networking equipment doesn't handle bridged connections properly due to software bugs or configuration oversights.
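The sketch below illustrates the kind of sanity check that catches such oversights: the switch port and the server adapter must agree on which priorities are treated as lossless and how bandwidth is divided, or converged traffic can fail in ways that are hard to debug. The data structures and values here are hypothetical, not taken from any vendor's tooling.

```python
from dataclasses import dataclass, field

@dataclass
class DcbConfig:
    pfc_priorities: set = field(default_factory=set)   # priorities treated as lossless
    ets_bandwidth: dict = field(default_factory=dict)  # priority -> % of link bandwidth

def find_mismatches(switch: DcbConfig, host: DcbConfig) -> list:
    """Return descriptions of switch/host disagreements that could break converged traffic."""
    problems = []
    if switch.pfc_priorities != host.pfc_priorities:
        problems.append(f"PFC mismatch: switch={switch.pfc_priorities}, host={host.pfc_priorities}")
    if switch.ets_bandwidth != host.ets_bandwidth:
        problems.append(f"ETS mismatch: switch={switch.ets_bandwidth}, host={host.ets_bandwidth}")
    return problems

# Illustrative configurations: the host mistakenly marks an extra priority as lossless.
switch_cfg = DcbConfig(pfc_priorities={3}, ets_bandwidth={3: 50, 0: 50})
host_cfg = DcbConfig(pfc_priorities={3, 4}, ets_bandwidth={3: 50, 0: 50})

for issue in find_mismatches(switch_cfg, host_cfg):
    print(issue)
```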
Bridging also requires technicians to divide each cable's bandwidth into separate allocations, each handling a distinct type of traffic. This is an advantage in the sense that it makes it possible to create dedicated virtual 'pipes' over a single cable. However, if the bandwidth allocated to a given type of traffic is too low, which can happen when admins don't accurately forecast how much data a virtual pipe will need to handle, performance problems may result.
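Here is a minimal sketch of that allocation math, assuming a 10 Gbps converged link and illustrative traffic classes; the class names, percentages and demand forecasts are assumptions chosen to show how an undersized allocation surfaces.

```python
LINK_GBPS = 10  # assumed converged link speed

# Percentage of the link reserved for each traffic class (must total 100).
allocation_pct = {"storage": 60, "lan": 30, "management": 10}

# Rough forecast of peak demand per class, in Gbps.
forecast_gbps = {"storage": 5.0, "lan": 3.5, "management": 0.2}

assert sum(allocation_pct.values()) == 100, "allocations must add up to the full link"

for traffic_class, pct in allocation_pct.items():
    reserved = LINK_GBPS * pct / 100
    demand = forecast_gbps[traffic_class]
    status = "OK" if demand <= reserved else "UNDER-PROVISIONED"
    print(f"{traffic_class}: {reserved:.1f} Gbps reserved, {demand:.1f} Gbps forecast -> {status}")
```

Running it flags the LAN 'pipe' as under-provisioned, which is exactly the forecasting miss described above.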
How to Implement Data Center Bridging
To implement data center bridging, you typically need to turn it on at two layers of your infrastructure: the switches that carry the converged traffic, and the network adapters in the servers that connect to them.
In addition to turning on data center bridging features, you'll need to define settings for bandwidth allocation and traffic prioritization. These choices should reflect your organization's needs. If LAN traffic is more important than storage-related traffic, for instance, you'll want to prioritize packets accordingly on the converged cable infrastructure.
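As a rough illustration, the sketch below shows how such a choice might translate into settings, assuming the common convention of carrying storage on 802.1p priority 3 (so priority-based flow control can keep it lossless) and LAN on priority 0, with the bandwidth weights favoring LAN because it is deemed more important. The exact values are assumptions for the example only.

```python
# A hypothetical converged-link policy: storage stays on the customary lossless
# priority, but LAN receives the larger bandwidth share because it matters more here.
dcb_policy = {
    "lan":     {"priority": 0, "lossless": False, "ets_weight_pct": 60},
    "storage": {"priority": 3, "lossless": True,  "ets_weight_pct": 40},
}

for name, settings in dcb_policy.items():
    pfc = "PFC on" if settings["lossless"] else "PFC off"
    print(f"{name}: 802.1p priority {settings['priority']}, "
          f"{settings['ets_weight_pct']}% of link bandwidth, {pfc}")
```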