Latency: The Other Critical Storage Networking Metric

Date: 2015-03-20
Author: 10Gtek
From a data storage perspective, latency is the time it takes for a data packet to travel from the initiator within the primary server to the target device. Excessive latency is particularly threatening to a storage environment because it creates a bottleneck that can delay or prevent data from reaching its final destination. When this occurs, additional packets must be sent, adding more traffic on the wire and ultimately risking network congestion.
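To make that definition concrete, here is a minimal Python sketch that times individual reads from the initiator's side. The file path, block size, and sample count are placeholder assumptions, and without O_DIRECT the OS will usually serve repeated reads from the page cache rather than the device, so treat this as the measurement pattern rather than a calibrated benchmark.

```python
# Minimal sketch: time individual reads from the initiator's point of view.
# DEVICE is a placeholder (assumption); point it at a file or block device
# on the storage under test. Note: without O_DIRECT, repeated reads are
# usually served from the page cache, not the actual device.
import os
import time

DEVICE = "/tmp/testfile"   # placeholder path (assumption)
BLOCK = 4096               # one 4 KiB read per sample
SAMPLES = 1000

fd = os.open(DEVICE, os.O_RDONLY)
samples_us = []
for _ in range(SAMPLES):
    t0 = time.perf_counter()
    os.pread(fd, BLOCK, 0)                        # read BLOCK bytes at offset 0
    samples_us.append((time.perf_counter() - t0) * 1e6)
os.close(fd)

samples_us.sort()
print(f"median: {samples_us[len(samples_us) // 2]:.1f} us")
print(f"p99:    {samples_us[int(len(samples_us) * 0.99)]:.1f} us")
```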


In today’s data storage environments, latency matters more than ever. Connections to low-latency storage such as SSDs and hybrid arrays are on the increase, VM densities are rising, and keeping up with ever-tightening SLAs is a growing challenge. For more on how killer applications are causing bottlenecks, read IT Brand Pulse’s brief, “16Gb Fibre Channel Eases Data Center Traffic Jams,” in the Fibre Channel Industry Association’s Knowledge Vault.


Three components each contribute to the overall latency of a storage network: distance, equipment, and protocols. Let’s focus on the protocol component and compare network latency across Fibre Channel, iSCSI, and 10GbE Fibre Channel over Ethernet (FCoE).
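As a rough illustration of how those three components add up, the sketch below models one-way latency as propagation delay (light travels through optical fiber at roughly 200,000 km/s, about 5 microseconds per kilometer), a per-hop equipment delay, and a lump sum for protocol processing. The per-hop and protocol figures are assumed placeholders, not measured values.

```python
# Back-of-envelope model of the three latency contributors: distance,
# equipment, and protocols. The per-hop and protocol numbers below are
# illustrative assumptions, not vendor or measured figures.

FIBER_US_PER_KM = 5.0      # ~200,000 km/s in optical fiber -> ~5 us/km
HOP_DELAY_US = 1.0         # assumed store-and-forward delay per switch

def one_way_latency_us(distance_km, switch_hops, protocol_us):
    propagation = distance_km * FIBER_US_PER_KM
    equipment = switch_hops * HOP_DELAY_US
    return propagation + equipment + protocol_us

# Example: a 10 km metro link, 2 switch hops, 10 us of protocol processing.
print(f"{one_way_latency_us(10, 2, 10):.1f} us one way")   # -> 62.0 us
```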


Fibre Channel has been the default SAN technology for many medium-to-large enterprise data centers running business-critical applications with high-bandwidth, high-transaction, and low-latency requirements. For an example, listen to Dan Pollack, Chief Storage Architect at AOL, talk about the need for shared storage and sustained performance in a short, informative video.


Although slower than Fibre Channel, iSCSI has begun to carve out a niche in small-to-medium-sized businesses because it can run over existing Ethernet networks using commodity 1Gb Ethernet NICs or LAN-on-motherboard (LOM) ports. With the introduction of 10GbE speeds, iSCSI is now being considered for enterprise deployments, but it imposes additional overhead and has higher latency than 8Gb or 16Gb Fibre Channel.


While FCoE is well suited to converging LAN and SAN traffic, it has yet to gain the popularity it deserves. Both iSCSI and FCoE layer storage traffic over Ethernet, but only iSCSI relies on the TCP/IP stack, which is typically processed by the host CPU. That added IP-layer overhead creates inefficiencies that appear as lower bandwidth, lower IOPS, higher latency, and increased CPU utilization.
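One way to see that overhead is to stack up the nominal header bytes each protocol wraps around a block of SCSI data. The sizes below are base values from the respective specs (no TCP options, no iSCSI digests, one PDU per frame, jumbo frames assumed for the iSCSI case), and the payload size is an assumption, so the percentages are illustrative only.

```python
# Illustrative per-frame header overhead for the three protocols.
# Sizes are nominal spec values; real traffic varies (TCP options,
# iSCSI digests, multi-segment PDUs, padding).

PAYLOAD = 2048  # bytes of SCSI data per frame/PDU (assumption)

stacks = {
    # Fibre Channel: SOF(4) + FC header(24) + CRC(4) + EOF(4)
    "FC":    4 + 24 + 4 + 4,
    # FCoE: Ethernet(14) + FCoE header(14) + FC header(24) + CRC(4)
    #       + FCoE trailer(4) + Ethernet FCS(4)
    "FCoE":  14 + 14 + 24 + 4 + 4 + 4,
    # iSCSI: Ethernet(14) + IPv4(20) + TCP(20) + iSCSI BHS(48) + FCS(4);
    # a 2048 B payload in one frame implies jumbo frames (assumption)
    "iSCSI": 14 + 20 + 20 + 48 + 4,
}

for name, overhead in stacks.items():
    efficiency = PAYLOAD / (PAYLOAD + overhead)
    print(f"{name:6s} overhead {overhead:3d} B  wire efficiency {efficiency:.1%}")
```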


FCoE reduces that overhead by operating at Layer 2: encapsulating Fibre Channel frames directly in Ethernet frames instead of IP packets means less per-packet processing. Native Fibre Channel goes a step further with buffer-to-buffer credits: the sender never has to halt a transfer abruptly, it simply slows down as the receiver’s buffers fill. By comparison, iSCSI depends on TCP for flow control, and FCoE depends on Priority Flow Control (PFC) for lossless Ethernet. In both cases, when the target starts running out of buffers it must send a message to stop the sender, and frames already in transit can be dropped if that signal arrives too late.
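The contrast between the two flow-control styles can be sketched as a toy simulation: credit-based flow control never transmits without a free receive buffer, while pause-based flow control reacts only once buffers fill, leaving in-flight frames at risk. Every number here (frame counts, buffer sizes) is an arbitrary assumption chosen only to show the mechanism.

```python
# Toy contrast of the two flow-control styles described above. All numbers
# are arbitrary assumptions; this shows the mechanism, not real behavior.

def credit_based(frames, bb_credits):
    """FC buffer-to-buffer credits: transmit only while a credit is held;
    each R_RDY from the receiver restores one credit. The sender slows
    as credits run out, but nothing is ever dropped."""
    credits, sent = bb_credits, 0
    while sent < frames:
        if credits == 0:
            credits += 1            # receiver frees a buffer, returns R_RDY
        credits -= 1
        sent += 1
    return f"credit-based: {sent} frames sent, 0 dropped"

def pause_based(frames, buffer_slots, frames_in_flight):
    """PFC-style pause: the receiver signals PAUSE only once its buffer
    fills, so frames already on the wire can overflow if headroom was
    not provisioned for them."""
    sent = dropped = 0
    for _ in range(frames):
        if buffer_slots == 0:              # PAUSE goes out now...
            dropped = frames_in_flight     # ...but these frames are already sent
            break
        buffer_slots -= 1
        sent += 1
    return f"pause-based: {sent} frames buffered, {dropped} in flight at risk"

print(credit_based(100, bb_credits=8))
print(pause_based(100, buffer_slots=16, frames_in_flight=4))
```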


One thing is certain: Fibre Channel isn’t going away anytime soon. Research shows that deployment is at an all-time high, as evidenced by this case study from Symantec. Fibre Channel is a purpose-built, data center-proven network infrastructure for storage, and it is the most functional protocol for efficiently transferring block-level storage. More than just a “speed,” Fibre Channel provides back-end network capabilities such as LAN-less backup and high-speed data migration, and it efficiently handles the high percentage of random I/O typical of highly virtualized environments.