What is InfiniBand and What is an InfiniBand Cable?

Date: 2012-12-11 11:38:12
Author: 10Gtek
What is InfiniBand?
InfiniBand is a switched fabric communications link used in high-performance computing and enterprise data centers.


Its features include high throughput, low latency, quality of service, and failover, and it is designed to be scalable. The InfiniBand architecture specification defines a connection between processor nodes and high-performance I/O nodes such as storage devices. InfiniBand host bus adapters and network switches are manufactured by Mellanox and Intel (which acquired QLogic's InfiniBand business in January 2012).


InfiniBand forms a superset of the Virtual Interface Architecture (VIA).


Description of InfiniBand
Like Fibre Channel, PCI Express, Serial ATA, and many other modern interconnects, InfiniBand offers point-to-point bidirectional serial links intended for the connection of processors with high-speed peripherals such as disks. In addition to its point-to-point capability, InfiniBand also offers multicast operations. It supports several signaling rates and, as with PCI Express, links can be bonded together for additional throughput.
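
To make the rate and bonding arithmetic concrete, here is a minimal Python sketch. It assumes the commonly published per-lane signaling rates (SDR 2.5, DDR 5, QDR 10 Gbit/s), 8b/10b line encoding (80% efficiency) for those generations, and the standard 1X/4X/12X link widths; the function name is ours for illustration, not part of any InfiniBand API.

# Effective InfiniBand data rate for a bonded link.
# Assumes per-lane signaling rates and 8b/10b encoding (SDR/DDR/QDR),
# which carries 8 data bits in every 10 line bits (80% efficiency).

SIGNALING_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}  # per lane
ENCODING_EFFICIENCY = 0.8  # 8b/10b

def effective_rate_gbps(rate="DDR", lanes=4):
    """Usable data rate in Gbit/s for a bonded 1X/4X/12X link."""
    return SIGNALING_GBPS[rate] * lanes * ENCODING_EFFICIENCY

for rate in ("SDR", "DDR", "QDR"):
    for lanes in (1, 4, 12):
        print(f"{lanes}X {rate}: {effective_rate_gbps(rate, lanes):.0f} Gbit/s usable")

A 4X DDR link, for example, signals at 20 Gbit/s but carries 16 Gbit/s of data, which is why quoted InfiniBand speeds often differ from usable throughput.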


Applications
InfiniBand has been adopted in enterprise datacenters, for example in the Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster, in the financial sector, and in cloud computing (an InfiniBand-based system won Best of VMworld for Cloud Computing). InfiniBand has mostly been used for high-performance computer cluster applications. A number of the TOP500 supercomputers have used InfiniBand, including the former reigning fastest supercomputer, the IBM Roadrunner.


SGI, LSI, DDN, Oracle, and Rorke Data, among others, have also released storage utilizing InfiniBand "target adapters". These products essentially compete with architectures such as Fibre Channel, SCSI, and other more traditional connectivity methods. Such target-adapter-based disks can become part of the fabric of a given network, in a fashion similar to DEC VMS clustering. The advantage of this configuration is lower latency and higher availability to nodes on the network (because of the fabric nature of the network). In 2009, the Oak Ridge National Laboratory Spider storage system used this type of InfiniBand-attached storage to deliver over 240 gigabytes per second of bandwidth.
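
As a rough back-of-the-envelope check on how a fabric aggregates bandwidth, the sketch below estimates how many bonded links a storage system would need to reach a target like Spider's 240 GB/s. The link choice (4X DDR) and the perfectly balanced striping are our assumptions for illustration, not published details of that system.

# Rough estimate: links needed to aggregate a target bandwidth.
# ASSUMPTION: 4X DDR links at 16 Gbit/s usable (see earlier sketch),
# i.e. 2 GB/s per link, and ideal striping across links.

import math

USABLE_GBPS_PER_LINK = 16              # 4X DDR after 8b/10b encoding
GB_PER_S_PER_LINK = USABLE_GBPS_PER_LINK / 8

def links_needed(target_gb_per_s):
    return math.ceil(target_gb_per_s / GB_PER_S_PER_LINK)

print(links_needed(240))  # -> 120 links at 2 GB/s each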


What is an InfiniBand Cable?
InfiniBand cables are factory-terminated copper cable assemblies constructed of high-speed twinaxial shielded cable terminated with 4X MicroGigaCN™ connectors on each end. The cables are designed for insertion into standard 4X receptacles.


InfiniBand cables are generally thicker, bulkier, and heavier than traditional Category 5e and Category 6 UTP cabling. These cables are also sensitive to bend radius, so care should be taken during installation, with attention to proper strain relief, to ensure a reliable connection over time.


Passive copper InfiniBand cables do have reach limitations and thus constrain the size of the cluster that can be built with copper. Generally, DDR clusters of around 500 nodes can be built with passive copper cables, and even larger clusters can be achieved with optimized layouts.
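
As an illustration of how reach constrains a copper build-out, here is a small Python sketch that flags node-to-switch runs exceeding an assumed passive-copper limit. The roughly 10 m DDR reach used here is a ballpark assumption, and the node names and distances are invented for the example; always check the cable vendor's specifications for real limits.

# ASSUMPTION: ~10 m as a typical passive copper reach at DDR rates.
PASSIVE_COPPER_REACH_M = 10

# Hypothetical node-to-switch cable runs in metres.
runs = {"node-001": 3.5, "node-212": 9.0, "node-487": 14.2}

for node, metres in runs.items():
    if metres > PASSIVE_COPPER_REACH_M:
        print(f"{node}: {metres} m exceeds passive copper reach; use optical or re-lay")
    else:
        print(f"{node}: {metres} m OK on passive copper")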