Storage predictions for 2015

Date: 2014-12-25 19:39:42
Author: 10Gtek

In 2015, infrastructure convergence will become the new norm in data centers. IT vendors have been hyping "hyper-converged" solutions as the core of their value proposition, promising lower costs and simpler IT roll-outs by providing modular building blocks for the new data center. The enabling technology behind these converged solutions is virtualization, which abstracts workloads away from the underlying hardware. Server virtualization is now standard practice, and it has become a driving factor behind converged infrastructure for both networks and storage.


Virtual servers are killing off traditional SAN storage. To be cost effective, blade servers and virtual servers need to share CPU hardware and storage interconnects, and that sharing is forcing storage to move closer to, or even back into, the servers. Because virtual servers share compute and memory resources, as well as the I/O connectivity for both network and storage, they increase the need for lower latency, better connectivity, and higher bandwidth on those shared resources.


As the adoption of modular building blocks for the converged data center continues, the traditional host bus adapter (HBA), currently used for Fibre Channel block I/O in a SAN, will be replaced with either host channel adapters (HCAs) or converged network adapters (CNAs). These adapters carry multiple protocols over a single link, and because multiple links can be combined, they promise some astounding performance benefits for storage in 2015.


Just like Fibre Channel (FC), InfiniBand (IB) has been around for years, but its adoption has stayed mainly within the high performance computing (HPC), scientific, grid-computing, and multi-node high-performance cluster spaces, where databases and big data are king. IBM has been using InfiniBand as the internal interconnect in its large P-series servers for quite some time, and Oracle and others have now adopted InfiniBand as the interconnect within their high-end converged solutions.


Most blade server vendors also use either IB or backplane crossbar switches as the internal interconnect in their blade systems. According to the InfiniBand Trade Association (IBTA), speeds of 300 Gb/s or more will be available in the 2015 timeframe using 12x enhanced data rate (EDR) links.
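To put that 12x number in context, the aggregate speed of an IB link is just the per-lane data rate multiplied by the lane count. Here's a quick sketch in Python; the roughly 25 Gb/s of usable data rate per EDR lane is my own round figure for illustration, not an IBTA specification.

```python
# Rough InfiniBand link math: aggregate speed = per-lane data rate x lane count.
EDR_LANE_GBPS = 25  # approximate usable Gb/s per EDR lane (illustrative round number)

for lanes in (1, 4, 12):
    print(f"{lanes:2d}x EDR ~ {lanes * EDR_LANE_GBPS} Gb/s")

# Prints roughly:
#  1x EDR ~ 25 Gb/s
#  4x EDR ~ 100 Gb/s
# 12x EDR ~ 300 Gb/s   <- the IBTA figure quoted above
```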


In 2015, I foresee next-generation Ethernet and InfiniBand network adapters making further inroads as the interconnect of choice for storage I/O. Having fast SSD storage is pointless if the interconnect between devices is slower than the storage itself. As an example, today's fast 10 Gb Ethernet can only move about 1.25 GB/s (capital B, for bytes), which translates to only about 4.5 TB/h (terabytes per hour). As a rule of thumb, you can average about 100 MB/s of real-world throughput for every 1 Gb/s of bandwidth (allowing for the 8 bits per byte plus typical network hops and overhead). Heck, my laptop's solid state disk (SSD) can do sustained reads at over 500 MB/s, which would saturate roughly half of that 10 Gb link just to back up one drive.
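Here's a minimal sketch of that back-of-the-envelope math in Python, showing both the raw line rate and the more conservative 100 MB/s-per-1 Gb/s rule of thumb:

```python
def raw_tb_per_hour(link_gbps: float) -> float:
    """Raw line rate: Gb/s -> GB/s (divide by 8), then GB/s -> TB/h (x 3600 / 1000)."""
    return (link_gbps / 8) * 3600 / 1000

def rule_of_thumb_tb_per_hour(link_gbps: float) -> float:
    """Rule of thumb: ~100 MB/s of real-world throughput per 1 Gb/s of bandwidth."""
    return (link_gbps * 100) * 3600 / 1_000_000

print(raw_tb_per_hour(10))            # 4.5 -> about 4.5 TB/h at the raw 10 GbE rate
print(rule_of_thumb_tb_per_hour(10))  # 3.6 -> about 3.6 TB/h after real-world overhead

# A single SSD streaming 500 MB/s is 4 Gb/s of traffic -- a sizeable chunk
# of one 10 Gb link before anything else has to share it.
```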


Think about that: even with today's high-speed 10 Gb Ethernet cards in your servers, you can only move about 4.5 TB per hour per link. Four terabytes (4 TB) is a typical capacity for a single enterprise disk drive today, so let's say your organization has about a hundred of them that need to be backed up.


100 drives x 4 TB = 400 TB; 400 TB / 4.5 TB/h ≈ 89 hours to back it all up!


Even if you multiplex two connections, it's still roughly 44 hours per 100 drives. Remember that the Ethernet protocol was built for sharing files and objects, not blocks, which is why a SAN is still a good idea for data protection: the Fibre Channel protocol was built to move large blocks of data fast. I see the traditional dedupe and backup vendors adding 100 Gb/s Ethernet or InfiniBand as the interconnect in 2015.
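The backup-window math is easy to sketch. Assuming the ~4.5 TB/h per-link figure above and perfectly even scaling across links (which real deployments won't quite achieve):

```python
def backup_hours(drives: int, drive_tb: float, link_tb_per_hour: float, links: int = 1) -> float:
    """Hours needed to move the full drive capacity, assuming links scale perfectly."""
    return (drives * drive_tb) / (link_tb_per_hour * links)

# 100 x 4 TB drives over a single 10 GbE link at ~4.5 TB/h:
print(round(backup_hours(100, 4, 4.5), 1))           # ~88.9 hours
# Multiplexing two links halves the window, but it's still multi-day:
print(round(backup_hours(100, 4, 4.5, links=2), 1))  # ~44.4 hours
# A 100 Gb/s link (~45 TB/h raw) gets the same job done in under nine hours:
print(round(backup_hours(100, 4, 45.0), 1))          # ~8.9 hours
```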


The physics of protecting large amounts of data will drive the adoption of protection as a service in the new data center. Continuous protection will be used to ensure data is always protected for applications with stringent service levels. Everything else will be protected either by snapshots or by some form of geographically dispersing data objects for protection (erasure coding is one example).
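For readers who haven't run into erasure coding, the appeal is easy to show: a hypothetical layout with k data fragments and m parity fragments spread across nodes or sites survives the loss of any m fragments, at a fraction of the capacity cost of keeping full copies. A minimal sketch:

```python
def erasure_coding_tradeoff(k: int, m: int) -> tuple[int, float]:
    """For k data + m parity fragments, return (fragment losses tolerated, storage overhead)."""
    return m, (k + m) / k

# Example: a 10+4 layout tolerates any 4 lost fragments (or sites)
# at 1.4x raw capacity, versus 3x for keeping three full copies.
tolerated, overhead = erasure_coding_tradeoff(10, 4)
print(f"tolerates {tolerated} lost fragments at {overhead:.1f}x capacity")
```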


In fact, the method of providing guaranteed performance and protection may become the deciding value factor in the solutions you choose in 2015. Other factors will be the ability to dynamically mirror or migrate data between storage vendors and clouds, the ability to dynamically move data to archives and track it there, and the ability to provide metadata about the stored data so it can be used as a big data source. Snapshots and clones for test and development storage will be a given, as will integrated multi-tenancy and security. The best solutions will include storage abstraction and policy-based intelligence, which I describe as a Data Services Engine (DSE).
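I haven't spelled out what a DSE interface looks like, so the following is purely a hypothetical sketch: a policy record capturing the service levels described above, which a policy engine could attach to each dataset it manages.

```python
from dataclasses import dataclass, field

@dataclass
class DataPolicy:
    """Hypothetical policy record a Data Services Engine might attach to a dataset."""
    protection: str                                      # e.g. "continuous", "snapshot", "erasure-coded"
    mirror_targets: list = field(default_factory=list)   # vendors/clouds to mirror or migrate to
    archive_after_days: int = 0                          # when to move the data to an archive tier
    tenant: str = "default"                              # multi-tenancy boundary
    expose_metadata: bool = False                        # surface metadata as a big data source

gold_tier = DataPolicy(
    protection="continuous",
    mirror_targets=["array-vendor-a", "public-cloud-b"],
    archive_after_days=365,
    tenant="finance",
    expose_metadata=True,
)
print(gold_tier)
```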


Next steps


The shift of converged data centers toward server-node-based building blocks will require a new way to architect how data centers are built and to determine the value of a particular building block being procured. New standards of measurement, in units that combine storage, network, and CPU performance, will need to be adopted, and the current standards for measuring power efficiency and floor space requirements need to be folded in. Let's define this unit of measurement as a DCE, or data center element. Data center elements can be combined into racks to create a data center unit (DCU), and the DCU becomes the modular building block for the next generation data center.


The concept is simple. A DCE is built by combining NAS, SAN, and object storage into a single node that also includes the network, compute, and interconnect elements (i.e., the host channel adapter, or HCA). Each DCU is then built from multiple DCEs in a standard 19-inch rack. The MIPS, IOPS, and network throughput, combined with the power, floor space, and cooling requirements, make up the data center unit of measurement provided by that solution. The cost per DCU could become the determining factor in how data center infrastructure is measured and procured, and this model works for both cloud and traditional data center environments.
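I'm not prescribing a formula here, so the following is a purely illustrative sketch of how such a unit might be tallied: sum each DCE's performance and facility figures into a DCU, then divide the rack's cost by whichever dimension matters most to you.

```python
from dataclasses import dataclass

@dataclass
class DCE:
    """One data center element: a converged node's performance and facility footprint."""
    mips: float
    iops: float
    net_gbps: float
    power_kw: float
    rack_units: int

def dcu_summary(nodes: list, rack_cost: float) -> dict:
    """Roll a rack of DCEs up into a DCU and compute an illustrative cost metric."""
    total_iops = sum(n.iops for n in nodes)
    return {
        "mips": sum(n.mips for n in nodes),
        "iops": total_iops,
        "net_gbps": sum(n.net_gbps for n in nodes),
        "power_kw": sum(n.power_kw for n in nodes),
        "rack_units": sum(n.rack_units for n in nodes),
        "cost_per_million_iops": rack_cost / (total_iops / 1e6),
    }

# A hypothetical rack of 16 identical converged nodes costing $400k in total:
rack = [DCE(mips=50_000, iops=200_000, net_gbps=40, power_kw=0.8, rack_units=2)] * 16
print(dcu_summary(rack, rack_cost=400_000))
```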


Summary


Change in IT is happening fast, and keeping up with all the changes in technology is hard. That is why vendors are so busy simplifying their offerings to make it easier for you to buy their stuff. Instead of you buying components from multiple vendors and creating your own implementation best practices as in the past, the vendors are now doing that hard work for you. The standards bodies need to do their part and create the standards and tools that make it simple for you to determine which solutions provide the best value for your money. When common measurement units are available to effectively compare converged infrastructure solutions, building the next gen data center may become as simple as a return on investment (ROI) math problem. Here's looking at a happy new year! Good luck in the year ahead.