
Storage System Refresh – Making a Case for Mandatory Retirement

It’s hard to retire a perfectly good storage array. Budgets are tight, there’s a backlog of new projects in the queue, people are on vacation, and migration planning can be difficult. As long as there is no compelling reason to take it out of service, it’s far easier to simply leave it alone and focus on more pressing issues.

While this may be the path of least resistance, it can come at a high price.  There are a number of good reasons why upgrading storage arrays to modern technology may yield superior results and possibly save money too!

Capacity – When your aging disk array was installed several years ago, 300 GB, 10K RPM, FC disk drives were mainstream technology. It was amazing to realize you could squeeze up to 45 TB into a single 42U equipment rack! Times have changed. The same 10K RPM disk drive has tripled in capacity, providing 900 GB in the same 3.5 inch disk drive “footprint”. It’s now possible to get 135 TB (triple the capacity) into the same equipment rack configuration. Since data center rack space currently costs around $3,000 per month, that upgrade alone dramatically increases capacity without incurring any increase in floor-space cost.
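The floor-space economics are easy to sketch. A quick back-of-the-envelope calculation using the article's figures (45 TB vs. 135 TB per 42U rack, roughly $3,000 per month for rack space):

```python
# Back-of-the-envelope: floor-space cost per TB, old vs. new drive generation.
# Figures come from the article; the monthly rack cost is approximate.
RACK_COST_PER_MONTH = 3000   # USD per 42U rack
OLD_TB_PER_RACK = 45         # 300 GB drives
NEW_TB_PER_RACK = 135        # 900 GB drives, same footprint

print(f"Capacity increase: {NEW_TB_PER_RACK / OLD_TB_PER_RACK:.0f}x")
print(f"Old: ${RACK_COST_PER_MONTH / OLD_TB_PER_RACK:.2f}/TB/month")
print(f"New: ${RACK_COST_PER_MONTH / NEW_TB_PER_RACK:.2f}/TB/month")
```

Tripling rack density cuts the floor-space cost of each terabyte from roughly $66.67 to $22.22 per month.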

Density – Previous generation arrays packaged 12 to 15 3.5 inch FC or SATA disk drives into a single rack-mountable 4U enclosure. Modern disk arrays support from 16 3.5 inch disks per 3U tray to 25 2.5 inch disks per 2U tray. Special ultra-high density configurations may house up to 60 FC, SAS, or SATA disk drives in a 4U enclosure. As above, increasing storage density within an equipment rack significantly increases capacity while requiring no additional data center floor-space.
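To see what those enclosure densities mean at rack scale, here is a simple packing sketch that divides 42U by each shelf height (real racks reserve space for controllers, switches, and PDUs, so these counts are nominal):

```python
# Nominal drive count per 42U rack for each enclosure generation.
# Pure packing math; assumes the whole rack is available for shelves.
configs = [
    ("legacy 4U shelf, 15 drives", 4, 15),
    ("modern 3U shelf, 16 drives", 3, 16),
    ("modern 2U shelf, 25 drives", 2, 25),
    ("dense 4U shelf, 60 drives",  4, 60),
]
for name, height_u, drives_per_shelf in configs:
    shelves = 42 // height_u
    print(f"{name}: {shelves * drives_per_shelf} drives/rack")
```

Even without the capacity gains per drive, the move from 15-drive 4U shelves to 60-drive 4U enclosures quadruples the spindle count in the same rack.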

Energy Efficiency – Since the EPA’s 2007 IT energy efficiency study (Report to Congress on Server and Data Center Energy Efficiency, Public Law 109-431), IT manufacturers have stepped up efforts to improve the energy efficiency of their products. The result is disk drives that consume 25% to 33% less energy and storage array controllers that draw up to 30% less power. That has a significant impact on energy costs, covering not only the power to run the equipment but also the power to operate the cooling systems needed to purge residual heat from the environment.
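The cooling effect can be folded in with a PUE (Power Usage Effectiveness) multiplier. A hedged sketch, assuming an illustrative 10 kW aging array and a PUE of 1.8 (neither figure is from the article):

```python
# Hedged sketch: facility-level savings from more efficient storage gear.
# With a PUE of 1.8, every watt saved at the equipment also avoids
# roughly 0.8 W of cooling and distribution overhead.
PUE = 1.8
OLD_ARRAY_KW = 10.0   # assumed draw of the aging array (illustrative)
REDUCTION = 0.30      # ~30% lower consumption, per the article

saved_it_kw = OLD_ARRAY_KW * REDUCTION
saved_facility_kw = saved_it_kw * PUE
saved_kwh_per_year = saved_facility_kw * 24 * 365
print(f"Saved at the rack: {saved_it_kw:.1f} kW")
print(f"Saved facility-wide: {saved_facility_kw:.1f} kW "
      f"(~{saved_kwh_per_year:,.0f} kWh/year)")
```

Under these assumptions, a 3 kW reduction at the array avoids about 5.4 kW of total facility load, or roughly 47,000 kWh per year.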

Controller Performance – Storage array controllers are little more than specialized servers designed specifically to manage such functions as I/O ports, disk mapping, RAID and cache operations, and execution of array-centric internal applications (such as thin provisioning and snapshots). Like any other server, storage controllers have benefited from advances in technology over the past few years. The current generation of disk arrays contains storage controllers with three to five times the processing power of their predecessors.

Driver Compatibility – As newer technologies emerge, vendors tend to focus on software compatibility with the most recently released products and systems on the market. As time passes, it becomes less likely that older storage arrays will be supported by the latest and greatest technology. This may not impact daily operations, but it creates challenges when a need arises to integrate aging arrays with state-of-the-art systems.

Reliability – Common wisdom used to hold that disk failure characteristics could be accurately represented by a “bathtub curve”. The theory was that the potential for failure was high when a disk was new, flattened out at a low probability throughout the disk’s useful life, then turned sharply upward as the disk approached end-of-life. This model implied that extending disk service life had no detrimental effects until the disks approached end-of-life.

However, over the past decade, detailed studies by Google and other large organizations with massive disk farms have shown the “bathtub curve” model to be incorrect. Actual failure rates in the field indicate the probability of a disk failure increases by 10% to 20% for every year the disk is in service. In other words, the probability of failure rises steadily over the disk’s service life, and extending disk service life greatly increases the risk of disk failure.
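The gap between the two failure models compounds over a service life. A hedged sketch, assuming an illustrative 2% first-year annual failure rate that either stays flat (the bathtub plateau) or grows 15% per year (the middle of the 10%–20% range the studies report):

```python
def cumulative_failure(annual_rates):
    """P(disk has failed by the end of the period), from per-year rates."""
    survival = 1.0
    for rate in annual_rates:
        survival *= 1.0 - rate
    return 1.0 - survival

YEARS = 5
BASE_RATE = 0.02                                        # assumed first-year rate
flat = [BASE_RATE] * YEARS                              # bathtub plateau
growing = [BASE_RATE * 1.15**y for y in range(YEARS)]   # +15% per year

print(f"Flat model, 5-year failure probability:    {cumulative_failure(flat):.1%}")
print(f"Growing model, 5-year failure probability: {cumulative_failure(growing):.1%}")
```

Even with these modest assumptions the growing-rate model yields a noticeably higher five-year failure probability, and the divergence accelerates in years six and seven, exactly when many arrays are kept past their prime.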

Service Contracts – Many popular storage arrays are covered by standard three-year warranties. This creates a dilemma, since the useful service life of most storage equipment is considered to be four or five years. When the original warranty expires, companies must decide whether to extend the existing support contract (at a significantly higher cost) or transition to a time & materials basis for support (which can result in some very costly repairs).

Budgetary Impact – For equipment like disk arrays, it is far too easy to fixate on replacement costs (CAPEX) and ignore ongoing operational expenses (OPEX). This may avoid large upfront expenditures, but it slowly bleeds the IT budget to death maintaining increasingly inefficient, fault-prone, and power-hungry equipment.

The solution is to establish a program of rolling equipment replenishment on a four- or five-year cycle.   By regularly upgrading 20% to 25% of all systems each year, the IT budget is more manageable, equipment failures are controlled, and technical obsolescence remains in check.
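The budgetary effect of a rolling refresh is easy to illustrate. A sketch using an assumed 20-array fleet at an illustrative $100,000 replacement cost per array (both numbers are hypothetical):

```python
FLEET_SIZE = 20       # arrays in service (assumed)
UNIT_COST = 100_000   # replacement cost per array, USD (illustrative)
CYCLE_YEARS = 5       # refresh 20% of the fleet each year

# Rolling refresh: the same total spend, spread evenly across the cycle.
rolling = [FLEET_SIZE // CYCLE_YEARS * UNIT_COST] * CYCLE_YEARS
big_bang = [FLEET_SIZE * UNIT_COST] + [0] * (CYCLE_YEARS - 1)

print(f"Rolling annual CAPEX:  {rolling}")
print(f"Big-bang annual CAPEX: {big_bang}")
print(f"Totals match: {sum(rolling) == sum(big_bang)}")
```

Total spend is identical either way; the rolling approach trades one $2M budget spike for five predictable $400K line items, and no array ever runs past year five.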

Getting rid of familiar things can be difficult. But unlike your favorite slippers, the La-Z-Boy recliner, or your special coffee cup, keeping outdated storage arrays in service well beyond their prime can cost your organization plenty.

16 Gbps Fibre Channel – Do the Benefits Outweigh the Cost?

With today’s technology there can be no status quo.  As the IT industry advances, so must each organization’s efforts to embrace new equipment, applications, and approaches.  Without an ongoing process of improvement, IT infrastructures progressively become outdated and the business group they support grows incrementally less effective.

In September of 2010, the INCITS T11.2 Committee ratified the standard for 16Gbps Fibre Channel, ushering in the next generation of SAN fabric.  Unlike Ethernet, Fibre Channel is designed for one specific purpose – low overhead transmission of block data.  While this capability may be less important for smaller requirements where convenience and simplicity are paramount, it is critical for larger datacenters where massive storage repositories must be managed, migrated, and protected.  For this environment, 16Gbps offers more than twice the bandwidth of the current 8Gbps SAN and 40% more bandwidth than the recently released 10Gbps Ethernet with FCoE (Fibre Channel over Ethernet).

But is an investment in 16Gbps Fibre Channel justified? If a company has reached a point where SAN fabric is approaching saturation or SAN equipment is approaching retirement, then definitely yes! Here is how 16Gbps stacks up against both slower Fibre Channel implementations and 10Gbps Ethernet.

Emulex Model   Port Speed / Protocol    Avg. HBA/NIC Price   Transfer Rate   Transfer Time (10 TB)   Cost per MB/sec.   Bandwidth Difference
LPe16002       16 Gbps Fibre Channel    $1,808               1,939 MB/sec.   1.43 hrs.               $0.93              160%
OCe11102       10 Gbps Ethernet         $1,522               1,212 MB/sec.   2.29 hrs.               $1.26              100%
LPe12002       8 Gbps Fibre Channel     $1,223               800 MB/sec.     3.47 hrs.               $1.53              65%
LPe11000       4 Gbps Fibre Channel     $891                 400 MB/sec.     6.94 hrs.               $2.23              32%

This table highlights several differences between 4/8/16 Gbps Fibre Channel and 10Gbps Ethernet with FCoE technology (sometimes marketed as Unified Storage). The street prices for a popular I/O controller manufacturer show relatively small differences between controller prices, particularly for the faster controllers. Although the 16Gbps HBA delivers roughly 60% more throughput than the 10Gbps Ethernet NIC, it is only about 19% more expensive!
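One arithmetic note: the transfer times listed work out to moving 10 TB (10^7 MB in decimal units) at the quoted rates, and the derived columns can be reproduced directly from the rate and price figures:

```python
# Reproduce the table's derived columns from transfer rate and street price.
hbas = [
    ("LPe16002", 1939, 1808),   # 16 Gbps Fibre Channel
    ("OCe11102", 1212, 1522),   # 10 Gbps Ethernet (FCoE)
    ("LPe12002",  800, 1223),   # 8 Gbps Fibre Channel
    ("LPe11000",  400,  891),   # 4 Gbps Fibre Channel
]
DATASET_MB = 10_000_000   # 10 TB in decimal megabytes

for model, rate_mb_s, price_usd in hbas:
    hours = DATASET_MB / rate_mb_s / 3600
    cost_per_mb_s = price_usd / rate_mb_s
    print(f"{model}: {hours:.2f} hrs, ${cost_per_mb_s:.2f} per MB/sec.")
```

Running this yields the table's transfer-time and cost-per-MB/sec. columns to the penny, which makes it easy to re-run the comparison with your own negotiated prices.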

However, a far more important issue is that 16Gbps fibre channel is backward compatible with existing 4/8 Gbps SAN equipment.  This allows segments of the SAN to be gradually upgraded to leading-edge technology without having to suffer the financial impact of legacy equipment rip-and-replace approaches.

In addition to providing a robust, purpose-built infrastructure for migrating large blocks of data, it also offers lower power consumption per port, a simplified cabling infrastructure, and the ability to “trunk” (combine) channel bandwidth up to 128Gbps!   It doubles the number of ports and available bandwidth in the same 4U rack space for edge switches, providing the potential for a saving of over $3300 per edge switch.

Even more significant is that 16Gbps provides the additional performance necessary to support the next generation of storage, which will be based on 6Gbps and 12Gbps SAS disk drives. Unlike legacy FC storage, which was based upon 4Gbps FC-AL arbitrated loops, the new SAS arrays use switched connections. Switching provides a point-to-point connection for each disk drive, ensuring every 6Gbps SAS connection (or, in the near future, 12Gbps SAS connection) has a direct path to the array controller. This eliminates the back-end saturation of legacy FC-AL shared buses, and will place far greater demand on SAN fabric performance.
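The back-end bandwidth argument can be made concrete. A hedged comparison for an assumed 24-drive shelf, using nominal link rates only (real throughput is lower after encoding and protocol overhead):

```python
DRIVES_PER_SHELF = 24   # assumed shelf size (illustrative)
FC_AL_LOOP_GBPS = 4     # one 4 Gbps loop shared by every drive on it
SAS_LINK_GBPS = 6       # each drive gets its own 6 Gbps switched link

fc_al_aggregate = FC_AL_LOOP_GBPS                   # shared, regardless of drive count
sas_aggregate = DRIVES_PER_SHELF * SAS_LINK_GBPS    # point-to-point links add up

print(f"FC-AL shelf aggregate: {fc_al_aggregate} Gbps (shared bus)")
print(f"Switched SAS aggregate: {sas_aggregate} Gbps")
```

The shared loop caps the whole shelf at its 4 Gbps line rate, while the switched design scales with drive count, which is why the front-end SAN fabric, not the back end, becomes the next bottleneck.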

So do the benefits of 16Gbps Fibre Channel outweigh its modest price premium? Like many things in life – it depends! Block-based 16Gbps Fibre Channel SAN fabric is not for every storage requirement, but neither is 10Gbps FCoE or iSCSI. For a departmental storage requirement, or an environment where NAS or iSCSI has previously been deployed, replacing the incumbent protocol with 16Gbps Fibre Channel may or may not have merit. However, large SAN storage arrays are particularly dependent on high-performance equipment specifically designed for efficient data transfers. This is an arena where the capabilities and attributes of 16Gbps Fibre Channel will shine.

In any case, the best protection against making a poor choice is to thoroughly research the strengths and weaknesses of each technology and to seek professional guidance from a vendor-neutral storage expert with a subject-matter-expert level understanding of the storage industry and its technology.