Modular Datacenter Units – The End of Traditional Enterprise Datacenters?
Traditional brick-and-mortar datacenters have been a mainstay of enterprise computing since the days of the mainframe. IT systems were kept in isolation in windowless, highly secure facilities that provided a constant temperature and humidity environment on a 24×7 basis. Yet even as the cost of building new datacenters continues to climb, relatively few alternatives have been available until now.
However, with the development of the portable modular datacenter, the day of the traditional datacenter may be coming to an end. While there are several variations on the market, the most promising appears to be the completely built-out facility. These datacenter modules are constructed from ISO-standard shipping containers and incorporate chillers, power and communications buses, forced-air cooling, equipment racks, and every other component necessary for a modern datacenter. The units can be trucked to any location, moved into position on a concrete pad, connected to external resources, and be ready for systems build-out on short notice. They can be configured to operate as a single unit, as multiple units, or even as stacked arrays of modular datacenter units.
Beyond serving as a modular replacement for the traditional brick-and-mortar datacenter, portable modular datacenters open up other possibilities:
RAPID DEPLOYMENT MODULES – For situations where rapid implementation is a key driver, or when companies simply can’t wait the 18-24 months for a new datacenter build-out.
COST CONTAINMENT – Situations where minimizing the cost for building a new datacenter facility is a primary objective
DISASTER RECOVERY – A highly flexible, cost-effective IT environment that can be deployed remotely for a Disaster Recovery solution
CAPACITY-ON-DEMAND – Modular, self-contained units that permit companies to add new datacenter capacity only as required (Capacity-as-a-Service?)
TEMPORARY FACILITIES – Allows companies to continue to support ongoing IT operations while a permanent datacenter facility is built
SEGREGATED SYSTEMS – Enables complete isolation of specific IT operations in an otherwise shared environment (Community Cloud?)
DYNAMIC MARKETS – A solution for highly volatile markets where future capacity requirements are difficult to predict
EMERGENCY CAPACITY – Available for relatively rapid deployment when an organization’s primary datacenter runs out of floor space
SYNCHRONOUS REPLICATION – Allows the implementation of a small replication site within 40 km of the primary datacenter, close enough to support synchronous replication while maintaining database consistency
MOBILE SYSTEMS – A portable IT solution that could be relocated to a different region in response to changing corporate needs or an impending disaster (such as a major hurricane).
PREFABRICATED SUB-SYSTEMS – A transportable platform for high-growth companies that must buy integrated sub-systems from an external vendor rather than building the equipment themselves.
REPURPOSING OF BUILDINGS – Modular units may be installed within existing buildings that are sitting idle, as long as adequate resources (power and communications) are available.
Another big benefit of portable modular datacenter units is that they’re built in a factory to exact specifications. As such, they benefit from repetitive manufacturing processes and ongoing quality assurance reviews, so each module features the same level of quality and reliability as its peers. This is in sharp contrast to traditional brick-and-mortar datacenters, which are normally built as one-off custom projects.
The concept of the portable modular datacenter is pretty clever, and if there are any downsides to the technology they are not readily apparent. Although this is a relatively new approach, it appears distinctly superior to what’s been done in the past. Don’t be surprised to see a new modular datacenter unit being installed on a concrete pad near you in the foreseeable future.
Storage System Refresh – Making a Case for Mandatory Retirement
It’s hard to retire a perfectly good storage array. Budgets are tight, there’s a backlog of new projects in the queue, people are on vacation, and migration planning can be difficult. As long as there is no compelling reason to take it out of service, it’s far easier to simply leave it alone and focus on more pressing issues.
While this may be the path of least resistance, it can come at a high price. There are a number of good reasons why upgrading storage arrays to modern technology may yield superior results and possibly save money too!
Capacity – When your aging disk array was installed several years ago, 300 GB, 10K RPM, FC disk drives were mainstream technology. It was amazing to realize you could squeeze up to 45 TB into a single 42U equipment rack! Times have changed. The same 10K RPM disk drive has tripled in capacity, providing 900 GB in the same 3.5-inch disk drive “footprint”. It’s now possible to get 135 TB (a 200% capacity increase) into the same equipment rack configuration. Since data center rack space currently costs around $3,000 per month, that upgrade alone will dramatically increase capacity without incurring any increase in floor-space cost.
Density – Previous-generation arrays packaged (12) to (15) 3.5-inch FC or SATA disk drives into a single rack-mountable 4U tray. Modern disk arrays support from (16) 3.5-inch disks per 3U tray to (25) 2.5-inch disks per 2U tray, and special ultra-high-density configurations may house up to (60) FC, SAS, or SATA disk drives in a 4U enclosure. As above, increasing storage density within an equipment rack significantly increases capacity while requiring no additional data center floor-space.
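The rack arithmetic behind these two points is easy to sanity-check. Below is a minimal sketch in Python; the assumption of 40U of usable drive space in a 42U rack is an illustrative one, while the tray and drive figures come from the examples above.

```python
# Raw capacity per rack: (trays that fit) x (drives per tray) x (drive size).
# Assumes ~40U of a 42U rack is available for drive trays (illustrative);
# tray and drive figures are taken from the examples in the text.

def rack_capacity_tb(drives_per_tray, tray_u, drive_gb, usable_u=40):
    trays = usable_u // tray_u
    return trays * drives_per_tray * drive_gb / 1000

print(rack_capacity_tb(15, 4, 300))  # 45.0 TB  - legacy 300 GB FC drives
print(rack_capacity_tb(15, 4, 900))  # 135.0 TB - same shelves, 900 GB drives
print(rack_capacity_tb(60, 4, 900))  # 540.0 TB - ultra-dense (60)-drive 4U enclosures
```

Note that the jump from 45 TB to 135 TB comes purely from larger drives; combining it with the denser enclosures multiplies capacity again without consuming another tile of floor space.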
Energy Efficiency – Since the EPA’s IT energy efficiency study in 2007 (Report to Congress on Server and Data Center Energy Efficiency, Public Law 109-431), IT manufacturers have increased efforts to improve the energy efficiency of their products. This has resulted in disk drives that consume 25% to 33% less energy, and storage array controllers that lower power consumption by up to 30%. That has a significant impact on energy costs, covering not only the power to run the equipment but also the power to operate the cooling systems needed to purge residual heat from the environment.
Controller Performance – Storage array controllers are little more than specialized servers designed specifically to manage such functions as I/O ports, disk mapping, RAID and cache operations, and execution of array-centric internal applications (such as thin provisioning and snapshots). Like any other server, storage controllers have benefited from advances in technology over the past few years. The current generation of disk arrays contains storage controllers with three to five times the processing power of their predecessors.
Driver Compatibility – As newer technologies emerge, vendors tend to focus on software compatibility with the most recently released products and systems on the market. With the passage of time, it becomes less and less likely that an aging storage array will be supported by the latest and greatest technology. This may not impact daily operations, but it creates challenges when a need arises to integrate aging arrays with state-of-the-art systems.
Reliability – Common wisdom used to be that disk failure characteristics could be accurately represented by a “bathtub curve”. The theory was that the potential for failure was high when a disk was new, flattened out at a low probability throughout the disk’s useful life, then turned sharply upward as the disk approached end-of-life. This model implied that extending disk service life had no detrimental effect until the disks neared end-of-life.
However, over the past decade, detailed studies by Google and other large organizations with massive disk farms have proven the “bathtub curve” model incorrect. Actual failure rates in the field indicate that the probability of a disk failure increases by 10% to 20% for every year the disk is in service. The data clearly show the probability of failure rising steadily over the disk’s service life, so extending disk service life greatly increases the risk of disk failure.
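To see what a steadily rising failure rate means in practice, here is a small illustrative sketch. The 5% first-year failure rate and the 15% annual increase are assumptions chosen from the middle of the range cited above, not figures taken from the studies themselves.

```python
# Cumulative probability that a drive has failed by the end of each service
# year, when the annual failure rate (AFR) rises ~15% year over year.
# The 5% base AFR and the 15% growth rate are illustrative assumptions.

def cumulative_failure(base_afr=0.05, yearly_increase=0.15, years=7):
    survival, failed_by_year = 1.0, []
    for year in range(years):
        afr = base_afr * (1 + yearly_increase) ** year  # this year's AFR
        survival *= 1 - afr                             # chance drive is still alive
        failed_by_year.append(1 - survival)
    return failed_by_year

for year, p in enumerate(cumulative_failure(), start=1):
    print(f"year {year}: {p:.1%} of drives have failed")
```

Even under these modest assumptions, the odds that a given drive has failed roughly double between the end of year four and the end of year seven, which is exactly the window an extended service life pushes you into.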
Service Contracts – Many popular storage arrays are covered by standard three-year warranties. This creates a dilemma, since the useful service life of most storage equipment is considered to be four or five years. When the original warranty expires, companies must decide whether to extend the existing support contract (at a significantly higher cost) or transition to a time & materials basis for support (which can result in some very costly repairs).
Budgetary Impact – For equipment like disk arrays, it is far too easy to fixate on replacement costs (CAPEX) and ignore ongoing operational expenses (OPEX). This may avoid large upfront expenditures, but it slowly bleeds the IT budget to death through the maintenance of increasingly inefficient, fault-prone, and power-hungry equipment.
The solution is to establish a program of rolling equipment replenishment on a four- or five-year cycle. By regularly upgrading 20% to 25% of all systems each year, the IT budget is more manageable, equipment failures are controlled, and technical obsolescence remains in check.
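As a quick illustration of the budget effect, the sketch below compares a one-time “big bang” replacement against a rolling 20%-per-year refresh. The fleet size and per-array cost are hypothetical numbers chosen for the example, not figures from the article.

```python
# Compare CAPEX profiles: replace the whole fleet every five years versus
# refreshing 20% of it each year. Fleet size and unit cost are hypothetical.

FLEET_SIZE = 50      # storage arrays in service
UNIT_COST = 80_000   # replacement cost per array (hypothetical)
CYCLE_YEARS = 5      # four- or five-year refresh cycle

big_bang = [FLEET_SIZE * UNIT_COST if year % CYCLE_YEARS == 0 else 0
            for year in range(CYCLE_YEARS)]
rolling = [(FLEET_SIZE // CYCLE_YEARS) * UNIT_COST] * CYCLE_YEARS

print("big bang:", big_bang)  # [4000000, 0, 0, 0, 0] - one spike, then nothing
print("rolling :", rolling)   # [800000, 800000, ...] - steady, plannable spend
```

The total spend over the cycle is identical; the difference is that the rolling plan keeps the average equipment age at roughly half the cycle length and never hands finance a single enormous invoice.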
Getting rid of familiar things can be difficult. But unlike hanging on to your favorite slippers, your La-Z-Boy recliner, or your special coffee cup, keeping outdated storage arrays in service well beyond their prime can cost your organization plenty.
SAN Fabric for the Next Generation
There’s a quiet revolution going on in large data centers. It’s not as visible or flashy as virtualization or deduplication, but it is at least equal in importance.
As its name implies, the SAN “fabric” is a dedicated network that allows servers, storage arrays, backup & recovery systems, replication devices, and other equipment to pass data between systems. Traditionally, this fabric has consisted of 4Gbps Fibre Channel and 1Gbps Ethernet links. However, a new family of 8Gbps and 16Gbps Fibre Channel, 6Gbps and 12Gbps SAS, and 10Gbps Ethernet is quietly replacing legacy fabric with links capable of two to four times the performance.
The following is a comparison of the maximum throughput rates of various SAN fabric links (payload throughput computed as nominal rate × encoding efficiency ÷ 8):

1Gbps Ethernet/FC (8b/10b) – 100 MB/s
4Gbps FC (8b/10b) – 400 MB/s
6Gbps SAS (8b/10b) – 600 MB/s
8Gbps FC (8b/10b) – 800 MB/s
10Gbps Ethernet (64b/66b) – 1210 MB/s
12Gbps SAS (8b/10b) – 1200 MB/s
16Gbps FC (64b/66b) – 1940 MB/s
Performance ranges from the relatively outdated 1Gbps channel (Ethernet or FC), capable of supporting data transfers of up to 100 MB per second, to 16Gbps Fibre Channel, capable of handling 1940 MB per second. Since all of these links support full-duplex (bi-directional) operation, the sustainable throughput rate is actually twice the speed indicated above. And if these blazing new speeds are still insufficient, 10Gbps Ethernet, 12Gbps SAS, and 16Gbps Fibre Channel can be “trunked” – bundled together to produce an aggregate bandwidth equal to the sum of the individual channels. (For example, eight 16Gbps FC channels can be bundled to create a 128Gbps “trunk”.)
In addition to higher channel speeds, 10Gbps Ethernet and 16Gbps Fibre Channel both implement a 64b/66b encoding scheme rather than the 8b/10b scheme used by lower-performance channels. Encoding improves the integrity of the data transmission, but at a cost: 8b/10b encoding decreases available bandwidth by 20%, while 64b/66b encoding reduces it by only 3.03%. The newer scheme therefore moves data significantly more efficiently.
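The throughput figures quoted above follow directly from the nominal line rate and the encoding overhead. Here is a minimal sketch of that calculation; the MB/s convention is decimal megabytes (nominal Gbps × encoding efficiency ÷ 8), matching the figures in this article.

```python
# Effective payload throughput of a SAN link, per the figures in the text:
# nominal line rate x encoding efficiency / 8 bits per byte.

EFFICIENCY = {"8b/10b": 8 / 10,     # 20% encoding overhead
              "64b/66b": 64 / 66}   # ~3.03% encoding overhead

def throughput_mbps(nominal_gbps, encoding):
    return nominal_gbps * EFFICIENCY[encoding] * 1000 / 8

print(throughput_mbps(1, "8b/10b"))    # 100.0   - legacy 1Gbps Ethernet/FC
print(throughput_mbps(16, "64b/66b"))  # ~1939.4 - 16Gbps FC (~1940 MB/s)

# Trunking: aggregate bandwidth is simply the sum of the bundled channels.
print(8 * 16, "Gbps trunk")            # eight 16Gbps FC links -> 128 Gbps
```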
While 8/16Gbps Fibre Channel and 10Gbps Ethernet are changing the game at the front-end, SAS is revolutionizing the back-end disk drive connections as well. For over a decade, enterprise-grade disks had 2Gbps or 4Gbps ports and were attached to a Fibre Channel Arbitrated Loop (FC-AL). Like any loop-based technology, it delivered maximum speed under light traffic, but performance dropped off as demand increased. Under heavy load conditions, the back-end bus could become a bottleneck.
SAS changes that for two reasons. First, it uses switched technology, so every device attached to the controller “owns” 100% of the bus bandwidth, eliminating the latency “dog-leg pattern” found on busy FC-AL buses. Second, current SAS drives are shipping with 6Gbps ports, which are 50% faster than 4Gbps Fibre Channel. Just over the horizon are 12Gbps SAS speeds that will offer triple the bandwidth of 4Gbps Fibre Channel to the disks, and do it over switched (isolated) channels.
Recent improvements in fabric performance will support emerging SSD technology and allow SANs to scale gracefully to support storage arrays staggering under data growth rates of 40% to 50% per year.