Monthly Archives: January 2013

Drive Down Costs with a Storage Refresh

Like most other things, technology suffers from advancing age.  That leading-edge wonder of just a few years ago is today’s mainstream system.  This aging process creates great headaches for IT departments, who constantly see “the bar” being moved upward.  Just when it seems like the computing environment is under control, equipment needs to be updated.

Unless a company is well disciplined in enforcing their technical refresh cycle, the aging process can also lure some organizations into a trap.  The thinking goes something like this – “Why not put off a technology update by a year or two?  Budgets are tight, the IT staff is overworked, and things seem to be going along just fine.”  It makes sense, doesn’t it?

Well, not exactly.  If you look beyond the purchase and migration expenses, there are other major cost factors to consider.

Power Reduction:  There have been major changes in storage device energy efficiency over the past decade.  Five years ago the 300GB, 15K RPM 3.5-inch drive was leading-edge technology.  Today, that disk has been superseded by 2.5-inch disks of the same speed and capacity.  Besides the smaller physical size, the major changes are the disk’s interface (33% faster than Fibre Channel) and its power consumption (about 70% less than a 3.5-inch drive).  For 100TB of raw storage, $3,577 per year could be saved through reduced power consumption alone.

[Chart: annual power cost, 300GB 3.5-inch vs. 2.5-inch drives]
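The power figure above is easy to sanity-check with back-of-the-envelope arithmetic.  The sketch below uses my own illustrative assumptions (drive wattage, utility rate, drive count), not vendor specifications, and lands in the same ballpark as the article’s $3,577:

```python
# Rough check of the annual power savings claim. All figures below are
# illustrative assumptions, not vendor specifications:
#   - 100TB raw / 300GB per drive -> ~333 drives
#   - 3.5-inch 15K RPM drive draws ~18W in operation
#   - 2.5-inch equivalent draws ~70% less (per the article)
#   - electricity at $0.10 per kWh, drives powered 24x365

DRIVE_COUNT = 100_000 // 300        # drives needed for 100TB raw
WATTS_3_5 = 18.0                    # assumed 3.5-inch operating draw
WATTS_2_5 = WATTS_3_5 * 0.30        # ~70% reduction per the article
RATE_PER_KWH = 0.10                 # assumed utility rate, $/kWh
HOURS_PER_YEAR = 24 * 365

def annual_power_cost(watts_per_drive: float) -> float:
    """Yearly electricity cost for the whole 100TB drive pool."""
    kwh = watts_per_drive * DRIVE_COUNT * HOURS_PER_YEAR / 1000
    return kwh * RATE_PER_KWH

savings = annual_power_cost(WATTS_3_5) - annual_power_cost(WATTS_2_5)
print(f"Estimated annual power savings: ${savings:,.0f}")
```

Different wattage or rate assumptions will move the result, but the order of magnitude holds.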

Cooling Cost Reduction:  A by-product of consuming electrical power is heat, and the systems that remove that heat consume power too.  The following chart compares the cost of cooling 100TB of 3.5-inch disks with the same capacity provided by 2.5-inch disks.  Using 2.5-inch disks, cooling costs could be reduced by $3,548 per year per 100TB of storage.

[Chart: annual cooling cost, 300GB 3.5-inch vs. 2.5-inch drives]

Floor Space Reduction:  Another significant data center cost is floor space.  This expense can vary widely, depending on the type of resources provided and the level of high availability guaranteed by the Service Level Agreement.  For the purpose of cost comparison, we’ll use a fairly conservative $9,600 per equipment rack per year.  We will also assume fractional racks are available, although in the real world full-rack pricing might be required.  Given the higher density provided by 2.5-inch disks, a cost savings of $9,371 per year would be achieved.

[Chart: annual floor space cost, 300GB 3.5-inch vs. 2.5-inch drives]

In the example above, simply replacing aging 300GB, 15K RPM 3.5-inch FC disk drives with the latest 300GB, 15K RPM 2.5-inch disk drives will yield the following operational cost (OPEX) savings:

Reduced power        $  3,577
Reduced cooling      $  3,548
Less floor space     $  9,371
                     ========
Total Savings        $ 16,496 per 100TB of storage

Over a storage array’s standard 5-year service cycle, these OPEX savings alone could amount to roughly $82K per 100TB of storage.
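Pulling the three per-category figures together, the 5-year projection is simple arithmetic:

```python
# The article's per-category annual OPEX savings for 100TB of storage.
annual_savings = {"power": 3577, "cooling": 3548, "floor_space": 9371}

annual_total = sum(annual_savings.values())   # $16,496 per 100TB per year
five_year_total = annual_total * 5            # $82,480 over a 5-year cycle
print(f"Annual: ${annual_total:,}  Five-year: ${five_year_total:,}")
```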

Additional benefits from a storage refresh might include tiered storage (typically yielding around a 30% savings over non-tiered storage), reduced support contract costs, and less time spent managing older, more labor-intensive storage subsystems.  There is also an opportunity for capital expense (CAPEX) savings through clever design of cost-optimized equipment, but that’s a story for a future article.

Don’t be misled into thinking that a delay of your storage technical refresh cycle will save money.  In the end it could be a very costly decision.

Disaster Recovery Strategy for the 21st Century

Blade servers, virtualization, solid-state disks, and 16Gbps Fibre Channel – it’s challenging to keep up with today’s advanced technology.  The complexity and sophistication of emerging products can be dizzying.  In most cases we’ve learned how to cope with these changes, but there are a few areas where we still cling to vestiges of the past.  One of these relics of past decades is the impenetrable, monolithic data center.

The data center traces its roots back to the mainframe, when all computing resources were housed in a single, highly specialized facility designed specifically to support processing operations.  Since there was little or no effort to classify data, these bastions of data processing were over-designed to ensure the most critical requirements were supported.  This model was well-suited for mainframes and centralized computing, but it falls well short of meeting the needs of our modern IT environments.

Traditional data center facilities provide a one-size-fits-all solution.  At an average $700 to $1500 per square foot, they are expensive to build.  They lack the scalability and flexibility to respond to dynamic market changes and shifts in technology.  Since these facilities require massive investments of capital, they must be built not only to contain today’s IT equipment, but also to satisfy growth requirements for 25 years or more.  The end result: a tremendous waste of capacity, corporate funds tied up for decades, long-range bets on the direction of future IT technology, and a price tag that puts disaster recovery redundancy well beyond the reach of most companies.

An excellent solution to this problem is already a proven technology – the Portable Modular Data Center.  These are typically self-contained modules providing a comprehensive set of power, cooling, security, and internal infrastructure to support a dozen or more equipment racks per module, with up to 30kW of power per rack.  These units are relatively inexpensive, highly scalable, simple to deploy, energy efficient (“green”), and factory-constructed to ensure consistent quality and repeatable deployments.  As modules, they can be deployed incrementally as requirements dictate, avoiding major one-time capital expenditures for facilities.

Their inherent modularity and scalability make them an excellent choice for incrementally building out finely-tuned disaster recovery facilities.  Here is an example of how modular data centers can be leveraged to cost-effectively provide Disaster Recovery protection of an organization’s data assets.

      1. Mission Critical Operations (typically 10% to 15%)
        These are applications and data that might severely cripple the organization if they were not available for any significant period of time.
        Strategy – Deploy synchronous replication technology to maintain an up-to-date mirror image of the data that could be brought to operational status within a matter of minutes.
        Solution – Deploy one or more Portable Modular Data Center units within 30 miles (to minimize latency) and run synchronous replication between the primary data center and the modular facility.  Since 20-30 miles of separation protects against a local disaster but not a region-wide event, it might be worthwhile to replicate asynchronously from the modular data center to a remote (out-of-region) location.  A small amount of data might be lost in the event of a disaster (due to the asynchronous delay), but processing could still be brought back on-line quickly with minimal loss of data and only a limited interruption to operations.
      2. Vital Operations (typically 20% to 25%)
        These applications and data are very important to the organization, but an outage of several hours would not financially cripple the business.
        Strategy – Deploy an asynchronous replication mechanism outside the region to ensure an almost-up-to-date copy of data is available for rapid recovery.
        Solution – Deploy one or more Portable Modular Data Center units anywhere in the country and run asynchronous replication between the primary data center and the remote modular facility.  Since distance is not a limiting factor for asynchronous replication, the modular facility could be installed anywhere.  This protects against disasters occurring not only locally, but within the region as well.  A small amount of data might be lost in the event of a disaster (due to the asynchronous delay), but applications and databases could still be recovered quickly with minimal loss of data and only a limited interruption to operations.
      3. Sensitive Operations (typically 20% to 30%)
        These applications and data are important to the organization, but an outage of several days to one week would have only a negligible financial impact on the business.
        Strategy – (same as above) Use the same asynchronous replication mechanism outside the region to ensure an almost-up-to-date copy of data is available for rapid recovery.
        Solution – Add one or more Portable Modular Data Center units to the above facility (as required) and run asynchronous replication between the primary data center and the remote modular facility.
      4. Non-Critical Operations (typically 40% or more)
        These applications and data are incidental to the organization and can be recovered when time is available.  An outage of several weeks would have little impact on the business.
        Strategy – (same as above) Use the same asynchronous replication mechanism outside the region to ensure an almost-up-to-date copy of data is available for rapid recovery.
        Solution – Deploy one or more Portable Modular Data Center units anywhere in the country and run asynchronous replication between the primary data center and a remote modular facility.
        Note:  Since non-critical applications and data tend to be passive, non-critical operations might also be a viable candidate for transitioning to an Infrastructure-as-a-Service (IaaS) provider.
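The four classes above boil down to a simple lookup table.  Here is a sketch in Python; the field names and the rough recovery targets are my own reading of the text, not prescriptions from the article:

```python
# Illustrative mapping of the four data classes to their protection
# strategies. Shares and recovery targets paraphrase the tiers above.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str         # data classification
    share: str        # typical share of the data estate
    replication: str  # primary protection mechanism
    recovery: str     # rough recovery-time expectation

TIERS = [
    Tier("Mission Critical", "10-15%",
         "synchronous (<30 mi) plus async out-of-region", "minutes"),
    Tier("Vital", "20-25%",
         "asynchronous, out-of-region", "hours"),
    Tier("Sensitive", "20-30%",
         "asynchronous, shared remote facility", "days to a week"),
    Tier("Non-Critical", "40%+",
         "asynchronous, or offload to an IaaS provider", "weeks"),
]

for t in TIERS:
    print(f"{t.name:17} {t.share:7} {t.replication:48} ~{t.recovery}")
```

The point of writing it down this way is that each tier maps to a concrete, incremental module purchase rather than one monolithic facility.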

Modular Data Centers are the obvious enabler for the above Disaster Recovery strategy.  They allow you to deploy only the data center resources you need, when you need them.  They are less expensive than either leased or purpose-built facilities, and can be scaled as required by the business.

It’s time for the IT industry to abandon its outdated concepts of what a data center should be and focus on what each class of data actually needs.  The day of raised-floor mainframe “bunkers” has passed.  It’s time to start managing data center resource deployment as carefully as we manage server and storage deployment.  Portable Modular Data Centers allow you to implement efficient, cost-effective IT production facilities in a logical sequence, without breaking the bank in the process.