Blog Archives
Drive Down Costs with a Storage Refresh
Like most other things, technology suffers from advancing age. That leading-edge wonder of just a few years ago is today’s mainstream system. This aging process creates great headaches for IT departments, who constantly see “the bar” being moved upward. Just when it seems like the computing environment is under control, equipment needs to be updated.
Unless a company is well disciplined in enforcing its technical refresh cycle, the aging process can also lure some organizations into a trap. The thinking goes something like this – “Why not put off a technology update by a year or two? Budgets are tight, the IT staff is overworked, and things seem to be going along just fine.” It makes sense, doesn’t it?
Well, not exactly. If you look beyond the purchase and migration expenses, there are other major cost factors to consider.
Power Reduction: There have been major changes in storage device energy efficiency over the past decade. Five years ago the 300GB, 15K RPM 3.5-inch drive was leading-edge technology. Today, that disk has been superseded by 2.5-inch disks of the same speed and capacity. Beyond the smaller physical size, the major changes are the disk’s interface (about 33% faster than Fibre Channel) and its power consumption (about 70% less than a 3.5-inch drive). For 100TB of raw storage, $3,577 per year could be saved by reduced power consumption alone.
Cooling Cost Reduction: A by-product of consuming electrical power is heat, and the systems used to remove that heat consume power too. Comparing the cost of cooling 100TB of 3.5-inch disks against the same capacity provided by 2.5-inch disks, cooling costs could be reduced by $3,548 per year, per 100TB of storage.
Floor Space Reduction: Another significant data center cost is floor space. This expense can vary widely, depending on the type of resources provided and the level of high availability guaranteed by the Service Level Agreement. For the purpose of cost comparison, we’ll take a fairly conservative $9,600 per equipment rack per year. We will also assume fractional amounts are available, although in the real world full-rack pricing might be required. Given the higher density provided by 2.5-inch disks, a cost savings of $9,371 per year would be achieved.
In the example above, simply replacing aging 300GB, 15K RPM 3.5-inch FC disk drives with the latest 300GB, 15K RPM 2.5-inch disk drives will yield the following operating expense (OPEX) savings:
- Reduced power: $3,577
- Reduced cooling: $3,548
- Less floor space: $9,371
- Total savings: $16,496 per year, per 100TB of storage
Over a storage array’s standard 5-year service cycle, OPEX savings could amount to $82K or more.
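To make the arithmetic easy to check (and rerun with your own figures), here is a minimal Python sketch that rolls the three per-100TB savings above into annual and five-year totals. Only the article’s own numbers are used as inputs; the capacity parameter is simply there so the model can be scaled to a hypothetical fleet.

```python
# Rough sketch: rolling up the per-100TB OPEX savings quoted above and
# projecting them over a 5-year service cycle. The three inputs are the
# article's figures; capacity_tb is a hypothetical fleet size.

ANNUAL_SAVINGS_PER_100TB = {
    "power": 3577,        # $/year, from the power-reduction estimate above
    "cooling": 3548,      # $/year, from the cooling estimate above
    "floor_space": 9371,  # $/year, from the floor-space estimate above
}

def opex_savings(capacity_tb: float, service_years: int = 5) -> dict:
    """Scale the per-100TB savings to a given raw capacity and service life."""
    scale = capacity_tb / 100.0
    annual = {k: v * scale for k, v in ANNUAL_SAVINGS_PER_100TB.items()}
    annual_total = sum(annual.values())
    return {
        "annual_by_category": annual,
        "annual_total": annual_total,
        "lifetime_total": annual_total * service_years,
    }

if __name__ == "__main__":
    result = opex_savings(capacity_tb=100)   # the article's 100TB example
    print(f"Annual savings: ${result['annual_total']:,.0f}")
    print(f"5-year savings: ${result['lifetime_total']:,.0f}")  # roughly $82K
```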
Additional benefits from a storage refresh might also include tiering storage (typically yielding around a 30% savings over non-tiered storage), reduced support contract costs, and less time spent managing older, more labor-intensive storage subsystems. There is also an opportunity for capital expense (CAPEX) savings by cleverly designing cost-optimized equipment, but that’s a story for a future article.
Don’t be misled into thinking that a delay of your storage technical refresh cycle will save money. In the end it could be a very costly decision.
Disaster Recovery Strategy for the 21st Century
Blade servers, virtualization, solid state disks, and 16Gbps fibre channel – it’s challenging to keep up with today’s advanced technology. The complexity and sophistication of emerging products can be dizzying. In most cases we’ve learned how to cope with these changes, but there are a few areas where we still cling to vestiges of the past. One of these relics of past decades is the impenetrable, monolithic data center.
The data center traces its roots back to the mainframe, when all computing resources were housed in a single, highly specialized facility designed specifically to support processing operations. Since there was little or no effort to classify data, these bastions of data processing were over-designed to ensure the most critical requirements were supported. This model was well-suited for mainframes and centralized computing, but it falls well short of meeting the needs of our modern IT environments.
Traditional data center facilities provide a one-size-fits-all solution. At an average $700 to $1500 per square foot, they are expensive to build. They lack the scalability and flexibility to respond to dynamic market changes and shifts in technology. Since they require massive investments of capital, they must be built not only to contain today’s IT equipment, but also to satisfy growth requirements for 25 years or more. The end result is a tremendous waste of capacity, corporate funds tied up for decades, long-range guesses about the direction and needs of future IT technology, and a price tag that puts disaster recovery redundancy well beyond the reach of most companies.
An excellent solution to this problem is already a proven technology – the Portable Modular Data Center. These are typically self-contained data center modules that contain a comprehensive set of power, cooling, security, and internal infrastructure to support a dozen or more equipment racks per module with up to 30kW of power per rack. These units are relatively inexpensive, highly scalable, simple to deploy, energy efficient (Green), and factory constructed to ensure consistent quality and reproducible technology. As modules, they can be deployed incrementally as requirements dictate, avoiding major one-time capital expenditures for facilities.
Their inherent modularity and scalability make them an excellent choice for incrementally building out finely-tuned disaster recovery facilities. Here is an example of how modular data centers can be leveraged to cost-effectively provide Disaster Recovery protection of an organization’s data assets.
- Mission Critical Operations (typically 10% to 15%)
These are applications and data that might severely cripple the organization if they were not available for any significant period of time.
Strategy – Deploy synchronous replication technology to maintain an up-to-date mirror image of the data that could be brought to operational status within a matter of minutes.
Solution – Deploy one or more Portable Modular Data Center units within 30 miles (to minimize latency) and run synchronous replication between the primary data center and the modular facility. Since 20-30 miles of separation would protect from a local disaster, but not a region-wide event, it might be worthwhile to replicate asynchronously from the modular data center to some remote (out-of-region) location. A small amount of data might be lost in the event of a disaster (due to asynchronous delay), but processing could still be brought back on-line quickly with minimal loss of data and only a limited interruption to operations.
- Vital Operations (typically 20% to 25%)
These applications and data are very important to the organization, but an outage of several hours would not financially cripple the business.
Strategy – Deploy an asynchronous replication mechanism outside the region to ensure an almost-up-to-date copy of data is available for rapid recovery.
Solution – Deploy one or more Portable Modular Data Center units anywhere in the country and run asynchronous replication between the primary data center and the remote modular facility. Since distance is not a limiting factor for asynchronous replication, the modular facility could be installed anywhere. This protects from disasters occurring not only locally, but within the region as well. A small amount of data might be lost in the event of a disaster (due to asynchronous delay), but applications and databases could still be recovered quickly with minimal loss of data and only a limited interruption to operations.
- Sensitive Operations (typically 20% to 30%)
These applications and data are important to the organization, but an outage of several days to one week would have only a negligible financial impact on the business.
Strategy – (same as above) Use the same asynchronous replication mechanism outside the region to ensure an almost-up-to-date copy of data is available for rapid recovery.
Solution – Add one or more Portable Modular Data Center units to the above facility (as required) and run asynchronous replication between the primary data center and the remote modular facility.
- Non-Critical Operations (typically 40% or more)
These applications and data are incidental to the organization and can be recovered when time is available. An outage of several weeks would have little impact on the business.
Strategy – (same as above) Use the same asynchronous replication mechanism outside the region to ensure an almost-up-to-date copy of data is available for rapid recovery.
Solution – Deploy one or more Portable Modular Data Center units anywhere in the country and run asynchronous replication between the primary data center and a remote modular facility.
Note: Since non-critical applications and data tend to be passive, non-critical operations might also be a viable candidate for transitioning to an Infrastructure-as-a-Service (IaaS) provider.
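To make the classification concrete, here is a small illustrative sketch (not a prescribed implementation) that expresses the four classes above as a policy table and estimates how much capacity each replication mode must carry. The percentage shares are rough midpoints of the ranges quoted above, and the 500 TB footprint is purely hypothetical.

```python
# Illustrative only: the four operational classes above as a simple policy
# table, used to estimate how much of a data footprint needs synchronous vs.
# asynchronous replication. Shares are rough midpoints of the quoted ranges;
# real numbers come from your own data-classification exercise.

DR_POLICY = [
    # (class,            share, replication,    target location)
    ("mission_critical", 0.125, "synchronous",  "modular DC within ~30 miles"),
    ("vital",            0.225, "asynchronous", "out-of-region modular DC"),
    ("sensitive",        0.25,  "asynchronous", "out-of-region modular DC"),
    ("non_critical",     0.40,  "asynchronous", "out-of-region modular DC or IaaS"),
]

def replication_footprint(total_tb: float) -> dict:
    """Return the capacity (TB) that each replication mode must carry."""
    footprint: dict = {}
    for _, share, mode, _ in DR_POLICY:
        footprint[mode] = footprint.get(mode, 0.0) + share * total_tb
    return footprint

if __name__ == "__main__":
    # Hypothetical 500 TB primary data center
    print(replication_footprint(500))  # e.g. {'synchronous': 62.5, 'asynchronous': 437.5}
```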
Modular Data Centers are the obvious enabler for the above Disaster Recovery strategy. They allow you to deploy only the data center resources you need, when you need them. They are less expensive than either leased or built facilities, and can be scaled as required by the business.
It’s time for the IT industry to abandon their outdated concepts of what a data center should be and focus on what is needed by each class of data. The day of raised-floor mainframe “bunkers” has passed. It’s time to start managing data center resource deployment as carefully as we manage server and storage deployment. Portable Modular Data Centers allow you to implement efficient, cost-effective IT production facilities in a logical sequence, without breaking the bank in the process.
Consultant, Contractor, or Staff Augmentation – Do You Know the Difference?
The world of IT is becoming remarkably complex, and companies grow increasingly reliant on outside knowledge and skills for assistance. But when you enter truly uncharted waters and need someone you can trust, who will you call? Unfortunately there are a lot of companies in the industry claiming to be technical experts for everything from “Big Data” to “Desktop Virtualization”. How do you separate the serious resources from the technical wannabes?
That is an interesting question. Ever since the industry’s rush to identify and fix Y2K problems over a decade ago, the line between Consultant, Contractor, and Staff Augmentation has blurred. The recession of the past few years further masked the distinction between roles, since many laid-off IT employees simply re-branded themselves as “Independent Consultants” in an attempt to secure short-term project work.
So what are the differences between Staff Augmentation, Contractors, and Consultants? Let’s start with a definition. According to Wikipedia:
“Staff Augmentation is an outsourcing strategy which is used to staff a project and respond to business objectives. The technique consists of evaluating the existing staff and then determining which additional skills are required. One possible advantage of this approach is that it may leverage existing resources as well as utilize outsourced services and contract workers.”
“An Independent Contractor is a natural person, business, or corporation that provides goods or services to another entity under terms specified in a contract or within a verbal agreement. Unlike an employee, an independent contractor does not work regularly for an employer but works as and when required, during which time he or she may be subject to the Law of Agency. Independent contractors are usually paid on a freelance basis.”
“A Consultant (from Latin: consultare “to discuss”) is a professional who provides professional or expert advice in a particular area such as security (electronic or physical), management, accountancy, law (tax law, in particular), human resources, marketing (and public relations), finance, engineering, or any of many other specialized fields. A consultant is usually an expert or a professional in a specific field and has a wide knowledge of the subject matter.”
…Wikipedia Online Dictionary
Staff augmentation is based on the concept of a “faceless, replaceable skill” that is available for an entire category of labor (Administrator, Engineer, Programmer, Database Administrator, Web Designer, etc.). Since IT relies on a large labor pool of technical skills, these are relatively low-priced roles. Because participants are required to have only the prerequisite skills of their specialty and no other unique capabilities, they can be hired and released pretty much on demand. Rates are dictated by current market prices, and range from $35 to $95 per hour.
IT contractors are further up the scale in capabilities and value. They are typically companies that deliver a complete service or system to solve a clearly defined problem. This may be a particular operation, type of application, virtualized infrastructure, or network operation. In many instances it is delivered as a complete package, including hardware, software, utilities, installation, configuration, and testing. Contractor services may be purchased on a per-project or a time-and-materials basis, with pricing generally consistent across similar projects. Bundled labor rates within a specified package or service are in the $125 to $185 per hour range.
At the top of the pyramid is the IT consultant. This is a professional service offering highly developed skills and extensive experience in a specialized field. In addition to being a Subject Matter Expert for a particular technology or service, IT consultants typically have extensive knowledge of related activities, including business operations, project management, associated technologies, industry best practices, quality assurance, and security. They are sought out by organizations for their comprehensive understanding of business-critical operations or other activities that can have industry-changing ramifications. Since these are highly specialized skills, they command rates from $225 to $450 per hour or more. Although consultants are expensive, they return value to the company that can far exceed their billable rate.
Clearly it’s in the client’s best interest to understand the differences and capabilities of each category. Unfortunately, these titles are frequently intermixed and tossed around somewhat indiscriminately by organizations. Unless due diligence is performed beforehand, some hapless company will occasionally think it has landed a senior Consultant for $85 per hour (plus expenses), when it has actually contracted Staff Augmentation. This can quickly become the root cause of poor performance, lackluster productivity, poor organization, missed objectives, and ultimately a failed project.
Technical personnel do not automatically become senior consultants just because that’s a label they’ve anointed themselves with. Buyer beware! Engaging the proper skill set can be either a game-changer or a “boat anchor” for the project.
Storage System Refresh – Making a Case for Mandatory Retirement
It’s hard to retire a perfectly good storage array. Budgets are tight, there’s a backlog of new projects in the queue, people are on vacation, and migration planning can be difficult. As long as there is not a compelling reason to take it out of service, it is far easier to simply leave it alone and focus on more pressing issues.
While this may be the path of least resistance, it can come at a high price. There are a number of good reasons why upgrading storage arrays to modern technology may yield superior results and possibly save money too!
Capacity – When your aging disk array was installed several years ago, 300 GB, 10K RPM FC disk drives were mainstream technology. It was amazing to realize you could squeeze up to 45 TB into a single 42U equipment rack! Times have changed. The same 10K RPM disk drive has tripled in capacity, providing 900 GB in the same 3.5-inch disk drive “footprint”. It’s now possible to get 135 TB (a threefold capacity increase) into the same equipment rack configuration. Since data center rack space currently costs around $3,000 per month, that upgrade alone will dramatically increase capacity without incurring any increase in floor-space cost.
Density – Previous-generation arrays packaged (12) to (15) 3.5-inch FC or SATA disk drives into a single rack-mountable 4U array. Modern disk arrays support from (16) 3.5-inch disks per 3U tray to (25) 2.5-inch disks in a 2U tray. Special ultra-high-density configurations may house up to (60) FC, SAS, or SATA disk drives in a 4U enclosure. As above, increasing storage density within an equipment rack significantly increases capacity while requiring no additional data center floor-space.
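A quick back-of-the-envelope sketch shows what that density jump means in floor-space cost per terabyte. The 45 TB and 135 TB per-rack figures and the roughly $3,000-per-month rack cost come from the discussion above; the function itself is just illustrative arithmetic.

```python
# Back-of-the-envelope sketch of floor-space cost per TB before and after a
# density upgrade. Per-rack capacities and rack cost are taken from the text.

RACK_COST_PER_MONTH = 3000   # $/month for one 42U rack of data center space

def floorspace_cost_per_tb(tb_per_rack: float) -> float:
    """Annual floor-space cost carried by each terabyte in the rack."""
    return RACK_COST_PER_MONTH * 12 / tb_per_rack

old = floorspace_cost_per_tb(45)    # 300 GB drives: ~ $800 per TB per year
new = floorspace_cost_per_tb(135)   # 900 GB drives: ~ $267 per TB per year
print(f"Old config: ${old:,.0f}/TB/yr, new config: ${new:,.0f}/TB/yr "
      f"({(1 - new/old):.0%} less floor-space cost per TB)")
```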
Energy Efficiency – Since the EPA’s IT energy efficiency study in 2007 (Report to Congress on Server and Data Center Energy Efficiency, Public Law 109-431), IT manufacturers have increased efforts to improve the energy efficiency of their products. This has resulted in disk drives that consume from 25% to 33% less energy, and storage array controllers lowering power consumption by up to 30%. That has had a significant impact on energy costs, including not only the power to run the equipment, but also power to operate the cooling systems needed to purge residual heat from the environment.
Controller Performance – Storage array controllers are little more than specialized servers designed specifically to manage such functions as I/O ports, disk mapping, RAID and cache operations, and execution of array-centric internal applications (such as thin provisioning and snapshots). Like any other server, storage controllers have benefited from advances in technology over the past few years. The current generation of disk arrays contains storage controllers with 3 to 5 times the processing power of their predecessors.
Driver Compatibility – As newer technologies emerge, they tend to focus on developing software compatibility with the most recently released products and systems on the market. With the passage of time, it becomes less likely for storage arrays to be supported by the latest and greatest technology on the market. This may not impact daily operations, but it creates challenges when a need arises to integrate aging arrays with state-of-the-art systems.
Reliability – Common wisdom used to be that disk failure characteristics could be accurately represented by a “bathtub graph”. The theory was that the potential for failure was high when a disk was new, flattened out at a low probability throughout the disk’s useful life, then turned sharply upward as the disk approached end-of-life. This model implied that extending disk service life had no detrimental effects until the disks approached end-of-life.
However, over the past decade, detailed studies by Google and other large organizations with massive disk farms have proven the “bathtub graph” model incorrect. Actual failure rates in the field indicate the probability of a disk failure increases by 10% – 20% for every year the disk is in service. In other words, the probability of failure grows in a roughly linear fashion over the disk’s service life, and extending disk service life greatly increases the risk of disk failure.
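As a rough illustration (not a rigorous reliability model), the sketch below shows how a failure rate that climbs each year compounds into a much higher chance of losing a drive when its service life is stretched. The 10%–20% annual increase comes from the studies cited above; the 2% first-year failure rate is an assumed placeholder.

```python
# Illustrative only: a drive's annual failure rate (AFR) is assumed to start
# at 2% (placeholder) and rise linearly by ~15% of that base per service year,
# in line with the 10%-20% per-year increase reported by the field studies.

def cumulative_failure_probability(years: int,
                                   base_afr: float = 0.02,
                                   annual_increase: float = 0.15) -> float:
    """P(the drive has failed by the end of `years` in service)."""
    survival = 1.0
    for year in range(years):
        afr = base_afr * (1 + annual_increase * year)  # that year's failure rate
        survival *= (1 - afr)
    return 1 - survival

for y in (3, 5, 7):
    print(f"After {y} years: {cumulative_failure_probability(y):.1%} chance of failure")
```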
Service Contracts – Many popular storage arrays are covered by standard three-year warranties. This creates a dilemma, since the useful service life of most storage equipment is considered to be four or five years. When the original warranty expires, companies must decide whether to extend the existing support contract (at a significantly higher cost) or transition to a time-and-materials basis for support (which can result in some very costly repairs).
Budgetary Impact – For equipment like disk arrays, it is far too easy to fixate on replacement costs (CAPEX), and ignore the ongoing cost of operational expenses (OPEX). This may avoid large upfront expenditures, but it slowly bleeds the IT budget to death by having to maintain increasingly inefficient, fault-prone, and power hungry equipment.
The solution is to establish a program of rolling equipment replenishment on a four- or five-year cycle. By regularly upgrading 20% to 25% of all systems each year, the IT budget is more manageable, equipment failures are controlled, and technical obsolescence remains in check.
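One hedged sketch of what such a rolling replenishment plan looks like in practice: replace a fixed fraction of the installed base every year so no array outlives its service life and the annual spend stays level. The fleet size and unit cost below are hypothetical placeholders.

```python
# Hypothetical example: 20 arrays on a 5-year cycle means replacing 20% of the
# fleet (4 arrays) each year. Swap in your own fleet size, cycle, and pricing.

def rolling_refresh_plan(fleet_size: int, service_life_years: int, unit_cost: float):
    """Return (units replaced per year, resulting annual CAPEX)."""
    units_per_year = fleet_size / service_life_years
    return units_per_year, units_per_year * unit_cost

units, capex = rolling_refresh_plan(fleet_size=20, service_life_years=5,
                                    unit_cost=150_000)  # assumed array price
print(f"Replace {units:.0f} arrays per year, roughly ${capex:,.0f} in annual CAPEX")
```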
Getting rid of familiar things can be difficult. But unlike your favorite slippers, the LazyBoy recliner, or your special coffee cup, keeping outdated storage arrays in service well beyond their prime can cost your organization plenty.
16 Gbps Fibre Channel – Do the Benefits Outweigh the Cost?
With today’s technology there can be no status quo. As the IT industry advances, so must each organization’s efforts to embrace new equipment, applications, and approaches. Without an ongoing process of improvement, IT infrastructures progressively become outdated and the business group they support grows incrementally less effective.
In September of 2010, the INCITS T11.2 Committee ratified the standard for 16Gbps Fibre Channel, ushering in the next generation of SAN fabric. Unlike Ethernet, Fibre Channel is designed for one specific purpose – low overhead transmission of block data. While this capability may be less important for smaller requirements where convenience and simplicity are paramount, it is critical for larger datacenters where massive storage repositories must be managed, migrated, and protected. For this environment, 16Gbps offers more than twice the bandwidth of the current 8Gbps SAN and 40% more bandwidth than the recently released 10Gbps Ethernet with FCoE (Fibre Channel over Ethernet).
But is an investment in 16Gbps Fibre Channel justified? If a company has reached a point where SAN fabric is approaching saturation or SAN equipment is approaching retirement, then definitely yes! Here is how 16Gbps stacks up against both slower Fibre Channel implementations and 10Gbps Ethernet.
| Emulex Model | Port Speed | Protocol | Average HBA/NIC Price | Transfer Rate | Transfer Time for 10TB | Bandwidth Cost per MB/sec. | Bandwidth Difference |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LPE16002 | 16 Gbps | Fibre Channel | $1,808 | 1939 MB/sec. | 1.43 Hrs. | $0.93 | 160% |
| OCe11102 | 10 Gbps | Ethernet | $1,522 | 1212 MB/sec. | 2.29 Hrs. | $1.26 | 100% |
| LPe12002 | 8 Gbps | Fibre Channel | $1,223 | 800 MB/sec. | 3.47 Hrs. | $1.53 | 65% |
| LPe11000 | 4 Gbps | Fibre Channel | $891 | 400 MB/sec. | 6.94 Hrs. | $2.23 | 32% |
This table highlights several differences between 4/8/16 Gbps Fibre Channel and 10Gbps Ethernet with FCoE technology (sometimes marketed as Unified Storage). The street prices for a popular I/O controller manufacturer clearly indicate that there are relatively small differences between controller prices, particularly for the faster controllers. Based on the table’s own figures, the 16Gbps HBA delivers roughly 60% more throughput than the 10Gbps Ethernet adapter, yet costs only about 19% more!
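For readers who want to reproduce the derived columns, here is a short sketch that recomputes transfer time, cost per MB/sec of bandwidth, and relative bandwidth from the street prices and throughput figures in the table (the data set size is the table’s 10 TB).

```python
# Recomputing the table's derived columns from price and measured throughput.
# The four entries mirror the table above; prices are street prices in USD.

ADAPTERS = [
    # (model,               price_usd, throughput_mb_per_s)
    ("LPE16002 16Gb FC",    1808, 1939),
    ("OCe11102 10Gb FCoE",  1522, 1212),
    ("LPe12002 8Gb FC",     1223,  800),
    ("LPe11000 4Gb FC",      891,  400),
]

DATA_SET_MB = 10 * 1_000_000   # 10 TB expressed in MB (decimal)
BASELINE_MB_PER_S = 1212       # 10GbE is the 100% reference in the table

for model, price, rate in ADAPTERS:
    hours = DATA_SET_MB / rate / 3600     # time to move the 10 TB data set
    cost_per_mb_s = price / rate          # $ per MB/sec of bandwidth
    relative = rate / BASELINE_MB_PER_S   # bandwidth relative to 10GbE
    print(f"{model:22s} {hours:5.2f} hrs   ${cost_per_mb_s:.2f}/MBps   {relative:.0%}")
```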
However, a far more important issue is that 16Gbps fibre channel is backward compatible with existing 4/8 Gbps SAN equipment. This allows segments of the SAN to be gradually upgraded to leading-edge technology without having to suffer the financial impact of legacy equipment rip-and-replace approaches.
In addition to providing a robust, purpose-built infrastructure for migrating large blocks of data, 16Gbps Fibre Channel also offers lower power consumption per port, a simplified cabling infrastructure, and the ability to “trunk” (combine) channel bandwidth up to 128Gbps! It doubles the number of ports and available bandwidth in the same 4U rack space for edge switches, providing potential savings of over $3,300 per edge switch.
Even more significant is that 16Gbps provides the additional performance necessary to support the next generation of storage, which will be based on 6Gbps and 12Gbps SAS disk drives. Unlike legacy FC storage, which was based upon 4Gbps FC-AL arbitrated loops, the new SAS arrays are on switched connections. Switching provides a point-to-point connection for each disk drive, ensuring every 6Gbps SAS connection (or in the near future, 12Gbps SAS connection) will have a direct connection to the SAN fabric. This eliminates backend saturation of legacy array FC-AL shared busses, and will place far greater demand for storage channel performance on the SAN fabric.
So do the benefits of 16Gbps Fibre Channel outweigh its modest price premium? Like many things in life – it depends! Block-based 16Gbps Fibre Channel SAN fabric is not for every storage requirement, but neither is 10Gbps FCoE or iSCSI. If it is a departmental storage requirement, or an environment where NAS or iSCSI has previously been deployed, then replacing the incumbent protocol with 16Gbps Fibre Channel may or may not have merit. However, large SAN storage arrays are particularly dependent on high-performance equipment specifically designed for efficient data transfers. This is an arena where the capabilities and attributes of 16Gbps Fibre Channel will shine.
In any case, the best protection against making a poor choice is to thoroughly research the strengths and weaknesses of each technology and seek out professional guidance from a vendor-neutral storage expert with a Subject Matter Expert level understanding of the storage industry and its technology.
Storage Tiers – Putting Data in Its Place
I’m frequently surprised by the number of companies who haven’t transitioned to a tiered storage structure. All data is not created equal. While a powerful database may place extreme demand on storage, word processing documents do not.
As we move into a new world of “big data”, more emphasis needs to be placed on deciding which class of disk data should reside on. Although there are no universally accepted standards for storage tier designations, the breakdown frequently goes as follows:
Tier 0 – Solid state devices
Tier 1 – 15K RPM SAS or FC Disks
Tier 2 – 10K RPM SAS or FC Disks
Tier 3 – 7200 or 5400 RPM SATA (a.k.a. – NL-SAS) Disks
So why is a tiering strategy important for large quantities of storage? Consider two comparable storage models for 1 petabyte of data: one built entirely on a single tier of disk, and one spread across the tiers above. The difference in disk drive expense alone is over $225,000, or around 30% of the equipment purchase price. In addition, there are other issues to consider.
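A rough sketch of how a comparison like this can be modeled is shown below. The per-GB prices and the capacity split across tiers are placeholder assumptions for illustration only; the actual savings depend entirely on current street prices and on how your data classification shakes out.

```python
# Hypothetical 1 PB model: all capacity on Tier 1 disk vs. the same capacity
# spread across four tiers. Prices ($/GB) and the tier split are placeholders.

PB_IN_GB = 1_000_000

COST_PER_GB = {            # assumed street prices, $/GB
    "tier0_ssd":    2.50,
    "tier1_15k":    0.90,
    "tier2_10k":    0.55,
    "tier3_nl_sas": 0.25,
}

TIERED_SPLIT = {           # assumed fraction of the 1 PB placed on each tier
    "tier0_ssd": 0.05, "tier1_15k": 0.15, "tier2_10k": 0.30, "tier3_nl_sas": 0.50,
}

def disk_cost(split: dict) -> float:
    """Total drive cost for a capacity split expressed as tier fractions."""
    return sum(PB_IN_GB * frac * COST_PER_GB[tier] for tier, frac in split.items())

single_tier = disk_cost({"tier1_15k": 1.0})   # everything on 15K disk
tiered = disk_cost(TIERED_SPLIT)
print(f"Single tier: ${single_tier:,.0f}   Tiered: ${tiered:,.0f}   "
      f"Savings: ${single_tier - tiered:,.0f} ({1 - tiered/single_tier:.0%})")
```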
Pros:
- Reduces the initial purchase price by 25% or more
- Improves energy efficiency by 25% to 35%, lowering operational cost and cooling requirements
- Delivers substantial savings from reduced data center floor-space requirements
- Increases overall performance for all applications and databases
- Provides greater scalability and flexibility for matching storage requirements to business growth patterns
- Provides additional resources for performance improvements (an increased number of ports, cache, controller power, etc.)
- Offers a high degree of modularity that helps avoid technical obsolescence
- May moderate the demand for technical staff necessary to manage continual storage growth
Cons:
- Requires automated, policy-based data migration software to operate efficiently.
- Should employ enterprise-class frames for Tiers 0/1 and midrange arrays for Tiers 2/3
- Incurs approximately a 15% cost premium for enterprise-class storage to support Tier 0/1 disks
- Implements a more complex storage architecture that requires good planning and design
- Needs at least a rudimentary data classification effort for maximum effectiveness
So is the payoff worth the effort? That is for each company to decide. If data storage growth is fairly stagnant, then it may be questionable whether the additional effort and expense is worth it. However, if you are staggering under a 30% – 50% CAGR in storage growth like most companies, the cost reduction, increased scalability, and performance improvements achieved may well justify the effort.