NOTE: My original article contained embedded calculation errors that significantly distorted the end results. These problems have since been corrected. I apologize to anyone who was accidentally misled by this information, and sincerely thank those diligent readers who brought the issues to my attention.
Some issues seem so obvious they’re hardly worth considering. Everyone knows that Solid State Drives (SSDs) are more energy-efficient than spinning disks. They don’t employ rotating platters, electro-mechanical motors, or mechanical head movement for data storage, so they must consume less power – right? However, everyone also knows the cost of SSD is so outrageous that it can only be deployed for super-critical, high-performance applications. But does that reputation for exorbitant prices still apply?
While these considerations may seem intuitive, they are not entirely accurate. Comparing the Total Cost of Ownership (TCO) for traditional electro-mechanical disks vs. Solid State Disks provides a clearer picture of the comparative costs of each technology.
- For accuracy, this analysis compares the purchase price (CAPEX) and power consumption (OPEX) of only the disk drives, and does not include the expense of entire storage arrays, rack space, cooling equipment, etc.
- It uses the drive’s current “street price” for comparison. Individual vendor pricing may be significantly different, but the ratio between disk and SSD cost should remain fairly constant.
- The dollar amounts shown on the graph represent a 5-year operational lifecycle, which is fairly typical for production storage equipment.
- Energy consumption for cooling has also been included in the cost estimate, since cooling requires roughly the same amount of energy as the drives themselves consume to keep them in an operational state.
- 100 TB of storage capacity was arbitrarily selected to illustrate the effect of cost on a typical mid-sized SAN storage array.
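The assumptions above can be sketched as a simple calculation. The drive prices, wattages, and $0.10/kWh electricity rate below are illustrative placeholders, not figures from the article’s spreadsheet:

```python
HOURS_PER_YEAR = 24 * 365
YEARS = 5
KWH_COST = 0.10          # assumed electricity price, $/kWh (illustrative)
CAPACITY_TB = 100        # target raw capacity, per the assumption above

def five_year_tco(price_per_drive, capacity_gb, watts):
    """Purchase price plus 5 years of energy for 100 TB of raw capacity."""
    drives = -(-CAPACITY_TB * 1000 // capacity_gb)   # ceiling division
    capex = drives * price_per_drive
    # Each watt the drives draw needs ~1 watt of cooling, hence the factor of 2.
    kwh = drives * watts * 2 * HOURS_PER_YEAR * YEARS / 1000
    return capex + kwh * KWH_COST

# Hypothetical street prices and wattages, for illustration only:
hdd_tco = five_year_tco(price_per_drive=400, capacity_gb=600, watts=10)
ssd_tco = five_year_tco(price_per_drive=2000, capacity_gb=800, watts=3)
```

The energy term narrows the gap over five years, but with these placeholder figures the SSD purchase price still dominates the comparison.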
The following graph illustrates the combined purchase price, plus energy consumption costs for several popular electro-mechanical and Solid State Devices.
From the above comparison, several conclusions can be drawn:
SSDs are Still Expensive – Solid State Drives remain an expensive alternative storage medium, but the price differential between SSDs and electro-mechanical drives is coming down. As of this writing there is only a 5x price difference between the 800 GB SSD and the 600 GB, 15K RPM drive. While this is still a significant gap, it is far less than the staggering 10x to 20x price differential seen 3-4 years ago.
SSDs are very “Green” – A comparison of the Watts consumed during a drive’s “typical operation” indicates that SSD consumes about 25% less energy than 10K RPM, 2.5-inch drives, and about 75% less power than 15K RPM, 3.5-inch disks. Given that a) each Watt used by the disk requires roughly 1 Watt of cooling power to remove the heat produced, and b) the cost per kWh continues to rise every year, this significant difference becomes a major factor over a storage array’s 5-year lifecycle.
Extreme IOPS is a Bonus – Although more expensive, SSDs are capable of delivering 10 to 20 times more I/Os per second, potentially providing a dramatic increase in storage performance.
Electro-Mechanical Disks Cost Differential – There is a surprisingly small cost differential between 3.5-inch, 15K RPM drives and 2.5-inch, 10K RPM drives. This may justify eliminating 10K disks altogether and deploying a less complex 2-tiered array using only 15K RPM and 7.2K RPM disks.
Legacy 3.5 Inch Disks – Low-capacity legacy storage devices (<146 GB) in a 3.5-inch drive form-factor consume too much energy to be practical in a modern, energy-efficient data center (this includes server internal disks). Any legacy disk drive smaller than 300 GB should be retired.
SATA/NL-SAS Disks are Inexpensive – This simply re-affirms what’s already known about SATA/NL-SAS disks. They are specifically designed to be inexpensive, modest performance devices capable of storing vast amounts of low-demand content on-line.
The incursion of Solid State Disks into the industry’s storage mainstream will have interesting ramifications not only for the current SAN/NAS arrays, but also may impact a diverse set of technologies that have been designed to tolerate the limitations of an electro-mechanical storage world. As they say, “It’s a brave new world”.
Widespread deployment of SSD will have a dramatic impact on storage technology itself. If SSDs can be implemented in a cost-effective fashion, why would anyone need an expensive and complex automated tiering system to demote data across multiple layers of disk? Given SSD’s speed, will our current efforts to reduce RAID rebuild times still be necessary? If I/O bottlenecks are eliminated at the disk drive, what impact will that have on array controllers, data fabrics, and HBAs/NICs residing upstream of the arrays?
While it is disappointing to find SSD technology still commands a healthy premium over electro-mechanical drives, don’t expect that to remain the case forever. As the technology matures, prices will decline as user acceptance grows and production volumes increase. Don’t be surprised to see SSD technology eventually end the mechanical disk’s 40-year dominance over the computer industry.
For those of you interested in examining the comparison calculations, the following spreadsheet excerpts contain the detailed information used to create the graph.
Blade servers, virtualization, solid state disks, and 16Gbps fibre channel – it’s challenging to keep up with today’s advanced technology. The complexity and sophistication of emerging products can be dizzying. In most cases we’ve learned how to cope with these changes, but there are a few areas where we still cling to vestiges of the past. One of these relics of past decades is the impenetrable, monolithic data center.
The data center traces its roots back to the mainframe, when all computing resources were housed in a single, highly specialized facility designed specifically to support processing operations. Since there was little or no effort to classify data, these bastions of data processing were over-designed to ensure the most critical requirements were supported. This model was well-suited for mainframes and centralized computing, but it falls well short of meeting the needs of our modern IT environments.
Traditional data center facilities provide a one-size-fits-all solution. At an average $700 to $1500 per square foot, they are expensive to build. They lack the scalability and flexibility to respond to dynamic market changes and shifts in technology. Since they require massive investments of capital, they must be built not only to contain today’s IT equipment, but also to satisfy growth requirements for 25 years or more. The end result: tremendous wasted capacity, corporate funds tied up for decades, long-range guesses about the direction and needs of future IT technology, and a price tag that puts disaster recovery redundancy well beyond the reach of most companies.
An excellent solution to this problem is already a proven technology – the Portable Modular Data Center. These are typically self-contained data center modules that contain a comprehensive set of power, cooling, security, and internal infrastructure to support a dozen or more equipment racks per module with up to 30kW of power per rack. These units are relatively inexpensive, highly scalable, simple to deploy, energy efficient (Green), and factory constructed to ensure consistent quality and reproducible technology. As modules, they can be deployed incrementally as requirements dictate, avoiding major one-time capital expenditures for facilities.
Their inherent modularity and scalability make them an excellent choice for incrementally building out finely-tuned disaster recovery facilities. Here is an example of how modular data centers can be leveraged to cost-effectively provide Disaster Recovery protection of an organization’s data assets.
- Mission Critical Operations (typically 10% to 15%)
These are applications and data that might severely cripple the organization if they were not available for any significant period of time.
Strategy – Deploy synchronous replication technology to maintain an up-to-date mirror image of the data that could be brought to operational status within a matter of minutes.
Solution – Deploy one or more Portable Modular Data Center units within 30 miles (to minimize latency) and run synchronous replication between the primary data center and the modular facility. Since 20-30 miles of separation would protect from a local disaster, but not a region-wide event, it might be worthwhile to replicate asynchronously from the modular data center to some remote (out-of-region) location. A small amount of data might be lost in the event of a disaster (due to asynchronous delay), but processing could still be brought back on-line quickly with minimal loss of data and only a limited interruption to operations.
- Vital Operations (typically 20% to 25%)
These applications and data are very important to the organization, but an outage of several hours would not financially cripple the business.
Strategy – Deploy an asynchronous replication mechanism outside the region to ensure an almost-up-to-date copy of data is available for rapid recovery.
Solution – Deploy one or more Portable Modular Data Center units anywhere in the country and run asynchronous replication between the primary data center and the remote modular facility. Since distance is not a limiting factor for asynchronous replication, the modular facility could be installed anywhere. This protects from disasters occurring not only locally, but within the region as well. A small amount of data might be lost in the event of a disaster (due to asynchronous delay), but applications and databases could still be recovered quickly with minimal loss of data and only a limited interruption to operations.
- Sensitive Operations (typically 20% to 30%)
These applications and data are important to the organization, but an outage of several days to one week would have only a negligible financial impact on the business.
Strategy – (same as above) Use the same asynchronous replication mechanism outside the region to ensure an almost-up-to-date copy of data is available for rapid recovery.
Solution – Add one or more Portable Modular Data Center units to the above facility (as required) and run asynchronous replication between the primary data center and the remote modular facility.
- Non-Critical Operations (typically 40% or more)
These applications and data are incidental to the organization and can be recovered when time is available. An outage of several weeks would have little impact on the business.
Strategy – (same as above) Use the same asynchronous replication mechanism outside the region to ensure an almost-up-to-date copy of data is available for rapid recovery.
Solution – Deploy one or more Portable Modular Data Center units anywhere in the country and run asynchronous replication between the primary data center and a remote modular facility.
Note: Since non-critical applications and data tend to be passive, non-critical operations might also be a viable candidate for transitioning to an Infrastructure-as-a-Service (IaaS) provider.
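The tiering percentages above can be turned into a quick capacity-planning sketch. The fractions below are midpoints of the ranges given and are purely illustrative; each organization would substitute its own classification results:

```python
# Fraction of the data estate in each DR class (midpoints of the ranges
# quoted above; illustrative assumptions, adjust per organization).
DR_TIERS = {
    "mission_critical": 0.125,   # synchronous replication to a nearby module
    "vital":            0.225,   # asynchronous, out of region
    "sensitive":        0.25,    # asynchronous, shares the vital-tier site
    "non_critical":     0.40,    # asynchronous, or an IaaS candidate
}

def tier_capacities(total_tb):
    """Return the TB of capacity to protect in each DR class."""
    return {name: total_tb * frac for name, frac in DR_TIERS.items()}

caps = tier_capacities(400)   # e.g. a 400 TB data estate
```

Sizing each modular unit to its tier, rather than one monolithic facility, is what makes the incremental build-out possible.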
Modular Data Centers are the obvious enabler for the above Disaster Recovery strategy. They allow you to deploy only the data center resources you need, when you need them. They are less expensive than either leased or built facilities, and can be scaled as required by the business.
It’s time for the IT industry to abandon their outdated concepts of what a data center should be and focus on what is needed by each class of data. The day of raised-floor mainframe “bunkers” has passed. It’s time to start managing data center resource deployment as carefully as we manage server and storage deployment. Portable Modular Data Centers allow you to implement efficient, cost-effective IT production facilities in a logical sequence, without breaking the bank in the process.
Traditional brick-and-mortar datacenters have been a mainstay of enterprise computing since the days of the mainframe. IT systems were kept in isolation in windowless, highly secure facilities that provided a constant temperature and humidity environment on a 7×24 basis. Although the cost of building new datacenters continues to increase substantially, until now relatively few options have been available.
However, with the development of the portable modular datacenter, the day of the traditional datacenter may be coming to an end. While there are several variations on the market, the most promising appears to be the completely built out facility. New datacenter modules are built from ISO standard shipping containers. They incorporate chillers, power and communications buses, forced air cooling, equipment racks, and all other components necessary for a modern datacenter. These units can be trucked to any location, moved into position on a concrete pad, connected to external resources, and be ready for systems build-out on short notice. They can be configured to operate as a singular unit, multiple units, and even as stacked arrays of modular datacenter units.
In addition to serving as a modular replacement for traditional brick-and-mortar datacenters, there are other possibilities for Portable Modular Datacenters:
RAPID DEPLOYMENT MODULES – For situations where rapid implementation is a key driver, or when companies simply can’t wait the 18-24 months for a new datacenter build-out.
COST CONTAINMENT – Situations where minimizing the cost for building a new datacenter facility is a primary objective
DISASTER RECOVERY – A highly flexible, cost-effective IT environment that can be deployed remotely for a Disaster Recovery solution
CAPACITY-ON-DEMAND – Modular, self-contained units that permit companies to add new datacenter capacity only as required (Capacity-as-a-Service?)
TEMPORARY FACILITIES – Allows companies to continue to support ongoing IT operations while a permanent datacenter facility is built
SEGREGATED SYSTEMS – Enables complete isolation of specific IT operations in an otherwise shared environment (Community Cloud?)
DYNAMIC MARKETS – A solution for highly volatile markets where future capacity requirements are difficult to predict
EMERGENCY CAPACITY – Available for relatively rapid deployment when an organization’s primary datacenter runs out of floor space
SYNCHRONOUS REPLICATION – Allows the implementation of a small nearby replication site within 40 km of the primary datacenter to support replication while maintaining database consistency
MOBILE SYSTEMS – A portable IT solution that could be relocated to a different region in response to changing corporate needs or an impending disaster (such as a major hurricane).
PREFABRICATED SUB-SYSTEMS – A transportable platform for high growth companies who must buy integrated sub-systems from an external vendor, rather than building the equipment themselves.
REPURPOSING OF BUILDINGS – Modular units may be installed within existing buildings that are sitting idle, as long as adequate resources (power and communications) are available.
Another big benefit of portable modular datacenter units is that they’re built in a factory to exact specifications. As such, they benefit from repetitive manufacturing processes and ongoing quality assurance reviews. Each module features the same level of quality and reliability as its peers. This is in sharp contrast to traditional brick-and-mortar datacenters, which are normally built as one-off custom configurations.
The concept of portable modular datacenter units is pretty clever. If there are any downsides to this technology they are not readily apparent. Although this represents a relatively new approach, it appears to be distinctly superior to what’s been done in the past. Don’t be surprised to see a new modular datacenter unit being installed on a concrete pad near you in the foreseeable future.
It’s hard to retire a perfectly good storage array. Budgets are tight, there’s a backlog of new projects in the queue, people are on vacation, and migration planning can be difficult. As long as there is no compelling reason to take it out of service, it is far easier to simply leave it alone and focus on more pressing issues.
While this may be the path of least resistance, it can come at a high price. There are a number of good reasons why upgrading storage arrays to modern technology may yield superior results and possibly save money too!
Capacity – When your aging disk array was installed several years ago, 300 GB, 10K RPM, FC disk drives were mainstream technology. It was amazing to realize you could squeeze up to 45 TB into a single 42U equipment rack! Times have changed. The same 10K RPM disk drive has tripled in capacity, providing 900 GB in the same 3.5-inch disk drive “footprint”. It’s now possible to get 135 TB (a threefold capacity increase) into the same equipment rack configuration. Since data center rack space currently costs around $3000 per month, that upgrade alone will dramatically increase capacity without incurring any increase in floor-space cost.
Density – Previous generation arrays packaged (12) to (15) 3.5-inch FC or SATA disk drives into a single rack-mountable 4U array. Modern disk arrays support from (16) 3.5-inch disks per 3U tray, to (25) 2.5-inch disks in a 2U tray. Special ultra-high density configurations may house up to (60) FC, SAS, or SATA disk drives in a 4U enclosure. As above, increasing storage density within an equipment rack significantly increases capacity while requiring no additional data center floor-space.
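The capacity and density arithmetic above reduces to a one-line formula: enclosures per rack, times drives per enclosure, times drive capacity. A quick sketch (the 4U/15-drive layout matches the legacy figures quoted above):

```python
def rack_capacity_tb(rack_u, enclosure_u, drives_per_enclosure, drive_tb):
    """Raw TB per rack: whole enclosures that fit, times drives, times size."""
    return (rack_u // enclosure_u) * drives_per_enclosure * drive_tb

# Same 42U rack, same 4U/15-drive enclosures, only the drive size changes:
legacy = rack_capacity_tb(42, 4, 15, 0.3)   # 300 GB drives -> ~45 TB
modern = rack_capacity_tb(42, 4, 15, 0.9)   # 900 GB drives -> ~135 TB
```

Swapping in the denser enclosure formats listed above (25 drives per 2U, or 60 per 4U) pushes the per-rack figure considerably higher still.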
Energy Efficiency – Since the EPA’s IT energy efficiency study in 2007 (Report to Congress on Server and Data Center Energy Efficiency, Public Law 109-431), IT manufacturers have increased efforts to improve the energy efficiency of their products. This has resulted in disk drives that consume from 25% to 33% less energy, and storage array controllers lowering power consumption by up to 30%. That has had a significant impact on energy costs, including not only the power to run the equipment, but also power to operate the cooling systems needed to purge residual heat from the environment.
Controller Performance – Storage array controllers are little more than specialized servers designed specifically to manage such functions as I/O ports, disk mapping, RAID and cache operations, and execution of array-centric internal applications (such as thin provisioning and snapshots). Like any other server, storage controllers have benefited from advances in technology over the past few years. The current generation of disk arrays contain storage controllers with 3 to 5 times the processing power of their predecessors.
Driver Compatibility – As newer technologies emerge, they tend to focus on developing software compatibility with the most recently released products and systems on the market. With the passage of time, it becomes less likely for storage arrays to be supported by the latest and greatest technology on the market. This may not impact daily operations, but it creates challenges when a need arises to integrate aging arrays with state-of-the-art systems.
Reliability – Common wisdom used to be that disk failure characteristics could be accurately represented by a “bathtub curve”. The theory was that the potential for failure was high when a disk was new, flattened out at a low probability throughout the disk’s useful life, then took a sharp upswing as the disk approached end-of-life. This model implied that extending disk service life had no detrimental effects until the disks approached end-of-life.
However, over the past decade, detailed studies by Google and other large organizations with massive disk farms have proven the “bathtub curve” model incorrect. Actual failure rates in the field indicate the probability of a disk failure increases by 10% – 20% for every year the disk is in service. The data clearly shows the probability of failure increasing in a roughly linear fashion over the disk’s service life. Extending disk service life greatly increases the risk of disk failure.
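The linear model described above can be sketched in a few lines. The base annual failure rate (AFR) and the 15% yearly increase below are illustrative assumptions, not figures from the studies:

```python
def afr_schedule(base_afr=0.02, yearly_increase=0.15, years=7):
    """Annual failure rate for each service year, growing linearly as
    described above (assumed base AFR and growth rate, for illustration)."""
    return [base_afr * (1 + yearly_increase * y) for y in range(years)]

rates = afr_schedule()
```

Under this model a drive in its sixth or seventh year is markedly more likely to fail than a new one, which is the argument for bounded service life rather than run-to-failure.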
Service Contracts – Many popular storage arrays are covered by standard three-year warranties. This creates a dilemma, since the useful service life of most storage equipment is considered to be four or five years. When the original warranty expires, companies must decide whether to extend the existing support contract (at a significantly higher cost), or transition to a time & materials basis for support (which can result in some very costly repairs).
Budgetary Impact – For equipment like disk arrays, it is far too easy to fixate on replacement costs (CAPEX), and ignore the ongoing cost of operational expenses (OPEX). This may avoid large upfront expenditures, but it slowly bleeds the IT budget to death by having to maintain increasingly inefficient, fault-prone, and power hungry equipment.
The solution is to establish a program of rolling equipment replenishment on a four- or five-year cycle. By regularly upgrading 20% to 25% of all systems each year, the IT budget is more manageable, equipment failures are controlled, and technical obsolescence remains in check.
Getting rid of familiar things can be difficult. But unlike your favorite slippers, the LazyBoy recliner, or your special coffee cup, keeping outdated storage arrays in service well beyond their prime can cost your organization plenty.
It’s a simple truth – “Big Data” produces big power bills. In many areas the cost of data center energy for ongoing operations equals the purchase cost of the IT equipment itself. In today’s economy, “going green” offers some very attractive incentives for saving money through conservation practices, as well as the side benefit of helping save the planet we all live on.
The following is a collection of tips to save power in the data center. Some are simply common sense and others take time, knowledge, and a budgetary commitment to implement. As many of these as possible should be incorporated into an energy optimization culture that continually searches for ways to reduce power consumption and the associated cooling requirements.
1. Purchase Energy Efficient Disk Technology
A new generation of disk drives features such advanced capabilities as optimized caching, intelligent control circuitry, energy-optimized motors, and other power reduction techniques. Ask for energy-efficient equipment for your projects, and ensure your purchasing department is aware of the differences and their importance to your organization.
2. Create a Tiered Storage System
Assigning data to different classes of disk subsystems, based on the value of the information, can result in significant energy savings. Solid-state disks and lower-RPM disk drives consume far less power per TB than standard disks.
3. Automated, Policy-Based Migration
This software utility is a major enabler for multi-tiered storage. It monitors file characteristics and will automatically migrate data “behind the scenes” to an appropriate class of disk once a specific set of criteria is met.
4. Implement Storage Virtualization
Virtualization creates an abstraction of physical storage and allows the servers to see available disk as one large storage pool. It provides access to all available storage, offers greater flexibility and simplifies the management of heterogeneous subsystems.
5. Employ Thin Provisioning
Databases and some applications require contiguous storage space assigned for future growth. Thin provisioning facilitates the allocation of virtual storage, which appears as contiguous physical storage to the database while physical capacity is consumed only as data is actually written.
6. Power Down Inactive Equipment
Unused systems and storage left running in a data center continue to consume power and generate heat without providing any useful work. An assumption that “someone might need to access it” is a poor reason for leaving inactive equipment up and running 365 days per year.
7. Retire Legacy Systems
Outdated equipment can be another big consumer of energy. Develop a program to annually retire aging storage that contains low-capacity disks, inefficient circuit components, and little or no power conservation circuitry.
8. Optimize Raid Array Configuration
Legacy RAID5 3+1 or high performance RAID10 configurations that are not warranted waste large amounts of capacity and power with little tangible benefit. Selective deployment of RAID technology increases usable space and reduces power/cooling requirements.
9. Clean Out Unwanted Data
Over time, systems become a retirement home for unused files, core dumps, outdated logs, roll-back files, non work-related content, and other unnecessary information. Files can be automatically scanned to identify and remove unwanted or outdated data that provides no value to the company.
10. Clean Up File Systems
Like data, file systems and directories should be periodically scanned to ensure that defunct applications, outdated directories, and temporary updates have been purged from storage.
11. Periodically Update I/O Firmware
Manufacturers regularly improve their firmware to ensure bugs are fixed, security holes are patched, and performance is optimized. Current firmware ensures that controllers work at optimal efficiency, and less work to be done may translate into less power consumption.
12. Clean Up the Backup Process
Examine the backup schedule and exclusion lists to ensure all identified areas are still relevant. Your backup system may be regularly processing and backing up directories that contain obsolete files, irrelevant directories (i.e.- /temp), or system content that never changes.
13. Replace Missing Floor Tiles and Blank Panels
Missing floor tiles and equipment rack filler panels reduce the positive cooling pressure produced by the cooling system and can significantly disrupt airflow patterns through rack-mounted equipment.
14. Eliminate Air Pressure Blockage
Also check under the raised floor for collections of debris that can restrict airflow going to, or through equipment racks. The harder an air conditioning system must work to move air through a facility, the more energy will be consumed.
15. Increase Temperature and Humidity Settings
Confirm temperature and humidity are set to the correct levels. Evaluate equipment manufacturers’ specifications to ensure all settings do not exceed manufacturer recommendations.
16. Turn Off Video Monitors
If video monitors are not in use, they should be turned off. Monitors are usually left on 24-hrs a day whether they’re being used or not, consuming power and generating heat without providing value.
17. Minimize/Eliminate Server Internal Disk Drives
Servers are usually purchased with internal disks installed for the operating system, binaries, swap space, and other system needs. Whenever practical, eliminate internal disks by using Boot-from-SAN technologies to better utilize capacity and more efficiently manage power consumption.
18. Reclaim Orphaned LUNS
Storage tends to collect areas of allocated, but unused or abandoned storage space over time. Periodic review and reclamation of these spaces can result in significant storage savings.
19. Revise Data Retention Policy
An organizational policy of “save everything” is usually the worst of approaches. Implement a program of saving only data that has verified business value, or is necessary to retain for litigation protection and regulatory compliance.
20. Increase User Consumption Awareness
End-users bad habits can have a significant impact on storage consumption. Educate users on the value of content management, space utilization, and data cleanup once a file is no longer needed.
21. Facilities Operational Staff Training
Every operational staff member should be trained in the proper operation of equipment, conservation methods, and the energy optimization objective established by the organization. Energy management must be a part of the corporate culture.
22. Require Periodic Performance Optimization
Poorly performing servers, fabrics, or storage structures consume additional power and cooling. Periodic performance tuning efforts will optimize server and storage operations and achieve the same goals while requiring the systems to do less work.
23. Disk Spares Assignment
Over-provisioning of disk spares consumes storage resources without adding measurable value. Storage industry best practice recommends one disk spare for every 30-32 disks, though the RAID configuration may dictate that more or fewer be allocated. Follow manufacturer recommendations for spares.
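The guideline above amounts to a ceiling division. A minimal sketch (the 1-per-30 ratio is the rule of thumb quoted above; manufacturer recommendations take precedence):

```python
import math

def recommended_spares(disk_count, disks_per_spare=30):
    """Hot spares to allocate: roughly one per 30-32 disks."""
    return math.ceil(disk_count / disks_per_spare)
```

A 60-disk array would carry two spares; allocating, say, six for that same array is the kind of over-provisioning the tip warns against.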
24. High Efficiency Power Supplies
High efficiency power supplies can raise efficiency from the 60-70% range to over 90%. In most circumstances, exact replacements exist for the most popular system and storage power supplies.
25. Channel Port Speed Optimization
Implementation of high speed ports and following recommended fan-out ratios allow you to provide an appropriate amount of bandwidth with a minimum number of resources, which translates into lower power and cooling demands.
26. High Capacity Disk Drives
Advanced disk development is dramatically increasing physical disk capacity. As long as IOPS (I/Os per second) is not a requirement, larger disks of the same rotational speed can be deployed to double or even triple capacity for the same energy consumption.
27. Centralize Storage Management
Over time, management tools offering point-solutions tend to proliferate, along with servers and storage. Centralization and consolidation of management tools into comprehensive suites can eliminate multiple under-utilized monitors and reduce excess power consumption.
28. Use Electronically Commutated Motors
Wherever possible, replace condensing units or fan powered boxes using mechanical brushes with electronically commutated motors. Eliminating the brush mechanism and adding automatic turn-down circuitry found in most EC Motors can yield a reduction in power consumption of up to 45%.
29. Equipment Consolidation
Legacy servers and storage systems have proliferated over the past two decades. Frequent over-provisioning of systems leads to servers and storage that are grossly underutilized. Consolidation permits additional legacy systems to be retired.
30. Deploy Arrays Built from 2.5 Inch Drives
Three or more 2.5-inch disks can fit in the same physical space as one 3.5-inch drive. Because they have a much smaller spinning mass, they can provide twice the storage capacity for the same power consumption.
31. Real-Time Data Compression
Some primary storage systems can perform real-time compression on the data stream. For certain types of data this can produce a reduction of 2:1 or more in the amount of storage space consumed.
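A quick way to see where the 2:1 figure comes from is to compress a repetitive payload with a general-purpose codec. This sketch uses Python’s zlib on made-up log-style data; real-world ratios depend heavily on the content, and array-based real-time compression uses dedicated hardware rather than this library:

```python
import zlib

# Repetitive, log-like sample data (illustrative, not a real workload).
sample = b"2013-06-01 12:00:00 status=OK latency_ms=12\n" * 1000
compressed = zlib.compress(sample)
ratio = len(sample) / len(compressed)   # well above 2:1 for data like this
```

Already-compressed content (media, encrypted files) will show ratios near 1:1, which is why the tip hedges with “certain types of data”.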
32. Manage Data Copy Proliferation
Without careful monitoring, duplicate copies of data proliferate like rabbits. IT management should review each department’s data requirements and ensure only a reasonable number of copies exist.
33. Data De-duplication
This backup technology identifies patterns in the data stream and replaces duplicate data with a pointer to the original copy. This can significantly reduce the amount of disk backup space required.
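The pointer-replacement idea above can be shown with a toy chunk store. This is a simplified illustration, not any vendor’s implementation: each chunk is hashed, stored once, and repeat occurrences become references to the stored copy:

```python
import hashlib

def dedupe(chunks):
    """Store each unique chunk once; duplicates become hash references."""
    store, refs = {}, []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk      # first occurrence is stored
        refs.append(digest)            # later occurrences are pointers
    return store, refs
```

Backing up three chunks where two are identical stores only two chunks plus three small references, which is the source of the space savings.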
34. Data Classification
This is a process that categorizes different types of information by business value. Once this process has been completed, data can be assigned an appropriate level of disk performance and cost.
35. Solid-State Drives
Solid-state disks dramatically reduce power consumption by eliminating electro-mechanical components and rotating platters. SSD power consumption is minuscule when compared to traditional disk drives.
36. MAID Technology
MAID (Massive Array of Idle Disks) technology powers down disks to an idle state when no activity has been detected within a specified period of time. It is valuable when infrequently accessed data is involved.
37. Use High-Capacity Tape Drives
High-capacity tape drives hold larger amounts of data and, when installed in tape libraries, minimize the number of cartridge changes. Since the robotic arm is an electromechanical device, minimizing tape changes reduces the amount of energy the tape library consumes.
38. Convert to Direct DC Power
Significant energy loss occurs when AC power goes through multiple conversion steps between the initial distribution point and the system power supply. Converting to (or designing for) DC power delivered directly to the equipment racks can save up to 30% in power consumption.
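Conversion losses compound multiplicatively, which is why removing stages helps. A simple sketch with assumed per-stage efficiencies follows; the stage names and figures are illustrative only, and actual savings depend entirely on the equipment involved.

```python
# Cascaded conversion losses: each stage's efficiency multiplies.
# Stage efficiencies below are illustrative assumptions, not measurements.
ac_stages = {"UPS (AC-DC-AC)": 0.90, "PDU transformer": 0.96, "server PSU (AC-DC)": 0.85}
dc_stages = {"rectifier (AC-DC)": 0.94, "server DC-DC converter": 0.92}

def end_to_end(stages):
    eff = 1.0
    for stage_eff in stages.values():
        eff *= stage_eff
    return eff

ac_eff = end_to_end(ac_stages)
dc_eff = end_to_end(dc_stages)
print(f"AC chain: {ac_eff:.1%} efficient, DC chain: {dc_eff:.1%} efficient")
```

With these assumed figures the AC chain delivers about 73% of input power to the load and the DC chain about 86%; fewer conversion stages means fewer places to lose energy as heat.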
39. Capacity on Demand
Avoid deploying Capacity-on-Demand capabilities unless absolutely required. Unactivated processors, memory, and other resources typically consume energy without providing any business value until an activation license is purchased.
40. Consider Storage-as-a-Service
If your operational model supports it, consider migrating some storage requirements to the Cloud. When storage is purchased from an external provider, organizations pay only for the capacity they use, and therefore only for the energy needed to run it.
41. Consolidation of NAS Systems
NAS storage has proliferated within most organizations due to its modest cost, installation flexibility, and ease of deployment. Consolidating multiple stand-alone units into larger NAS systems improves efficiency, simplifies management, and minimizes power consumption.
42. Greater use of Granular Scaling
Select storage equipment that facilitates scaling capacity in relatively small increments. Installing full frames of disk storage before that capacity is required consumes large amounts of power without adding any business value.
43. Consolidate SAN Fabrics
Consolidating multiple SAN fabrics into a single shared SAN fabric eliminates switch/director duplication, simplifies management, and increases device utilization.
44. Continuous Data Streaming to Tape
Ensure the backup streams sent to tape devices are robust enough to allow continuous streaming, rather than requiring frequent starts and stops. Also configure disk pools to consolidate data and ensure tape drives can be driven at maximum speed for the shortest period possible. Streaming data to tape requires less energy and significantly reduces the backup window.
45. Back Up to Tape Media
Disk pools for backup are recommended for speed and efficiency, but inactive data should be off-loaded to tape media as soon as possible to minimize energy consumption. Once it has been written to tape, data can be archived for future use without consuming power or occupying spinning disk space.
46. Equipment Rack Height
Increasing equipment rack height by a few inches reduces the number of racks, and therefore rack-level power supplies, needed for a given capacity. Replacing 42U racks with 45U racks adds 3U per frame; across 100 racks, that reclaims the equivalent of more than 6 additional frames and frees expensive data center floor space.
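The arithmetic behind that claim can be spelled out directly: 3U gained per frame across 100 racks is 300U, which corresponds to six whole 45U frames.

```python
# The rack-height arithmetic from the tip above, spelled out.
RACKS = 100
OLD_U, NEW_U = 42, 45

extra_u = (NEW_U - OLD_U) * RACKS   # 3U gained per frame, across 100 racks
whole_frames = extra_u // NEW_U     # expressed as whole 45U frames
print(f"{extra_u}U gained, equivalent to {whole_frames} additional 45U frames")
```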
47. Update Legacy Lighting Systems
Update legacy lighting systems to modern, energy-efficient technology and install occupancy sensors in the data center to ensure lighting is only used when required.
48. Adopt a Cold Aisle/Hot Aisle Configuration
A designated cold aisle/hot aisle layout works more efficiently by preventing hot and cold air from mixing. In this arrangement, cold air is directed into the equipment racks on one side while hot air is exhausted from the other.
49. Use Ambient Air For Cooling
Using ambient air for cooling takes advantage of the differential between local atmospheric temperatures and the heat generated by electronic equipment. If relatively dry air is present and temperatures are moderate, it may be advantageous to leverage prevailing conditions for cooling, rather than being totally dependent upon mechanical cooling systems.
50. Measure Your Power Consumption
According to Peter Drucker, "What gets measured, gets managed." If you don't set clear objectives and deploy the proper measurement tools to track your progress, there is a very good chance you'll never achieve your company's energy reduction goals.
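One widely used yardstick is Power Usage Effectiveness (PUE): the ratio of total facility power to IT equipment power, where 1.0 is the theoretical ideal. A minimal sketch, with illustrative kW figures assumed for the example:

```python
# Power Usage Effectiveness (PUE): total facility power / IT equipment power.
# A PUE of 2.0 means half the facility's power goes to overhead (cooling,
# lighting, conversion losses). The kW figures below are illustrative.
def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

before = pue(total_facility_kw=1800, it_equipment_kw=900)  # heavy overhead
after = pue(total_facility_kw=1350, it_equipment_kw=900)   # after improvements
print(f"PUE before: {before:.2f}, after: {after:.2f}")
```

Tracking a metric like this over time gives energy-reduction measures a concrete target rather than a vague aspiration.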
As with many things, “your mileage may vary” when implementing any of the above tips. Start with the easiest and most obvious, then work forward from there. And as mentioned above, make energy conservation a part of your operational culture. Escalating energy costs and higher power demands are problems that will probably not go away in the foreseeable future.