NOTE: My original article contained embedded calculation errors that significantly distorted the end results. These problems have since been corrected. I apologize to anyone who was accidentally misled by this information, and sincerely thank those diligent readers who brought the issues to my attention.
Some issues seem so obvious they’re hardly worth considering. Everyone knows that Solid State Drives (SSD) are more energy-efficient than spinning disk. They don’t employ rotating platters, electro-mechanical motors and mechanical head movement for data storage, so they must consume less power – right? However, everyone also knows the cost of SSD is so outrageous that they can only be deployed for super-critical high performance applications. But does the reputation of having exorbitant prices still apply?
While these considerations may seem intuitive, they are not entirely accurate. Comparing the Total Cost of Ownership (TCO) for traditional electro-mechanical disks vs. Solid State Disks provides a clearer picture of the comparative costs of each technology.
- For accuracy, this analysis compares the purchase price (CAPEX) and power consumption (OPEX) of only the disk drives, and does not include the expense of entire storage arrays, rack space, cooling equipment, etc.
- It uses the drive’s current “street price” for comparison. Individual vendor pricing may be significantly different, but the ratio between disk and SSD cost should remain fairly constant.
- The dollar amounts shown on the graph represent a 5-year operational lifecycle, which is fairly typical for production storage equipment.
- Energy consumption for cooling has also been included in the cost estimate, since cooling requires roughly the same amount of energy as the drives themselves consume to keep them operational.
- 100 TB of storage capacity was arbitrarily selected to illustrate the effect of cost on a typical mid-sized SAN storage array.
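The assumptions above can be folded into a simple model. The sketch below is illustrative only: the unit prices, wattages, and $0.12/kWh energy rate are assumed values, not the figures behind the article's graph. It shows how purchase price (CAPEX) plus energy cost (OPEX), doubled to account for cooling, combine over the 5-year lifecycle:

```python
# Illustrative 5-year TCO sketch (CAPEX + energy OPEX) for 100 TB raw capacity.
# All prices, wattages, and the $/kWh rate are assumptions for illustration.

KWH_RATE = 0.12          # assumed cost per kWh
YEARS = 5
HOURS = YEARS * 365 * 24
CAPACITY_TB = 100

def five_year_tco(unit_price, capacity_gb, watts_typical):
    drives = -(-CAPACITY_TB * 1000 // capacity_gb)     # drives needed (round up)
    capex = drives * unit_price
    # Each Watt consumed needs roughly another Watt of cooling, hence the factor of 2
    energy_kwh = drives * watts_typical * 2 * HOURS / 1000
    return capex + energy_kwh * KWH_RATE

print(five_year_tco(2500, 800, 3.5))   # hypothetical 800GB SSD
print(five_year_tco(500, 600, 10.0))   # hypothetical 600GB, 15K RPM disk
```

With these assumed numbers the SSD's CAPEX dominates its TCO, while the spinning disk's energy bill makes up a much larger share of its total, which is exactly the trade-off the graph illustrates.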
The following graph illustrates the combined purchase price, plus energy consumption costs for several popular electro-mechanical and Solid State Devices.
From the above comparison, several conclusions can be drawn:
SSDs are Still Expensive – Solid State Drives remain an expensive alternative storage medium, but the price differential between SSD and electro-mechanical drives is coming down. As of this writing there is only a 5x price difference between the 800GB SSD and the 600GB, 15K RPM drive. While this is still a significant gap, it is far less than the staggering 10x to 20x price differential seen 3-4 years ago.
SSDs are very “Green” – A comparison of the Watts consumed during a drive’s “typical operation” indicates that SSD consumes about 25% less energy than 10K RPM, 2.5-inch drives, and about 75% less power than 15K RPM, 3.5-inch disks. Given that a) each Watt used by the disk requires roughly 1 Watt of power for cooling to remove the heat produced, and b) the cost per kWh continues to rise every year, this significant difference becomes a major factor over a storage array’s 5-year lifecycle.
Extreme IOPS is a Bonus – Although more expensive, SSDs are capable of delivering 10 to 20 times more I/Os per second, potentially providing a dramatic increase in storage performance.
Electro-Mechanical Disks Cost Differential – There is a surprisingly small cost differential between 3.5 inch, 15K RPM drives and 2.5 inch 10K RPM drives. This may justify eliminating 10K disks altogether and deploying a less complex 2-tiered array using only 15K RPM disks and 7.2K disks.
Legacy 3.5 Inch Disks – Low capacity legacy storage devices (<146GB) in a 3.5-inch drive form-factor consume too much energy to be practical in a modern, energy-efficient data center (this includes servers’ internal disks). Any legacy disk drive smaller than 300 GB should be retired.
SATA/NL-SAS Disks are Inexpensive – This simply re-affirms what’s already known about SATA/NL-SAS disks. They are specifically designed to be inexpensive, modest performance devices capable of storing vast amounts of low-demand content on-line.
The incursion of Solid State Disks into the industry’s storage mainstream will have interesting ramifications not only for current SAN/NAS arrays, but also for a diverse set of technologies that were designed to tolerate the limitations of an electro-mechanical storage world. As they say, “It’s a brave new world”.
Widespread deployment of SSD will have a dramatic impact on storage technology itself. If SSDs can be implemented in a cost-effective fashion, why would anyone need an expensive and complex automated tiering system to demote data across multiple layers of disk? Because of SSD’s speed, will our current efforts to reduce RAID rebuild times still be necessary? If I/O bottlenecks are eliminated at the disk drive, what impact will that have on array controllers, data fabric, and HBAs/NICs residing upstream of the arrays?
While it is disappointing to find SSD technology still commands a healthy premium over electro-mechanical drives, don’t expect that to remain the case forever. As the technology matures prices will decline when user acceptance grows and production volumes increase. Don’t be surprised to see SSD technology eventually eliminate the mechanical disk’s 40-year dominance over the computer industry.
For those of you interested in examining the comparison calculations, I’ve included the following spreadsheet excerpts, which contain the detailed information used to create the graph.
With predictable regularity someone surfaces on the Web, claiming they have discovered a way to turn slow SATA arrays into high performance storage. Their method usually involves adding complex and sophisticated software to reallocate and optimize system resources. While there may be a few circumstances where this works, in reality the opposite is usually true.
The problem with this concept is similar to the kit car world several decades ago. At the time, kit-build sports cars were all the rage. Automobile enthusiasts were intrigued by the idea of building a phenomenal sports car by mounting a sleek fiberglass body on the chassis of a humble Volkswagen Beetle. Done properly, the results were amazing! As long as their workmanship was good, the end results would rival the appearance of a Ferrari, Ford GT-40, or Lamborghini!
However, this grand illusion disappeared the minute its proud owner started the engine. Despite its stunning appearance, the kit car was still built on top of an anemic VW bug chassis, power train, and suspension!
Today we see a similar illusion being promoted by vendors claiming to offer “commodity storage” capable of delivering the same high performance as complex SAN and NAS systems. Overly enthusiastic suppliers push the virtues of cheap “commodity” storage arrays with amazing capabilities as a differentiator in this highly competitive market. The myth is perpetuated within the industry by a general lack of understanding of the underlying disk technology characteristics, and a desperate need to manage shrinking IT budgets, coupled with a growing demand for storage capacity.
According to this technical fantasy, underlying hardware limitations don’t count. In theory, if you simply run a bunch of complex software functions on the storage array controllers, you somehow repeal the laws of physics and get “something for nothing”.
That sounds appealing, but it unfortunately just doesn’t work that way. Like the kit car’s Achilles heel, hardware limitations of underlying disk technology govern the array’s capabilities, throughput, reliability, scalability, and price.
• Drive Latencies – the inherent latency incurred to move read/write heads and rotate disks until the appropriate sector address is available can vary significantly.
For example, comparing performance of a 300GB, 15K RPM SAS disk to a 3TB 7200 RPM SATA disk produces the following results:
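The gap between the two drives follows directly from seek time and rotational latency (half a revolution, on average). As a rough sketch, using typical published seek times as assumed inputs rather than figures from the original table:

```python
# Rough per-drive random-I/O estimate from rotational speed and seek time.
# The average seek times below are typical published figures (assumptions).

def avg_iops(rpm, avg_seek_ms):
    rotational_ms = 0.5 * 60000 / rpm     # average latency: half a rotation
    service_ms = avg_seek_ms + rotational_ms
    return 1000 / service_ms              # I/Os per second per drive

print(round(avg_iops(15000, 3.5)))   # 300GB, 15K RPM SAS: ~182 IOPS
print(round(avg_iops(7200, 9.0)))    # 3TB, 7200 RPM SATA: ~76 IOPS
```

Even before controller overhead enters the picture, the 15K RPM drive delivers well over twice the random I/O of the SATA disk; no software layer changes that physics.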
• Controller Overhead – Masking SATA performance by adding processor capabilities may not be the answer either. Call it what you will – Controller, SP, NAS head, or something else. A storage controller is simply a dedicated server performing specialized storage operations. This means controllers can become overburdened by loading multiple sophisticated applications on them. More complex processes also mean the controller consumes additional internal resources (memory, bandwidth, cache, I/O queues, etc.). As real-time capabilities like thin provisioning, automated tiering, deduplication and data compression are added, the array’s throughput will diminish.
• “Magic” Cache – This is another area where lots of smoke-and-mirrors can be found. Regardless of the marketing hype, cache is still governed by the laws of physics and has predictable characteristics. If you put a large amount of cache in front of slow SATA disk, your systems will run really fast – as long as the requested data is already located in cache. When it isn’t, you must go out to slow SATA disk and use the same data retrieval process as any other disk access. The same is true when cache is periodically flushed to disk to protect data integrity. Cache is a great tool that can significantly enhance the performance of a storage array. However, it is expensive, and will never act as a “black box” that somehow makes slow SATA disk perform like 15K RPM SAS disks.
• Other Differences – Additional differentiators between “commodity storage” and high performance storage include available I/Os per second, disk latency, RAID level selected, IOPS per GB capability, MTBF reliability, and the Bit Error Rate.
When citing the benefits of “tricked out” commodity storage, champions of this approach usually point to obscure white papers written by social media providers, universities, and research labs. These may serve as interesting reading, but seldom have much in common with production IT operations and “the real world”. Most universities and research labs struggle with restricted funding, and must turn to highly creative (and sometimes unusual) methods to achieve specific functions from less-than-optimal equipment. Large social media providers seldom suffer from budget constraints, but they create non-standard solutions to meet highly specialized, stable, and predictable user scenarios. These may illustrate interesting uses of technology, but have little value for mainstream IT operations.
As with most things in life, “you can’t get something for nothing”, and the idea of somehow enhancing commodity storage to meet all enterprise data requirements is no exception.
Like most other things, technology suffers from advancing age. That leading-edge wonder of just a few years ago is today’s mainstream system. This aging process creates great headaches for IT departments, who constantly see “the bar” being moved upward. Just when it seems like the computing environment is under control, equipment needs to be updated.
Unless a company is well disciplined in enforcing their technical refresh cycle, the aging process can also lure some organizations into a trap. The thinking goes something like this – “Why not put off a technology update by a year or two? Budgets are tight, the IT staff is overworked, and things seem to be going along just fine.” It makes sense, doesn’t it?
Well, not exactly. If you look beyond the purchase and migration expenses, there are other major cost factors to consider.
Power Reduction: There have been major changes in storage device energy efficiency over the past decade. Five years ago the 300GB, 15K RPM 3.5-inch drive was leading-edge technology. Today, that disk has been superseded by 2.5-inch disks of the same speed and capacity. Other than physical size, the major changes are the disk’s interface (50% faster than 4Gbps Fibre Channel) and its power consumption (about 70% less than a 3.5-inch drive). For 100TB of raw storage, $3577 per year could be saved through reduced power consumption alone.
Cooling Cost Reduction: A by-product of consuming electrical power is heat, and the systems used to remove that heat consume power too. The following chart compares the cost of cooling 100TB of 3.5-inch disks with the same capacity provided by 2.5-inch disks. Using 2.5-inch disks, cooling costs could be reduced by $3548 per year, per 100TB of storage.
Floor Space Reduction: Another significant data center cost is floor space. This expense can vary widely, depending on the type of resources provided and the level of high availability guaranteed by the Service Level Agreement. For the purpose of cost comparison, we’ll assume a fairly conservative $9600 per equipment rack per year. We will also assume fractional amounts are available, although in the real world full-rack pricing might be required. Given the higher density provided by 2.5-inch disks, a cost savings of $9,371 would be achieved.
In the example above, simply replacing aging 300GB, 15K RPM 3.5-inch FC disk drives with the latest 300GB, 15K RPM 2.5-inch FC disk drives will yield the following operational costs (OPEX) savings:
Reduced power $ 3,577
Reduced cooling $ 3,548
Less floor space $ 9,371
Total Savings $ 16,496 per 100TB of storage
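The figures above extend over a standard service cycle with simple arithmetic:

```python
# Annual OPEX savings per 100TB (figures from the comparison above),
# extended over a 5-year service cycle.
annual = {"power": 3577, "cooling": 3548, "floor_space": 9371}

total_annual = sum(annual.values())
print(total_annual)        # 16496 per 100TB per year
print(total_annual * 5)    # 82480 over five years (~$82K)
```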
Over a storage array’s standard 5-year service cycle, OPEX savings could amount to $82K or more.
Additional benefits from a storage refresh might include tiering storage (typically yielding around a 30% savings over non-tiered storage), reduced support contract costs, and less time spent managing older, more labor-intensive storage subsystems. There is also an opportunity for capital expense (CAPEX) savings by cleverly designing cost-optimized equipment, but that’s a story for a future article.
Don’t be misled into thinking that a delay of your storage technical refresh cycle will save money. In the end it could be a very costly decision.
One of the more promising technologies for improving application and database performance is the PCIe Flash card. This technology mounts Solid State Disk on a standard half- or full-height PCIe interface card. It allows SSD to be added to a workstation or server by simply plugging a card into an available PCIe bus slot.
What makes the PCIe Flash card approach superior to array-based SSD is its close proximity to system memory and the elimination of latency-adding components from the I/O stream. In a normal SAN or NAS array, data is transferred to storage across the SAN fabric. Bytes of data move across the system’s PCIe I/O bus, where they are read by the HBAs (or NICs if it’s NAS), translated into the appropriate protocol, converted to a serial stream, and sent across the SAN fabric. In most SANs the signal is read and retransmitted one or more times by edge switches and directors, then sent to the disk array controllers. From there it is converted from a serial stream back to parallel data, translated from the SAN fabric protocol, given block-level addressing, possibly stored in array cache, re-serialized for transmission to the disks, received and re-ordered by the disks for efficient writes, and finally written to the devices. For a data read, the process is reversed.
Like other technologies, however, there are pros and cons to using PCIe Flash storage:
- Plugs directly into the PCIe bus, eliminating latency from the HBAs, network protocols, SAN fabric, array controller latencies, and disk tray connections.
- PCIe is a point-to-point architecture, so each device connects to the host with its own serial link
- PCIe Gen 2 supports 8Gbps, roughly 33% faster than the 6Gbps SAS interface
- Little or no additional infrastructure is required to capitalize on flash storage performance
- Very simple to deploy and configure
- Extremely low power consumption, as compared with traditional 3.5-inch hard disks.
- Positions data in very close proximity to the system processors and cache structure
- Requires no additional physical space in the storage equipment rack
- The amount of SSD storage deployed is limited by the physical number of slots
- Some PCIe Flash cards are “tall” enough to block the availability of an adjacent slot
- Recent PCIe bus technology is required to support top performance (x4 or above)
- Internal PCIe storage cannot be shared by other servers like a shared SAN resource
- May require specialized software for the server to utilize it as internal cache or mass storage
- PCIe Flash may suffer quality issues if an Enterprise-Grade product is not purchased
- If the server goes down, content on the installed PCIe Flash becomes inaccessible
Like other SSD devices, PCIe Flash cards are expensive when compared to traditional disk storage. In Qtr4 of 2012, representative prices for 800GB of PCIe Flash storage are in the range of $3800 to $4500 each. Since a 15K RPM hard disk of similar capacity sells for $300 to $450, Flash memory remains about ten times more expensive on a cost-per-GB basis. However, since Solid State Disk (SSD) is about 21 times faster than electro-mechanical disk, it may be worth the investment if extremely fast performance is of utmost importance.
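Using the midpoints of the price ranges quoted above, the cost-per-GB gap works out roughly as follows (a sketch that assumes both devices are compared at 800GB of capacity):

```python
# Cost-per-GB comparison from the Q4 2012 street prices quoted above,
# taking the midpoint of each quoted range and assuming 800GB capacity for both.
flash_per_gb = (3800 + 4500) / 2 / 800   # PCIe Flash card
disk_per_gb = (300 + 450) / 2 / 800      # 15K RPM hard disk of similar capacity

print(round(flash_per_gb, 2))             # ~$5.19 per GB
print(round(disk_per_gb, 2))              # ~$0.47 per GB
print(round(flash_per_gb / disk_per_gb))  # ~11x, i.e. "about ten times"
```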
It’s hard to retire a perfectly good storage array. Budgets are tight, there’s a backlog of new projects in the queue, people are on vacation, and migration planning can be difficult. As long as there is not a compelling reason to take it out of service, it’s far easier to simply leave it alone and focus on more pressing issues.
While this may be the path of least resistance, it can come at a high price. There are a number of good reasons why upgrading storage arrays to modern technology may yield superior results and possibly save money too!
Capacity – When your aging disk array was installed several years ago, 300 GB, 10K RPM, FC disk drives were mainstream technology. It was amazing to realize you could squeeze up to 45 TB into a single 42U equipment rack! Times have changed. The same 10K RPM disk drive has tripled in capacity, providing 900 GB in the same 3.5 inch disk drive “footprint”. It’s now possible to get 135 TB (a threefold capacity increase) into the same equipment rack configuration. Since data center rack space currently costs around $3000 per month, that upgrade alone will dramatically increase capacity without incurring any increase in floor-space cost.
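The capacity arithmetic above checks out directly (the drive count per rack is derived from the figures in the paragraph, not from a specific tray layout):

```python
# Rack-capacity arithmetic from the figures above: same drive count,
# three times the per-drive capacity in the same 42U footprint.
drives_per_rack = 45_000 // 300          # 45 TB at 300GB/drive -> 150 drives

old_tb = drives_per_rack * 300 / 1000
new_tb = drives_per_rack * 900 / 1000
print(old_tb, new_tb)                    # 45.0 TB -> 135.0 TB, same rack
```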
Density – Previous generation arrays packaged from (12) to (15) 3.5 inch FC or SATA disk drives into a single rack-mountable 4U array. Modern disk arrays support from (16) 3.5 inch disks per 3U tray, to (25) 2.5 inch disks in a 2U tray. Special ultra-high density configurations may house up to (60) FC, SAS, or SATA disk drives in a 4U enclosure. As above, increasing storage density within an equipment rack significantly increases capacity while requiring no additional data center floor-space.
Energy Efficiency – Since the EPA’s IT energy efficiency study in 2007 (Report to Congress on Server and Data Center Energy Efficiency, Public Law 109-431), IT manufacturers have increased efforts to improve the energy efficiency of their products. This has resulted in disk drives that consume from 25% to 33% less energy, and storage array controllers lowering power consumption by up to 30%. That has had a significant impact on energy costs, including not only the power to run the equipment, but also power to operate the cooling systems needed to purge residual heat from the environment.
Controller Performance – Storage array controllers are little more than specialized servers designed specifically to manage such functions as I/O ports, disk mapping, RAID and cache operations, and execution of array-centric internal applications (such as thin provisioning and snapshots). Like any other server, storage controllers have benefited from advances in technology over the past few years. The current generation of disk arrays contain storage controllers with from 3 to 5 times the processing power of their predecessors.
Driver Compatibility – As newer technologies emerge, they tend to focus on developing software compatibility with the most recently released products and systems on the market. With the passage of time, it becomes less likely for storage arrays to be supported by the latest and greatest technology on the market. This may not impact daily operations, but it creates challenges when a need arises to integrate aging arrays with state-of-the-art systems.
Reliability – Common wisdom used to be that disk failure characteristics could be accurately represented by a “bathtub graph”. The theory was that the potential for failure was high when a disk was new. It then flattened out at a low probability throughout the disk’s useful life, then took a sharp upswing as it approached end-of-life. This model implied that extending disk service life had no detrimental effects until the disks approached end-of-life.
However, over the past decade, detailed studies by Google and other large organizations with massive disk farms have proven the “bathtub graph” model incorrect. Actual failure rates in the field indicate the probability of a disk failure increases by 10% – 20% for every year the disk is in service. This clearly shows the probability of failure increasing in a linear fashion over the disk’s service life. Extending disk service-life greatly increases the risk of disk failure.
Service Contracts – Many popular storage arrays are covered by standard three-year warranties. This creates a dilemma, since the useful service life of most storage equipment is considered to be four or five years. When the original warranty expires, companies must decide whether to extend the existing support contract (at a significantly higher cost), or transition to a time & materials basis for support (which can result in some very costly repairs).
Budgetary Impact – For equipment like disk arrays, it is far too easy to fixate on replacement costs (CAPEX), and ignore the ongoing cost of operational expenses (OPEX). This may avoid large upfront expenditures, but it slowly bleeds the IT budget to death by having to maintain increasingly inefficient, fault-prone, and power hungry equipment.
The solution is to establish a program of rolling equipment replenishment on a four- or five-year cycle. By regularly upgrading 20% to 25% of all systems each year, the IT budget is more manageable, equipment failures are controlled, and technical obsolescence remains in check.
Getting rid of familiar things can be difficult. But unlike your favorite slippers, the LazyBoy recliner, or your special coffee cup, keeping outdated storage arrays in service well beyond their prime can cost your organization plenty.
There’s a quiet revolution going on in large data centers. It’s not as visible or flashy as virtualization or deduplication, but it is at least equal in importance.
As its name implies, SAN “fabric” is a dedicated network that allows servers, storage arrays, backup & recovery systems, replication devices, and other equipment to pass data between systems. Traditionally this has been comprised of 4Gbps Fibre Channel and 1Gbps Ethernet channels. However, a new family of 8Gbps and 16Gbps Fibre Channel, 6Gbps and 12Gbps SAS, and 10Gbps Ethernet are quietly replacing legacy fabric with links capable of 2 – 4 times the performance.
The following is a comparison of the maximum throughput rates of various SAN fabric links:
Performance ranges from the relatively outdated 1Gbps channel (Ethernet or FC) capable of supporting data transfers of up to 100 MB per second, to 16Gbps Fibre Channel capable of handling 1940 MB per second. Since all are capable of full duplex (bi-directional) operations, the sustainable throughput rate is actually twice the speed indicated in the chart. If these blazing new speeds are still insufficient, 10Gbps Ethernet, 12Gbps SAS, and 16Gbps Fibre Channel can be “trunked” – bundled together to produce an aggregate bandwidth equal to the number of individual channels tied together. (For example, eight 16Gbps FC channels can be bundled to create a 128Gbps “trunk”.)
In addition to high channel speeds, 10Gbps Ethernet and 16Gbps Fibre Channel both implement a 64b/66b encoding scheme, rather than the 8b/10b encoding scheme used by lower performance channels. The encoding process improves the quality of the data transmission, but at a cost. An 8b/10b encoding process decreases available bandwidth by 20%, while 64b/66b encoding only reduces bandwidth by 3.03%. This significantly increases data transfer efficiency.
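The encoding overhead translates into effective throughput as follows. This is a simplified sketch that ignores protocol framing and the small differences between nominal and actual line rates:

```python
# Effective payload throughput after line-encoding overhead.
# Simplified: ignores protocol framing and exact signaling rates.

def effective_mb_per_s(line_rate_gbps, data_bits, total_bits):
    payload_gbps = line_rate_gbps * data_bits / total_bits
    return payload_gbps * 1000 / 8       # Gbps -> MB/s (decimal)

print(round(effective_mb_per_s(8, 8, 10)))     # 8Gbps with 8b/10b: 20% lost
print(round(effective_mb_per_s(10, 64, 66)))   # 10Gbps with 64b/66b: ~3% lost
```

The 8b/10b link surrenders a fifth of its bandwidth to encoding, while the 64b/66b link keeps nearly all of it, which is why the newer encoding matters as much as the raw speed bump.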
While 8/16Gbps Fibre Channel and 10Gbps Ethernet are changing the game at the front-end, SAS is revolutionizing the back-end disk drive connections as well. For over a decade, enterprise-grade disks had 2Gbps or 4Gbps ports, and were attached to a Fibre Channel Arbitrated Loop (FC-AL). Like any loop-based technology, light traffic enjoyed maximum speed but performance dropped off as demand increased. Under heavy load conditions, the back-end bus could become a bottleneck.
SAS changes that for two reasons. First, it uses switched technology, so every device attached to the controller “owns” 100% of the bus bandwidth. The latency “dog leg pattern” found on busy FC-AL busses is eliminated. Second, current SAS drives are shipping with 6Gbps ports, which are 50% faster than 4Gbps Fibre Channel. Just over the horizon are 12Gbps SAS speeds that will offer three times the bandwidth of 4Gbps Fibre Channel to the disks, and do it over switched (isolated) channels.
Recent improvements in fabric performance will support emerging SSD technology, and allow SANs to gracefully scale to support storage arrays staggering under a growth rate of 40% – 50% per year.
It’s easy to hold onto the concept that IT is all about systems, networks, and software. This has been accepted wisdom for the past 50 years. It’s a comfortable concept, but one that is increasingly inaccurate and downright dangerous as we move into an era of “big data”! Today’s world is not about systems, networks, applications, or the datacenter – it’s all about the data!
For decades accumulated data was treated as simply a by-product of information processing activities. However, there is growing awareness that stored information is not just digital “raw material”, but a corporate asset containing vast amounts of innate value. Like any other high-value asset, it can be bought, sold, traded, stolen, enhanced, or destroyed.
A good analogy for today’s large-scale storage array is a gold mine. Data is the nuggets of gold embedded in the mine. The storage arrays containing data are the “mine” that houses and protects it. Complex and sophisticated hardware, software, and skill-sets are simply tools used to locate, manipulate, and extract the “gold” (data assets) from its surrounding environment. The presence of high value “nuggets” is the sole reason the mining operation exists. If there were no “gold”, the equipment used to extract and manipulate it would be of little value.
This presents a new paradigm. For years storage was a secondary peripheral, considered only when new systems or applications were being deployed. Today storage has an identity of its own, independent from the other systems and software in the environment.
Data is no longer just a commodity or some type of operational residue left over from the computing process. “Big Data” forces a shift in focus from IT asset deployment and administration to the management of high-value data assets. It dictates that data assets sit at the center of concentric rings, ensuring security, recoverability, accessibility, performance, data manipulation, and other aspects of data retention are each addressed as distinct requirements in their own right. Now information must be captured, identified, valued, classified, assigned to resources, protected, managed according to policy, and ultimately purged from the system after its value to the organization has been expended.
This requires a fundamental change in corporate culture. As we move into an era of “big data”, the entire organization must be aware of information’s value as an asset, and of the shift away from technology-centric approaches to IT management. Just like the gold in the above analogy, users must recognize that all data is not “created equal”, and that it delivers different levels of value to an organization for specific periods of time. For example, financial records typically have a high level of inherent value, and retain that value for some defined period of time. (The Sarbanes-Oxley Act requires publicly-traded companies to maintain related audit documents for no less than seven years after the completion of an audit. Companies in violation can face fines of up to $10 million and prison sentences of up to 20 years for executives.)
However, differences in value must be recognized and managed accordingly. Last week’s memo about the cafeteria’s luncheon specials must not be retained and managed in the same fashion as an employee’s personnel record. When entered into the system, information should be classified according to a well-defined set of guidelines. With that classification it can be assigned to an appropriate storage tier, backed up on a regular schedule, kept available on active storage as necessary, and later written to low-cost archiving media to meet regulatory and litigation compliance needs. Once data no longer delivers value to an organization, it can be expired by policy, freeing up expensive resources for re-use.
This approach moves IT emphasis away from building systems tactically by simply adding more-of-the-same, and replacing it with a focus on sophisticated management tools and utilities that automate the process. Clearly articulated processes and procedures must replace “tribal lore” and anecdotal knowledge for managing the data repositories of tomorrow.
“Big Data” ushers in an entirely new way of thinking about information as stored, high-value assets. It forces IT Departments to re-evaluate their approach for management of data resources on a massive scale. At a data growth rate of 35% to 50% per year, business-as-usual is no longer an option. As aptly noted in a Bob Dylan song, “the times they are a-changin”. We must adapt accordingly, or suffer the consequences.
It is somewhat surprising just how many skilled IT specialists still shy away from replacing traditional internal boot disks with a Boot-from-SAN process. I realize old habits die hard, and there’s something reassuring about having the O/S find the default boot-block without needing human intervention. However, the price organizations pay for this convenience is not justifiable. It simply adds waste, complexity, and unnecessary expense to their computing environment.
Traditionally servers have relied on internal disk for initiating their boot-up processes. At start-up, the system BIOS executes a self-test, starts primitive services like the video output and basic I/O operations, then goes to a pre-defined disk block where the MBR (Master Boot Record) is located. For most systems, the Stage 1 Boot Loader resides on the first block of the default disk drive. The BIOS loads this data into system memory, which then continues to load Stage 2 Boot instructions and ultimately start the Operating System.
Due to the importance of the boot process and the common practice of loading the operating system on the same disk, two disk drives in a RAID1 (disk mirroring) configuration are commonly used to ensure high availability.
Ok, so far so good. Then what’s the problem?
The problem is the disks themselves. Unlike virtually every other subsystem in the server, these are electro-mechanical devices with the following undesirable traits:
- Power & Cooling – Unlike solid-state components, these devices take a disproportionately large amount of power to start and operate. A mirrored pair of 300GB, 15K RPM disks will consume around 0.25 amps and generate 95.6 BTUs per hour that must be removed by cooling. Each system with internal disk has its own miniature “space heater” that aggravates efforts to keep sensitive solid state components cool.
- Physical Space – Each 3.5-inch drive measures 1” x 4.0” x 5.76” (23.04 cubic inches), so a mirrored pair of disks represents an obstacle of 46.08 cubic inches that requires physical space, mounting provisions, power connections, airflow routing, and vibration dampening to reduce fatigue on itself and other internal components.
- Under-utilized Capacity – As disk drive technology advances, it becomes more economical to manufacture higher-capacity drives than to maintain an inventory of lower-capacity ones. Servers today therefore commonly ship with 300GB or 450GB boot drives. The problem is that Windows Server 2008 (or a similar O/S) needs less than 100GB of space, so roughly two-thirds of the disk’s capacity is wasted.
- Backup & Recovery – Initially everyone plans to keep only the O/S, patches and updates, log files, and related utilities on the boot disk. However, the local disk is far too convenient and eventually has other files “temporarily” put on it as well. Unfortunately some companies don’t include boot disks in their backup schedule, and risk losing valuable content if both disks are corrupted. (Note: RAID1 protects data from individual disk failures but not corruption.)
Boot-from-SAN does not involve a PXE or tftp boot over the network. It is an HBA BIOS setting that allows SAN disk to be recognized very early in the boot process as a valid boot device, then points the server to that location for the Stage 1 Boot Loader code. It eliminates any need for internal disk devices and moves the process to shared storage on the SAN. It also facilitates the rapid replacement of failed servers (all data and applications remain on the SAN), and is particularly useful for blade systems (where server “real-estate” is at a premium and optimal airflow is crucial).
The most common argument against Boot-from-SAN is “what if the SAN is not available?” On the surface that sounds like a valid point, but what is the chance of that occurring with well-designed SAN storage? How would it differ from the internal boot disk array failing to start? Even if the system booted internally and loaded the O/S, how much work could the server do if it could not reach the SAN? The consequences of a system failing to come up to an operational state are the same, regardless of whether it boots from the SAN or from internal disks.
For a handful of servers, this may not be a big deal. However, consider the impact on a datacenter running thousands of servers and the problem becomes obvious. For every thousand servers, Boot-from-SAN eliminates the expense of two thousand internal disks, roughly 250 amps of current, and about 95,600 BTU/hr of cooling; it also greatly simplifies equipment-rack airflow, reclaims 200TB of inaccessible capacity, and measurably improves storage manageability and data backup protection.
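The per-thousand-server figures above follow directly from the per-pair numbers quoted earlier. A quick back-of-the-envelope sketch (the inputs are the article's own estimates, so treat the output as illustrative):

```python
# Illustrative calculation of what Boot-from-SAN saves per 1,000 servers,
# scaling the per-mirrored-pair figures quoted earlier in the article.
SERVERS = 1000
DISKS_PER_SERVER = 2      # RAID1 mirrored pair
AMPS_PER_PAIR = 0.25      # ~0.25 A per pair of 300GB 15K RPM disks
BTU_HR_PER_PAIR = 95.6    # cooling load per pair
DISK_CAPACITY_GB = 300
OS_FOOTPRINT_GB = 100     # Windows Server 2008 or similar

disks_eliminated = SERVERS * DISKS_PER_SERVER
amps_saved = SERVERS * AMPS_PER_PAIR
btu_hr_saved = SERVERS * BTU_HR_PER_PAIR
# Usable (post-mirror) capacity stranded: one disk's unused space per server
stranded_tb = SERVERS * (DISK_CAPACITY_GB - OS_FOOTPRINT_GB) / 1000

print(f"Disks eliminated:   {disks_eliminated}")
print(f"Current saved:      {amps_saved:.0f} A")
print(f"Cooling saved:      {btu_hr_saved:,.0f} BTU/hr")
print(f"Capacity reclaimed: {stranded_tb:.0f} TB")
```

The stranded-capacity line counts only usable (mirrored) space; counting raw disk capacity would double it.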
Boot-from-SAN capability is built into most modern HBA BIOSes and is supported by almost every operating system and storage array on the market. Implementing this valuable capability should measurably improve the efficiency of your data center operation.
I’m frequently surprised by the number of companies that haven’t transitioned to a tiered storage structure. All data is not created equal: a powerful database may place extreme demands on storage, while word processing documents do not.
As we move into a new world of “big data”, more emphasis needs to be placed on making good decisions about what class of disk data should reside on. Although there are no universally accepted standards for storage tier designations, the breakdown frequently goes as follows:
Tier 0 – Solid state devices
Tier 1 – 15K RPM SAS or FC Disks
Tier 2 – 10K RPM SAS or FC Disks
Tier 3 – 7200 or 5400 RPM SATA (a.k.a. – NL-SAS) Disks
So why is a tiering strategy important for large quantities of storage? Let’s take a look at similar storage models for 1 petabyte of data:
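The original comparison was presented as a graph; the shape of it can be sketched with a simple cost model. All per-GB prices and the capacity mix below are hypothetical assumptions for illustration, not figures from the article:

```python
# Hypothetical per-GB "street prices" by tier -- assumptions for
# illustration only, not the article's actual data.
PRICE_PER_GB = {
    "Tier 0 (SSD)":         2.50,
    "Tier 1 (15K SAS/FC)":  0.90,
    "Tier 2 (10K SAS/FC)":  0.60,
    "Tier 3 (7.2K NL-SAS)": 0.25,
}

TOTAL_GB = 1_000_000  # 1 PB

def cost(mix):
    """mix maps tier name -> fraction of total capacity; returns dollars."""
    return sum(TOTAL_GB * frac * PRICE_PER_GB[tier] for tier, frac in mix.items())

# Flat model: everything on 15K disk
single_tier = cost({"Tier 1 (15K SAS/FC)": 1.0})

# Tiered model: a small hot set on fast media, the bulk on NL-SAS
tiered = cost({
    "Tier 0 (SSD)":         0.05,
    "Tier 1 (15K SAS/FC)":  0.15,
    "Tier 2 (10K SAS/FC)":  0.30,
    "Tier 3 (7.2K NL-SAS)": 0.50,
})

print(f"All Tier 1:  ${single_tier:,.0f}")
print(f"Tiered mix:  ${tiered:,.0f}")
print(f"Savings:     ${single_tier - tiered:,.0f}")
```

Whatever prices you plug in, the pattern holds: moving the cold majority of a petabyte onto the cheapest tier dominates the total, which is what the figures below reflect.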
The difference in disk drive expense alone is over $225,000, or around 30% of the equipment purchase price. Beyond the purchase price, a tiered structure:
- Reduces the initial purchase price by 25% or more
- Improves energy efficiency by 25% – 35%, lowering operational cost and cooling requirements
- Delivers substantial savings from reduced data center floorspace requirements
- Increases overall performance for all applications and databases
- Provides greater scalability and flexibility for matching storage to business growth patterns
- Frees additional resources for performance improvements (more ports, cache, controller power, etc.)
- Offers a high degree of modularity that helps avoid technical obsolescence
- May moderate the demand for technical staff needed to manage continual storage growth
On the other hand, a tiered structure:
- Requires automated, policy-based data migration software to operate efficiently
- Should employ enterprise-class frames for Tiers 0/1 and midrange arrays for Tiers 2/3
- Incurs roughly a 15% cost premium for enterprise-class storage to support Tier 0/1 disks
- Implements a more complex storage architecture that demands good planning and design
- Needs at least a rudimentary data classification effort for maximum effectiveness
So does the end justify the effort? That is for each company to decide. If data storage growth is fairly stagnant, it may be questionable whether the additional effort and expense are worth it. However, if you are staggering under a 30% – 50% CAGR in storage growth like most companies, the cost reduction, increased scalability, and performance improvements achieved may well justify the effort.