Blog Archives
Solid State Disks – Beyond the Sticker Shock!
NOTE: My original article contained embedded calculation errors that significantly distorted the end results. These problems have since been corrected. I apologize to anyone who was accidentally misled by this information, and sincerely thank those diligent readers who brought the issues to my attention.
Some issues seem so obvious they’re hardly worth considering. Everyone knows that Solid State Drives (SSD) are more energy-efficient than spinning disk. They don’t employ rotating platters, electro-mechanical motors and mechanical head movement for data storage, so they must consume less power – right? However, everyone also knows the cost of SSD is so outrageous that they can only be deployed for super-critical high performance applications. But does the reputation of having exorbitant prices still apply?
While these considerations may seem intuitive, they are not entirely accurate. Comparing the Total Cost of Ownership (TCO) for traditional electro-mechanical disks vs. Solid State Disks provides a clearer picture of the comparative costs of each technology.
Assumptions:
- For accuracy, this analysis compares the purchase price (CAPEX) and power consumption (OPEX) of only the disk drives, and does not include the expense of entire storage arrays, rack space, cooling equipment, etc.
- It uses the drive’s current “street price” for comparison. Individual vendor pricing may be significantly different, but the ratio between disk and SSD cost should remain fairly constant.
- The dollar amounts shown on the graph represent a 5-year operational lifecycle, which is fairly typical for production storage equipment.
- Energy consumption for cooling has also been included in the cost estimate, since cooling requires roughly the same amount of energy as the drives themselves consume to keep them in an operational state.
- 100 TB of storage capacity was arbitrarily selected to illustrate the effect of cost on a typical mid-sized SAN storage array.
The following graph illustrates the combined purchase price, plus energy consumption costs for several popular electro-mechanical and Solid State Devices.
From the above comparison, several conclusions can be drawn:
SSDs are Still Expensive – Solid State Drives remain an expensive alternative storage medium, but the price differential between SSD and electro-mechanical drives is coming down. As of this writing there is only a 5x price difference between the 800GB SSD and the 600GB, 15K RPM drive. While this is still a significant gap, it is far less than the staggering 10x to 20x price differential seen 3-4 years ago.
SSDs are very “Green” – A comparison of the Watts consumed during a drive’s “typical operation” indicates that SSD consumes about 25% less energy than 10K RPM, 2.5-inch drives, and about 75% less power than 15K RPM, 3.5-inch disks. Given that a) each Watt used by the disk requires roughly 1 Watt of power for cooling to remove the heat produced, and b) the cost per kWh continues to rise every year, this difference becomes a significant factor over a storage array’s 5-year lifecycle.
Extreme IOPS is a Bonus – Although more expensive, SSDs are capable of delivering 10 to 20 times more I/Os per second, potentially providing a dramatic increase in storage performance.
Electro-Mechanical Disks Cost Differential – There is a surprisingly small cost differential between 3.5 inch, 15K RPM drives and 2.5 inch 10K RPM drives. This may justify eliminating 10K disks altogether and deploying a less complex 2-tiered array using only 15K RPM disks and 7.2K disks.
Legacy 3.5 Inch Disks – Low-capacity legacy storage devices (<146GB) in a 3.5-inch drive form-factor consume too much energy to be practical in a modern, energy-efficient data center (this includes server internal disks). Any legacy disk drive smaller than 300 GB should be retired.
SATA/NL-SAS Disks are Inexpensive – This simply re-affirms what’s already known about SATA/NL-SAS disks. They are specifically designed to be inexpensive, modest performance devices capable of storing vast amounts of low-demand content on-line.
The incursion of Solid State Disks into the industry’s storage mainstream will have interesting ramifications not only for the current SAN/NAS arrays, but also may impact a diverse set of technologies that have been designed to tolerate the limitations of an electro-mechanical storage world. As they say, “It’s a brave new world”.
Widespread deployment of SSD will have a dramatic impact on storage technology itself. If SSDs can be implemented in a cost-effective fashion, why would anyone need an expensive and complex automated tiering system to demote data across multiple layers of disk? Given SSD’s speed, will current efforts to reduce RAID rebuild times still be necessary? If I/O bottlenecks are eliminated at the disk drive, what impact will that have on array controllers, data fabric, and HBAs/NICs residing upstream of the arrays?
While it is disappointing to find SSD technology still commands a healthy premium over electro-mechanical drives, don’t expect that to remain the case forever. As the technology matures, prices will decline as user acceptance grows and production volumes increase. Don’t be surprised to see SSD technology eventually end the mechanical disk’s 40-year dominance over the computer industry.
For those of you interested in examining the comparison calculations, I’ve included the following spreadsheet excerpts, which contain the detailed information used to create the graph.
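For readers who want to experiment with the numbers themselves, here is a minimal sketch of the calculation method. The drive prices, wattages, and electricity rate below are illustrative assumptions, not the values used to build the graph above.

```python
# Rough 5-year TCO sketch for a 100 TB capacity target:
# purchase price (CAPEX) plus power and cooling (OPEX).
# All figures below are assumed for illustration only.

DRIVES = {
    # name: (capacity_gb, street_price_usd, typical_watts)
    "800GB SSD":       (800,  1200, 6),
    "600GB 15K SAS":   (600,   250, 14),
    "900GB 10K SAS":   (900,   220, 9),
    "3TB 7.2K NL-SAS": (3000,  180, 11),
}

TARGET_TB = 100
KWH_RATE = 0.12        # assumed $/kWh
YEARS = 5
COOLING_FACTOR = 2.0   # ~1 W of cooling for every 1 W the drive consumes

def five_year_tco(capacity_gb, price, watts):
    drives = -(-TARGET_TB * 1000 // capacity_gb)   # ceiling division
    capex = drives * price
    kwh = drives * watts * COOLING_FACTOR * 24 * 365 * YEARS / 1000
    return drives, capex, kwh * KWH_RATE

for name, spec in DRIVES.items():
    drives, capex, opex = five_year_tco(*spec)
    print(f"{name:16} x{drives:4}  purchase ${capex:>9,.0f}  "
          f"5-yr power+cooling ${opex:>9,.0f}  total ${capex + opex:>10,.0f}")
```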
Enhanced Commodity Storage – Do You Believe in Magic?
With predictable regularity someone surfaces on the Web, claiming they have discovered a way to turn slow SATA arrays into high-performance storage. Their method usually involves adding complex and sophisticated software to reallocate and optimize system resources. While there may be a few circumstances where this works, in reality the opposite is usually true.
The problem with this concept is similar to the kit car world several decades ago. At the time, kit-build sports cars were all the rage. Automobile enthusiasts were intrigued by the idea of building a phenomenal sports car by mounting a sleek fiberglass body on the chassis of a humble Volkswagen Beetle. Done properly, the results were amazing! As long as their workmanship was good, the end results would rival the appearance of a Ferrari, Ford GT-40, or Lamborghini!
However, this grand illusion disappeared the minute its proud owner started the engine. Despite its stunning appearance, the kit car was still built on top of an anemic VW bug chassis, power train, and suspension!
Today we see a similar illusion being promoted by vendors claiming to offer “commodity storage” capable of delivering the same high performance as complex SAN and NAS systems. Overly enthusiastic suppliers push the virtues of cheap “commodity” storage arrays with amazing capabilities as a differentiator in this highly competitive market. The myth is perpetuated within the industry by a general lack of understanding of the underlying disk technology characteristics, and a desperate need to manage shrinking IT budgets, coupled with a growing demand for storage capacity.
According to this technical fantasy, underlying hardware limitations don’t count. In theory, if you simply run a bunch of complex software functions on the storage array controllers, you somehow repeal the laws of physics and get “something for nothing”.
That sounds appealing, but it unfortunately just doesn’t work that way. Like the kit car’s Achilles heel, hardware limitations of underlying disk technology govern the array’s capabilities, throughput, reliability, scalability, and price.
• Drive Latencies – The inherent latency incurred while moving read/write heads and rotating platters until the appropriate sector is available varies significantly between drive classes.
For example, comparing performance of a 300GB, 15K RPM SAS disk to a 3TB 7200 RPM SATA disk produces the following results:
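To put rough numbers on that difference, here is a back-of-the-envelope estimate of per-spindle random IOPS. The seek times are assumed typical figures for illustration, not the original comparison data.

```python
# Theoretical random IOPS per spindle from seek time plus rotational latency.
# Seek times below are illustrative assumptions.

def max_iops(rpm, avg_seek_ms):
    # Average rotational latency is half a revolution.
    rotational_ms = (60_000 / rpm) / 2
    service_time_ms = avg_seek_ms + rotational_ms
    return 1000 / service_time_ms

for label, rpm, seek in [("300GB 15K RPM SAS", 15_000, 3.5),
                         ("3TB 7.2K RPM SATA", 7_200, 9.0)]:
    print(f"{label}: ~{max_iops(rpm, seek):.0f} random IOPS per spindle")
```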
• Controller Overhead – Masking SATA performance by adding processor capabilities may not be the answer either. Call it what you will – controller, SP, NAS head, or something else – a storage controller is simply a dedicated server performing specialized storage operations. This means controllers can become overburdened by loading multiple sophisticated applications onto them. More complex processing also means the controller consumes additional internal resources (memory, bandwidth, cache, I/O queues, etc.). As real-time capabilities like thin provisioning, automated tiering, deduplication, and data compression are added, the array’s throughput will diminish.
• “Magic” Cache – This is another area where lots of smoke-and-mirrors can be found. Regardless of the marketing hype, cache is still governed by the laws of physics and has predictable characteristics. If you put a large amount of cache in front of slow SATA disk, your systems will run really fast – as long as the requested data is already located in cache. When it isn’t, the system must go out to slow SATA disk and use the same data retrieval process as any other disk access. The same is true when cache is periodically flushed to disk to protect data integrity. Cache is a great tool that can significantly enhance the performance of a storage array. However, it is expensive, and it will never act as a “black box” that somehow makes slow SATA disk perform like 15K RPM SAS disks.
• Other Differences – Additional differentiators between “commodity storage” and high performance storage include available I/Os per second, disk latency, RAID level selected, IOPS per GB capability, MTBF reliability, and the Bit Error Rate.
When citing the benefits of “tricked out” commodity storage, champions of this approach usually point to obscure white papers written by social media providers, universities, and research labs. These may make for interesting reading, but they seldom have much in common with production IT operations and “the real world”. Most universities and research labs struggle with restricted funding, and must turn to highly creative (and sometimes unusual) methods to achieve specific functions from less-than-optimal equipment. Large social media providers seldom suffer from budget constraints, but they create non-standard solutions to meet highly specialized, stable, and predictable usage scenarios. These may illustrate interesting uses of technology, but they have little value for mainstream IT operations.
As with most things in life, “you can’t get something for nothing”, and the idea of somehow enhancing commodity storage to meet all enterprise data requirements is no exception.
Rethinking “Big Data” – Not All Content Has Value
For the past several years the business community and IT industry have been buzzing about “Big Data”. The Holy Grail of business is to become a “data-driven enterprise” by efficiently mining vast amounts of internal and external data. Identifying unforeseen relationships is considered an excellent method to drive sales growth and extend a company’s market share. While there may be value in this approach, “Big Data” analysis will only be as successful as the value of the stored content it examines.
Since the beginning of the computer industry, organizations have collected and stored amounts of information well beyond what the law requires. Management in general holds the belief that legacy data may contain vast treasure troves of unidentified residual value. In some cases it has been justified, since an ability to recall and examine historical content has proven to identify valuable relationships. However, in some situations it is questionable just how significant the recently discovered patterns and associations may be.
Recently a new wave of analytical tools and data structures has emerged to capitalize on the growing pool of stored data. They provide new capabilities to combine and analyze dissimilar information, produce associations between obscure facts, and allow vast quantities of data to be inspected for unexpected relationships. The application of these tools provides new methods for analyzing customer needs, trends, and buying patterns.
While some of the retained data may yield valuable insight into an organization’s market, expecting everything in the archive to hold such nuggets is unrealistic, problematic, and prohibitively expensive to maintain.
Changing Customer Priorities – Today’s markets are highly dynamic, with major change occurring randomly and on a frequent basis. Much of the captured information has a finite shelf life. Over time customers experience life-changing events, families mature and disperse, personal finances improve or decline, and individual priorities shift. Critical buying patterns of a decade ago may have little relevance in today’s market.
The Impact of External Events – Recent political, economic, and natural phenomena have re-shaped our society. Dramatic changes in our travel patterns occurred after 9/11. Hurricane Katrina, along with the Indonesian and Japanese tsunamis, affected our thinking about preparations for natural disasters. Senseless killings at a theater in Aurora CO, a shopping mall in Tucson AZ, and a Sikh temple in Milwaukee WI make us reconsider our attendance at social events and modify entertainment plans. A protracted global recession negatively impacts spending trends, financial investments, retirement plans, and even our expectations for the future. Key indicators of a decade ago may provide marginal value today.
Attrition of Value – Another issue with analyzing vast quantities of stored legacy data is the long-term retention of content with questionable business value. All data is not created equal! Details about receivables may hold value for many years, while a file about last week’s cafeteria specials is almost worthless by the following week. A good example of this is a management PowerPoint presentation sent to all employees. The original copy may retain its importance for an extended period of time, but dozens of identical copies kept in user accounts provide little incremental value.
Content Duplication – In any given SAN it is typical to find several outdated copies of the same data, abandoned “clone” files once needed for testing, data with expired business value, unnecessary copies of temp files, orphaned directories from departed users, and residue left from ancient applications and databases. Unless a continuous process is in place to prune and update active storage, licensing costs for “Big Data” analytical tools and systems may be prohibitive. Even if the IT budget can absorb the cost, performance will suffer from having to load, filter, and index huge quantities of irrelevant data.
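A simple first pass at the duplication problem is to flag byte-identical copies by hashing file contents. The sketch below shows the idea; the directory path is just a placeholder, and real cleanup efforts would of course layer policy and review on top of this.

```python
# Minimal sketch: find byte-identical duplicate files under a directory
# tree by hashing their contents. The root path is a placeholder.
import hashlib
import os
from collections import defaultdict

def find_duplicates(root):
    by_digest = defaultdict(list)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            by_digest[h.hexdigest()].append(path)
    return {d: paths for d, paths in by_digest.items() if len(paths) > 1}

for digest, paths in find_duplicates("/data/share").items():
    print(f"{len(paths)} copies: {paths}")
```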
While valuable insight may be gained from customer content buried deep within an organization’s data repository, due diligence should be performed on existing content to verify its value and uniqueness. The old saying “garbage in-garbage out” is just as valid in today’s “Big Data” world as it was in the heyday of the mainframe.
Modular Datacenter Units – The End of Traditional Enterprise Datacenters?
Traditional brick-and-mortar datacenters have been a mainstay of enterprise computing since the days of the mainframe. IT systems were kept in isolation in windowless, highly secure facilities that provided a constant temperature and humidity environment on a 7×24 basis. Although the cost of building new datacenters continues to increase substantially, until now relatively few options have been available.
However, with the development of the portable modular datacenter, the day of the traditional datacenter may be coming to an end. While there are several variations on the market, the most promising appears to be the completely built out facility. New datacenter modules are built from ISO standard shipping containers. They incorporate chillers, power and communications buses, forced air cooling, equipment racks, and all other components necessary for a modern datacenter. These units can be trucked to any location, moved into position on a concrete pad, connected to external resources, and be ready for systems build-out on short notice. They can be configured to operate as a singular unit, multiple units, and even as stacked arrays of modular datacenter units.
In addition to serving as a modular replacement for traditional brick-and-mortar datacenters, there are other possibilities for portable modular datacenters:
RAPID DEPLOYMENT MODULES – For situations where rapid implementation is a key driver, or when companies simply can’t wait the 18-24 months for a new datacenter build-out.
COST CONTAINMENT – Situations where minimizing the cost for building a new datacenter facility is a primary objective
DISASTER RECOVERY – A highly flexible, cost-effective IT environment that can be deployed remotely for a Disaster Recovery solution
CAPACITY-ON-DEMAND – Modular, self-contained units that permit companies to add new datacenter capacity only as required (Capacity-as-a-Service?)
TEMPORARY FACILITIES – Allows companies to continue to support ongoing IT operations while a permanent datacenter facility is built
SEGREGATED SYSTEMS – Enables complete isolation of specific IT operations in an otherwise shared environment (Community Cloud?)
DYNAMIC MARKETS – A solution for highly volatile markets where future capacity requirements are difficult to predict
EMERGENCY CAPACITY – Available for relatively rapid deployment when an organization’s primary datacenter runs out of floor space
SYNCHRONOUS REPLICATION – Allows the implementation of a small replication site within 40KM of the primary datacenter to support synchronous replication while maintaining database consistency
MOBILE SYSTEMS – A portable IT solution that could be relocated to a different region in response to changing corporate needs or an impending disaster (such as a major hurricane).
PREFABRICATED SUB-SYSTEMS – A transportable platform for high growth companies who must buy integrated sub-systems from an external vendor, rather than building the equipment themselves.
REPURPOSING OF BUILDINGS – Modular units may be installed within existing buildings that are sitting idle, as long as adequate resources (power and communications) are available.
Another big benefit of portable modular datacenter units is that they are built in a factory to exact specifications. As such, they benefit from repetitive manufacturing processes and ongoing quality assurance reviews. Each module features the same level of quality and reliability as its peers. This is in sharp contrast to traditional brick-and-mortar datacenters, which are normally built as one-off custom configurations.
The concept of the portable modular datacenter unit is pretty clever. If there are any downsides to this technology, they are not readily apparent. Although this represents a relatively new approach, it appears to be distinctly superior to what’s been done in the past. Don’t be surprised to see a new modular datacenter unit being installed on a concrete pad near you in the foreseeable future.
Storage System Refresh – Making a Case for Mandatory Retirement
It’s hard to retire a perfectly good storage array. Budgets are tight, there’s a backlog of new projects in the queue, people are on vacation, and migration planning can be difficult. As long as there is not a compelling reason to take it out of service, it is far easier to simply leave it alone and focus on more pressing issues.
While this may be the path of least resistance, it can come at a high price. There are a number of good reasons why upgrading storage arrays to modern technology may yield superior results and possibly save money too!
Capacity – When your aging disk array was installed several years ago, 300 GB, 10K RPM, FC disk drives were mainstream technology. It was amazing to realize you could squeeze up to 45 TB into a single 42U equipment rack! Times have changed. The same class of 10K RPM disk drive has tripled in capacity, providing 900 GB in the same disk drive “footprint”. It’s now possible to get 135 TB (a threefold capacity increase) into the same equipment rack configuration. Since data center rack space currently costs around $3,000 per month, that upgrade alone dramatically increases capacity without any increase in floor-space cost.
Density – Previous-generation arrays packaged (12) to (15) 3.5-inch FC or SATA disk drives into a single rack-mountable 4U enclosure. Modern disk arrays support (16) 3.5-inch disks per 3U tray, or (25) 2.5-inch disks in a 2U tray. Special ultra-high-density configurations may house up to (60) FC, SAS, or SATA disk drives in a 4U enclosure. As above, increasing storage density within an equipment rack significantly increases capacity while requiring no additional data center floor-space.
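The rack-capacity arithmetic is straightforward. The sketch below reproduces the 45 TB and 135 TB figures above and adds a dense-enclosure example; the assumption of ten usable enclosure bays per rack is illustrative, and controller and switch space is ignored.

```python
# Rack-capacity arithmetic: usable TB per 42U rack for a few enclosure
# generations. Ten enclosure bays per rack is an illustrative assumption.
CONFIGS = [
    # (description, enclosures_per_rack, drives_per_enclosure, drive_tb)
    ("Legacy 4U trays, 300GB 10K FC",   10, 15, 0.3),
    ("Same trays, 900GB 10K drives",    10, 15, 0.9),
    ("Dense 4U trays, 60 x 3TB NL-SAS", 10, 60, 3.0),
]

for desc, enclosures, drives, tb in CONFIGS:
    print(f"{desc:32} {enclosures * drives * tb:7.1f} TB per rack")
```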
Energy Efficiency – Since the EPA’s IT energy efficiency study in 2007 (Report to Congress on Server and Data Center Energy Efficiency, Public Law 109-431), IT manufacturers have increased efforts to improve the energy efficiency of their products. This has resulted in disk drives that consume from 25% to 33% less energy, and storage array controllers lowering power consumption by up to 30%. That has had a significant impact on energy costs, including not only the power to run the equipment, but also power to operate the cooling systems needed to purge residual heat from the environment.
Controller Performance – Storage array controllers are little more than specialized servers designed specifically to manage such functions as I/O ports, disk mapping, RAID and cache operations, and execution of array-centric internal applications (such as thin provisioning and snapshots). Like any other server, storage controllers have benefited from advances in technology over the past few years. The current generation of disk arrays contains storage controllers with 3 to 5 times the processing power of their predecessors.
Driver Compatibility – As newer technologies emerge, they tend to focus on developing software compatibility with the most recently released products and systems on the market. With the passage of time, it becomes less likely for storage arrays to be supported by the latest and greatest technology on the market. This may not impact daily operations, but it creates challenges when a need arises to integrate aging arrays with state-of-the-art systems.
Reliability – Common wisdom used to be that disk failure characteristics could be accurately represented by a “bathtub curve”. The theory was that the potential for failure was high when a disk was new, flattened out at a low probability throughout the disk’s useful life, then turned sharply upward as the disk approached end-of-life. This model implied that extending disk service life had no detrimental effects until the drives approached end-of-life.
However, over the past decade detailed studies by Google and other large organizations with massive disk farms have proven the “bathtub curve” model incorrect. Actual failure rates in the field indicate the probability of a disk failure increases by 10% – 20% for every year the disk is in service. The data clearly show the probability of failure increasing in a roughly linear fashion over the disk’s service life, so extending disk service life greatly increases the risk of disk failure.
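To see what a steadily rising failure rate means over a full service life, here is a small sketch using an assumed 2% first-year annualized failure rate that grows about 15% per year. The figures are illustrative only, not from the studies cited above.

```python
# Illustrative only: cumulative probability that a drive has failed by the
# end of each service year, with an annualized failure rate (AFR) that
# starts at 2% and grows ~15% per year (assumed values).
BASE_AFR = 0.02
ANNUAL_GROWTH = 0.15

survival = 1.0
for year in range(1, 8):
    afr = BASE_AFR * (1 + ANNUAL_GROWTH) ** (year - 1)
    survival *= (1 - afr)
    print(f"Year {year}: AFR {afr:5.1%}, cumulative failure {1 - survival:5.1%}")
```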
Service Contracts – Many popular storage arrays are covered by standard three-year warranties. This creates a dilemma, since the useful service life of most storage equipment is considered to be four or five years. When the original warranty expires, companies must decide whether to extend the existing support contract (at a significantly higher cost), or transition to a time-and-materials basis for support (which can result in some very costly repairs).
Budgetary Impact – For equipment like disk arrays, it is far too easy to fixate on replacement costs (CAPEX), and ignore the ongoing cost of operational expenses (OPEX). This may avoid large upfront expenditures, but it slowly bleeds the IT budget to death by having to maintain increasingly inefficient, fault-prone, and power hungry equipment.
The solution is to establish a program of rolling equipment replenishment on a four- or five-year cycle. By regularly upgrading 20% to 25% of all systems each year, the IT budget is more manageable, equipment failures are controlled, and technical obsolescence remains in check.
Getting rid of familiar things can be difficult. But unlike your favorite slippers, the LazyBoy recliner, or your special coffee cup, keeping outdated storage arrays in service well beyond their prime can cost your organization plenty.
SAN Fabric for the Next Generation
There’s a quiet revolution going on in large data centers. It’s not as visible or flashy as virtualization or deduplication, but it is at least equal in importance.
As its name implies, SAN “fabric” is a dedicated network that allows servers, storage arrays, backup & recovery systems, replication devices, and other equipment to pass data between systems. Traditionally this has been comprised of 4Gbps Fibre Channel and 1Gbps Ethernet channels. However, a new family of 8Gbps and 16Gbps Fibre Channel, 6Gbps and 12Gbps SAS, and 10Gbps Ethernet links is quietly replacing legacy fabric with connections capable of 2 to 4 times the performance.
The following is a comparison of the maximum throughput rates of various SAN fabric links:
Performance ranges from the relatively outdated 1Gbps channel (Ethernet or FC) capable of supporting data transfers of up to 100 MB per second, to 16Gbps Fibre Channel capable of handling 1940 MB per second. Since all are capable of full duplex (bi-directional) operations, the sustainable throughput rate is actually twice the speed indicated in the chart. If these blazing new speeds are still insufficient, 10Gbps Ethernet, 12Gbps SAS, and 16Gbps Fibre Channel can be “trunked” – bundled together to produce an aggregate bandwidth equal to the number of individual channels tied together. (For example, eight 16Gbps FC channels can be bundled to create a 128Gbps “trunk”.)
In addition to high channel speeds, 10Gbps Ethernet and 16Gbps Fibre Channel both implement a 64b/66b encoding scheme, rather than the 8b/10b encoding scheme used by lower performance channels. The encoding process improves the quality of the data transmission, but at a cost. An 8b/10b encoding process decreases available bandwidth by 20%, while 64b/66b encoding only reduces bandwidth by 3.03%. This significantly increases data transfer efficiency.
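The effective rates quoted in this post follow directly from the nominal link speed and the encoding efficiency. A quick sketch of that arithmetic (a simplification that ignores framing and protocol overhead):

```python
# Effective data rate per direction = nominal Gbps x (data bits / total bits) / 8.
LINKS = [
    # (name, nominal_gbps, data_bits, total_bits)
    ("4Gbps Fibre Channel",   4,  8, 10),   # 8b/10b encoding
    ("8Gbps Fibre Channel",   8,  8, 10),   # 8b/10b encoding
    ("10Gbps Ethernet",      10, 64, 66),   # 64b/66b encoding
    ("16Gbps Fibre Channel", 16, 64, 66),   # 64b/66b encoding
]

for name, gbps, data_bits, total_bits in LINKS:
    efficiency = data_bits / total_bits
    mb_per_sec = gbps * 1e9 * efficiency / 8 / 1e6
    print(f"{name}: ~{mb_per_sec:,.0f} MB/s per direction "
          f"(encoding overhead {1 - efficiency:.2%})")
```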
While 8/16Gbps Fibre Channel and 10Gbps Ethernet are changing the game at the front-end, SAS is revolutionizing the back-end disk drive connections as well. For over a decade, enterprise-grade disks had 2Gbps or 4Gbps ports and were attached to a Fibre Channel Arbitrated Loop (FC-AL). Like any shared-loop technology, it delivered maximum speed under light traffic, but performance dropped off as demand increased. Under heavy load conditions, the back-end bus could become a bottleneck.
SAS will change that for two reasons. First, it uses switched technology, so every device attached to the controller “owns” 100% of the bus bandwidth; the latency “dog leg pattern” found on busy FC-AL buses is eliminated. Second, current SAS drives are shipping with 6Gbps ports, which are 50% faster than 4Gbps Fibre Channel. Just over the horizon are 12Gbps SAS speeds that will offer three times the bandwidth of 4Gbps Fibre Channel to the disks, and do it over switched (isolated) channels.
Recent improvements in fabric performance will support emerging SSD technology, and allow SANs to gracefully scale to support storage arrays staggering under a growth rate of 40% – 50% per year.
“Big Data” Challenges our Perspective of Technology
It’s easy to hold onto the concept that IT is all about systems, networks, and software. This has been accepted wisdom for the past 50 years. It’s a comfortable concept, but one that is increasingly inaccurate and downright dangerous as we move into an era of “big data”! In today’s world it’s not about systems, networks, applications, or the datacenter – it’s all about the data!
For decades accumulated data was treated as a simple by-product of information processing activities. However, there is growing awareness that stored information is not just digital “raw material”, but a corporate asset containing vast amounts of innate value. Like any other high-value asset, it can be bought or sold, traded, stolen, enhanced, or destroyed.
A good analogy for today’s large-scale storage array is a gold mine. Data is the gold embedded in the mine. The storage arrays containing data are the “mine” that houses and protects it. Complex and sophisticated hardware, software, tools, and skill sets are simply the means used to locate, manipulate, and extract the “gold” (data assets) from its surrounding environment. The presence of high-value “nuggets” is the sole reason the mining operation exists. If there were no “gold”, the equipment used to extract and manipulate it would be of little value.
This presents a new paradigm. For years storage was treated as a secondary peripheral, considered only when new systems or applications were being deployed. Today storage has an identity of its own, independent of the other systems and software in the environment.
Data is no longer just a commodity or some type of operational residue left over from the computing process. “Big Data” forces a shift in focus from deploying and administering IT assets to managing high-value data assets. It dictates that data assets sit at the center of concentric rings, ensuring that security, recoverability, accessibility, performance, data manipulation, and other aspects of data retention are addressed as distinct requirements in their own right. Now information must be captured, identified, valued, classified, assigned to resources, protected, managed according to policy, and ultimately purged from the system after its value to the organization has been expended.
This requires a fundamental change in corporate culture. As we move into an era of “big data” the entire organization must be aware of information’s value as an asset, and embrace the shift away from technology-centric approaches to IT management. Just like gold in the above analogy, users must recognize that all data is not “created equal”; it delivers different levels of value to an organization for specific periods of time. For example, financial records typically have a high level of inherent value, and retain that value for some defined period of time. (The Sarbanes-Oxley Act requires publicly traded companies to maintain related audit documents for no less than seven years after the completion of an audit. Companies in violation can face fines of up to $10 million and prison sentences of 20 years for executives.)
However, differences in value must be recognized and managed accordingly. Last week’s memo about the cafeteria’s luncheon specials must not be retained and managed in the same fashion as an employee’s personnel record. When entered into the system, information should be classified according to a well-defined set of guidelines. With that classification it can be assigned to an appropriate storage tier, backed up on a regular schedule, kept available on active storage as long as necessary, and later written to low-cost archive media to meet regulatory and litigation compliance needs. Once data no longer delivers value to the organization, it can be expired by policy, freeing up expensive resources for re-use.
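A minimal sketch of what policy-driven placement and expiration might look like is shown below. The classification labels, retention periods, and tier names are hypothetical examples, not a prescribed standard.

```python
# Sketch of policy-driven data placement and expiration.
# Classification labels, retention periods, and tier names are hypothetical.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Record:
    name: str
    classification: str   # e.g. "financial", "hr", "ephemeral"
    created: date

# Hypothetical policy table: (retention period, storage tier) per class.
POLICY = {
    "financial": (timedelta(days=7 * 365), "tier1-then-archive"),
    "hr":        (timedelta(days=5 * 365), "tier2"),
    "ephemeral": (timedelta(days=30),      "tier3"),
}

def disposition(rec: Record, today: date) -> str:
    retention, tier = POLICY[rec.classification]
    if today - rec.created > retention:
        return "expire"          # value exhausted: purge per policy
    return tier                  # otherwise keep on the assigned tier

print(disposition(Record("cafeteria_menu.doc", "ephemeral", date(2012, 6, 1)),
                  date(2012, 8, 1)))
```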
This approach moves IT emphasis away from building systems tactically by simply adding more-of-the-same, and replacing it with a focus on sophisticated management tools and utilities that automate the process. Clearly articulated processes and procedures must replace “tribal lore” and anecdotal knowledge for managing the data repositories of tomorrow.
“Big Data” ushers in an entirely new way of thinking about information as stored, high-value assets. It forces IT Departments to re-evaluate their approach for management of data resources on a massive scale. At a data growth rate of 35% to 50% per year, business-as-usual is no longer an option. As aptly noted in a Bob Dylan song, “the times they are a-changin”. We must adapt accordingly, or suffer the consequences.
16 Gbps Fibre Channel – Do the Benefits Outweigh the Cost?
With today’s technology there can be no status quo. As the IT industry advances, so must each organization’s efforts to embrace new equipment, applications, and approaches. Without an ongoing process of improvement, IT infrastructures progressively become outdated and the business group they support grows incrementally less effective.
In September of 2010, the INCITS T11.2 Committee ratified the standard for 16Gbps Fibre Channel, ushering in the next generation of SAN fabric. Unlike Ethernet, Fibre Channel is designed for one specific purpose – low-overhead transmission of block data. While this capability may be less important for smaller requirements where convenience and simplicity are paramount, it is critical for larger datacenters where massive storage repositories must be managed, migrated, and protected. For this environment, 16Gbps offers more than twice the bandwidth of the current 8Gbps SAN fabric and roughly 60% more bandwidth than the recently released 10Gbps Ethernet with FCoE (Fibre Channel over Ethernet).
But is an investment in 16Gbps Fibre Channel justified? If a company has reached a point where SAN fabric is approaching saturation or SAN equipment is approaching retirement, then definitely yes! Here is how 16Gbps stacks up against both slower fibre channel implementations and with 10Gbps Ethernet.
Emulex Model | Port Speed | Protocol | Average HBA/NIC Price | Transfer Rate | Transfer Time for 10 TB | Bandwidth Cost per MB/sec. | Bandwidth Difference
LPE16002 | 16 Gbps | Fibre Channel | $1,808 | 1939 MB/sec. | 1.43 Hrs. | $0.93 | 160%
OCe11102 | 10 Gbps | Ethernet | $1,522 | 1212 MB/sec. | 2.29 Hrs. | $1.26 | 100%
LPe12002 | 8 Gbps | Fibre Channel | $1,223 | 800 MB/sec. | 3.47 Hrs. | $1.53 | 65%
LPe11000 | 4 Gbps | Fibre Channel | $891 | 400 MB/sec. | 6.94 Hrs. | $2.23 | 32%
This table highlights several differences between 4/8/16 Gbps Fibre Channel and 10Gbps Ethernet with FCoE technology (sometimes marketed as Unified Storage). Street prices from a popular I/O controller manufacturer clearly indicate relatively small differences between controller prices, particularly for the faster controllers. Although the 16Gbps HBA delivers roughly 60% more bandwidth than the 10Gbps Ethernet adapter, it is less than 20% more expensive!
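For those who want to check the arithmetic, the derived columns follow directly from the street price and transfer rate. A quick sketch, using the model names and figures from the table above (transfer times shown for a 10 TB move in one direction):

```python
# Derived columns from the HBA table: transfer time for 10 TB and
# bandwidth cost per MB/s, with 10Gbps Ethernet as the 100% reference.
HBAS = [
    # (model, price_usd, transfer_mb_per_sec)
    ("LPE16002 16Gbps FC",  1808, 1939),
    ("OCe11102 10Gbps Eth", 1522, 1212),
    ("LPe12002 8Gbps FC",   1223,  800),
    ("LPe11000 4Gbps FC",    891,  400),
]

BASELINE = 1212   # 10Gbps Ethernet transfer rate

for model, price, rate in HBAS:
    hours_10tb = 10_000_000 / rate / 3600      # 10 TB = 10,000,000 MB
    cost_per_mb_s = price / rate
    relative = rate / BASELINE
    print(f"{model:20} {hours_10tb:5.2f} hrs  ${cost_per_mb_s:4.2f}/MB-s  "
          f"{relative:4.0%} of 10GbE bandwidth")
```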
However, a far more important issue is that 16Gbps fibre channel is backward compatible with existing 4/8 Gbps SAN equipment. This allows segments of the SAN to be gradually upgraded to leading-edge technology without having to suffer the financial impact of legacy equipment rip-and-replace approaches.
In addition to providing a robust, purpose-built infrastructure for migrating large blocks of data, 16Gbps Fibre Channel also offers lower power consumption per port, a simplified cabling infrastructure, and the ability to “trunk” (combine) channel bandwidth up to 128Gbps! It doubles the number of ports and available bandwidth in the same 4U rack space for edge switches, providing potential savings of over $3300 per edge switch.
Even more significant is that 16Gbps provides the additional performance necessary to support the next generation of storage, which will be based on 6Gbps and 12Gbps SAS disk drives. Unlike legacy FC storage, which was based upon 4Gbps FC-AL arbitrated loops, the new SAS arrays are on switched connections. Switching provides a point-to-point connection for each disk drive, ensuring every 6Gbps SAS connection (or in the near future, 12Gbps SAS connection) will have a direct connection to the SAN fabric. This eliminates backend saturation of legacy array FC-AL shared busses, and will place far greater demand for storage channel performance on the SAN fabric.
So do the benefits of 16Gbps Fibre Channel outweigh its modest price premium? Like many things in life – it depends! Block-based 16Gbps Fibre Channel SAN fabric is not for every storage requirement, but neither is Ethernet-based FCoE or iSCSI. For a departmental storage requirement, or an environment where NAS or iSCSI has previously been deployed, replacing the incumbent protocol with 16Gbps Fibre Channel may or may not have merit. However, large SAN storage arrays are particularly dependent on high-performance equipment specifically designed for efficient data transfers. This is an arena where the capabilities and attributes of 16Gbps Fibre Channel will shine.
In any case, the best protection against making a poor choice is to thoroughly research the strengths and weaknesses of each technology and seek out professional guidance from a vendor-neutral storage expert with a Subject Matter Expert level understanding of the storage industry and its technology.
Storage Tiers – Putting Data in Its Place
I’m frequently surprised by the number of companies who haven’t transitioned to a tiered storage structure. All data is not created equal. While a powerful database may place extreme demand on storage, word processing documents do not.
As we move into a new world of “big data”, more emphasis needs to be focused on making good decisions about what class of disk this data should reside on. Although there are no universally accepted standards for storage tier designations, frequently the breakdown goes as follows:
Tier 0 – Solid state devices
Tier 1 – 15K RPM SAS or FC Disks
Tier 2 – 10K RPM SAS or FC Disks
Tier 3 – 7200 or 5400 RPM SATA (a.k.a. – NL-SAS) Disks
So why is a tiering strategy important for large quantities of storage? Let’s take a look at similar storage models for 1 petabyte of data:
The difference in disk drive expense alone is over $225,000, or around 30% of the equipment purchase price. In addition, there are other issues to consider.
Pros:
- Reduces the initial purchase price by 25% or more
- Improves energy efficiency by 25% – 35%, lowering operational cost and cooling requirements
- Substantial savings from reduced data center floorspace requirements
- Increased overall performance for all applications and databases
- Greater scalability and flexibility for matching storage requirements to business growth patterns
- Provides additional resources for performance improvements (an increased number of ports, cache, controller power, etc.)
- A high degree of modularity facilitates better avoidance of technical obsolescence
- May moderate the demand for technical staff necessary to manage continual storage growth
Cons:
- Requires automated, policy-based data migration software to operate efficiently.
- Should employ enterprise-class frames for Tiers 0/1 and midrange arrays for Tiers 2/3
- Incurs approximately a 15% cost premium for enterprise-class storage to support Tier 0/1 disks
- Implements a more complex storage architecture that requires good planning and design
- Needs at least a rudimentary data classification effort for maximum effectiveness
So does the end justify the effort? That is for each company to decide. If data storage growth is fairly stagnant, then it may be questionable whether the additional effort and expense is worth it. However if you are staggering under a 30% – 50% CAGR storage growth rate like most companies, the cost reduction, increased scalability, and performance improvements achieved may well justify the effort.
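As a rough illustration of the kind of drive-cost comparison described above, the following sketch models a tiered versus single-tier layout for 1 PB. The tier percentages, drive capacities, and prices are assumptions for illustration, not the figures behind the original comparison.

```python
# Illustrative tiered vs. single-tier drive-cost model for 1 PB of capacity.
# Tier mix, drive capacities, and prices are assumed values.
TOTAL_TB = 1000

TIERED = [
    # (tier, share of capacity, drive capacity TB, drive price $)
    ("Tier 0 SSD",         0.03, 0.8, 1200),
    ("Tier 1 15K SAS",     0.17, 0.6,  250),
    ("Tier 2 10K SAS",     0.30, 0.9,  220),
    ("Tier 3 7.2K NL-SAS", 0.50, 3.0,  180),
]
SINGLE_TIER = [("All 15K SAS", 1.00, 0.6, 250)]

def drive_cost(layout):
    total = 0
    for name, share, cap_tb, price in layout:
        drives = -(-TOTAL_TB * share // cap_tb)   # ceiling division
        total += drives * price
    return total

print(f"Tiered layout drive cost: ${drive_cost(TIERED):,.0f}")
print(f"Single-tier drive cost:   ${drive_cost(SINGLE_TIER):,.0f}")
```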
Big Data – Data Preservation or Simply Corporate Hoarding?
Several years ago my Mother passed away. As one of her children, I was faced with the challenge of helping clean out her home prior to it being put up for sale. As we struggled to empty out each room, I was both amazed and appalled by what we found. There were artifacts from almost every year in school, bank statements from the 1950s, yellowing newspaper clippings, and greeting cards of all types and vintages. Occasionally we’d find a piece that was worth our attention, but the vast majority of saved documents were just waste – pieces of useless information tucked away “just in case” they might someday be needed again.
Unfortunately many corporations engage in the same sort of “hoarding”. Vast quantities of low-value data and obsolete information are retained on spinning disk or archived on tape media forever, “just in case” they may be needed. Multiple copies of databases, outdated binaries from application updates, copies of log files, ancient directories and files that were never deleted – all continue to consume capacity and resources.
Perhaps this strategy worked in years past, but it has long outlived its usefulness. At the average industry growth rate, the 2.5 Petabytes of storage you struggle with today will explode to roughly 1.0 Exabyte within 15 years! That’s a 400-fold increase in your need for storage capacity, backup and recovery, SAN fabric bandwidth, data center floor space, power and cooling, storage management, staffing, disaster recovery, and related support items. The list of resources impacted by storage growth is extensive. In a previous post I identified (46) separate areas that are directly affected by storage growth, and must be scaled accordingly. A 400x expansion will require a simply stunning amount of hardware, software, facilities, support services, and other critical resources. Deduplication, compression, and other size-reduction methods may provide temporary relief, but in most cases they simply defer the problem rather than eliminate it.
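A quick check of that arithmetic, assuming a growth rate near the top of the range cited elsewhere in this blog:

```python
# Compound growth check: 2.5 PB at ~49% per year for 15 years.
start_pb = 2.5
cagr = 0.49          # assumed annual growth rate, within the cited range

capacity = start_pb
for year in range(1, 16):
    capacity *= (1 + cagr)
print(f"After 15 years: {capacity:,.0f} PB (~{capacity / 1000:.1f} EB), "
      f"a {capacity / start_pb:,.0f}x increase")
```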
The solution is obvious – reduce the amount of data being saved. Determine what is truly relevant and save only information that has demonstrable residual value. This requires a system of data classification, and a method for managing, migrating, and ultimately expiring files.
Unfortunately that is much easier said than done. Attempt to perform data categorization manually and you’ll quickly be overwhelmed by the tsunami of data flooding the IT department. Purchase one of the emerging commercial tools for data categorization, and you may be frustrated by how much content is incorrectly evaluated and assigned to incorrect categories.
Regardless of the challenges, there are very few viable alternatives to data classification for maintaining massive amounts of information. Far greater emphasis should be placed on identifying and destroying low or no-value files. (Is there really sound justification for saving last Thursday’s cafeteria menu or knowing who won Employee-of-the-Month last July?). Invest in an automated policy-based management product that allows data to be demoted backward through the storage tiers and ultimately destroyed, based on pre-defined company criteria. Something has to “give” or the quantity of retained data will eventually outpace future IT budget allocations for storage.
In the end the winning strategy will be to continually manage information retention, establishing an equilibrium and working toward a goal of near-zero storage growth. It’s time to make data classification by value and projected “shelf-life” a part of the organization’s culture.