Enhanced Commodity Storage – Do You Believe in Magic?
With predictable regularity, someone surfaces on the Web claiming to have discovered a way to turn slow SATA arrays into high performance storage. The method usually involves layering complex and sophisticated software on top of the array to reallocate and optimize system resources. While there may be a few circumstances where this works, in reality the result is usually just the opposite.
The problem with this concept is similar to the kit car world of several decades ago. At the time, kit-built sports cars were all the rage. Automobile enthusiasts were intrigued by the idea of building a phenomenal sports car by mounting a sleek fiberglass body on the chassis of a humble Volkswagen Beetle. Done with good workmanship, the end result could rival the appearance of a Ferrari, Ford GT-40, or Lamborghini!
However, this grand illusion disappeared the minute its proud owner started the engine. Despite its stunning appearance, the kit car was still built on top of an anemic VW bug chassis, power train, and suspension!
Today we see a similar illusion being promoted by vendors claiming to offer “commodity storage” capable of delivering the same high performance as complex SAN and NAS systems. Overly enthusiastic suppliers push the virtues of cheap “commodity” storage arrays with amazing capabilities as a differentiator in this highly competitive market. The myth is perpetuated by a general lack of understanding of the underlying disk technology, by the pressure to manage shrinking IT budgets, and by the growing demand for storage capacity.
According to this technical fantasy, underlying hardware limitations don’t count. In theory, if you simply run a bunch of complex software functions on the storage array controllers, you somehow repeal the laws of physics and get “something for nothing”.
That sounds appealing, but it unfortunately just doesn’t work that way. Like the kit car built on a VW chassis, the array is governed by the hardware limitations of its underlying disk technology, which determine its throughput, reliability, scalability, and price.
• Drive Latencies – the mechanical delays inherent in moving the read/write heads (seek time) and rotating the platters to the target sector (rotational latency) vary significantly between drive classes.
For example, a 300GB, 15K RPM SAS disk has roughly half the rotational latency of a 3TB, 7200 RPM SATA disk and delivers more than twice the random IOPS (a back-of-the-envelope comparison is sketched after this list).
• Controller Overhead – Masking SATA performance by adding processor capabilities may not be the answer either. Call it what you will – controller, SP, NAS head, or something else – a storage controller is simply a dedicated server performing specialized storage operations. That means it can become overburdened when multiple sophisticated applications are loaded onto it, and every additional process consumes more of its internal resources (memory, bandwidth, cache, I/O queues, etc.). As real-time capabilities like thin provisioning, automated tiering, deduplication, and data compression are added, the array’s throughput diminishes (a simple model of this effect is sketched after this list).
• “Magic” Cache – This is another area where plenty of smoke and mirrors can be found. Regardless of the marketing hype, cache is still governed by the laws of physics and has predictable characteristics. If you put a large amount of cache in front of slow SATA disk, your systems will run really fast – as long as the requested data is already in cache. When it isn’t, the request must go out to slow SATA disk and incur the same retrieval process as every other disk access. The same is true when cache is periodically flushed to disk to protect data integrity. Cache is a great tool that can significantly enhance the performance of a storage array, but it is expensive, and it will never act as a “black box” that makes slow SATA disk perform like 15K RPM SAS disks (the effective-latency sketch after this list shows why).
• Other Differences – Additional differentiators between “commodity storage” and high performance storage include available IOPS, disk latency, the RAID level selected, IOPS per GB, MTBF (reliability), and bit error rate.
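To put numbers on the drive latency comparison above, here is a rough back-of-the-envelope sketch in Python. Rotational latency follows directly from spindle speed; the average seek times (3.5 ms for the 15K SAS drive, 8.5 ms for the 7200 RPM SATA drive) are assumed typical values for each class, not vendor specifications. It also touches the IOPS-per-GB point from the last bullet.

```python
def avg_rotational_latency_ms(rpm: float) -> float:
    """Average rotational latency: half a revolution, in milliseconds."""
    return (60_000 / rpm) / 2

def est_random_iops(seek_ms: float, rpm: float) -> float:
    """Rough random IOPS estimate: 1000 ms / (seek + rotational latency)."""
    return 1000 / (seek_ms + avg_rotational_latency_ms(rpm))

# Seek times below are assumed typical values; rotational latency is pure physics.
drives = {
    "300GB 15K RPM SAS": {"rpm": 15_000, "seek_ms": 3.5, "capacity_gb": 300},
    "3TB 7200 RPM SATA": {"rpm": 7_200, "seek_ms": 8.5, "capacity_gb": 3_000},
}

for name, d in drives.items():
    iops = est_random_iops(d["seek_ms"], d["rpm"])
    print(f"{name}: ~{avg_rotational_latency_ms(d['rpm']):.1f} ms rotational latency, "
          f"~{iops:.0f} random IOPS, ~{iops / d['capacity_gb']:.2f} IOPS per GB")
```

Under these assumptions the SAS drive delivers roughly 180 random IOPS versus about 80 for the SATA drive, and because the SATA drive packs ten times the capacity behind that slower mechanism, its IOPS per GB is roughly twenty times worse.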
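The controller overhead argument can be illustrated with an equally simple model: treat the controller as a fixed per-second processing budget and charge each data service a per-I/O cost. The budget and feature costs below are made-up relative units chosen purely for illustration, not measurements of any real array.

```python
# Illustrative only: every data service in the I/O path adds per-I/O work,
# so the achievable IOPS ceiling drops as features are enabled.

BASE_COST = 1.0                # relative cost of a plain read or write
CONTROLLER_BUDGET = 100_000    # relative processing units available per second

FEATURE_COSTS = {
    "thin provisioning": 0.2,
    "automated tiering": 0.3,
    "deduplication":     0.8,
    "data compression":  0.6,
}

per_io_cost = BASE_COST
print(f"no data services: ~{CONTROLLER_BUDGET / per_io_cost:,.0f} IOPS ceiling")
for feature, cost in FEATURE_COSTS.items():
    per_io_cost += cost
    print(f"+ {feature}: ~{CONTROLLER_BUDGET / per_io_cost:,.0f} IOPS ceiling")
```

The exact numbers are arbitrary, but the shape of the curve is not: each feature added to the data path lowers the ceiling on what the same controller hardware can deliver.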
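Finally, the “magic cache” point comes down to the standard effective-access-time formula: average latency is the hit rate times the cache latency plus the miss rate times the disk latency. The sketch below assumes roughly 0.1 ms for a cache hit through the controller stack and about 12 ms for a random access to a 7200 RPM SATA drive; both are typical figures, not measurements of any particular array.

```python
# Effective access time for cache sitting in front of slow SATA disk.
CACHE_MS = 0.1   # assumed latency of a cache hit
SATA_MS = 12.0   # assumed latency of a random SATA disk access

def effective_latency_ms(hit_rate: float) -> float:
    """Average latency = hit_rate * cache latency + miss_rate * disk latency."""
    return hit_rate * CACHE_MS + (1 - hit_rate) * SATA_MS

for hit_rate in (0.99, 0.95, 0.90, 0.70, 0.50):
    print(f"{hit_rate:.0%} hit rate -> ~{effective_latency_ms(hit_rate):.2f} ms average access time")
```

Even at a 90% hit rate the average access time is already more than ten times slower than the cache itself, and every miss or cache flush pushes it toward raw SATA speed.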
When citing the benefits of “tricked out” commodity storage, champions of this approach usually point to obscure white papers written by social media providers, universities, and research labs. These may make interesting reading, but they seldom have much in common with production IT operations and “the real world”. Most universities and research labs struggle with restricted funding and must turn to highly creative (and sometimes unusual) methods to wring specific functions from less-than-optimal equipment. Large social media providers seldom suffer from budget constraints, but they create non-standard solutions to meet highly specialized, stable, and predictable user scenarios. These may illustrate interesting uses of technology, but they have little value for mainstream IT operations.
As with most things in life, “you can’t get something for nothing”, and the idea of somehow enhancing commodity storage to meet all enterprise data requirements is no exception.
Posted on July 24, 2013, in Storage and tagged 16Gbps, Big Data, data growth, disk pools, enterprise-it, Fibre Channel, iSCSI, maximum throughput, SAN, SATA disk, Storage, storage array, storage growth, throughput rates, Unified Computing. 1 Comment.
I would suggest you take a look at some of the newer storage array solutions on the market today that leverage a revamped log-structured file system with SSDs to truly overcome spindle speed limitations. It sounds too good to be true, but the numbers being seen in benchmark tests and actual customer “heavy IOP” applications are proving that vendors like Nimble Storage are for real. You shouldn’t make such broad generalizations before vetting the newest solutions on the market and seeing for yourself whether they are indeed smoke and mirrors. Because if you stick to your guns and make this claim, all the while vendors like Nimble are overcoming this barrier, it makes you look out of touch with reality.