Blog Archives

Enhanced Commodity Storage – Do You Believe in Magic?

With predictable regularity, someone surfaces on the Web claiming to have discovered a way to turn slow SATA arrays into high-performance storage. Their method usually involves adding complex and sophisticated software to reallocate and optimize system resources. While there may be a few circumstances where this works, in reality the result is usually just the opposite.

The problem with this concept is similar to one from the kit car world several decades ago. At the time, kit-built sports cars were all the rage. Automobile enthusiasts were intrigued by the idea of building a phenomenal sports car by mounting a sleek fiberglass body on the chassis of a humble Volkswagen Beetle. Done properly, the results were amazing: as long as the workmanship was good, the end result could rival the appearance of a Ferrari, Ford GT-40, or Lamborghini!

However, this grand illusion disappeared the minute its proud owner started the engine. Despite its stunning appearance, the kit car was still built on top of an anemic VW bug chassis, power train, and suspension!

Today we see a similar illusion being promoted by vendors claiming to offer “commodity storage” capable of delivering the same high performance as complex SAN and NAS systems. Overly enthusiastic suppliers push the virtues of cheap “commodity” storage arrays with amazing capabilities as a differentiator in this highly competitive market. The myth is perpetuated by a general lack of understanding of underlying disk technology, a desperate need to manage shrinking IT budgets, and a growing demand for storage capacity.

According to this technical fantasy, underlying hardware limitations don’t count. In theory, if you simply run a bunch of complex software functions on the storage array controllers, you somehow repeal the laws of physics and get “something for nothing”.

That sounds appealing, but it unfortunately just doesn’t work that way. Like the kit car’s Achilles heel, hardware limitations of underlying disk technology govern the array’s capabilities, throughput, reliability, scalability, and price.

Drive Latencies – The inherent latency incurred to move the read/write heads and rotate the platters until the appropriate sector is available varies significantly between drive types.

For example, comparing the performance of a 300GB, 15K RPM SAS disk to a 3TB, 7200 RPM SATA disk produces the following results:

SAS vs SATA Comparison
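As a rough illustration of why the gap is so large, the back-of-the-envelope sketch below estimates random-read IOPS from rotational speed and average seek time. The seek times used (about 3.5 ms for a 15K SAS drive and 9 ms for a 7200 RPM SATA drive) are typical published figures assumed for illustration, not measurements from any specific product.

    # Rough single-drive IOPS estimate from seek time and rotational latency.
    # Seek times are typical vendor figures, used here purely for illustration.

    def avg_rotational_latency_ms(rpm):
        # Average rotational latency is half of one full revolution.
        return (60_000.0 / rpm) / 2

    def estimated_iops(rpm, avg_seek_ms):
        service_time_ms = avg_seek_ms + avg_rotational_latency_ms(rpm)
        return 1000.0 / service_time_ms

    sas_15k  = estimated_iops(rpm=15_000, avg_seek_ms=3.5)   # roughly 180 IOPS
    sata_7k2 = estimated_iops(rpm=7_200,  avg_seek_ms=9.0)   # roughly 75 IOPS

    print(f"15K RPM SAS : ~{sas_15k:.0f} IOPS per drive")
    print(f"7.2K SATA   : ~{sata_7k2:.0f} IOPS per drive")

No amount of software changes that underlying two-to-one (or worse) gap in raw mechanical capability.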

Controller Overhead – Masking SATA performance by adding processor capabilities may not be the answer either. Call it what you will – controller, SP, NAS head, or something else – a storage controller is simply a dedicated server performing specialized storage operations. This means controllers can become overburdened when multiple sophisticated applications are loaded onto them. More complex processing also means the controller consumes additional internal resources (memory, bandwidth, cache, I/O queues, etc.). As real-time features like thin provisioning, automated tiering, deduplication, and data compression are added, the array’s throughput will diminish.
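A simple way to see the effect: if each I/O already costs the controller some baseline amount of CPU time, every additional in-line feature adds its own per-I/O cost, and the maximum IOPS the controller can sustain drops accordingly. The per-feature costs below are arbitrary illustrative values, not measurements of any real array.

    # Illustrative only: how per-I/O processing overhead erodes controller throughput.
    # All service times are made-up round numbers, not vendor measurements.

    BASELINE_US = 50.0  # hypothetical CPU time per I/O for basic block handling (microseconds)

    features_us = {
        "thin provisioning": 10.0,
        "automated tiering": 15.0,
        "deduplication":     40.0,
        "compression":       30.0,
    }

    def max_iops(per_io_us):
        return 1_000_000.0 / per_io_us   # one core, fully busy

    per_io = BASELINE_US
    print(f"baseline: ~{max_iops(per_io):,.0f} IOPS per core")
    for name, cost in features_us.items():
        per_io += cost
        print(f"+ {name:<18}: ~{max_iops(per_io):,.0f} IOPS per core")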

“Magic” Cache – This is another area where lots of smoke and mirrors can be found. Regardless of the marketing hype, cache is still governed by the laws of physics and has predictable characteristics. If you put a large amount of cache in front of slow SATA disk, your systems will run really fast – as long as the requested data is already in cache. When it isn’t, the system must go out to slow SATA disk and follow the same data retrieval process as any other disk access. The same is true when cache is periodically flushed to disk to protect data integrity. Cache is a great tool that can significantly enhance the performance of a storage array. However, it is expensive, and it will never act as a “black box” that somehow makes slow SATA disk perform like 15K RPM SAS disks.
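The effect of cache is easy to model: average service time is simply a weighted blend of the cache hit time and the back-end disk time, so it is only ever as good as the hit ratio. The latencies below are round illustrative assumptions, not a specific product's specifications.

    # Effective latency of a cached SATA array as a function of cache hit ratio.
    # Latency figures are illustrative round numbers, not a product's specs.

    CACHE_HIT_MS = 0.1    # assumed DRAM cache access, including controller overhead
    SATA_MISS_MS = 13.0   # assumed random access to a 7200 RPM SATA drive

    def effective_latency_ms(hit_ratio):
        return hit_ratio * CACHE_HIT_MS + (1.0 - hit_ratio) * SATA_MISS_MS

    for hit in (0.99, 0.90, 0.70, 0.50):
        print(f"hit ratio {hit:.0%}: ~{effective_latency_ms(hit):.2f} ms average")

Even a modest drop in hit ratio lets the slow SATA miss penalty dominate the average.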

Other Differences – Additional differentiators between “commodity storage” and high-performance storage include available I/Os per second, disk latency, the RAID level selected, IOPS-per-GB capability, MTBF reliability, and bit error rate.

When citing the benefits of “tricked out” commodity storage, champions of this approach usually point to obscure white papers written by social media providers, universities, and research labs. These may make for interesting reading, but they seldom have much in common with production IT operations and “the real world”. Most universities and research labs struggle with restricted funding, and must turn to highly creative (and sometimes unusual) methods to coax specific functions from less-than-optimal equipment. Large social media providers seldom suffer from budget constraints, but they create non-standard solutions to meet highly specialized, stable, and predictable user scenarios. These may be interesting uses of technology, but they have little value for mainstream IT operations.

As with most things in life, “you can’t get something for nothing”, and the idea of somehow enhancing commodity storage to meet all enterprise data requirements is no exception.

 

Rumors of Fibre Channel’s Death are Greatly Exaggerated

If you believed all the media hype and vendor pontifications three years ago, you would have thought for sure that Fibre Channel was teetering on the edge of oblivion. According to industry hype, 10Gbps Ethernet and the FCoE protocol were certain to be the demise of Fibre Channel. One analyst even went so far as to state, “IP based storage networking technologies represent the future of storage”.  Well, as they say, “Don’t believe everything you read”.

In spite of a media blitz designed to convince everyone that Fibre Channel was going extinct, industry shipments and FC implementations by IT storage professionals continued to blossom.  As 16Gbps Fibre Channel rapidly grew in acceptance, the excitement around 10GbE diminished. In a Dell’Oro Group report for 4Q12, Fibre Channel director, switch, and adapter revenues surpassed $650 million, while FCoE champion Cisco suffered through soft quarterly results.

So what makes Fibre Channel network technology so resilient?

Simplicity – FCP was designed with a singular purpose in mind, and does not have to contend with a complex protocol stack.
Performance – A native 16Gbps FC port is 40% faster than a 10GbE network, and it too can be trunked to provide aggregate ISL bandwidth of up to 128 Gbps.
Low Latency – FC fabric is not penalized by the additional 2-hop latency imposed by routing data packets through a NAS server before they are written to disk.
Parity of Cost – The dramatic reduction in expense promised by FCoE has failed to materialize. The complexity and cost of pushing data at a given clock rate is fairly consistent, regardless of the protocol used.
Efficiency – Having a Fibre Channel back-end network supports such capabilities as LAN-less backup technology, high speed data migration, block-level storage virtualization, and in-fabric encryption.

An excellent indicator that Fibre Channel is not falling from favor is Cisco’s recent announcement of their new 16Gbps MDS 9710 Multilayer Director and MultiService Fabric Switch. Cisco was a major proponent of 10GbE and the FCoE protocol, and had failed to update their aging MDS 9500 family of Fibre Channel Directors and FC switches. (http://searchstorage.techtarget.com/news/2240182444/Cisco-FC-director-and-switch-moves-to-16-Gbps-new-chassis) This left Brocade with the lion’s share of a rapidly growing 16Gbps Fibre Channel market. For Brocade, it produced a record quarter for FC switch revenues, while Cisco struggled with sagging sales.

Another factor in FC’s longevity is the average IT department’s actual need for extremely high-bandwidth storage networking. Prior to 10GbE technology, Ethernet LANs performed quite well at 1GbE (or some trunked variation of 1GbE). The majority of the Fibre Channel world still depends upon 4Gbps FC, with 8Gbps technology only recently starting to make significant inroads in the data center. Given the fairly leisurely pace of migration to higher-performance SAN and NAS fabric technology, and excepting the fairly small percentage of IT departments that actually require high performance and high throughput, a faster interface alone holds limited allure.

So which network technology will win?  Who knows (or even cares)?  There are usually bigger issues to overcome than what the back-end “plumbing” is made of.  It’s far more important to implement the most appropriate technology for the task at hand.  That could be Ethernet, Fibre Channel, Infiniband, or some other future network scheme.  The key is to select your approach based on functionality and efficiency, not what is being hyped as “the next great thing” in the industry.  In spite of all the hyperbole, Fibre Channel isn’t going away any time soon.

As Samuel Clemens (aka Mark Twain) said after hearing that his obituary had been published in the New York Journal, “The reports of my death are greatly exaggerated”.

PCIe Flash – The Ultimate Performance Tool?

One of the more promising technologies for improving application and database performance is the PCIe Flash card.  This technology uses a standard half-height or full-height PCIe interface card with a Solid State Disk mounted on it.  It allows SSD to be added to a workstation or server by simply plugging a card into an available PCIe bus slot.

What makes the PCIe Flash card approach superior to array-based SSD is its close proximity to system memory and the elimination of latency-adding components from the I/O path.  In a normal SAN or NAS array, data is transferred to storage across the SAN fabric.  Bytes of data move across the system’s PCIe I/O bus, where they are read by the HBAs (or NICs if it’s NAS), translated into the appropriate protocol, converted to a serial stream, and sent across the SAN fabric.  In most SANs the signal is read and retransmitted one or more times by edge switches and directors, then sent to the disk array controllers.  From there it is converted from a serial stream back to parallel data, translated from the SAN fabric protocol, given block-level addressing, possibly stored in array cache, re-serialized for transmission to the disks, received and re-ordered by the disks for efficient writes, and finally written to the devices.  A data read reverses the same journey.
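To make the difference concrete, the sketch below adds up a hypothetical latency budget for the SAN path just described versus a direct PCIe Flash access. Every number is an illustrative order-of-magnitude assumption, not a benchmark result.

    # Order-of-magnitude latency budget: SAN-attached disk vs. PCIe Flash.
    # All values are illustrative guesses in microseconds, not measured data.

    san_path_us = {
        "HBA / protocol translation":      20,
        "fabric switch hops":              10,
        "array controller + cache lookup": 100,
        "back-end disk seek + rotation":   8000,   # mechanical disk dominates
    }

    pcie_flash_us = {
        "PCIe transfer + driver":  15,
        "NAND flash read":         60,
    }

    print(f"SAN-attached disk : ~{sum(san_path_us.values()):,} us")
    print(f"PCIe Flash card   : ~{sum(pcie_flash_us.values()):,} us")

Whatever the exact figures for a given environment, the point is that the mechanical disk and the intervening hops dwarf everything a local flash access has to do.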

 Like other technologies, however, there are pros and cons to using PCIe Flash storage:

Pros –

  • Plugs directly into the PCIe bus, eliminating latency from HBAs, network protocols, the SAN fabric, array controllers, and disk tray connections
  • PCIe is a point-to-point architecture, so each device connects to the host with its own serial link
  • PCIe Gen 2 supports 8Gbps, which is about 33% faster than the 6Gbps SAS interface
  • Little or no additional infrastructure is required to capitalize on flash storage performance
  • Very simple to deploy and configure
  • Extremely low power consumption, as compared with traditional 3.5-inch hard disks.
  • Positions data in very close proximity to the system processors and cache structure
  • Requires no additional physical space in the storage equipment rack

 Cons – 

  • The amount of SSD storage that can be deployed is limited by the physical number of slots
  • Some PCIe Flash cards are “tall” enough to block the availability of an adjacent slot
  • Recent PCIe bus technology is required to support top performance (x4 or above)
  • Internal PCIe storage cannot be shared by other servers like a shared SAN resource
  • May require specialized software for the server to utilize it as internal cache or mass storage
  • PCIe Flash may suffer quality issues if an Enterprise-Grade product is not purchased
  • If the server goes down, content on the installed PCIe Flash becomes inaccessible

Like other SSD devices, PCIe Flash cards are expensive when compared to traditional disk storage.  In Q4 of 2012, representative prices for 800GB of PCIe Flash storage were in the range of $3800 to $4500 each.  Since a 15K RPM hard disk of similar capacity sells for $300 to $450, Flash memory remains about ten times more expensive on a cost-per-GB basis.  However, since Solid State Disk (SSD) is about 21 times faster than electro-mechanical disk, it may be worth the investment if extremely fast performance is of utmost importance.
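For reference, here is the arithmetic behind the “ten times” figure, using the midpoint of each price range quoted above and treating both devices as roughly 800GB.

    # Cost-per-GB comparison using the midpoint of each quoted price range (Q4 2012).

    CAPACITY_GB = 800
    FLASH_PRICE = (3800 + 4500) / 2   # PCIe Flash card, 800GB
    HDD_PRICE   = (300 + 450) / 2     # 15K RPM disk of similar capacity

    flash_per_gb = FLASH_PRICE / CAPACITY_GB
    hdd_per_gb   = HDD_PRICE / CAPACITY_GB

    print(f"PCIe Flash : ${flash_per_gb:.2f}/GB")
    print(f"15K disk   : ${hdd_per_gb:.2f}/GB")
    print(f"ratio      : ~{flash_per_gb / hdd_per_gb:.0f}x")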

SAN Fabric for the Next Generation

There’s a quiet revolution going on in large data centers.  It’s not as visible or flashy as virtualization or deduplication, but it is at least equal in importance.

As its name implies, SAN “fabric” is a dedicated network that allows servers, storage arrays, backup & recovery systems, replication devices, and other equipment to pass data between systems.  Traditionally this has consisted of 4Gbps Fibre Channel and 1Gbps Ethernet channels.  However, a new family of 8Gbps and 16Gbps Fibre Channel, 6Gbps and 12Gbps SAS, and 10Gbps Ethernet links is quietly replacing legacy fabric with channels capable of 2 to 4 times the performance.

The following is a comparison of the maximum throughput rates of various SAN fabric links:

A comparison of available SAN channel speeds.

Performance ranges from the relatively outdated 1Gbps channel (Ethernet or FC), capable of supporting data transfers of up to 100 MB per second, to 16Gbps Fibre Channel, capable of handling 1940 MB per second.  Since all are capable of full-duplex (bi-directional) operation, the sustainable throughput rate is actually twice the speed indicated in the chart.  If these blazing new speeds are still insufficient, 10Gbps Ethernet, 12Gbps SAS, and 16Gbps Fibre Channel can be “trunked” – bundled together to produce an aggregate bandwidth equal to the sum of the individual channels tied together.  (For example, eight 16Gbps FC channels can be bundled to create a 128Gbps “trunk”.)
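As a quick sanity check on the trunking and full-duplex arithmetic, the example given above works out as follows.

    # Aggregate bandwidth of a trunked ISL, and the effect of full-duplex operation.

    def trunk_gbps(channel_gbps, channels):
        return channel_gbps * channels

    fc16_trunk = trunk_gbps(16, 8)     # eight 16Gbps FC links
    print(f"8 x 16Gbps FC trunk  : {fc16_trunk} Gbps aggregate")

    # Full duplex doubles the sustainable throughput of a single link,
    # since traffic flows in both directions simultaneously.
    per_direction_mbps = 1940          # 16Gbps FC, per the chart above
    print(f"16Gbps FC full duplex: ~{2 * per_direction_mbps} MB/s combined")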

In addition to higher channel speeds, 10Gbps Ethernet and 16Gbps Fibre Channel both implement a 64b/66b encoding scheme, rather than the 8b/10b encoding scheme used by lower-performance channels.  The encoding process improves the quality of the data transmission, but at a cost.  An 8b/10b encoding scheme decreases available bandwidth by 20%, while 64b/66b encoding reduces it by only 3.03%.  This significantly increases data transfer efficiency.
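The overhead figures come straight from the encoding ratios, and they translate directly into usable throughput. The short calculation below uses the nominal link rates, which lands within rounding distance of the figures quoted above.

    # Encoding overhead and its effect on usable bandwidth (nominal link rates).

    def usable_mb_per_sec(nominal_gbps, data_bits, total_bits):
        efficiency = data_bits / total_bits
        return nominal_gbps * 1000 / 8 * efficiency   # Gbps -> MB/s after encoding

    # 8b/10b: 8 data bits carried in 10 line bits -> 20% overhead
    print(f"1Gbps FC/Ethernet (8b/10b) : ~{usable_mb_per_sec(1, 8, 10):.0f} MB/s")

    # 64b/66b: 64 data bits carried in 66 line bits -> ~3.03% overhead
    print(f"16Gbps FC (64b/66b)        : ~{usable_mb_per_sec(16, 64, 66):.0f} MB/s")
    print(f"10GbE (64b/66b)            : ~{usable_mb_per_sec(10, 64, 66):.0f} MB/s")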

While 8/16Gbps Fibre Channel and 10Gbps Ethernet are changing the game at the front-end, SAS is revolutionizing the back-end disk drive connections as well.  For over a decade, enterprise-grade disks had 2Gbps or 4Gbps ports and were attached to a Fibre Channel Arbitrated Loop (FC-AL).  Like any loop-based technology, it delivered maximum speed under light traffic, but performance dropped off as demand increased.  Under heavy load conditions, the back-end bus could become a bottleneck.

SAS will change that for two reasons.  First, it uses switched technology, so every device attached to the controller “owns” 100% of the bus bandwidth.  The latency “dog leg” pattern found on busy FC-AL buses is eliminated.  Second, current SAS drives are shipping with 6Gbps ports, which are 50% faster than 4Gbps Fibre Channel.  Just over the horizon are 12Gbps SAS speeds that will offer three times the bandwidth of 4Gbps FC to the disks, and do it over switched (isolated) channels.
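The difference between shared-loop and switched back-ends can also be put into rough numbers. The sketch below compares per-drive bandwidth on a shared 4Gbps FC-AL loop (where all active drives contend for the same loop) with dedicated 6Gbps and 12Gbps SAS links; it ignores loop arbitration overhead, so real FC-AL behavior under load is worse than this simple division suggests.

    # Per-drive bandwidth: shared FC-AL loop vs. dedicated (switched) SAS links.
    # Simplified model that ignores loop arbitration overhead.

    def fcal_share_gbps(loop_gbps, active_drives):
        return loop_gbps / active_drives     # all drives contend for one loop

    for drives in (1, 15, 60):
        print(f"4Gbps FC-AL, {drives:>2} active drives: ~{fcal_share_gbps(4, drives):.2f} Gbps each")

    print("6Gbps  SAS, switched        :  6.00 Gbps per drive")
    print("12Gbps SAS, switched        : 12.00 Gbps per drive")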

Recent improvements in fabric performance will support emerging SSD technology, and allow SANs to gracefully scale to support storage arrays staggering under a growth rate of 40% – 50% per year.