Storage Tiers – Putting Data in Its Place

I’m frequently surprised by the number of companies that haven’t transitioned to a tiered storage structure. All data is not created equal: a demanding database may place extreme load on storage, while word processing documents do not.

As we move into a new world of “big data”, more emphasis needs to be placed on deciding which class of disk each type of data should reside on. Although there are no universally accepted standards for storage tier designations, the breakdown frequently goes as follows:

Tier 0 – Solid state devices

Tier 1 – 15K RPM SAS or FC Disks

Tier 2 – 10K RPM SAS or FC Disks

Tier 3 – 7200 or 5400 RPM SATA (a.k.a. NL-SAS) Disks
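The tier breakdown above can be sketched as a simple placement rule. This is a minimal illustration, not any vendor's actual algorithm; the IOPS cutoffs are hypothetical values you would tune to your own arrays.

```python
# Tier labels follow the breakdown above; the IOPS thresholds
# are hypothetical placeholders, not vendor specifications.
TIERS = {
    0: "Solid state devices",
    1: "15K RPM SAS/FC disks",
    2: "10K RPM SAS/FC disks",
    3: "7200/5400 RPM SATA (NL-SAS) disks",
}

def select_tier(iops_demand: float) -> int:
    """Pick the cheapest tier whose performance ceiling covers the demand."""
    if iops_demand > 5000:   # hypothetical cutoff for flash
        return 0
    if iops_demand > 1000:   # hypothetical cutoff for 15K spindles
        return 1
    if iops_demand > 200:    # hypothetical cutoff for 10K spindles
        return 2
    return 3                 # everything else lands on capacity disk

print(select_tier(8000))  # hot database volume
print(select_tier(50))    # word processing documents
```

The point of the sketch is the shape of the decision: performance-hungry workloads earn expensive spindles, and everything else defaults to the cheapest tier.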

So why is a tiering strategy important for large quantities of storage? Let’s take a look at similar storage models for 1 petabyte of data:

The difference in disk drive expense alone is over $225,000, or around 30% of the equipment purchase price. In addition, there are other issues to consider.
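The comparison behind that figure is straightforward blended-cost arithmetic, which can be sketched as follows. The per-gigabyte prices and the tier mix below are hypothetical placeholders, not the numbers from the comparison above; substitute your vendor's actual pricing.

```python
# Back-of-the-envelope cost of a single-tier vs. tiered 1 PB layout.
# All $/GB prices and the capacity mix are hypothetical examples.
PB_GB = 1_000_000  # 1 PB expressed in GB

price_per_gb = {0: 2.50, 1: 0.90, 2: 0.60, 3: 0.25}  # hypothetical $/GB by tier
tier_mix = {0: 0.05, 1: 0.15, 2: 0.30, 3: 0.50}      # fraction of the PB per tier

all_tier1 = PB_GB * price_per_gb[1]
tiered = sum(PB_GB * frac * price_per_gb[t] for t, frac in tier_mix.items())

print(f"All Tier 1: ${all_tier1:,.0f}")
print(f"Tiered:     ${tiered:,.0f}")
print(f"Savings:    ${all_tier1 - tiered:,.0f}")
```

Even with made-up prices, the pattern holds: pushing the cold majority of the petabyte onto capacity disk dominates the total, which is where savings on the scale described above come from.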

Pros:  

  • Reduces the initial purchase price by 25% or more
  • Improves energy efficiency by 25%–35%, lowering operational costs and cooling requirements
  • Delivers substantial savings from reduced data center floor-space requirements
  • Increases overall performance for all applications and databases
  • Offers greater scalability and flexibility for matching storage to business growth patterns
  • Provides additional resources for performance improvements (more ports, cache, controller power, etc.)
  • Facilitates avoidance of technical obsolescence through a high degree of modularity
  • May moderate the demand for technical staff needed to manage continual storage growth

Cons: 

  • Requires automated, policy-based data migration software to operate efficiently
  • Should employ enterprise-class frames for Tiers 0/1 and midrange arrays for Tiers 2/3
  • Incurs approximately a 15% cost premium for enterprise-class storage to support Tier 0/1 disks
  • Introduces a more complex storage architecture that requires careful planning and design
  • Needs at least a rudimentary data classification effort for maximum effectiveness
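The first item in the cons list, automated policy-based migration, is worth a concrete sketch. Real tiering software tracks access statistics per block or file; the 30- and 90-day thresholds below are hypothetical policy values, not a description of any particular product.

```python
# A minimal sketch of a policy-based migration decision. Thresholds
# are hypothetical; production tools operate on block-level heat maps.
from dataclasses import dataclass

@dataclass
class FileStats:
    path: str
    tier: int              # current tier, 0 (fastest) .. 3 (cheapest)
    days_since_access: int

def migration_target(f: FileStats) -> int:
    """Demote data that has gone cold; promote data that is hot again."""
    if f.days_since_access > 90:
        return 3                       # long-cold: send to the archive tier
    if f.days_since_access > 30:
        return min(f.tier + 1, 3)      # cooling: demote one tier
    if f.days_since_access <= 1:
        return max(f.tier - 1, 0)      # hot again: promote one tier
    return f.tier                      # otherwise stay put

print(migration_target(FileStats("/db/hot.ibd", 1, 0)))      # promoted
print(migration_target(FileStats("/docs/old.doc", 2, 120)))  # archived
```

Without software applying rules like these continuously, data placement decays and the tiered layout loses its cost and performance advantages, which is why this item leads the cons list.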

So does the end justify the effort? That is for each company to decide. If data storage growth is fairly stagnant, it may be questionable whether the additional effort and expense are worth it. However, if you are staggering under a 30%–50% CAGR in storage growth like most companies, the cost reduction, increased scalability, and performance improvements achieved may well justify the effort.

About Big Data Challenges

Mr. Randy Cochran is a Senior Storage Architect at Data Center Enhancements Inc. He has over 42 years of experience as an IT professional, with specific expertise in large and complex SAN/NAS/DAS storage architectures. He is recognized as a Subject Matter Expert in the enterprise storage field. For the past five years his primary focus has been on addressing the operational requirements and challenges presented by petabyte-level storage.

Posted on April 23, 2012, in Storage.
