Archive for the ‘Price’ Category

Update from the Field – SQL PASS SUMMIT 2012

November 8, 2012

This week the Professional Association for SQL Server (PASS) holds its annual conference in Seattle. It is always a good show for catching up on new developments in SQL Server and for getting a feel for the adoption rate of various features. At the show LSI is demonstrating a benchmark running on a SQL Server 2012 AlwaysOn Availability Group (a good article discussing this feature is available here). The demo shows a database in an AlwaysOn Availability Group cluster (in synchronous mode) running on local storage. This type of database clustering uses log shipping between databases to create a High Availability (HA) solution: each side of the cluster has a complete copy of the database rather than sharing a single copy on fault-tolerant storage. LSI also has a second setup with a database running on SAN storage – the traditional storage for a clustered architecture. The AlwaysOn Availability Group cluster is running at almost three times the transaction rate of the SAN system, at a fraction of the response time.

There is an important difference between the two configurations. The local storage setup is not using disks alone, but rather Nytro MegaRAID controllers – high performance RAID controllers with integrated flash that acts as a large cache to accelerate the disks. Mellanox was also kind enough to lend us a couple of InfiniBand HCAs for a high performance connection between the clustered databases. With both high performance local storage and high performance networking, a better performing HA SQL Server solution can be built very cost effectively compared to the alternative SAN based architectures. The full setup is illustrated below:

Prior to AlwaysOn Availability Groups, SQL Server had another technology called Database Mirroring that used log shipping to create an HA database on local storage. It worked, but applications had to be aware of the backup server, and managing the HA solution was very different from managing a traditional SQL Server HA solution built on Microsoft failover clustering. That management was targeted at database administrators more than at server or storage administrators. Database Mirroring worked technically, but its different management model introduced serious risk of human error: the administrators tasked with availability and with database administration had to share the same tool rather than use separate ones, and the application team had to make sure the connections to the database were set up properly for everything to work.

An enterprise application that requires HA typically supports very important workloads where the cost of downtime dwarfs all other considerations and management risks are taken very seriously. One of the huge advancements with Availability Groups is that management is done through the Microsoft cluster manager, and applications no longer need to be aware of whether the database is clustered or not. This change has made a huge difference in the perception of the solution, and it is great to talk with customers about how they are using Availability Groups and how they plan to. With applications getting better and better at providing robust solutions that can leverage local storage, the future is very bright for high performance local storage solutions.
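To make the "applications don't need to know" point concrete, here is a minimal sketch of connecting through an Availability Group listener from Python. It assumes pyodbc and a modern Microsoft ODBC driver; the listener name "ag-listener" and database "SalesDB" are hypothetical placeholders, not part of the demo described above.

    # Minimal sketch: connect through an Availability Group listener so the
    # application never needs to know which replica is currently primary.
    # "ag-listener" and "SalesDB" are hypothetical placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=tcp:ag-listener,1433;"   # the listener, not a physical node
        "DATABASE=SalesDB;"
        "Trusted_Connection=Yes;"
        "MultiSubnetFailover=Yes;"       # try all replica IPs quickly on failover
    )
    cursor = conn.cursor()
    cursor.execute("SELECT @@SERVERNAME")  # reports whichever replica is primary
    print(cursor.fetchone()[0])
    conn.close()

If the primary fails over, the listener follows it, and the application simply reconnects with the same connection string.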

Categories: Architectures, Price

eMLC Part 2: It’s About Price per GB

September 1, 2011

I have had the chance to meet with several analysts over the past couple of weeks and have raised the position that, with eMLC, the long-awaited price parity of Tier-1 disks and SSDs is virtually upon us. The reactions were mixed, from “nope, not yet” to “sorry if I don’t act surprised, but I agree.” For the skeptics I promised that I would compile some data to back up my claim.

For years the mantra of SSD vendors was to look at the price per IOPS rather than the price per GB. The Storage Performance Council provides an excellent source of data that facilitates that comparison in an audited forum with its flagship SPC-1 benchmark. The SPC requires quite a bit of additional information to be reported for a result to be accepted, which makes it an excellent data source when you want to examine the enterprise storage market. If you bear with me I will walk through a few ways that I look at the data, and I promise that this is not a rehash of the cost per IOPS argument.

First, if you dig through the reports you can see how many disks are included in each solution as well as the total cost. The chart below is an aggregation of the HDD-based SPC-1 submissions showing the reported Total Tested Storage Configuration Price (including three-year maintenance) divided by the number of HDDs reported in the “priced storage configuration components” description. It covers data from 12/1/2002 to 8/25/2011:
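The arithmetic behind that chart is just the reported price divided by the drive count for each submission. A minimal sketch in Python, where the tuples are made-up placeholders rather than actual SPC figures (real values come from each full disclosure report):

    # Sketch of the aggregation behind the chart. The tuples below are
    # made-up placeholders; real values come from each SPC-1 full
    # disclosure report (date, total tested configuration price, HDD count).
    submissions = [
        ("2003-06-01", 1_200_000, 560),
        ("2007-03-15", 950_000, 480),
        ("2011-08-25", 780_000, 410),
    ]

    for date, price, hdd_count in submissions:
        print(f"{date}: ${price / hdd_count:,.0f} per HDD")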

Now, let’s take it as a given that an SSD can deliver much higher IOPS than an HDD of equivalent capacity, and that price per GB is the only advantage disks bring to the table. The historical way to get higher IOPS from HDDs was to use lots of drives and short-stroke them. The modern-day equivalent is using low capacity, high performance HDDs rather than cheaper high capacity HDDs. With the total cost of enterprise disk close to $2,000 per HDD, the $/GB of enterprise SSDs determines the minimum logical capacity of an HDD: divide $2,000 by the SSD price per GB and you get the smallest HDD that still undercuts flash on $/GB. Here is an example of various SSD $/GB levels and the associated minimum disk capacity points:

Enterprise SSD $/GB    Minimum HDD capacity
$30                    67 GB
$20                    100 GB
$10                    200 GB
$7                     286 GB
$5                     400 GB
$3                     667 GB
$1                     2,000 GB

To get to the point where 300 GB HDDs no longer make sense, the enterprise SSD price just needs to reach around $7/GB, and 146 GB HDDs are gone at around $14/GB. Keep in mind that this is the price of the SSD capacity before the redundancy and overhead needed to make it comparable to the HDD case.
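The table above is just one division. A quick sketch to reproduce it, assuming the roughly $2,000 all-in cost per enterprise HDD from the SPC data:

    # If a deployed enterprise HDD costs ~$2,000 regardless of capacity,
    # an HDD only beats SSD on $/GB when its capacity exceeds
    # $2,000 divided by the SSD price per GB.
    HDD_COST = 2000  # approximate all-in cost per enterprise HDD

    for ssd_price_per_gb in (30, 20, 10, 7, 5, 3, 1):
        min_capacity_gb = HDD_COST / ssd_price_per_gb
        print(f"SSD at ${ssd_price_per_gb}/GB -> HDD must be > {min_capacity_gb:,.0f} GB")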

It’s not fair (or permitted use) to compare audited SPC-1 data with data that has not gone through the same rigorous process, so I won’t make any comparisons here. However, I think that when looking at the trends, it is clear that the low capacity HDDs used for Tier-1 storage are going away sooner rather than later.

About the Storage Performance Council (SPC)

The SPC is a non-profit corporation founded to define, standardize and promote storage benchmarks and to disseminate objective, verifiable storage performance data to the computer industry and its customers. The organization’s strategic objectives are to empower storage vendors to build better products as well as to stimulate the IT community to more rapidly trust and deploy multi-vendor storage technology.

The SPC membership consists of a broad cross-section of the storage industry. A complete SPC membership roster is available at http://www.storageperformance.org/about/roster/.

A complete list of SPC Results is available at http://www.storageperformance.org/results.

SPC, SPC-1, SPC-1 IOPS, SPC-1 Price-Performance, and SPC-1 Results are trademarks or registered trademarks of the Storage Performance Council (SPC).

Categories: Disk, Price, SPC, SSD

Where are Disks Headed?

August 3, 2011

I started my career at a different Texas company – Texas Instruments – and I remember the 1” drive division aimed at the mobile device market. It didn’t take very long for that product to be end-of-lifed. It was a neat product and a serious feat of engineering, but it just couldn’t compete with Flash. At first that was because Flash was smaller, more rugged, and used less power; ultimately it was just because Flash was cheaper! (Compare the disk-based iPod Mini and the Flash-based iPod Nano.)

Disks have a high fixed cost per unit and a small marginal cost per GB. Physically bigger disks have a lower cost per GB than smaller ones. This is very different from other storage media like Flash and tape. So something has been bothering me recently: why are mainstream disks still shrinking, from 3.5” to 2.5”, with each generation taking on a smaller form factor than the one before? If the disk market were concerned only with cost per GB and “tape is dead”, this would be crazy – disks should be getting bigger! Why do disks continue their march toward smaller form factors when that just makes SSDs more competitive?
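Before answering, it is worth putting rough numbers on that fixed-versus-marginal-cost structure. A back-of-the-envelope sketch, where all three prices are illustrative assumptions rather than quotes:

    # A disk has a high fixed cost and a small per-GB cost; flash is almost
    # purely per-GB. Below some capacity the SSD is simply the cheaper device.
    # All three prices are illustrative assumptions.
    HDD_FIXED = 50.0    # $ per unit: motor, heads, controller, enclosure
    HDD_PER_GB = 0.05   # $ per GB of platter capacity
    SSD_PER_GB = 1.00   # $ per GB of flash

    # Crossover where HDD_FIXED + HDD_PER_GB * c == SSD_PER_GB * c
    crossover_gb = HDD_FIXED / (SSD_PER_GB - HDD_PER_GB)
    print(f"Below ~{crossover_gb:.0f} GB, the SSD is the cheaper device")

This is exactly the dynamic that retired the 1” drive: at small capacities the disk’s fixed cost dominates, and flash wins on price alone.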

I originally thought that this was just a holdover from the attempts to make disks faster. Bigger disks are harder to spin at high speed, so as RPM rates marched upward disks had to get smaller. The advent of cost-effective SSDs, however, has stopped the increase in RPMs. (Remember the news in 2008 of the 20k RPM disk?) The market for performance storage at a premium has been ceded to SSDs.

After spending some time on it, I think there are a few basic reasons disks continue their march:

  • The attempt to have a converged enterprise, desktop, and laptop standard.
  • The need for smaller units to compose RAID sets, so that the chance of a second failure during a rebuild is not too high. I understand this, but RAID-6 is an alternative solution.
  • Disks are not just for capacity; they serve both performance and long-term storage.

Simply because disks store data across the two-dimensional surface of a platter, every time the linear bit density increases, capacity grows with its square (density improves both along the track and in track pitch), bandwidth grows only linearly (more bits pass under the head per revolution), and the ability to access data randomly doesn’t change at all (it is bounded by seek and rotation). At some point the need for capacity is more than adequately met, so the performance need takes over and disks shrink to bring performance and capacity back in sync.
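A small sketch of that geometry argument, framed in terms of a linear density improvement factor k:

    # If linear bit density improves by a factor k, a platter holds k**2
    # more data (k in each of two dimensions), the head passes over k times
    # more data per revolution, and seek/rotation times do not change.
    for k in (1, 2, 4):
        capacity = k ** 2   # relative capacity
        bandwidth = k       # relative sequential bandwidth
        iops = 1            # random access is bounded by mechanics
        print(f"density x{k}: capacity x{capacity}, bandwidth x{bandwidth}, "
              f"IOPS per GB x{iops / capacity:.3g}")

Every density generation therefore makes a disk of fixed size relatively slower per GB, which is the pressure pushing form factors down.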

Neither tape cartridges nor Flash suffers the fixed cost problem or the geometry-induced accessibility issues of disks. With new high density cartridges coming online, tape continually avoids being supplanted by disk for pure capacity requirements. TMS even recently had a customer that was able to deploy Tape + SSD and skip disks altogether.

Is the future of storage SSD + tape?

No. While this works for a streamlined processing application, tape just isn’t ever going to be fast enough for data that needs to feel instantly available. There is too much data that probably won’t be needed often but, when it is, must be there instantly. Disks, on the other hand, are comfortably faster than the ~1 second response time a user-facing application needs.

With SSDs handling more and more of the performance storage requirements, it will be interesting to see if disks stop their march toward smaller form factors and head in the other direction – becoming bigger and slower and fully ceding the “tier 1 storage” title to SSDs.

Categories: Disk, Price

SSDs and the Cloud

From time to time I become involved in discussions about where SSDs are making an impact in the consumer market and what I think is going to happen. The biggest knock that I hear against SSDs making serious inroads is that consumers buy computers based on specs, and most just won’t accept a computer with less storage at the same price as one with more. The details of the SSD benefits are lost on this mainstream market, and disks can maintain a price per GB advantage long into the future.

I attended a marketing presentation by David Kenyon from AMD recently and he pointed out something about computer marketing trends that I found insightful. The use of specs as a computer differentiator is becoming less prominent.  The look and feel and the fitness for a particular use case are becoming more important selling points. The reduced prominence of specs started when the clock rates of CPUs stopped being promoted and instead the family name and model number were used.  If you look at Apple products, it is hard to even find the specs until after you have selected the make you want and are trying to decide on a model.

Part of the shift toward use-case-based computing is a fragmentation of computing resources into multiple devices. People have many devices – laptops, work desktops, home desktops, tablets, and smartphones. Having devices that are accessible and convenient for a particular use is a wonderful thing. But one major headache comes with this: getting at your data from whichever device you would like to use is a pain. A Kindle is great for diving into a book on a quiet afternoon, but it is relatively inconvenient to carry everywhere. Being able to pull out a smartphone in a waiting room and pick up reading where you left off is what people want. Shared access from multiple devices has already happened with email, and it is just a matter of time until the rest of our private data goes the same route. Access to data without dependence on a physical device is what cloud storage is all about.

So what does this have to do with SSDs? Beyond the lower prominence of specs, using more and more devices makes it clear that having a bunch of storage on any particular device just isn’t valuable. The data needs to be accessible from the other devices. Consumers are not going to go through the hard work of setting up data synchronization, though. They will eventually pay to have it done for them, by whoever wins a monopoly over access to users’ data – Microsoft, Google, Facebook, or someone new. Soon their data is going to end up in a datacenter somewhere that all of the devices can access. There may be a full copy of everything on the computer at home, but even this could fall by the wayside. In this environment, having a disk in any of the devices is just crazy, simply because at low capacities SSDs are cheaper than disks! They are also higher performance, use less power, and have a malleable form factor.

There has to be a pretty robust high-speed network available almost everywhere for this to work, but that is clearly not that far off.  Once the network is in place the service offerings and vendors will coalesce to develop a clear standard and price model.  At that point consumer disks will move to the datacenter.  This may sound like too much complexity to occur quickly, but the benefits that come from easy access to your data and the profits that will be bestowed on the vendor that becomes the gatekeeper are just too great to prevent it from happening.

If this framework develops, total disk capacity will grow more slowly as the efficiencies developed in the enterprise storage arena are brought to bear – just-in-time provisioning, deduplication, and compression. (Just imagine how much unused capacity is isolated on all of the disks in consumer computers today.) In the not too distant future, having a disk in your computing device will be the exception rather than the norm.

An aside on cloud computing frameworks, data, and network bandwidth

The biggest issue with cloud storage is network bandwidth. I don’t mean to suggest that bandwidth will ever be high enough to use cloud storage as if it were local – that may never happen. Today, the data in successful cloud services is being fragmented by application. Keeping the data and the compute resources close together greatly reduces the network traffic needed for processing. The drawback is that behind the scenes each service provider handles data differently, and managing credentials for each separate service is difficult. In effect this creates a data management nightmare for the user. The fragmentation is bad for users – they want an easy way to access and control all of the data that belongs to them.

The efficiency of having the data near the compute resource is huge, but there is no real reason that the data has to fragment and move to the service providers. With the proper cloud computing framework, the applications could just as easily move to the data’s location and run in the same datacenter. This would provide the same benefit while making it easy for users to see and manage the data that belongs to them. I don’t see an easy way to separate cloud storage from cloud computing – but at the end of the day the data is what everything else depends on, and frameworks have to account for this.

Categories: Cloud, Disk, Price, SSD

The Real Price of Enterprise Storage

One of the pet peeves that comes with the territory when deploying SSD systems is being compared against the price of consumer disks. Perhaps it bothers me in particular because I have seen how rapidly prices have declined since Flash entered the field. It was not very long ago (2004) that SSDs were thousands of dollars per GB! Now, as the price of SSDs comes much closer to what high performance enterprise disk systems cost, the difference does not seem that bad to SSD veterans.

There is a general disconnect between what hard drives cost in the consumer market and what disk-based enterprise storage systems cost per GB. I am sure that IT administrators get offers from end users all the time to personally buy a 1 TB drive for $80 to increase the size of their Exchange mailbox.

So where can you find what enterprise storage systems cost?

The best source available is the Storage Performance Council’s (www.storageperformance.org) published data on the benchmark results of various enterprise storage systems; one of the requirements is that full costs must be disclosed. When you look at this data in a few different ways you can draw some general conclusions. First, the obvious one: disks are rapidly getting cheaper per GB (below is some historical $/GB data on test results from systems with more than 100 disks):

However, disks themselves are not getting cheaper – they are just getting bigger. Enterprise disks are very expensive once you include the costs of the storage controller, switching, and maintenance. Below is the cost of a solution divided by the number of disks:

From these costs it is easy to see how there is a business case for deploying a solid state solution to eliminate 20 disk drives (or more).  You can always get more capacity with disks at a lower price point than SSDs and that will continue for a long time.  However, since the price per disk is so high, for smaller capacity, high performance workloads, SSDs are just cheaper.  The price point of a 15K RPM drive behind a storage controller is so high that you don’t have to be at the extreme end of the performance curve anymore to justify SSDs.

Realistically, once you are putting in 2-3 times as many drives for performance as you need for capacity, a serious investigation of SSDs should follow.
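As a sanity check on that rule of thumb, here is a back-of-the-envelope comparison; the workload, drive specs, and the ~$2,000 per-disk figure are illustrative assumptions:

    # How many 15K RPM disks does a workload force you to buy for IOPS
    # versus for capacity? All numbers are illustrative assumptions.
    import math

    REQUIRED_IOPS = 20_000
    REQUIRED_GB = 2_000
    DISK_IOPS = 180        # rough random IOPS for one 15K RPM drive
    DISK_GB = 300          # typical Tier-1 disk capacity
    COST_PER_DISK = 2_000  # all-in cost per enterprise disk (see SPC data)

    for_performance = math.ceil(REQUIRED_IOPS / DISK_IOPS)  # 112 drives
    for_capacity = math.ceil(REQUIRED_GB / DISK_GB)         # 7 drives
    print(f"drives for IOPS: {for_performance}, for capacity: {for_capacity}")
    if for_performance > 2 * for_capacity:
        print(f"~${for_performance * COST_PER_DISK:,} in disks; price out SSDs")

With these assumptions the disk count is driven entirely by IOPS, not capacity – exactly the situation where SSDs deserve a serious look.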

Categories: Disk, Price, SPC