Update from the Field: VMworld 2011
This week in TMS’s booth (booth #258) at VMworld we have a joint demo with our partner DataCore that shows an interesting combination of VMware, SSDs, and storage virtualization. We are using DataCore’s SANsymphony-V software to create an environment with the RamSan-70 as tier 1 storage and SATA disks as tier 2. The SANsymphony-V software handles the tiering, high-availability mirroring, snapshots, replication, and other storage virtualization features.
Iometer is running on four virtual machines within a server and handling just north of 140,000 4 KB read IOPS. A screenshot from Iometer on the master manager is shown below:
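For readers curious to reproduce a rough version of this kind of measurement without Iometer, here is a simplified Python sketch that issues random 4 KB-aligned reads against a scratch file and reports IOPS. This is only an illustration: a real benchmark like the one in our booth targets the raw LUN with many queued, parallel I/Os from multiple workers, and the file path and sizes below are placeholders.

```python
import os
import random
import tempfile
import time

BLOCK_SIZE = 4 * 1024  # 4 KB, matching the Iometer access size in the demo


def measure_read_iops(path: str, num_reads: int) -> float:
    """Issue random 4 KB-aligned reads against `path`; return reads/second."""
    blocks = os.path.getsize(path) // BLOCK_SIZE
    start = time.perf_counter()
    with open(path, "rb") as f:
        for _ in range(num_reads):
            f.seek(random.randrange(blocks) * BLOCK_SIZE)
            f.read(BLOCK_SIZE)
    return num_reads / (time.perf_counter() - start)


# Small scratch file as a stand-in for the SAN volume (a real test would
# use a dedicated device, not a 4 MB file that fits in OS cache).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(4 * 1024 * 1024))
    scratch = f.name

iops = measure_read_iops(scratch, 10_000)
print(f"{iops:,.0f} 4 KB read IOPS")
os.unlink(scratch)
```

Because the scratch file sits in the operating system's page cache, numbers from this sketch measure the software path rather than the storage; it is the access pattern, not the result, that mirrors the booth workload.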
Running 140,000 IOPS is a healthy workload, but the real benefit of this configuration is its simplicity. It uses just two 1U servers and hits all of the requirements for a targeted VMware deployment. Much of the time, RamSans are deployed in support of a key database application where exceptionally high-performance shared SSD capacity is the driving requirement. RamSan systems implement a highly parallel hardware design to achieve extremely high performance at exceptionally low latency. This is an ideal solution for a critical database environment where the database has all of the tools integrated that are normally “outsourced” to a SAN array (such as clustering, replication, snapshots, backup, etc.). However, in a VMware environment many physical and virtual servers are leveraging the SAN, so pushing the data management to each application is impractical.
Caching vs. Tiering
One of the key use cases of SSDs in VMware environments is automatically accelerating the most accessed data as new VMs are brought online, grow over time, and retire. The flexibility of a virtual infrastructure makes seamless, automatic access to SSD capacity all the more important. There are two accepted approaches to properly integrating an SSD in a virtual environment; I’ll call them caching and tiering. Although similar on the surface, there are some important distinctions.
In a caching approach, the data remains in place (in its primary location) and a cached copy is propagated to SSD. This setup is best suited to heavily accessed read data because write-back caches break the data management features of the storage running behind them (Woody Hutsell discusses this in more depth in this article). This approach is effective for frequently accessed static data, but it is not ideal for frequently changing data.
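The read-caching idea can be sketched in a few lines of Python. This is a simplified illustration of the concept, not anyone's product code: reads are served from an LRU cache when possible, while writes always go straight to the backing store and simply invalidate any cached copy, so the array behind the cache still sees every write and its snapshot/replication features keep working.

```python
from collections import OrderedDict


class ReadCache:
    """Read-only LRU cache in front of a slower backing store (illustrative)."""

    def __init__(self, backing: dict, capacity: int):
        self.backing = backing    # stand-in for the tier 2 disk LUN
        self.capacity = capacity  # stand-in for SSD cache size, in blocks
        self.cache = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, block: int) -> bytes:
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)   # mark most recently used
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]          # slow path: read from disk
        self.cache[block] = data            # propagate a copy to the SSD cache
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data

    def write(self, block: int, data: bytes) -> None:
        self.backing[block] = data          # write lands on primary storage
        self.cache.pop(block, None)         # invalidate the stale cached copy
```

The key property is in `write`: because the primary location stays authoritative, the cache can be dropped at any time without data loss, which is exactly why this model suits static read-heavy data.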
In tiering, the actual location of the data moves from one type of persistent storage to another. In the read-only caching case it is possible to create a transparent storage cache layer that is managed outside of the virtualized layer, but when an SSD participates as a tier, tiering and storage virtualization need to be managed together.
SSDs have solved the boot-storm startup issues that plague many virtual environments, but VMware’s recent licensing model changes have sparked increased interest in other SSD use cases. With VMware moving to a memory-based licensing model, there is interest in using SSDs to accelerate VMs with smaller memory footprints. In a tiering model, if VMDKs are created on LUNs that leverage SSDs, the virtual guest will automatically have the internal paging mechanisms within the VM land on low-latency SSD. Paging is write-heavy, so the tiering model is important to ensure that the page files leverage SSD as they are modified (and that the less active storage doesn’t use the SSD).
We are showing this full setup at our booth (#258) at VMworld. If you are attending, I would be happy to show you the setup.