All Storage Solutions Are Not Created Equal: Hardware vs Software
I work for an organization that is, at its core, a hardware design company, and I often end up describing what being a hardware company actually means. I’ll attempt here to capture this distinction and explain why it is important. I am going to start at a very high level, so bear with me.
To deliver a storage product, you need storage space, a way for hosts to access it, and features that manipulate the storage space the host sees. In developing the product there are two basic design methods: hardware-centric and software-centric. In hardware-centric design, the focus is on the physical embodiment of the product; in software-centric design, the focus is on the features.
If the idea a product implements is new and does not depend on the physical components, then a software design methodology makes sense. A good recent example is data de-duplication, where the idea is to find multiple copies of the same data and store only one copy. It doesn’t matter whether you have one disk or many, or whether the data is local, on a NAS, or on a SAN. A product has to focus on certain areas, of course, but the core idea of data de-duplication is not dependent on the physical embodiment.
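The core of the de-duplication idea can be sketched in a few lines of code: hash each fixed-size chunk of data and keep only one copy of each unique chunk. This is a toy illustration under stated assumptions, not how any shipping product works — real systems use content-defined chunking, persistent indexes, and collision handling, and the function names here are invented.

```python
import hashlib

def dedup_store(data: bytes, chunk_size: int = 4096):
    """Toy block-level de-duplication: store each unique chunk once.

    Returns (store, recipe), where store maps chunk-hash -> chunk bytes
    and recipe is the ordered list of hashes needed to rebuild the data.
    """
    store = {}
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # keep only the first copy seen
        recipe.append(digest)
    return store, recipe

def rebuild(store, recipe) -> bytes:
    """Reassemble the original data from the stored unique chunks."""
    return b"".join(store[d] for d in recipe)
```

Feeding this ten identical 4 KB chunks stores only one chunk plus ten small hash references — which is exactly why the idea is independent of whether those chunks live on one disk, a NAS, or a SAN.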
Designing a software solution is where most startups begin; in the early stages of a company it is much easier to develop on an x86 platform and outsource the manufacturing and hardware design to a server vendor. The product that is sold may look like a hardware solution thanks to a custom faceplate on a standard server, but it is really just software.
If, on the other hand, your product idea is not a new method of doing something, but a way of making a product faster, cheaper, or more efficient (in terms of power and space), then hardware is the way to go.
This is because, while software solutions provide extreme flexibility in what ideas can be implemented quickly, the trade-off is lower efficiency and performance. Hardware design, by contrast, involves deciding exactly what signal goes where, why, and when. It allows repeatable, unrelated tasks to be handled by parallel circuits to hit nearly any performance target. The ability to design exactly what a chip will do allows much higher levels of throughput. Hardware is particularly good at data movement, encoding/decoding, and processing that can leverage deep pipelines for parallel execution.
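The pipeline argument can be made concrete with back-of-the-envelope arithmetic (the numbers below are illustrative, not from this post): once a pipeline is full, one result completes every cycle, so a deep pipeline costs you only its fill latency, not a per-item penalty.

```python
def pipelined_cycles(n_items: int, n_stages: int) -> int:
    """Cycles to push n_items through an n_stages pipeline, assuming
    one stage of work per cycle: after the pipeline fills (n_stages
    cycles), one result completes every subsequent cycle."""
    return n_stages + (n_items - 1)

def serial_cycles(n_items: int, n_stages: int) -> int:
    """The same work done with no overlap: every item pays the full
    n_stages latency before the next one starts."""
    return n_items * n_stages

# 1,000,000 items through a 10-stage operation:
# pipelined: 1,000,009 cycles; serial: 10,000,000 cycles
```

The roughly 10x gap here is the point: a circuit designed so that independent work overlaps in parallel stages approaches one result per clock, regardless of how deep each individual operation is.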
A good recent example of designing exactly what a signal will do, and when, is a patent that my employer just announced: “Patent No. 7,928,791 – Method and apparatus for clock calibration in a clocked digital device.” It is a fairly interesting patent if you are an electrical engineer. It covers a method that lets TMS run chips with a higher degree of signal integrity than would otherwise be possible: the sampling time for data pins is continually adjusted to compensate for the timing drift that occurs as environmental conditions (temperature, voltage, etc.) change. Normally, chips simply have defined setup and hold times that account for this variability, and the length of those times directly determines how fast you can move data in and out of a chip. By sampling at the center of the valid signal window, we can move data more robustly than we would otherwise be capable of.
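The patent itself describes hardware, but the underlying idea — sweep candidate sampling delays, find the window where data reads back correctly, and then sample at its center — can be caricatured in software. Everything in this sketch is invented for illustration: the `sample_ok` probe and the tap range are assumptions, not anything from the patent's claims.

```python
def center_sample_delay(sample_ok, taps):
    """Find the largest contiguous run of delay taps for which
    sample_ok(tap) is True, and return the tap at its center.

    sample_ok: callable(tap) -> bool, True if data sampled at this
               delay matched what was driven (hypothetical probe).
    taps:      ordered candidate delay settings to sweep.
    """
    best_start, best_len = None, 0
    run_start, run_len = None, 0
    for i, tap in enumerate(taps):
        if sample_ok(tap):
            if run_start is None:
                run_start, run_len = i, 0
            run_len += 1
            if run_len > best_len:
                best_start, best_len = run_start, run_len
        else:
            run_start, run_len = None, 0
    if best_start is None:
        raise RuntimeError("no valid sampling window found")
    # Sampling at the center leaves the most margin on both sides
    # as temperature and voltage shift the window's edges.
    return taps[best_start + best_len // 2]
```

Re-running a sweep like this periodically is what lets the sample point track environmental drift instead of relying on fixed worst-case setup and hold margins.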
The tagline of Texas Memory Systems is The World’s Fastest Storage®. From this perspective, it is easy to understand why we choose to be a hardware company. The focus is on creating storage products that are faster than any others available.