First, if you aren’t familiar with PernixData FVP, I’ll point you to this post here where I did an overview and a demo. But quickly: FVP (Flash Virtualization Platform) is software that you install on each vSphere host in a cluster, along with some flash storage (SSD, PCIe), and it lets you use that flash for both read and write caching between the hosts and storage. It can greatly increase storage performance for far less than the cost of adding more back-end disk to an array…and that’s exactly the point of this press release.
Cardinal’s applications and end users demanded very high performance. As the release said, they need to stay under 2ms response time for storage. Initially we spec’d out a replicated pair of EMC VNX2 arrays that would do the job and deliver the performance they needed. But, as you can imagine, this proved to be costly. Guaranteeing 2ms response time requires a fairly robust back-end array, and to complicate matters, Cardinal needed that same level of performance in a DR situation. This kept us from utilizing a smaller, less costly array at their DR site.
We had several other options to consider. An “easy” one would be a hybrid array, giving them very good performance on the flash tier with cheaper capacity on slower disk. But…that’s a risk. If their workload increased and the working set grew too large, they could exceed that flash tier and performance would suffer greatly. That, coupled with their requirement for an array that could serve CIFS, NFS, and FC, meant it just didn’t work. That’s why we went with the combined EMC/PernixData solution.
PernixData FVP carries a similar risk. You can run into the same working set growth problems here as with a hybrid array. But there is a big difference: if the working set started to exceed the local flash cache in the servers, we could simply add more SSDs to the hosts. This gave them far more flexibility, future risk avoidance, and peace of mind, without a costly change later should they outgrow the initial deployment.
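To make that scaling argument concrete, here’s a back-of-the-envelope sizing check. The numbers and the helper function are purely illustrative assumptions, not figures from the Cardinal deployment:

```python
# Hypothetical sizing sketch: does the cluster's host-side flash cover the working set?
# All capacities are illustrative assumptions, not from the actual deployment.

def flash_headroom_gb(hosts, ssds_per_host, ssd_size_gb, working_set_gb):
    """Return spare flash capacity (GB) across the cluster; negative means undersized."""
    total_flash_gb = hosts * ssds_per_host * ssd_size_gb
    return total_flash_gb - working_set_gb

# Initial deployment: 4 hosts, one 400 GB SSD each, 1.2 TB working set
print(flash_headroom_gb(4, 1, 400, 1200))   # 400 GB of headroom

# Working set grows to 2 TB: add a second SSD per host instead of a bigger array
print(flash_headroom_gb(4, 2, 400, 2000))   # 1200 GB of headroom
```

The point of the sketch: when the headroom goes negative, the fix is a few more SSDs in the hosts rather than a forklift upgrade on the array side.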
And that’s where solutions like this shine. Sure, we could have thrown more SSDs in the VNX2, but at some point you run out of FAST Cache capacity and need a larger array, which again drives the cost higher. That’s not a knock on these types of arrays; it just shows that flash is causing disruption all through the storage stack.
In the end, Cardinal Innovations got a very high-performing storage environment that is easy to manage and grow. They got many times the IOPS they initially specified, leaving them room to continue to grow, all at a cost that was less than the initial build. Everyone wins.