A Look at PernixData’s Flash Virtualization Platform

The server-side flash cache space has really started to heat up.  Once primarily the domain of Fusion I/O, we’re now seeing other contenders…such as PernixData.

At VMworld last year I had the pleasure of meeting with Satyam Vaghani, CTO, and Poojan Kumar, CEO.  Back then they were still in stealth mode and weren’t even called PernixData.  Since then they’ve been steadily working on development of the product and are getting very close to a GA release.  They’ve also made some very good hires, including my good friends Charlie Gautreaux, Andy Daniel, and Frank Denneman.  It says a lot when, as a startup in a field that’s becoming competitive, you can attract the right people.  It means you have the right things in place and a good future strategy.

So What Is It?

PernixData’s FVP (Flash Virtualization Platform) provides server-side solid-state caching.  “So what?  We’ve seen this before,” you may ask.  True…but PernixData offers some very different advantages we’ll get into in a minute.  But…let’s do a quick review.  The purpose of this solution is to accelerate storage performance while still being very cost effective.  This is done by putting flash in the server and using it to cache data going between the host and the storage system.  For the first release PernixData will accelerate block storage…meaning iSCSI and Fibre Channel.  NFS will arrive a bit later, I am told.  Remember, I don’t work for PernixData, so anything I say may be completely made up!

There are some significant benefits to caching inside the server.  Your cache is close.  Really, really close.  I/Os don’t need to cross the storage fabric.  This gives you faster response with lower latency and takes the load off both the storage array and your storage fabric.  To me this is often an understated benefit.  Sure…local flash is great for handling requests, but your array could respond to other hosts even faster if it never got those requests from your PernixData-enabled hosts in the first place.  An array that may be creeping up on utilization and load may suddenly get more life when I/Os are being serviced locally right from the hosts.  You may very well be able to push off that storage fabric upgrade if you can remove a lot of I/Os and bandwidth going across…plus you also get really fast response times to the VMs.

You win all the way around.

How Is PernixData Different?

First, they are flash agnostic.  PCIe card?  Great.  SSDs in servers?  Fantastic.  Whatever you want.  Your performance may vary, of course, depending on the type of flash you use but you have a lot of options.  It gives you a great deal more flexibility to balance form factor, cost, and performance.  An example of this is the Varrow lab in Charlotte where we have 6 Cisco B200M2 UCS blades.  They can’t take the new Fusion I/O mezzanine card like the M3s can.  Instead we put in some nice Intel SSDs and began testing PernixData.  Works great.
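If you want to sanity check what a host actually sees before handing a device to FVP, a quick look from the ESXi shell does the trick.  This is just a sketch; the grep pattern is only there to trim the output down to the interesting fields.

# Show the host's storage devices and whether ESXi has flagged them as local SSDs
esxcli storage core device list | grep -E "Display Name|Is Local|Is SSD"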

Another advantage is that they offer both read and (optional) write caching.  Previously everyone did read caching and just passed writes through.  Now you can enable write caching for all VMs or for specific ones.  The big question here has always been how you protect data that gets written to local flash.  What happens if that host fails before the write is destaged to back-end storage?  PernixData allows you to define your protection.  You can say “No thanks!” and have no secondary copy, or you can choose one or more “Write Back Peers”.  When a VM writes data and it gets cached locally, that write is also synchronously written to one or more other peers, if configured.  This protects against a local host failure.  It does make you factor in additional throughput for those interconnects between hosts.  To do this PernixData uses the vMotion interfaces on your vSphere hosts.
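Since peer traffic rides the vMotion network, it’s worth a quick look at which VMkernel interface that is and what it’s plugged into.  Here’s a rough sketch from the ESXi shell; vmk1 is just a placeholder for whichever interface carries vMotion on your hosts.

# List VMkernel interfaces so you know which one will carry peer writes
esxcli network ip interface list

# On 5.1 and later, confirm the VMotion tag on the interface you expect
esxcli network ip interface tag get -i vmk1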

The third differentiator with PernixData is that it just works and doesn’t impact any other features in your vSphere environment.  vMotion, DRS, HA…all that stuff works.  No changes needed to the VMs or how you connect to storage.  Plus it’s all managed using a vCenter plugin.

What All Do I Install?

There isn’t a lot to install before you start taking advantage of FVP.  You install the management system on a Windows host, and it supplies information to the vCenter plugin.  In my lab I just installed it on vCenter, which works fine.  Probably not a great idea in production…but it’s a lab, so there.  Then you install the PernixData VIB on each host using your method of choice.  The host has to be in maintenance mode, but no reboot is required.
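If you want to do the host piece by hand from the ESXi shell, it looks roughly like this.  The bundle path and filename below are placeholders; use whatever offline bundle PernixData gives you for your release.

# Put the host into maintenance mode (no reboot is needed afterward)
esxcli system maintenanceMode set --enable true

# Install the FVP host extension from an offline bundle
# (path and filename are placeholders for the bundle PernixData provides)
esxcli software vib install -d /vmfs/volumes/datastore1/PernixData-host-extension.zip

# Take the host back out of maintenance mode
esxcli system maintenanceMode set --enable false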

Then you go through and create what is known as a Flash Cluster (you’ll see more in the demo), add the hosts’ flash devices, and enable datastores and VMs for acceleration.  PernixData FVP installs a new set of PSPs (Path Selection Policies) that get enabled, so you end up with something like PRNX_PSP_MRU or PRNX_PSP_RR.  You’ll have one for each of the standard PSPs.  Note here that if you use PowerPath/VE you can’t use it and PernixData’s FVP together.  It’s one or the other.
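If you want to see that from the command line, here’s a rough sketch using the standard esxcli NMP namespace.  The naa ID is just a placeholder, and FVP normally takes care of the PSP assignment for accelerated datastores on its own.

# List the Path Selection Policies registered on the host;
# after the VIB install the PRNX_PSP_* entries show up alongside the VMware ones
esxcli storage nmp psp list

# Show which PSP each device is currently using
esxcli storage nmp device list

# Example: point a single device at the PernixData round-robin PSP
esxcli storage nmp device set --device naa.60000000000000000000000000000001 --psp PRNX_PSP_RR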

Let’s See It?

Below is a video I did to show you how to configure and manage PernixData FVP.  In the demo I’ll talk more about the architecture, data flows, etc.  I think it’s easier to do that in the demo than here in the post.

I’m not going to do benchmarks using this setup.  The reason is that my lab isn’t built for that.  This is my home lab with 3 SuperMicro hosts, and flash is being delivered by Samsung 840 Pro 128GB SSDs connected to the onboard SATA II controller.  This is consumer-grade stuff that really does speed up access in my lab, but not something I’d be bragging about.  PernixData can supply benchmark numbers and methodology.

Click through to YouTube for the 720p version if you’d like.
