Today EMC is releasing the newest generation of the VNXe: the VNXe 3200. We have had one of these units in our lab for a while and have been very pleased with what we've seen. Unlike the original VNXe, which really wasn't a "baby" VNX, this one is, and it should make for a great multi-function small storage array.
This new VNXe is a completely new architecture for the entry-level array line, and it very much resembles its bigger brother, the VNX. The base system is very small. Both storage processors (SPs) are included in a 12- or 25-disk Disk Processor Enclosure (DPE). Instead of the SPs being split out, they are built into the first disk shelf to reduce the physical footprint. Overall specifications include:
It runs the MCx operating environment, just like the current generation VNX. This is important as it allows for all of the multicore performance enhancements seen on the VNX as well as many of the same features, such as FAST VP.
Like the current VNX, the new VNXe received a significant software update. MCx allows for far better utilization of modern CPUs with many cores and that allows for some very powerful software features never before seen in an array of this size or price.
FAST VP (Fully Automated Storage Tiering for Virtual Pools) allows you to create pools containing different types of disks. The array then moves data "up" and "down" within the pool to faster (SSD) or slower (NL-SAS) disks as needed. It's very common to see pools made up of SSD, SAS, and NL-SAS disks, with the array handling placement of data on the proper tier.
Another enhancement that comes with the move to MCx code is that data is moved in smaller 256MB slices. That means if you have a large database, say 500GB, it won’t move that entire 500GB up and down as a set. It will move pieces of the database up and down as needed in 256MB chunks.
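To make the slice-based relocation concrete, here is a minimal sketch in Python. This is purely illustrative, not EMC's actual relocation code; the function name and the per-slice I/O counters are invented for the example. The point it demonstrates is that tiering decisions are made per 256MB slice, so only the hot pieces of a large LUN land on SSD:

```python
# Illustrative sketch of slice-based tiering (not EMC's actual code).
# FAST VP tracks activity per 256 MB slice and relocates only the hot
# slices, so a 500 GB database moves piecewise, never as one unit.

SLICE_MB = 256

def plan_relocations(slice_activity, ssd_capacity_slices):
    """Given {slice_id: io_count}, pick the hottest slices for the SSD
    tier (up to its capacity); everything else stays on slower disks."""
    ranked = sorted(slice_activity, key=lambda s: slice_activity[s],
                    reverse=True)
    promote = set(ranked[:ssd_capacity_slices])
    demote = set(ranked[ssd_capacity_slices:])
    return promote, demote

# A 1 GB LUN is four 256 MB slices; only the two busiest fit on SSD.
activity = {"s0": 900, "s1": 15, "s2": 700, "s3": 40}
hot, cold = plan_relocations(activity, ssd_capacity_slices=2)
```

The takeaway: the busy slices ("s0" and "s2" here) get promoted while the quiet slices of the very same LUN stay on spinning disk.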
Caching on the new VNXe is completely different from the old generation. The first major change is that the front-end RAM cache is now dynamic. In previous arrays we would set a percentage of available memory to be Read Cache and Write Cache, and depending on the I/O profile of your workload you might need to adjust those. Not anymore. Now we have Adaptive Cache: the array monitors I/Os going in and out and adjusts the thresholds in real time as needed. Spikes in incoming I/O will cause it to shrink the read cache and dedicate more to write, and vice versa. Another new feature is Write Throttling. With this, the array will hold back acknowledgements to the host to help manage the write cache and align it with the performance capabilities of the underlying disks.
The idea is to minimize forced flushes of front-end cache, which disrupt I/O and can cause further problems. You want smooth incoming I/O that arrives just as fast as the back-end disks can handle it, and Write Throttling provides that.
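The adaptive split can be sketched with a few lines of Python. To be clear, this is my own toy model of the idea, not EMC's algorithm; the proportional-split rule and the 10% floor are assumptions made for illustration:

```python
# Toy model of an adaptive read/write cache split (not EMC's algorithm):
# rebalance cache in proportion to the observed I/O mix, keeping a
# minimum floor on each side so neither cache ever disappears entirely.

def rebalance(total_mb, reads, writes, floor=0.1):
    """Return (read_mb, write_mb) split by recent traffic proportions."""
    total_io = reads + writes
    if total_io == 0:
        half = total_mb // 2
        return half, total_mb - half
    # clamp the write share between floor and (1 - floor)
    write_share = min(max(writes / total_io, floor), 1 - floor)
    write_mb = int(total_mb * write_share)
    return total_mb - write_mb, write_mb

# A burst of incoming writes shifts cache toward the write side.
read_mb, write_mb = rebalance(total_mb=1024, reads=200, writes=800)
```

With 80% of recent I/O being writes, most of the cache shifts to the write side; if reads later dominate, the same call shifts it back.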
One of the best features on a VNX, if not the best, is FAST Cache. With FAST Cache you can use SSDs as front-end read AND write cache. It's not uncommon for us to pull performance stats off of customer arrays and see the majority, often the vast majority, of I/Os serviced from those SSDs. It greatly increases performance and reduces the load on the back-end spindles. This lets us build smaller arrays that are cheaper…yet faster. Since the introduction of FAST Cache I don't think we've sold a VNX without at least two SSDs for cache.
Multi-Core FAST Cache (MCF) is the next iteration of that. The first big change is how the cache is warmed. With FLARE, a 64KB block had to be accessed 3 times before it was placed into that SSD cache. There were very good reasons for that, but SSDs are now larger and capable of holding more of the working dataset, so the threshold is now just 1: the first time you access data, it goes into those SSDs, until they are 80% full. Past that point, the array reverts to caching blocks that have been accessed 3 times.

How much space in the FAST Cache each LUN gets is also different with MCx. All LUNs equally share 50% of the FAST Cache pages (capacity); the other 50% is available to any LUN on an as-needed basis. The idea is to keep a couple of busy LUNs from consuming all of the cache and starving less-utilized LUNs.

MCF is best suited to random I/O smaller than 64KB. It doesn't work well with large sequential I/O, as that doesn't need to be held in cache on SSDs and can be quickly sent down to the underlying spinning disks. The new VNXe will use a maximum of two SSDs for FAST Cache, and it is limited to the smaller 200GB drives, unlike its bigger brothers. Note that there are two models of SSDs available: SLC (FAST Cache Optimized) drives can be used for FAST Cache, while eMLC (FAST VP Optimized) drives are normally used for regular data access.
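The two-mode warming policy described above can be sketched as follows. This is a simplified model of the behavior as described, with invented class and method names; the real implementation inside MCx is obviously far more involved:

```python
# Sketch of the warming policy described above (names invented):
# promote a 64 KB block on its first access while the cache is under
# 80% full; past that watermark, require 3 accesses, as FLARE did.

class FastCacheSketch:
    WATERMARK = 0.8       # switch-over point for promotion policy
    HITS_WHEN_FULL = 3    # FLARE-style threshold past the watermark

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cached = set()
        self.access_counts = {}

    def access(self, block):
        if block in self.cached:
            return "hit"
        self.access_counts[block] = self.access_counts.get(block, 0) + 1
        below_watermark = len(self.cached) < self.capacity * self.WATERMARK
        needed = 1 if below_watermark else self.HITS_WHEN_FULL
        if (self.access_counts[block] >= needed
                and len(self.cached) < self.capacity):
            self.cached.add(block)
            return "promoted"
        return "miss"

cache = FastCacheSketch(capacity_blocks=10)
first = cache.access("blk-a")   # empty cache: promoted on first touch
```

Once the sketch's cache passes 80% occupancy, a new block takes three accesses to earn promotion instead of one, which is the behavior change the section describes.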
As with the other features, the new VNXe is now at parity with the VNX for RAID and resiliency. The first enhancement is Permanent Sparing. Traditionally, when a drive in a RAID set failed, the array would grab a designated hot spare and use it to rebuild the RAID set. When you replaced the failed drive, the array would copy the data from the hot spare to the new drive and then return the hot spare to spare duty. That's no longer the case: now the array just keeps using the spare drive. Big deal? I don't think so. Just be aware.

How hot spares are specified has also changed, and by "changed" I mean gone away. You no longer designate drives as hot spares; any unbound drive is capable of being a hot spare. The array is smart in how it chooses which drive to use (capacity, rotation, bus, etc.) so that it doesn't pick an odd drive on a different bus unless it has to. The ratio of spares to data drives is set at 1:30.

MCx also has a timeout for RAID rebuilds. If a drive goes offline, fails, or you pull it out for some reason, the array now waits 5 minutes before activating a spare and rebuilding the set. It does this to make sure you didn't do something accidentally or that you're not moving drives around. Wait. What? Moving drives around? Yes. MCx supports Drive Mobility.
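The spare-selection idea lends itself to a short sketch. Again, this is a hypothetical illustration of the preference order the text describes (matching type, sufficient capacity, same bus), not EMC's selection logic; all names and the scoring order are my own:

```python
# Hypothetical illustration of spare selection: any unbound drive can
# serve as a spare, and the array prefers the closest match by type,
# capacity, and bus before falling back to an "odd" drive.

def choose_spare(failed, unbound):
    """failed/unbound are dicts with keys: type, capacity_gb, bus."""
    def score(d):
        return (
            d["type"] != failed["type"],               # same tech first
            d["capacity_gb"] < failed["capacity_gb"],  # big enough first
            d["bus"] != failed["bus"],                 # same bus preferred
            d["capacity_gb"],                          # waste the least
        )
    # prefer same-type drives with enough capacity; else consider all
    candidates = [d for d in unbound
                  if d["capacity_gb"] >= failed["capacity_gb"]
                  and d["type"] == failed["type"]] or unbound
    return min(candidates, key=score) if candidates else None

failed = {"type": "SAS", "capacity_gb": 600, "bus": 0}
pool = [
    {"type": "SAS", "capacity_gb": 900, "bus": 1},
    {"type": "SAS", "capacity_gb": 600, "bus": 0},
    {"type": "NL-SAS", "capacity_gb": 2000, "bus": 0},
]
spare = choose_spare(failed, pool)   # exact match on bus 0 wins
```

The exact-match drive on the same bus wins; only if no suitable same-type drive exists would the sketch fall back to an odd drive, which mirrors the "unless it has to" behavior above.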
You can now pull a drive from one slot, put it in another, and the array will detect it and bring it back online without triggering a rebuild, as long as you do it within 5 minutes. You can also shut down the array and re-cable the back-end buses if you want, and it will still know which drives belong where. Let's be clear here: don't just do this without planning. You're still moving drives and changing things, so do it for a purpose. Also, you can't move drives, or whole RAID groups, between arrays, even between MCx arrays; it only works within the same array. Use caution.

MCx also does parallel rebuilds on RAID 6 if you lose two drives. FLARE would rebuild the set with one drive, then rebuild it again for the second drive. MCx is more intelligent: if you fail two drives, it will rebuild both at once.
Another new feature is file compression and deduplication. The advantage here is obvious: more efficient use of your storage.
Snapshots are now “unified”. This means that snapshot space comes out of the pool of the originating LUN or filesystem. You are not required to dedicate capacity for snapshot usage. Snapshots can be set to auto delete at a certain time or at a certain storage capacity usage. You can have up to 256 snapshots.
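The auto-delete behavior can be modeled with a simple high/low watermark loop. Note that this is my own sketch of the concept: the specific watermark values and the oldest-first deletion order are assumptions I've made for illustration, not documented VNXe behavior:

```python
# Illustrative sketch of snapshot auto-delete (thresholds and deletion
# order are assumptions): when pool usage crosses a high watermark,
# delete the oldest snapshots until usage falls below a low watermark.

def auto_delete(snapshots, pool_used, pool_size, high=0.9, low=0.8):
    """snapshots: list of (age_rank, size_gb), lower rank = older.
    Returns (surviving snapshots, new pool usage in GB)."""
    survivors = sorted(snapshots)          # oldest first
    if pool_used / pool_size > high:
        while survivors and pool_used / pool_size > low:
            _, size_gb = survivors.pop(0)  # reclaim oldest snapshot
            pool_used -= size_gb
    return survivors, pool_used

# Pool at 95% of 1 TB: the two oldest snapshots are reclaimed to get
# usage back under the 80% low watermark; the newest survives.
snaps = [(0, 100), (1, 60), (2, 20)]
kept, used = auto_delete(snaps, pool_used=950, pool_size=1000)
```

The two-watermark design avoids thrashing: deletion only starts at the high mark but runs down to the low mark, so the pool doesn't hover right at the trigger threshold.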
The VNXe 3200 checks all the boxes you’d expect for VMware integration.
EMC also has the VSI (Virtual Storage Integrator) framework which allows for visibility, provisioning, and management of storage from vCenter.
One very big complaint that we had about the original VNXe was the lack of metrics and reporting. It made our job harder when it came to analyzing performance problems or looking to expand an array. The VNXe 3200 has significantly enhanced reporting, including:

- Allowing users to enable/disable metrics collection
As noted earlier, the storage processors are now contained in the first enclosure, called the Disk Processor Enclosure. Each SP has both 10Gb Ethernet and 8Gb FC ports. Note that the 10Gb ports are 10GBase-T, so they will obviously require a 10GBase-T switch instead of one that uses SFPs and/or TwinAx cables. My guess is that EMC expects smaller data centers to move to this cabling instead of SFP-based switches. I'm not sure I agree, but time will tell, and it wouldn't surprise me to see an SFP option offered later.
When the first VNXe was released, we all assumed we were getting a "baby VNX", but that wasn't exactly the case. This second generation fixes that. It's an impressive entry-level or small-site array that includes almost every feature the bigger VNX models have. That's a good thing, especially for those with a VNX in the primary location(s) and a need for smaller sites; they won't have to make any large compromises here.

The real question is how well the new VNXe will compete against the surge of cheaper hybrids out there. To me, that depends greatly on your requirements. The hybrids provide very good performance for many use cases but don't always offer the flexibility many customers need. To get NAS services, for example, you often have to run a VM and let it handle those responsibilities. They also lack any real tuning: if your workload fits them well, they work great, but if yours doesn't, there isn't much you can do. This is where the VNXe is different. Yes, from a distance it's still very much a "traditional" array, but it's one that offers very flexible and configurable options, along with many features that will provide great performance for almost any workload. Options are a good thing, especially in storage, where use cases and requirements can change quickly and often.