One great new feature in vSphere 5.5 is the built-in flash read caching system. It lets you use nearly any SSD as a storage read cache, which can have a big positive impact on many workloads because data is cached right on the host in very fast flash memory. This gives you great throughput and very low latency. While using Flash Read Cache you can continue to use your normal VMware features such as vMotion, DRS, and HA as before without any change.
Each host can have up to 8 SSDs, 4TB each, for a maximum of 32TB right now. When you assign SSDs to the cache pool you can still use some of their capacity for host memory (swap) caching, just as we did in vSphere 5.1, so it doesn't have to be either/or.
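If you want to see what's in the flash pool from the command line, ESXi 5.5's esxcli has a `storage vflash` namespace. A quick sketch of the sort of checks you might run on a host (output will obviously depend on your hardware):

```shell
# List SSDs on the host and whether they are eligible for / used by vFlash
esxcli storage vflash device list

# List the vFlash modules loaded on the host (vfc is the read-cache module)
esxcli storage vflash module list

# List the per-VMDK caches currently carved out of the pool
esxcli storage vflash cache list
```

These are read-only inventory commands, so they're safe to run on a production host while you're exploring the feature.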
The one downside to VMware's implementation is that you must manage it at a per-VM level. It does not simply start caching reads for all VMs; you must go through and define how much space each VM gets in the cache pool. Note that you also must upgrade the VM hardware to version 10. When you specify the size of the cache for a VM you also specify a block size, and the block size dictates the maximum amount of cache that VM can use.
- 4K Block – Up to 4GB
- 8K Block – Up to 8GB
- 16K Block – Up to 16GB
- 32K Block – Up to 32GB
- 1MB Block – Up to 1TB
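The pattern in the list above is that every block size supports roughly 2^20 (about a million) cache blocks, so the ceiling scales linearly with the block size you pick. A little arithmetic sketch (my own illustration, not a VMware tool) makes the relationship explicit:

```python
def max_cache_bytes(block_size_bytes: int) -> int:
    """Illustrative: each vFRC block size supports ~2**20 blocks,
    so the maximum cache size is block_size * 2**20."""
    return block_size_bytes * 2**20

# 4K blocks -> 4096 * 2**20 bytes = 4GB, matching the list above
print(max_cache_bytes(4 * 1024) // 2**30, "GB")    # 4K block
print(max_cache_bytes(32 * 1024) // 2**30, "GB")   # 32K block
print(max_cache_bytes(2**20) // 2**40, "TB")       # 1MB block
```

The practical takeaway: if you give a VM a large cache reservation with a small block size, you'll hit the ceiling, so size the block to the cache you intend to allocate (and, ideally, to the I/O size of the workload).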
Other than that it's very easy to enable and manage. Below is a video walking you through the process.