Swap to Host Cache (wrongly a.k.a. Swap to SSD)


Hi All

vSphere 5.0 and later came with many enhancements for the virtual infrastructure, and Swap to Host Cache is one of them. This feature lets you use an SSD datastore -some or all of it- as a write-back cache that the ESXi host swaps memory pages to when it reaches the hard memory state of contention. Many vExperts have written about it and its technical how-to configuration, like Duncan Epping in his complete-guide blog post here.
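Duncan's post covers the step-by-step configuration through the vSphere Client; just to make this post a bit more self-contained, below is a rough pyVmomi sketch of the same setting through the vSphere API's HostCacheConfigurationManager. The host address, credentials, inventory path, and the 40 GB figure are all placeholders, and I wrote it from the API reference rather than from a tested script, so treat it as a starting point only.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholders: point these at your own host (lab use only; cert checks off).
ctx = ssl._create_unverified_context()
si = SmartConnect(host="HOST_IP", user="root", pwd="PASSWORD", sslContext=ctx)
try:
    # Assumes a standalone ESXi inventory: first datacenter, first host.
    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

    # Pick the first SSD-backed VMFS datastore the host can see.
    ssd_ds = next(ds for ds in host.datastore
                  if isinstance(ds.info, vim.host.VmfsDatastoreInfo)
                  and ds.info.vmfs.ssd)

    # swapSize is given in MB; here 40 GB of the SSD becomes host cache.
    spec = vim.host.CacheConfigurationSpec(datastore=ssd_ds, swapSize=40 * 1024)
    host.configManager.cacheConfigurationManager.ConfigureHostCache_Task(spec)
finally:
    Disconnect(si)
```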
Now you're asking: "So why are you writing this blog post? Do you just wanna copy-paste?!" The answer is simply: NO!!

I'm writing this post to answer a question that may come to your mind while reading about this awesome feature: "What is the difference between using an SSD datastore as Host Cache and simply configuring the host to put the VMs' .vswp files on that datastore?"

While reading many sources about the Host Cache feature, I didn't find anyone who clearly answered this question, but the following screenshot gave me the first ray of light. It's a screenshot -from Duncan's post- of a comment exchange between him and a visitor who asks nearly the same question.

[Screenshot from Duncan's post: comment exchange with Matt van Mater]
Matt van Mater -the visitor- compared the two approaches: Host Cache versus dedicating a certain SSD datastore to the VMs' .vswp files. Duncan's answer clearly stated that the space usage would be much lower with Host Cache. I began to research write-back cache technologies (it was my first time dealing deeply with caching) and found this simple diagram on Wikipedia:

[Diagram from Wikipedia: a write-back cache with write allocation]
This simple diagram shows how a write-back cache works and why it uses so little space while responding so fast. All of this led me to the following answers to the question above, and the key lies in the underlined term, Write-back Cache:
1-) Host Cache is a write-back cache, which means it speeds up both read and write operations, since it reads from and writes mainly to the SSD drive. After a warm-up period, that improves both reading from swap and rewriting changed pages. Only some blocks of the swap files are written back to the .vswp files residing in the VMs' folders (see the toy model after this list).
2-) Host Cache is shared between VMs; it doesn't create a specific file for each VM like a normal .vswp file. It only creates a bunch of files on the Host Cache that the ESXi host swaps to. Any read/write operation from any VM on the configured host can therefore benefit from the probability that its memory page is shared with another VM (the same concept as Transparent Page Sharing). This greatly reduces trips to the .vswp file location and improves performance whenever the block under operation is already in the cache, which is also why Host Cache needs some warm-up period.
3-) The SSD datastore size needed to place the swap files of N VMs = N * the size of a single swap file (assuming equal .vswp sizes). With Host Cache, thanks to that sharing of memory pages, this size is greatly reduced (again, the same concept as Transparent Page Sharing; see the back-of-envelope numbers after this list).
4-) When a network/FC-based SSD datastore holds the swap files, the network latency -even on an FC SAN- is much higher than local SSD access latency. Host Cache, which for the same reason should only be configured on local SSD disks, therefore always gives higher performance.
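For a concrete picture of points 1 and 2, here is a small Python toy model; it's my own illustration, not ESXi code, and the content-hash sharing in it is an assumption modeled on the post's TPS analogy. Reads and writes are served from the fast store, duplicate page contents are kept once, and dirty entries are flushed back to the per-VM .vswp files lazily.

```python
import hashlib

class SharedWriteBackCache:
    """Toy write-back cache: pages are served from fast storage, and only
    dirty entries are flushed to the backing .vswp files later. Identical
    page contents are stored once (the Transparent Page Sharing analogy)."""
    def __init__(self):
        self.store = {}     # content hash -> page data (stands in for the SSD)
        self.dirty = set()  # hashes not yet written back to .vswp

    def write(self, page: bytes) -> str:
        key = hashlib.sha1(page).hexdigest()
        if key not in self.store:       # a duplicate page costs no extra space
            self.store[key] = page
            self.dirty.add(key)
        return key

    def read(self, key: str) -> bytes:
        return self.store[key]          # cache hit: no trip to the .vswp file

    def flush(self) -> int:
        """Lazily write dirty pages back to the per-VM .vswp location."""
        written, self.dirty = len(self.dirty), set()
        return written

    def bytes_used(self) -> int:
        return sum(len(p) for p in self.store.values())

# Two VMs swapping out an identical 4 KB page: the shared cache stores it
# once, while dedicated per-VM .vswp files would store it twice.
cache = SharedWriteBackCache()
page = b"\x00" * 4096
for _vm in range(2):
    cache.write(page)
print(cache.bytes_used())  # 4096 bytes (one copy), vs. 8192 with per-VM files
```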
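And for points 3 and 4, a quick back-of-envelope calculation; every figure below is an assumed, order-of-magnitude number for illustration, not a measurement:

```python
# Point 3: dedicated .vswp placement needs the full N * swapfile size.
n_vms = 50
vswp_size_gb = 4                          # assumed per-VM swap file size
print(n_vms * vswp_size_gb)               # 200 GB of SSD just for .vswp files

host_cache_gb = 40                        # assumed shared pool for hot blocks only
print(host_cache_gb)                      # far smaller, per the sharing argument

# Point 4: local SSD access vs. a network/FC round trip (assumed figures).
local_ssd_us = 100                        # ~0.1 ms local SSD access
san_round_trip_us = 1000                  # ~1 ms or more over the fabric
print(san_round_trip_us / local_ssd_us)   # the network path is ~10x slower
```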

I hope this clears up the small mystery about the difference between Swap to Host Cache and setting an SSD datastore as the .vswp files' location. I'm waiting for your feedback and comments.

Special Thanks to: Duncan Epping and Matt van Mater

 

