It is important to make the distinction between memory overcommitment and memory actually allocated.

Memory overcommitment is awesome! It allows us to run hundreds of VMs assigned a total amount of RAM larger than what we actually have on the host. It doesn’t mean, however, that the entire allocation is actually used right away.
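As a back-of-the-envelope illustration (all numbers here are made up, not from any real host), overcommitment simply means the sum of configured vRAM exceeds the host’s physical RAM:

```python
# Hypothetical example: overcommitment means the sum of configured vRAM
# can exceed the host's physical RAM.
host_physical_gb = 256                      # physical RAM installed in the host
vm_configured_gb = [8] * 40 + [16] * 10     # 40 VMs with 8 GB + 10 VMs with 16 GB

total_vram_gb = sum(vm_configured_gb)       # 40*8 + 10*16 = 480 GB
overcommit_ratio = total_vram_gb / host_physical_gb

print(total_vram_gb)        # 480
print(overcommit_ratio)     # 1.875 -> memory is overcommitted whenever ratio > 1
```

A ratio above 1 is perfectly fine as long as the VMs don’t all consume their full allocation at the same time.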

At some stage, though, as your environment grows, the ESXi host will eventually run out of available physical memory. When that happens, not everything is lost!

ESXi makes use of a few very clever techniques for reclaiming unused memory from VMs.


TPS allows the ESXi host to share memory pages between different VMs. For instance, if you are running two or more Windows VMs, it is very likely that some memory pages can be shared between them. If a VM tries to write to/change the content of a shared memory page, ESXi makes a private copy of that page and allows the change to happen on the copy – VMware calls this technique CoW (Copy-On-Write).
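The sharing-plus-CoW idea can be sketched as a toy model (a deliberate simplification: real TPS works on 4 KB machine pages and uses hashing followed by a full byte-for-byte comparison; the class and method names here are invented for illustration):

```python
# Toy model of Transparent Page Sharing with Copy-On-Write (CoW).
# Simplification for illustration only - real TPS operates on 4 KB machine
# pages and confirms matches with a full comparison, not just a hash.

class Host:
    def __init__(self):
        self.pages = {}          # content hash -> shared page content

    def map_page(self, content):
        """Back identical guest pages with a single shared copy."""
        key = hash(content)
        self.pages.setdefault(key, content)
        return key

    def write_page(self, key, new_content):
        """CoW: writing to a shared page produces a private copy;
        the original shared page is left untouched."""
        new_key = hash(new_content)
        self.pages[new_key] = new_content
        return new_key

host = Host()
# Two Windows VMs happen to hold an identical page (e.g. a kernel code page):
vm1_page = host.map_page("identical OS code page")
vm2_page = host.map_page("identical OS code page")
assert vm1_page == vm2_page            # one physical copy backs both VMs

# VM2 modifies "its" page -> it gets a private copy, VM1 is unaffected:
vm2_page = host.write_page(vm2_page, "patched page")
assert vm1_page != vm2_page
assert host.pages[vm1_page] == "identical OS code page"
```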

This process can be tweaked using the following Advanced Settings properties:

      • Mem.ShareScanTime – time (in minutes) within which an entire VM’s memory is scanned
      • Mem.ShareScanGHz – maximum amount of memory scanned per second, per GHz of host CPU
      • Mem.ShareRateMax – maximum number of pages scanned per second, per VM


When the ESXi host is under memory contention, it has to free memory. Ballooning overcomes the problem that the ESXi host cannot tell which guest memory is actually free – as previously mentioned, the ESXi host has no access to the memory free-list within the guest OS.

Ballooning is provided through the balloon driver, which is installed with VMware Tools on the guest OS.

The balloon driver works as a proxy between the guest OS and the ESXi host. It uses a clever technique that forces the guest OS to expose its free memory pages to the ESXi host.

How does it do that?


Source: VMware Website :: starred blocks = free memory pages | “pinned” blocks = unswappable memory pages
  1. The Ballooning driver installed within the guest OS will periodically poll the ESXi host for a target balloon size
  2. Assuming that the ESXi host needs to reclaim two memory pages for another Virtual Machine, it will set the target balloon size to two memory pages
  3. The balloon driver makes a special request to the guest OS to allocate *unswappable* memory (once allocated, these memory pages will never be swapped to disk by the guest OS!) matching the specified target size
  4. The guest OS makes the allocation from its available free memory and hands it to the balloon driver
  5. At this stage the balloon driver “knows” of two memory pages which are not being used by the guest OS! It therefore provides their memory addresses to the ESXi host
  6. The ESXi host will free-up (release) the matching memory pages from physical memory
  7. At last, the balloon driver releases the allocated memory back to the Guest OS free-list

At step (4) above, should there be no free memory, the guest OS will use built-in techniques (such as guest OS swapping) in order to free up guest physical memory.
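The steps above can be sketched as a toy hand-shake (all class names, page addresses and sizes here are invented for illustration; the real protocol runs between the vmmemctl driver and the hypervisor):

```python
# Toy walk-through of the ballooning hand-shake described in steps 1-7.
# All names and numbers are made up for illustration.

class GuestOS:
    def __init__(self, free_pages):
        self.free_list = list(free_pages)   # guest-physical pages on the free-list

    def allocate_unswappable(self, count):
        # Step 4: the guest hands free pages to the balloon driver;
        # pinned pages will never be swapped to disk by the guest.
        pinned, self.free_list = self.free_list[:count], self.free_list[count:]
        return pinned

    def release(self, pages):
        # Step 7: the balloon deflates, pages go back to the guest free-list.
        self.free_list.extend(pages)

class BalloonDriver:
    def __init__(self, guest):
        self.guest = guest
        self.pinned = []

    def inflate(self, target_size):
        # Steps 1-3: the driver polls the host for a target balloon size,
        # then pins that many guest pages.
        self.pinned = self.guest.allocate_unswappable(target_size)
        # Step 5: report the now-unused guest-physical addresses to the host.
        return self.pinned

guest = GuestOS(free_pages=["gpa_1", "gpa_2", "gpa_3", "gpa_4"])
driver = BalloonDriver(guest)

reclaimable = driver.inflate(target_size=2)   # the host asked for 2 pages
# Step 6: the host can now release the machine pages backing `reclaimable`.
print(reclaimable)                 # ['gpa_1', 'gpa_2']
print(guest.free_list)             # ['gpa_3', 'gpa_4']

# Later, when host memory pressure eases, the balloon deflates (step 7):
guest.release(driver.pinned)
print(guest.free_list)             # ['gpa_3', 'gpa_4', 'gpa_1', 'gpa_2']
```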


Host Memory Swapping is very similar to Guest OS Swapping. When the host is running out of memory, it will randomly select memory pages and save them to disk. When swapping happens within the Guest OS, it causes nothing worse than performance degradation. However, we need to keep in mind that when the ESXi host starts swapping, it doesn’t know what the Guest OS is doing with those pages – they could be protected pages, they could be reserved memory pages, etc. By swapping them to disk, the chances of causing memory corruption and OS instability increase.

With regard to host-level swapping, the following is true:

  • When swapping, ESXi will first attempt to compress memory pages instead
  • Only pages with a compression ratio of at least 50% will be considered for compression; otherwise, the memory pages get swapped
  • Compressed memory pages are saved in the compression cache, which is part of the guest physical memory. The compression-cache size is dynamic but will never exceed 10% of the configured VM memory; this cap can be changed by editing the advanced property Mem.MemZipMaxPct
  • Host swapping is chosen as a last resort since it can severely impact VM performance due to:
      • Page selection – the ESXi host is not aware of privileged/reserved memory areas within the Guest OS; such pages could be swapped, causing instability within the guest OS
      • Double swapping – if, coincidentally, the guest OS is also under memory pressure, the same page could be swapped twice – once by the Guest OS and once by the ESXi host
      • Latency – swapping introduces high latency because it requires disk I/O
  • To minimise the negative effects, the host makes use of three methods:
      • Random selection of the memory pages to be swapped out to disk
      • Memory page compression, to minimise the number of pages that need to be swapped out
      • Swapping to SSD disks when available – a configurable option, not enabled by default
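The compress-before-swap decision can be sketched as follows (a toy model of the rules stated above – the 50% ratio and the 10% Mem.MemZipMaxPct cap; the function name and sizes are invented for illustration):

```python
# Toy decision logic for the compress-before-swap behaviour described above:
# a page is compressed only if it achieves at least a 50% compression ratio
# and fits in the compression cache, which is capped at Mem.MemZipMaxPct
# (10% of configured VM memory by default); otherwise it is swapped.

def reclaim_page(page_size_kb, compressed_size_kb, cache_used_kb, cache_max_kb):
    """Return 'compress' or 'swap' for a page picked for reclamation."""
    saves_half = compressed_size_kb <= page_size_kb / 2      # >= 50% ratio
    cache_has_room = cache_used_kb + compressed_size_kb <= cache_max_kb
    return "compress" if (saves_half and cache_has_room) else "swap"

vm_memory_kb = 4 * 1024 * 1024                 # 4 GB of configured VM memory
cache_max_kb = vm_memory_kb * 10 // 100        # 10% default cap

print(reclaim_page(4, 2, 0, cache_max_kb))     # compress (exactly 50% saved)
print(reclaim_page(4, 3, 0, cache_max_kb))     # swap (only 25% saved)
```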



Transparent Page Sharing (TPS) is enabled by default. However, Ballooning and Swapping (including compression) will be employed based on the free-memory state:

Free Mem. State | Threshold | Reclamation technique & comments
High | min. 6% | TPS (Transparent Page Sharing) – enabled by default and attempted proactively
Soft | towards 4% | Ballooning – in fact, ballooning starts before the 4% threshold is reached, in an attempt to keep the host at the Soft level
Hard | <= 2% | Swapping & Ballooning
Low | < 1% | Swapping; the host also stops VM execution when consumed memory is higher than the target memory allocation
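The thresholds above can be expressed as a small lookup. Treat the values as approximate (exact behaviour depends on the ESXi version), and note that the 2–4% band is not spelled out in the table – I fold it into Hard here as an assumption:

```python
# Toy mapping from free host memory (%) to the free-memory state in the
# table above. Thresholds are the table's values; the 2-4% band is not
# stated explicitly in the table and is treated as Hard here (assumption).

def free_memory_state(free_pct):
    """Map free host memory (%) to its free-memory state."""
    if free_pct >= 6:
        return "High"    # TPS only, attempted proactively
    if free_pct >= 4:
        return "Soft"    # ballooning kicks in
    if free_pct >= 1:
        return "Hard"    # swapping & ballooning
    return "Low"         # swapping; VM execution may be halted

print(free_memory_state(7))     # High
print(free_memory_state(4.5))   # Soft
print(free_memory_state(1.5))   # Hard
print(free_memory_state(0.5))   # Low
```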

Below you can see a screenshot of the ESXTOP output from my ESXi host:


When VM memory limits are specified, Ballooning and Swapping will still be employed whenever memory usage is higher than the specified limit – see Part III

When should I worry?

Awesome: no ballooning / no host swapping / typical guest OS swapping. You should see a High free-memory state

Good: some ballooning / no host swapping / ballooning is not causing excessive guest OS swapping, if any at all – High free-memory state

Hmmm! Ballooning is high / a lot of memory compression and some host-level swapping, if any / guest memory under high pressure; there will also be a lot of guest OS swapping – the free-memory state should be Soft

Oooops! This is when your free-memory state is Hard/Low. This is really bad – it means ballooning is not enough, and neither is memory compression. Swapping kicks in, so the host will start to “struggle”. The host will eventually freeze VM execution in order to free up memory.

In a future blog post, I will definitely be tackling Troubleshooting – so stay tuned!


Thank you,
