When I first built my ESXi server, I thought I’d invest in a good server so I could really get my hands “dirty” with VMware. When it came to acquiring disks, I quickly realised that SSD drives were very expensive … so I ended up with SAS drives.
It didn’t take long to find out that it’s possible to optimise memory resources by using SSD drives. I therefore decided to invest in two 128 GB SSD drives but never really got the chance to put them to the test; I just connected them and used them as local storage.
Recently, prices have come down quite a bit, so I thought it was a good time to upgrade my ESXi server’s local storage. A recent eBuyer deal came in handy, and I managed to get myself two additional 1 TB SSD drives.
At the moment, my local storage runs on 4 x SanDisk SSD drives and 2 x SAS drives (can’t remember the vendor name):
Before doing anything fancy with the SSD disks, I wanted to get a better picture of how an SSD performs versus a SAS disk.
How Did I Test?
- VM specs:
  - 1 x vCPU
  - 1 GB RAM
  - 2 x vDisks – 25 GB for the system partition; 15 GB as the testing disk
  - OS – Windows 2003
- Tool used: IOmeter
  - 4 x workers
  - all-in-one test
  - Duration: 20 minutes
- Ran the same test three times:
  - VM on the 128 GB SSD
  - VM on the 1 TB SSD
  - VM on the 1 TB SAS drive
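For anyone without IOmeter to hand, the idea behind the test above can be sketched in a few lines of Python: pre-allocate a test file, let several workers hammer it with random small-block reads for a fixed duration, and report a rough IOPS figure. This is only an illustrative analogue (all names and parameters here are mine, and the OS page cache will absorb many of the reads, so the numbers are not comparable to IOmeter or fio results against raw disks):

```python
import os
import random
import threading
import time


def io_worker(path, file_size, block, duration, results, idx):
    """Issue random block-sized reads against the test file and count completions."""
    ops = 0
    end = time.monotonic() + duration
    with open(path, "rb") as f:
        while time.monotonic() < end:
            f.seek(random.randrange(0, file_size - block))
            f.read(block)
            ops += 1
    results[idx] = ops


def run_benchmark(path="testfile.bin", file_size=16 * 1024 * 1024,
                  block=4096, workers=4, duration=2.0):
    # Pre-allocate the test file (IOmeter similarly creates a test file first).
    with open(path, "wb") as f:
        f.write(os.urandom(file_size))
    results = [0] * workers
    threads = [threading.Thread(target=io_worker,
                                args=(path, file_size, block, duration, results, i))
               for i in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    os.remove(path)
    return sum(results) / duration  # rough aggregate IOPS


if __name__ == "__main__":
    print(f"~{run_benchmark():.0f} IOPS (random 4 KiB reads, 4 workers)")
```

For real measurements you’d want a proper tool (IOmeter, or fio with direct I/O) so the cache doesn’t flatter the results, but the structure is the same: fixed duration, fixed block size, multiple parallel workers.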
I think the results below need no further comments:
I don’t know what your reaction is … but I was impressed. Clearly, SSDs are indeed much faster. Let me also say that I’ve had the 128 GB SSDs for over a year now and my ESXi server is always on. These disks currently host Cisco and Juniper virtual routers, Linux and Windows VMs, and a few other open-source virtual appliances … The point is, these disks have been fairly busy, and yet I don’t think they performed too badly in these tests.
As far as I’m concerned, my next NAS will run with SSD disks! 🙂