I have recently deployed a nested vSphere 6 cluster inside my vSphere 5 environment. As I was going through the new features, I came across the possibility of creating additional TCP/IP stacks on my hosts.
Whilst this feature has been around since vSphere 5.1, I had never really used it, so I decided to give it a go. To start off, why not configure a separate TCP/IP stack for my iSCSI storage? Furthermore, I also wanted to set up MPIO (multipathing) to my iSCSI target.
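For reference, a custom TCP/IP stack can be created from the ESXi Shell with esxcli. A rough sketch of the commands is below; the stack name "iscsi", the portgroup name, the vmk number and the IP address are all examples from my lab layout, so adjust them to your own setup (for a distributed vSwitch, the interface is attached via the dvs/dvport options instead of `-p`):

```shell
# Create a custom TCP/IP stack named "iscsi" (the name is my choice, not mandatory)
esxcli network ip netstack add -N "iscsi"

# Create a VMkernel adapter on that stack, attached to an iSCSI portgroup
# (portgroup "iSCSI-PG1" and vmk2 are examples from my setup)
esxcli network ip interface add -i vmk2 -p "iSCSI-PG1" -N "iscsi"

# Give it a static address on the iSCSI network
esxcli network ip interface ipv4 set -i vmk2 -I 10.10.10.11 -N 255.255.255.0 -t static

# Verify the new stack exists
esxcli network ip netstack list
```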
VMware Configuration iSCSI MPIO Requirements:
- Storage presents itself as a target on one IP only
- in my case, 10.10.10.51
- Each iSCSI VMkernel interface, on each host, must be set up on the same vSwitch (either distributed or standard); obviously, for MPIO you need more than one uplink
- I’m using a distributed vSwitch with two uplinks
- There should be a one-to-one connection between the portgroups and each of the uplinks
- in my setup, each of the two uplinks is linked to one, and only one, portgroup respectively
- Access should be made over the same broadcast domain
- here I’m using the actual iSCSI network 10.10.10.0 /24
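I configured the one-to-one portgroup-to-uplink mapping through the Web Client on my distributed vSwitch, but the equivalent override on a standard vSwitch can be sketched with esxcli (portgroup and vmnic names below are examples, not from my actual hosts):

```shell
# One-to-one mapping on a standard vSwitch:
# iSCSI-PG1 uses only vmnic1, iSCSI-PG2 uses only vmnic2.
# Uplinks not listed as active or standby are treated as unused.
esxcli network vswitch standard portgroup policy failover set -p "iSCSI-PG1" -a vmnic1
esxcli network vswitch standard portgroup policy failover set -p "iSCSI-PG2" -a vmnic2

# Verify the override on each portgroup
esxcli network vswitch standard portgroup policy failover get -p "iSCSI-PG1"
```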
More details can be found here (VMware KB article).
For details on how to configure FreeNAS, see a previous blog of mine here.
OK … so here is the bad news: it doesn’t work! I battled this for almost a week and, as far as I can tell from my search for inspiration, I couldn’t find anyone who got this working. Basically, when you get to the stage of doing the iSCSI port binding, the VMkernel adapters do not show up in the list – apparently, they are not “compliant”.
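For anyone hitting the same wall, these are the sort of checks that can help narrow it down (the stack name "iscsi", vmk2, vmhba33 and the target IP are examples from my layout; your adapter and interface numbers will differ):

```shell
# List the software iSCSI adapters to find the vmhba number on this host
esxcli iscsi adapter list

# Test reachability of the target over the custom stack
# (-S selects the netstack, -I pins the outgoing VMkernel interface)
vmkping -S iscsi -I vmk2 10.10.10.51

# Attempt the port binding manually; in my nested lab this is the point
# where the adapter is rejected as not "compliant"
esxcli iscsi networkportal add -A vmhba33 -n vmk2
```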
Trust me – I tried this in many setup variations; I also rebuilt it from scratch a few times. Lastly, I updated my vSphere to the latest Update 1 patch.
Who knows … maybe it works in a non-nested environment.
The only thing which remained to be tested was the MPIO. Since I needed successful iSCSI connectivity, I had to go back to the “old ways” and forget about the additional TCP/IP stack … at least for now.
All I had to change was to delete the VMkernel adapters and recreate them on the default TCP/IP stack.
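A sketch of that rework from the ESXi Shell, assuming the same example names as before (vmk2/vmk3, the two example portgroups, and vmhba33 for the software iSCSI adapter – all placeholders for your own values):

```shell
# Remove the adapters that were created on the custom stack
esxcli network ip interface remove -i vmk2
esxcli network ip interface remove -i vmk3

# Recreate them on the default TCP/IP stack (simply omit the -N option)
esxcli network ip interface add -i vmk2 -p "iSCSI-PG1"
esxcli network ip interface ipv4 set -i vmk2 -I 10.10.10.11 -N 255.255.255.0 -t static
esxcli network ip interface add -i vmk3 -p "iSCSI-PG2"
esxcli network ip interface ipv4 set -i vmk3 -I 10.10.10.12 -N 255.255.255.0 -t static

# Bind both interfaces to the software iSCSI adapter – this now succeeds
esxcli iscsi networkportal add -A vmhba33 -n vmk2
esxcli iscsi networkportal add -A vmhba33 -n vmk3
```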
OK … so MPIO works; I’ve also changed the path selection policy from the default “Most Recently Used” (MRU) to Round-Robin. Below is a screenshot from one of the virtual (nested) ESXi hosts:
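I made the change in the Web Client, but the same policy switch can be sketched with esxcli – either per device, or as the default for the SATP handling the array (the device identifier is a placeholder; VMW_SATP_DEFAULT_AA is what a generic active/active iSCSI target typically lands on, so check yours first):

```shell
# Set Round-Robin on a specific device (substitute your own naa identifier)
esxcli storage nmp device set -d <naa.id> -P VMW_PSP_RR

# Or make Round-Robin the default PSP for the relevant SATP,
# so future devices claimed by it get Round-Robin automatically
esxcli storage nmp satp set -s VMW_SATP_DEFAULT_AA -P VMW_PSP_RR

# Verify which SATP/PSP each device is using
esxcli storage nmp device list
```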
The bottom line: additional TCP/IP stacks can certainly be configured, but one must be careful and fully test before bringing them into production. In fact, I have not yet seen this feature used in production environments, small or large.
Do you want MPIO with iSCSI? No problem – even with nested storage and nested ESXi hosts.