I have recently deployed a nested vSphere v6 cluster inside my v5 environment. As I was going through the new features, I came across the possibility of having additional TCP/IP Stacks inside my hosts.

A quick intro on TCP/IP stacks: your host already has a default TCP/IP stack – this is where your default routing table lives, and it is where you configure your initial TCP/IP settings (DNS, IP, gateway, etc.). An additional TCP/IP stack, however, allows you to assign VMkernel interfaces to different stacks, essentially creating virtual routers inside your host.
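For anyone curious what that looks like in practice, a custom stack is created and consumed from ESXCLI. The snippet below is only a minimal sketch – the stack name “iSCSI-Stack”, the portgroup “iSCSI-A”, the vmk2 interface and the 10.10.10.11 address are placeholders of mine, and on a distributed switch the --dvs-name/--dvport-id options would replace -p:

  # create a custom TCP/IP stack
  esxcli network ip netstack add -N "iSCSI-Stack"

  # create a VMkernel interface on that stack, attached to a standard-switch portgroup
  esxcli network ip interface add -i vmk2 -p "iSCSI-A" -N "iSCSI-Stack"

  # give it a static address on the iSCSI network
  esxcli network ip interface ipv4 set -i vmk2 -t static -I 10.10.10.11 -N 255.255.255.0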

Whilst this feature has been around since vSphere 5.1, I had never really used it, so I decided to give it a go. To start off, why not configure a separate TCP/IP stack for my iSCSI storage? Furthermore, I also wanted to set up MPIO (multipathing) to my iSCSI target.

[Diagram: iSCSI network layout]

VMware iSCSI MPIO configuration requirements:

  • Storage presents itself as a target on one IP only
    • in my case, 10.10.10.51
  • Each iSCSI VMkernel interface, on each host, must be set up on the same vSwitch (distributed or standard); obviously, for MPIO you need more than one uplink (a CLI sketch of the resulting port binding follows this list)
    • I’m using a distributed vSwitch with two uplinks
  • There should be a one-to-one mapping between the portgroups and the uplinks
    • in my setup, each of the two uplinks is tied to one, and only one, portgroup
  • Access should be over the same broadcast domain
    • here I’m using the actual iSCSI network, 10.10.10.0/24
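For reference, once the VMkernel interfaces exist, the port binding itself can also be driven from ESXCLI. This is only a sketch under assumed names – vmhba33 for the software iSCSI adapter and vmk1/vmk2 for the two iSCSI VMkernel interfaces are placeholders, not taken from my build:

  # bind each iSCSI VMkernel interface to the software iSCSI adapter
  esxcli iscsi networkportal add -A vmhba33 -n vmk1
  esxcli iscsi networkportal add -A vmhba33 -n vmk2

  # point the adapter at the single target IP, then rescan
  esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 10.10.10.51:3260
  esxcli storage core adapter rescan -A vmhba33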

More details can be found here (VMware KB article).

For details on how to configure FreeNAS, see a previous blog of mine here.


OK … so here is the bad news: it doesn’t work! I battled this for almost a week and, as far as I can tell from my search for inspiration, I couldn’t find anyone who has got this working. Basically, when you get to the stage of doing the iSCSI port binding, the VMkernel adapters do not show up in the list – apparently, they are not “compliant”.

Trust me – I tried this in many setup variations, and I also rebuilt it from scratch a few times. Lastly, I updated my vSphere to the latest Update 1 patch.

Who knows … maybe it works in a non-nested environment.

Frustrating!


The only thing that remained to be tested was MPIO. Since I needed working iSCSI connectivity, I had to go back to the “old ways” and forget about the additional TCP/IP stack … at least for now.

The only change I had to make was to delete the VMkernel adapters and recreate them on the default TCP/IP stack.

I should add here that, for some reason alien to me, I could not delete the VMkernel adapters using the web client (I haven’t tried the Windows client) – I kept getting a warning stating that the host would lose connectivity and that I should first ensure the host had at least one other backup connection – which it had … two more!

The CLI came to the rescue. For the record, during this experiment I used different variations of the following two ESXCLI commands (a sketch of typical invocations follows the list):

  • esxcli network ip netstack {add | remove}
  • esxcli network ip interface {add | remove}
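To give a flavour of the variations I mean, here is a rough sketch of the teardown and rebuild on the default stack – vmk1, the “iSCSI-A” portgroup and the “iSCSI-Stack” name are placeholders, and on a distributed switch --dvs-name/--dvport-id would replace -p:

  # remove the VMkernel interface that sits on the custom stack, then the stack itself
  esxcli network ip interface remove -i vmk1
  esxcli network ip netstack remove -N "iSCSI-Stack"

  # recreate the interface on the default TCP/IP stack (no -N option this time)
  esxcli network ip interface add -i vmk1 -p "iSCSI-A"
  esxcli network ip interface ipv4 set -i vmk1 -t static -I 10.10.10.11 -N 255.255.255.0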

OK … so MPIO works; I also changed the path selection policy from the default “Most Recently Used” (MRU) to Round Robin. Below is a screenshot from one of the virtual (nested) ESXi hosts:

[Screenshot: MPIO paths to the iSCSI target]
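Incidentally, the same policy change can be made from the CLI; a minimal sketch, where the naa identifier is just a placeholder for the actual iSCSI device ID:

  # list devices and their current path selection policy
  esxcli storage nmp device list

  # switch the iSCSI device to Round Robin
  esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxx -P VMW_PSP_RR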

The bottom line: additional TCP/IP stacks can certainly be configured, but one must be careful and test them fully before bringing them into production. In fact, I have not yet seen this feature used in production environments, small or large.

Do you want MPIO with iSCSI – no problem, even with nested storage and nested ESXi hosts.


Thank you,
Rafael A Couto Cabral


