Constantin (staff) wrote:Currently I'm out of the office, so I can't go to our tech department responsible for the technical papers and ask them to update the papers, and explain why.
Also, in your case I don't see any problem either: all VMkernel ports and VMs can be in one subnet, that's not a problem, but you should simply use the ACL in StarWind for LUN masking - that way you'll hide the datastore targets from the VMs and the VM disks from ESX(i).
Hello Constantin,
We use the ACL in StarWind to allow only certain initiators access to certain targets, but...
The ACL in StarWind does not prevent VMware vmkernels from scanning targets that are on different iSCSI networks.
The ACL in StarWind does prevent VMware vmkernels from accessing targets that are restricted.
The ACL in StarWind does prevent other initiators from accessing targets that are restricted. (A quick way to see this from the ESXi side is sketched below.)
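For reference, this is a rough way to check it on the host, not our exact procedure (ESXi 5.x esxcli syntax; older hosts use different namespaces):

Code:
# iSCSI sessions the host actually logged in to (i.e. the targets the ACL allows)
esxcli iscsi session list

# LUNs that were presented to the host after LUN masking
esxcli storage core device list

# the failed discovery/login attempts against the other subnets only show up in the vmkernel log
grep -i iscsi /var/log/vmkernel.log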
I put in a support ticket with VMware asking why it takes 20-30 minutes for our VMware servers to start, and I included a short overview of our iSCSI setup.
We have 4 vmkernels and 4 subnets for iSCSI. I told them that during startup the logs show the vmkernels connecting to targets in their respective subnets, but also trying to reach targets in the other subnets and failing, and that this is what is causing the host startup delay.
Code:
- vmk1/10.0.0.18 connects to 10.0.0.220 (vmvol00iso, vmvol01, vmvol02, vmvol03, vmvol04, vmvol05, vmvol06, vmvol07) successfully
- vmk1/10.0.0.18 tries to connect to 10.0.1.220 (vmvol00iso, vmvol01, vmvol02, vmvol03, vmvol04, vmvol05, vmvol06, vmvol07) and fails
- vmk1/10.0.0.18 tries to connect to 10.0.2.220 (vmvol00iso, vmvol01, vmvol02, vmvol03, vmvol04, vmvol05, vmvol06, vmvol07) and fails
- vmk1/10.0.0.18 tries to connect to 10.0.3.220 (vmvol00iso, vmvol01, vmvol02, vmvol03, vmvol04, vmvol05, vmvol06, vmvol07) and fails
- vmk2/10.0.1.18 connects to 10.0.1.220 (vmvol00iso, vmvol01, vmvol02, vmvol03, vmvol04, vmvol05, vmvol06, vmvol07) successfully
- vmk2/10.0.1.18 tries to connect to 10.0.0.220 (vmvol00iso, vmvol01, vmvol02, vmvol03, vmvol04, vmvol05, vmvol06, vmvol07) and fails
- vmk2/10.0.1.18 tries to connect to 10.0.2.220 (vmvol00iso, vmvol01, vmvol02, vmvol03, vmvol04, vmvol05, vmvol06, vmvol07) and fails
- vmk2/10.0.1.18 tries to connect to 10.0.3.220 (vmvol00iso, vmvol01, vmvol02, vmvol03, vmvol04, vmvol05, vmvol06, vmvol07) and fails
- vmk3/10.0.2.18 connects to 10.0.2.220 (vmvol00iso, vmvol01, vmvol02, vmvol03, vmvol04, vmvol05, vmvol06, vmvol07) successfully
- vmk3/10.0.2.18 tries to connect to 10.0.0.220 (vmvol00iso, vmvol01, vmvol02, vmvol03, vmvol04, vmvol05, vmvol06, vmvol07) and fails
- vmk3/10.0.2.18 tries to connect to 10.0.1.220 (vmvol00iso, vmvol01, vmvol02, vmvol03, vmvol04, vmvol05, vmvol06, vmvol07) and fails
- vmk3/10.0.2.18 tries to connect to 10.0.3.220 (vmvol00iso, vmvol01, vmvol02, vmvol03, vmvol04, vmvol05, vmvol06, vmvol07) and fails
- vmk4/10.0.3.18 connects to 10.0.3.220 (vmvol00iso, vmvol01, vmvol02, vmvol03, vmvol04, vmvol05, vmvol06, vmvol07) successfully
- vmk4/10.0.3.18 tries to connect to 10.0.0.220 (vmvol00iso, vmvol01, vmvol02, vmvol03, vmvol04, vmvol05, vmvol06, vmvol07) and fails
- vmk4/10.0.3.18 tries to connect to 10.0.1.220 (vmvol00iso, vmvol01, vmvol02, vmvol03, vmvol04, vmvol05, vmvol06, vmvol07) and fails
- vmk4/10.0.3.18 tries to connect to 10.0.2.220 (vmvol00iso, vmvol01, vmvol02, vmvol03, vmvol04, vmvol05, vmvol06, vmvol07) and fails
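If I understand the port binding correctly, that pattern is just a consequence of how the software iSCSI adapter is set up: all four vmkernel ports are bound to the one adapter and all four portals are entered as send-target discovery addresses, so every bound port probes every portal on rescan. A rough sketch of that configuration (ESXi 5.x esxcli syntax; vmhba33 is only a placeholder for our software iSCSI adapter name):

Code:
# bind all four iSCSI vmkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk4

# one send-target discovery address per StarWind portal
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.0.220:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.1.220:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.2.220:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.3.220:3260

# on rescan (and at boot) each bound vmk port then tries each discovery address,
# which is where the failing cross-subnet attempts above come from
esxcli storage core adapter rescan --adapter=vmhba33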
So VMware support came back and told me that this is standard behavior when using the VMware software iSCSI initiator, and they provided the KB below:
http://kb.vmware.com/selfservice/micros ... Id=1024476
So... if we can run MPIO with VMware, and MPIO in the VM guests using the MS iSCSI Initiator, with 1 iSCSI subnet instead of 4, then I can scale back our network.
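To make it concrete, this is roughly what I have in mind on the ESXi side, just a sketch (ESXi 5.x esxcli syntax; vmhba33 and naa.xxxx are placeholders for our adapter and device IDs):

Code:
# two (or more) vmkernel ports on the same iSCSI subnet, both bound to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# a single send-target discovery address on the one remaining subnet
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.0.220:3260

# round robin across the resulting paths for each datastore LUN
esxcli storage nmp device set --device=naa.xxxx --psp=VMW_PSP_RR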
When you have time, let me know if this is feasible. If not, we will have to live with the current limitations: StarWind requiring multiple subnets for MPIO to work, and VMware's software iSCSI initiator scanning multiple networks and timing out.
Thanks,
Mark