Very Confused about different setups
Posted: Sat Mar 29, 2014 2:04 am
Hello. I have been trying to read through the posts, but some of the information seems conflicting to me.
On one hand, there are posts that say we need to set up MPIO in Windows, but others say there is nothing to set up on the Windows side and that everything should be configured on the ESXi side.
Here is the setup I am trying to build:

Storage
Windows Server 2008 R2 Enterprise SP1
8x Gigabit NIC ports (dedicated for storage connections)
2x Gigabit NIC ports (for management)
8x 1TB SATA Drives for storage
16GB RAM
ESXi host 1
ESXi 5.5
32GB RAM
2x Gigabit NIC ports (dedicated for storage traffic)
2x Gigabit NIC ports (dedicated for vm traffic)
1x Gigabit NIC port (dedicated for management)
1x Gigabit NIC port (dedicated for vMotion)
ESXi host 2
ESXi 5.5
32GB RAM
2x Gigabit NIC ports (dedicated for storage traffic)
2x Gigabit NIC ports (dedicated for vm traffic)
1x Gigabit NIC port (dedicated for management)
1x Gigabit NIC port (dedicated for vMotion)
Switches
2x Cisco 3750X switches connected via Cisco Stack Cable
Here are the questions I had:
1. Is the above setup correct, or should I be connecting things some other way? Am I trying to use too many ports for storage?
2. Anton always says in the forums that we should not use NIC teaming, but then how can I get the maximum available bandwidth?
3. Should I be setting up MPIO and Round Robin (along with the iSCSI target setup) in Windows only, or do I also have to configure it on the ESXi side (with Route Based on IP Hash)?
4. Do I need to create EtherChannels on the switches for the iSCSI connections? (Or do I need it only for VM Traffic and vMotion?)
5. Should I be enabling Jumbo Frames for all ports?
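For context on question 3, here is roughly what I was planning to run on each ESXi host to set Round Robin on the iSCSI LUN. This is just a sketch of what I understood from the posts, and naa.xxxxxxxx is a placeholder for the actual device identifier, so please correct me if this is the wrong approach:

```shell
# List the storage devices so I can find the iSCSI LUN's naa identifier
esxcli storage nmp device list

# Set the Round Robin path selection policy on that LUN
# (naa.xxxxxxxx is a placeholder for the real device ID)
esxcli storage nmp device set --device naa.xxxxxxxx --psp VMW_PSP_RR
```

I am not sure whether this per-device step is needed on both hosts, or whether anything equivalent has to happen on the Windows target side at all.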
Thanks