Hi
I am testing the StarWind solution with vSphere 5.5.
Downloaded and installed StarWind 8 on Windows 2008 R2.
StarWind server: dual Xeon E5420, 16 GB RAM, LSI 9261 RAID.
vSphere server: dual Xeon X5620, 192 GB RAM.
The iSCSI network is a dual-port Mellanox InfiniBand 20 Gb link with IPoIB; the management network is 1 Gb Ethernet.
Raw network throughput between the hosts over the InfiniBand network is about 12 Gb/s per link (tested with the ntttcp utility; InfiniBand network utilization is about 60%).
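For anyone who wants to cross-check the raw IPoIB throughput with a different tool, a multi-stream iperf3 run (iperf3 is a stand-in here, not what I used; the IP is a placeholder for the server's IPoIB address) would look like:

```shell
# On the StarWind server side (receiver):
iperf3 -s

# On the other host (sender): 4 parallel TCP streams for 30 seconds
# over the IPoIB link; 10.0.0.1 is a placeholder address.
iperf3 -c 10.0.0.1 -P 4 -t 30
```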
Created a RAM-based drive of 10 GB, assigned an iSCSI target, connected the target to vSphere, and created a vSphere datastore. Added the disk to a virtual CentOS 7 machine, created an ext4 filesystem, and mounted it.
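Inside the guest, the dd runs were along these lines (sizes and the file path are illustrative; conv=fdatasync keeps cached-but-unwritten data from inflating the write number):

```shell
# Sequential write: 256 MiB in 1 MiB blocks; conv=fdatasync forces a
# flush before dd exits. TESTFILE is a placeholder -- point it at the
# mounted ext4 volume to exercise the iSCSI path.
TESTFILE=${TESTFILE:-/tmp/dd_testfile}
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fdatasync

# Sequential read of the same file. Note: reads may be served from the
# page cache; drop it first ("echo 3 > /proc/sys/vm/drop_caches" as
# root) for an honest number.
dd if="$TESTFILE" of=/dev/null bs=1M
```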
The maximum read/write speed I got with the dd utility is 550 MB/s for writes and 750 MB/s for reads (InfiniBand network utilization is less than 20%). To get that, I configured MPIO with Round Robin and IOOperation Limit = 1. Any other setup (Fixed path, a larger IOOperation Limit, etc.) performed worse. Local RAM-drive speed on the Windows server is more than 2 GB/s for both writes and reads.
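The Round Robin / IOOperation Limit change on the ESXi side was done roughly like this (the naa. device ID is a placeholder for the StarWind LUN):

```shell
# Set the path-selection policy for the LUN to Round Robin.
esxcli storage nmp device set --device naa.XXXXXXXXXXXXXXXX --psp VMW_PSP_RR

# Switch paths after every single I/O instead of the default 1000.
esxcli storage nmp psp roundrobin deviceconfig set \
  --device naa.XXXXXXXXXXXXXXXX --type iops --iops 1

# Verify the resulting configuration.
esxcli storage nmp device list --device naa.XXXXXXXXXXXXXXXX
```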
Any idea why the iSCSI connection provides poor performance even for a RAM-based target?
After this I tried to test iSCSI performance with a direct connection from a CentOS iSCSI initiator (on a virtual machine) to the StarWind target. I disconnected the target from vSphere, set up the iSCSI initiator on CentOS, created a partition, made an ext4 filesystem, and mounted it. As a result, I found that any attempt to access the mounted partition (for example, an ls /mnt command) crashes the StarWind process on the Windows server. So I was unable to perform any speed test at all. Looks very, very odd...
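For reference, the open-iscsi side of that direct connection was set up roughly as follows (the portal IP and target IQN are placeholders):

```shell
# Discover targets exported by the StarWind server; the portal IP is a
# placeholder for the server's IPoIB address.
iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260

# Log in to the discovered target (IQN is a placeholder).
iscsiadm -m node -T iqn.2008-08.com.starwindsoftware:target1 \
  -p 10.0.0.1:3260 --login

# The new block device appears in lsblk / dmesg; partition it,
# mkfs.ext4, and mount from there.
lsblk
```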
Thanks for any ideas.
E.