
StarWind iSCSI poor performance with Vsphere 5.5 and crash with CentOS

Posted: Sat Jan 07, 2017 11:51 pm
by softmaster
Hi
I am testing the StarWind solution with vSphere 5.5.
I downloaded and installed StarWind 8 on Windows 2008 R2.
The StarWind server is a dual Xeon E5420 / 16 GB RAM / LSI 9261 RAID controller.
The vSphere server is a dual Xeon X5620 / 192 GB RAM.
The iSCSI network is a dual-port Mellanox InfiniBand 20 Gbit/s adapter with IPoIB; the management network is 1 Gbit Ethernet.
Raw network performance between the hosts over the InfiniBand network is about 12 Gbit/s per link (tested with the NTttcp utility; InfiniBand utilization is about 60%).
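For anyone who wants to reproduce the raw TCP measurement, NTttcp runs with a receiver on one host and a sender on the other, roughly like this (the IPoIB address and thread count below are illustrative, not my exact values):

ntttcp.exe -r -m 8,*,10.10.10.1 -t 60    (on the receiving host, 10.10.10.1 being its IPoIB address)
ntttcp.exe -s -m 8,*,10.10.10.1 -t 60    (on the sending host, pointing at the receiver)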
I created a 10 GB RAM-based device, assigned it to an iSCSI target, connected the target to vSphere and created a datastore on it. I then added a disk to a CentOS 7 virtual machine, created an ext4 filesystem and mounted it.
The maximum speed I got with the dd utility is 550 MB/s for writes and 750 MB/s for reads (InfiniBand utilization is less than 20%). To get that I configured multipathing with Round Robin and IOOperation Limit = 1; any other setup (Fixed path, a larger IOOperation Limit, etc.) was slower. Local RAM drive speed on the Windows server is more than 2 GB/s for both reads and writes.
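For clarity, the multipathing change on the ESXi host and the dd runs inside the guest were along these lines; this is only a sketch, and the device identifier, mount point and sizes below are placeholders, not my exact values:

esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxxxxxxxxxxxxxx --type=iops --iops=1

dd if=/dev/zero of=/mnt/test/io.bin bs=1M count=8192 oflag=direct   # sequential write inside the CentOS guest
dd if=/mnt/test/io.bin of=/dev/null bs=1M iflag=direct              # sequential read inside the CentOS guest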
Any idea why the iSCSI connection performs so poorly even against a RAM-based target?

After this I tried to test iSCSI performance with a direct connection from the CentOS iSCSI initiator (in the virtual machine) to the StarWind target. I disconnected the target from vSphere, set up the iSCSI initiator on CentOS, created a partition, made an ext4 filesystem and mounted it. As a result, I found that any attempt to access the mounted partition (for example, an ls /mnt command) crashes the StarWind process on the Windows server, so I was unable to run any speed test at all. Looks very, very odd...
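For reference, the direct connection from the CentOS guest was set up with the standard open-iscsi tools, roughly in this sequence (the portal IP, target IQN and device name below are placeholders, not my exact values):

yum install -y iscsi-initiator-utils
iscsiadm -m discovery -t sendtargets -p 10.10.10.2:3260
iscsiadm -m node -T iqn.2008-08.com.starwindsoftware:target1 -p 10.10.10.2:3260 --login
fdisk /dev/sdb    (create a single partition)
mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /mnt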

Thanks for any ideas.

E.

Re: StarWind iSCSI poor performance with Vsphere 5.5 and crash with CentOS

Posted: Tue Jan 10, 2017 9:50 am
by softmaster
Hm...
Does nobody have information about real-world StarWind iSCSI performance?

I performed some new installations and tuning, and tested disk performance with CrystalDiskMark:

1) Local disk test (directly on the Windows 2016 based StarWind server): 1350 MB/s read and 1520 MB/s write.
2) Test from a virtual Windows 2012 server on vSphere 5.5 to a RAM-drive-based iSCSI target on StarWind: 880 MB/s read and 810 MB/s write.
3) Test from a virtual Windows 2012 server on vSphere 5.5 to a DiskBridge-based iSCSI target on StarWind: 660 MB/s read and 515 MB/s write.
4) Simultaneous test from two virtual Windows 2012 servers on different vSphere 5.5 hosts to the same DiskBridge-based iSCSI target on StarWind: 670 MB/s read and 560 MB/s write.
5) Test from a virtual Windows 2012 server on vSphere 5.5 to a file-based iSCSI target on a Windows 2016 storage server (MS iSCSI target, not StarWind): 1050 MB/s read and 480 MB/s write.

The maximum total network throughput during the iSCSI tests was about 6 Gbit/s across both links together; the maximum measured throughput was about 12 Gbit/s on a single link with the NTttcp utility, so in practice both links in MPIO mode should provide about 24 Gbit/s. iSCSI was able to use only about 25% of this bandwidth.

Any comments from StarWind specialists? Is it possible to get 1 GB/s or more of real read/write throughput out of a StarWind iSCSI target?

Re: StarWind iSCSI poor performance with Vsphere 5.5 and crash with CentOS

Posted: Fri Jan 13, 2017 4:00 pm
by Michael (staff)
Hello Softmaster,
Thank you for your interest in the StarWind solution.
Could you please provide us with the underlying storage settings, i.e. the type and number of disks, the RAID level and its settings (disk cache, read/write policies, stripe size)?
Please check storage performance one more time with the DiskSpd utility: https://gallery.technet.microsoft.com/D ... e-6cd2f223
I would recommend testing local disk performance first, then creating a standalone device, connecting it with the MS iSCSI Initiator on the StarWind host (via the 127.0.0.1 IP address), creating a volume and testing its performance. After that, connect the StarWind device on the vSphere host and test it from the virtual machine.
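As a rough illustration of the loopback connection step (the target IQN below is only an example; yours will differ), on the StarWind host you could run:

iscsicli QAddTargetPortal 127.0.0.1
iscsicli ListTargets
iscsicli QLoginTarget iqn.2008-08.com.starwindsoftware:example-target1

Then bring the new disk online and format it in Disk Management, and run DiskSpd against the resulting volume.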
Please share the results with us so we can compare them.

Re: StarWind iSCSI poor performance with Vsphere 5.5 and crash with CentOS

Posted: Wed Jan 18, 2017 3:21 am
by dwright1542
This has been discussed before, actually. There are apparently issues with the Windows stack that cap performance. I guess that's why they're moving to Linux?

We had to add Pernix on top of SW to make it perform like we thought it needed to.

-D

Re: StarWind iSCSI poor performance with Vsphere 5.5 and crash with CentOS

Posted: Wed Jan 18, 2017 5:20 pm
by Michael (staff)
Hello Softmaster,
May I ask about the progress of your tests?
Please do not hesitate to log a support case here: https://www.starwindsoftware.com/support-form, so we can get to the root of the issue more quickly.
Thank you!

Re: StarWind iSCSI poor performance with Vsphere 5.5 and crash with CentOS

Posted: Wed Mar 01, 2017 10:48 am
by softmaster
Hi
I upgraded the storage server a little and downloaded and installed the latest StarWind release, but the results are still not good... 8-(

Storage server configuration:
- Dual Xeon X5460
- RAM 32 GB
- 12x Seagate Enterprise 5 TB 7200 RPM SATA HDDs in RAID 6
- LSI 9261 RAID card with 512 MB cache
- Windows 2016
- StarWind 8.0.10695
- 10 TB thick-provisioned disk with 8 GB RAM cache

Client:
- Dual Xeon 5620
- RAM 144GB
- vSphere ESXi 5.5
- Guest OS: Windows Server 2016
- Round Robin across the 2 iSCSI paths with a correctly configured IOOperation Limit (1)


Network:
Dual InfiniBand 20 Gbit/s cards. Jumbo MTU 2040 (the maximum supported by the Cisco InfiniBand switch).

Network performance between the hosts is about 9 Gbit/s per connection, tested with the Microsoft NTttcp utility.
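For completeness, on the ESXi side the jumbo MTU for the iSCSI path can be set roughly as follows (this assumes a standard vSwitch; the vSwitch and vmkernel interface names are examples, not necessarily mine):

esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=2040
esxcli network ip interface set --interface-name=vmk1 --mtu=2040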


Performance results from CrystalDiskMark 5.2.1 x64.

1) Local performance of the system:

Sequential Read (Q= 32,T= 1) : 1285.580 MB/s
Sequential Write (Q= 32,T= 1) : 1578.838 MB/s
Random Read 4KiB (Q= 32,T= 1) : 294.455 MB/s [ 71888.4 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 247.914 MB/s [ 60525.9 IOPS]
Sequential Read (T= 1) : 1119.866 MB/s
Sequential Write (T= 1) : 986.341 MB/s
Random Read 4KiB (Q= 1,T= 1) : 107.043 MB/s [ 26133.5 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 93.879 MB/s [ 22919.7 IOPS]


2) iSCSI performance from a virtual Windows 2016 machine on ESXi:

Sequential Read (Q= 32,T= 1) : 639.099 MB/s
Sequential Write (Q= 32,T= 1) : 408.713 MB/s
Random Read 4KiB (Q= 32,T= 1) : 78.180 MB/s [ 19086.9 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 68.150 MB/s [ 16638.2 IOPS]
Sequential Read (T= 1) : 415.488 MB/s
Sequential Write (T= 1) : 92.897 MB/s
Random Read 4KiB (Q= 1,T= 1) : 15.534 MB/s [ 3792.5 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 14.690 MB/s [ 3586.4 IOPS]

3) Local iSCSI performance (a separate target connected locally, via the 127.0.0.1 address, on the storage server where StarWind is installed):

Sequential Read (Q= 32,T= 1) : 257.920 MB/s
Sequential Write (Q= 32,T= 1) : 348.431 MB/s
Random Read 4KiB (Q= 32,T= 1) : 156.420 MB/s [ 38188.5 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 85.847 MB/s [ 20958.7 IOPS]
Sequential Read (T= 1) : 463.422 MB/s
Sequential Write (T= 1) : 272.184 MB/s
Random Read 4KiB (Q= 1,T= 1) : 51.432 MB/s [ 12556.6 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 42.582 MB/s [ 10396.0 IOPS]

Re: StarWind iSCSI poor performance with Vsphere 5.5 and crash with CentOS

Posted: Wed Mar 01, 2017 11:13 am
by softmaster
In addition, I performed a test with DiskSpd:

C:\Users\Administrator\Downloads\Diskspd-v2.0.17\amd64fre>Diskspd.exe -b8K -d60 -h -L -o2 -t4 -r -w30 -c50M d:\io.dat

Command Line: Diskspd.exe -b8K -d60 -h -L -o2 -t4 -r -w30 -c50M d:\io.dat

Input parameters:

timespan: 1
-------------
duration: 60s
warm up time: 5s
cool down time: 0s
measuring latency
random seed: 0
path: 'd:\io.dat'
think time: 0ms
burst size: 0
software cache disabled
hardware write cache disabled, writethrough on
performing mix test (read/write ratio: 70/30)
block size: 8192
using random I/O (alignment: 8192)
number of outstanding I/O operations: 2
thread stride size: 0
threads per file: 4
using I/O Completion Ports
IO priority: normal



Results for timespan 1:
*******************************************************************************

actual test time: 60.00s
thread count: 4
proc count: 4

CPU | Usage | User | Kernel | Idle
-------------------------------------------
0| 49.14%| 2.60%| 46.54%| 50.86%
1| 46.33%| 2.42%| 43.91%| 53.67%
2| 46.59%| 2.58%| 44.01%| 53.41%
3| 47.21%| 2.66%| 44.56%| 52.79%
-------------------------------------------
avg.| 47.32%| 2.57%| 44.75%| 52.68%

Total IO
thread | bytes | I/Os | MB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 1774682112 | 216636 | 28.21 | 3610.59 | 0.551 | 0.653 | d:\io.dat (50MB)
1 | 1773445120 | 216485 | 28.19 | 3608.08 | 0.551 | 0.653 | d:\io.dat (50MB)
2 | 1776361472 | 216841 | 28.23 | 3614.01 | 0.550 | 0.672 | d:\io.dat (50MB)
3 | 1773084672 | 216441 | 28.18 | 3607.34 | 0.551 | 0.644 | d:\io.dat (50MB)
-----------------------------------------------------------------------------------------------------
total: 7097573376 | 866403 | 112.81 | 14440.02 | 0.551 | 0.656

Read IO
thread | bytes | I/Os | MB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 1243881472 | 151841 | 19.77 | 2530.68 | 0.554 | 0.675 | d:\io.dat (50MB)
1 | 1240981504 | 151487 | 19.72 | 2524.78 | 0.554 | 0.638 | d:\io.dat (50MB)
2 | 1244094464 | 151867 | 19.77 | 2531.11 | 0.555 | 0.710 | d:\io.dat (50MB)
3 | 1240096768 | 151379 | 19.71 | 2522.98 | 0.556 | 0.671 | d:\io.dat (50MB)
-----------------------------------------------------------------------------------------------------
total: 4969054208 | 606574 | 78.98 | 10109.55 | 0.555 | 0.674

Write IO
thread | bytes | I/Os | MB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 530800640 | 64795 | 8.44 | 1079.91 | 0.543 | 0.599 | d:\io.dat (50MB)
1 | 532463616 | 64998 | 8.46 | 1083.30 | 0.545 | 0.686 | d:\io.dat (50MB)
2 | 532267008 | 64974 | 8.46 | 1082.90 | 0.539 | 0.572 | d:\io.dat (50MB)
3 | 532987904 | 65062 | 8.47 | 1084.36 | 0.541 | 0.576 | d:\io.dat (50MB)
-----------------------------------------------------------------------------------------------------
total: 2128519168 | 259829 | 33.83 | 4330.48 | 0.542 | 0.610


%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
min | 0.178 | 0.184 | 0.178
25th | 0.370 | 0.361 | 0.367
50th | 0.433 | 0.423 | 0.430
75th | 0.526 | 0.513 | 0.522
90th | 0.687 | 0.672 | 0.683
95th | 0.931 | 0.917 | 0.927
99th | 3.464 | 3.391 | 3.441
3-nines | 6.097 | 6.098 | 6.097
4-nines | 9.394 | 9.692 | 9.531
5-nines | 69.577 | 17.566 | 68.219
6-nines | 101.063 | 92.236 | 101.063
7-nines | 101.063 | 92.236 | 101.063
8-nines | 101.063 | 92.236 | 101.063
9-nines | 101.063 | 92.236 | 101.063
max | 101.063 | 92.236 | 101.063

Re: StarWind iSCSI poor performance with Vsphere 5.5 and crash with CentOS

Posted: Fri Mar 03, 2017 12:55 pm
by Michael (staff)
Updating the community:
StarWind engineers are working on this case. We will let you know the results soon.

Re: StarWind iSCSI poor performance with Vsphere 5.5 and crash with CentOS

Posted: Sat Sep 23, 2017 12:44 pm
by softmaster
Hi
I have spent a lot of time on various performance tests of a StarWind-based iSCSI target in a virtual ESXi environment (ESXi 5.5 and 6.0) with different configurations.
I tested both the Microsoft and VMware iSCSI initiators, Linux- and Windows-based StarWind storage, and both a physical 20 Gbit/s InfiniBand network and a virtual VMXNET3-based connection inside the same vSphere host.
Unfortunately I have to conclude that StarWind-based iSCSI target performance is far from optimal. Despite very fast physical storage (I tested NVMe PCIe disks, RAID 0, RAID 10 and RAID 60 arrays built on 20 SAS 12 Gbit/s HDDs, and even a RAM disk), the StarWind-based iSCSI target delivered to the initiator less than 30-50% of the physically possible throughput, i.e. of the maximum possible sequential transfer rate. I believe this is a very important metric that shows the efficiency of the iSCSI layer. In real life this speed matters for backup/recovery and bulk read/write procedures: a 10-20% loss in random read/write performance is unpleasant but not fatal, whereas a 48-hour backup/restore window instead of 6 hours means two days of downtime for the whole business instead of an unnoticed overnight job.
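As a rough illustration with round numbers of my own (not measured figures): copying 100 TB at 600 MB/s takes about 100,000,000 MB / 600 MB/s ≈ 46 hours, while the same copy at 3,000 MB/s takes about 9 hours.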

Just a few numbers (these are results from a single StarWind node; based on my tests, for a dual-node StarWind system read speed is close to 200% and write speed is about 80-90% of the single-node speed):

Direct speed of the NVMe PCIe disks:

Sequential Read (Q= 32,T= 1) : 3034 MB/s
Sequential Write (Q= 32,T= 1) : 2535 MB/s

StarWind-based iSCSI target on the same NVMe PCIe disks:

Sequential Read (Q= 32,T= 1) : 610 MB/s
Sequential Write (Q= 32,T= 1) : 730 MB/s


Direct speed of RAID 0 built on 20 x 12 Gbit/s SAS HDDs:

Sequential Read (Q= 32,T= 1) : 3340 MB/s
Sequential Write (Q= 32,T= 1) : 3750 MB/s

StarWind-based iSCSI target on the same RAID 0:

Sequential Read (Q= 32,T= 1) : 615 MB/s
Sequential Write (Q= 32,T= 1) : 590 MB/s



Direct speed of RAM drive:

Sequential Read (Q= 32,T= 1) : 7240 MB/s
Sequential Write (Q= 32,T= 1) : 5315 MB/s

StarWind-based iSCSI target on the same RAM drive:

Sequential Read (Q= 32,T= 1) : 940 MB/s
Sequential Write (Q= 32,T= 1) : 815 MB/s

I should note that I tested other SDS systems in the same environment and got much better results. I do not think it is appropriate to name competing products on this forum, but the performance I got with other products was significantly better.

I tried many times to discuss the problem with StarWind support specialists, but unfortunately met little understanding of or concern about the problem.

Re: StarWind iSCSI poor performance with Vsphere 5.5 and crash with CentOS

Posted: Tue Sep 26, 2017 10:13 am
by Sergey (staff)
Hello, softmaster. I have replied to you in another thread:
https://forums.starwindsoftware.com/vie ... 007#p27007