P.S. Could you please check the Windows Event Log as well? Very much appreciated!
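In case it helps with that check, here is one way to pull the most recent System-log errors from the command line (a sketch, assuming Windows Server 2008 or later, where `wevtutil` ships in-box; the entry count of 20 is arbitrary):

```shell
:: Dump the 20 most recent Error-level (Level=2) entries from the System log,
:: newest first, as plain text (wevtutil is built into Vista/2008 and later)
wevtutil qe System /q:"*[System[(Level=2)]]" /c:20 /rd:true /f:text
```

Errors from the NIC driver or the iSCSI initiator would show up here around the time of a failed connection attempt.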
georgep wrote:Same target, no CHAP, no ACLs... basic stuff, just to test performance...
Weird, eh?
georgep wrote:Hi guys,
So I have these 4 ET quad NICs. At first they couldn't connect to an iSCSI target on my StarWind SAN, while other NICs connected just fine. So I tested them all, and the loopback test was failing on every one. I couldn't believe it. I took those 4 NICs and tested them in a different Asus motherboard. Guess what? The test passed on all of them. I then put one in yet another Asus MB and of course it passed there too, with no iSCSI MPIO problems either....
So it seems to be a problem only with this specific server board, the Asus Z8PE-D18. I have 3 of these and tested 2 of them under Windows with the latest Intel ET drivers, but they all failed the loopback test and also couldn't connect to my StarWind target.....
So the question is: where does the problem really reside? In the Asus Z8PE-D18 server board... it's an expensive board, and I have 3 of them... or in the Intel ET cards themselves?.... Weird.....
In the end I have 3 servers with that board and want to make them work with the Intel quad ET cards, running ESXi 4.1 Update 1 for iSCSI traffic with MPIO and jumbo frames... Hopefully a BIOS update of the MB will help, and maybe ESXi will have better luck with the Intel ETs....
Also, the Asus Z8PE-D18 has 2 onboard Intel NICs, which work just great with iSCSI and my StarWind setup.....
Any answers would be appreciated.
Will update the post for everyone...
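For what it's worth, once the NICs do connect, you can sanity-check from an elevated prompt that MPIO has actually claimed the StarWind LUNs (assuming Windows Server 2008 R2 with the MPIO feature installed, which provides `mpclaim`):

```shell
:: List all MPIO-controlled disks and the load-balance policy applied to each LUN
mpclaim -s -d
:: Show the individual paths (active/standby) for MPIO disk 0
:: (replace 0 with the disk number reported by the command above)
mpclaim -s -d 0
```

If a target is connected over 4 NICs with round robin, you would expect to see 4 paths listed per LUN.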
georgep wrote:That makes no sense... why is that? Intel with Intel stuff should just work...
georgep wrote:Will deal with that later. So I finally decided to go with RAID 50 instead of RAID 5, and instead of RAID 10, on my 2 SANs. I ran a couple of hardcore tests with SQLIO, read and write, both sequential and random, with 8, 32 and 64 KB blocks. From the tests it looks like RAID 50 will be the best bet. I can post the results here if you guys want.
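In case anyone wants to reproduce runs like these, here is a sketch of what such SQLIO command lines look like (the test file name, thread count, queue depth and duration below are my assumptions for illustration, not georgep's exact parameters):

```shell
:: 8 KB random read: 30 s, 4 threads, 8 outstanding I/Os, latency stats,
:: unbuffered I/O so the OS file cache doesn't skew the numbers
sqlio -kR -frandom -b8 -o8 -t4 -s30 -LS -BN testfile.dat

:: 64 KB sequential write, same thread/queue settings for comparability
sqlio -kW -fsequential -b64 -o8 -t4 -s30 -LS -BN testfile.dat
```

Keeping the thread count and queue depth fixed while sweeping only the block size (8/32/64 KB) and access pattern makes the RAID 5 vs RAID 50 vs RAID 10 comparison apples-to-apples.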
Anton, also: for the iSCSI optimization, should those Windows tweaks be done on the StarWind servers as well, or just on the Windows initiator servers?
georgep wrote:Just got StarWind 5.7 installed yesterday on both SANs and did some testing. I love the Performance Monitor tab. Good enough for the beginning of a hopefully hardcore monitoring section. So now I just tested the performance of 5 regular targets from SAN2. The weird things I discovered using MPIO RR with 4 NICs from SAN1 to SAN2's targets are:
Even with no caching enabled I get 230 MB/s for sequential read. With a cache of 256, 512 or 1024 MB there is no big difference whatsoever. With caching I should see the RAM usage on SAN2 go up, no? Didn't see that.... and it was write-back mode with 5000 ms.
When I created a 3 GB RAM-disk iSCSI target I saw double the performance of a regular target, but I could not get past 320 MB/s with the 4 GbE Intel ET NICs for iSCSI with MPIO.
What TCP/IP optimizations should I start doing on those SANs to achieve better performance?
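Not an authoritative list, but these are the usual global TCP knobs people try on Windows Server 2008/R2 for iSCSI traffic; whether each one helps depends on the NIC and driver, so benchmark before and after changing anything:

```shell
:: Show the current global TCP settings first, so changes can be reverted
netsh int tcp show global

:: TCP Chimney offload is a common troublemaker with iSCSI on 2008-era drivers
netsh int tcp set global chimney=disabled

:: Keep receive-side scaling enabled so traffic spreads across CPU cores
netsh int tcp set global rss=enabled

:: Receive-window autotuning: "normal" is usually right on a low-latency LAN;
:: only experiment with other levels if throughput stalls
netsh int tcp set global autotuninglevel=normal
```

Jumbo frames (MTU 9000) are set per-adapter in the Intel driver's advanced properties rather than via `netsh`, and must match end-to-end on the initiators, switches and StarWind targets.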