Hyperconverged Full SSD 2 nodes for ESXi 5.5
Posted: Mon Jun 01, 2015 1:23 pm
Recently I came across this page on the StarWind Software site:
https://www.starwindsoftware.com/starwi ... san-vmware
StarWind Software made a killer offer: a free 2-node StarWind Virtual SAN license. I immediately asked for and obtained a free 2-node license for Version 8.
The idea is to use two very old Dell R210 1U rack servers to test the StarWind software. Both R210s have 16 GB of RAM and just a SATA (1.5 Gb/s) controller with only three SATA ports. Each R210 is equipped with a 2 TB WD Red spindle disk on vmhba0 (HD 0), a Samsung SSD PRO 512 on vmhba33 (HD 1), and four Gigabit network cards.
I was attracted by the fact that StarWind advertises its software as flash-friendly thanks to LSFS (see the link above). Would this bring the two Dell R210s back to life, forming a hyperconverged FULL SSD VSAN?
I followed, step by step, the instructions found here: https://www.starwindsoftware.com/starwi ... re-vsphere . Well… not really step by step, because the technical paper does not mention the use of VLANs and assumes just four NICs per node, while the paragraph “22. Specify interfaces for synchronization and Heartbeat channels” shows six network cards in the StarWind VM node (or at least six IPs per node, but I preferred not to have NICs with multiple IPs).
Anyway, the StarWind Virtual SAN in the hyperconverged 2-node scenario was successfully built, with jumbo frames activated and the "DS1" and "DS2" LUNs presented to the iSCSI targets.
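For anyone repeating this, jumbo frames can be verified end-to-end from the ESXi shell with vmkping (the IP below is just a placeholder for a partner node's iSCSI/sync address; 8972 bytes = 9000-byte MTU minus 28 bytes of IP/ICMP headers):

```shell
# Send a non-fragmentable 8972-byte payload; it only succeeds if every
# hop (vSwitch, physical switch, target vmknic) really carries MTU 9000.
# 192.168.10.2 is a placeholder for the partner node's sync/iSCSI IP.
vmkping -d -s 8972 192.168.10.2
```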
Following the guide above, a DS1 HAimage1 file was created on the WD Red spindle datastore as a 300 GB disk with write-back cache enabled, and a DS2 HAimage2 was created as a 300 GB disk on the Samsung SSD datastore with write-through cache activated. No L2 cache.
The result is two different iSCSI data channels presented to the rest of the ESXi 5.5 infrastructure, and two VMFS5 datastores named StarWindSpindle and StarWindSSD. ESXi 5.5 was set to “see” the StarWindSSD datastore as an “SSD” disk.
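For reference, on ESXi 5.x a device can be forced to enumerate as SSD with a SATP claim rule; the naa ID below is a placeholder, and VMW_SATP_DEFAULT_AA is my assumption about which SATP claims the StarWind iSCSI device:

```shell
# Tag the device as SSD (naa.xxxx is a placeholder for the real device ID)
esxcli storage nmp satp rule add --satp VMW_SATP_DEFAULT_AA \
    --device naa.xxxx --option "enable_ssd"
# Reclaim the device so the rule takes effect
esxcli storage core claiming reclaim -d naa.xxxx
# Verify: the output should show "Is SSD: true"
esxcli storage core device list -d naa.xxxx | grep "Is SSD"
```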
Round Robin was also enabled on the iSCSI adapter in the VMware ESXi 5.5 hypervisors. VLANs were used to separate the iSCSI data and sync lanes.
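In case it helps others reproduce the setup, this is roughly how Round Robin is set per device from the ESXi CLI; the naa ID is a placeholder, and lowering the path-switch IOPS limit to 1 is a common (but optional) tuning for iSCSI multipathing:

```shell
# Switch the path selection policy of the LUN to Round Robin
esxcli storage nmp device set --device naa.xxxx --psp VMW_PSP_RR
# Optionally rotate paths after every I/O instead of the default 1000
esxcli storage nmp psp roundrobin deviceconfig set \
    --device naa.xxxx --type iops --iops 1
```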
High Availability:
IT WORKS! GREAT! Stopping the StarWind VSAN service on one node changes almost nothing. I tried everything to crash the system, but the datastores remain intact.
Now the performance:
Please note that I was trying to build a FULL SSD high-availability datastore. The DS2 HAimage2, shown as the StarWindSSD datastore, was set up with the LSFS filesystem, in accordance with the “flash-friendly” suggestion by StarWind.
For the tests I used a VM placed “on top of” the StarWindSpindle and StarWindSSD datastores, running Iometer 1.1.0.
The performance is not very different between the spindle and SSD datastores, while the same VM running directly on the SSD (without the StarWind iSCSI VSAN) is at least 10 times faster.
Now the questions for the forum:
Why is performance similar with spindle and SSD storage? Have I missed something?
The networks and switches are correctly set to 1 Gb with jumbo frames at 9000. Why does a VM on the local ESXi 5.5 SSD datastore run 10 times faster than the same VM on the datastore presented by StarWind over iSCSI?
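A back-of-envelope sketch of the theoretical ceilings involved (the overhead factors are assumptions, not measurements) shows that both the 1 GbE iSCSI path and the R210's SATA 1.5 Gb/s controller cap sequential throughput far below what the Samsung SSD can do natively, so the network path alone erases much of the SSD's advantage:

```python
# Rough throughput ceilings; efficiency factors are assumptions.
GBE_LINK_BPS = 1_000_000_000       # 1 GbE line rate
ISCSI_EFFICIENCY = 0.90            # assumed TCP/IP + iSCSI overhead w/ jumbo frames
SATA1_LINK_BPS = 1_500_000_000     # SATA 1.5 Gb/s link on the R210 controller
SATA_ENCODING = 0.80               # 8b/10b encoding leaves 80% usable

gbe_ceiling_mbs = GBE_LINK_BPS * ISCSI_EFFICIENCY / 8 / 1e6    # ~112 MB/s
sata1_ceiling_mbs = SATA1_LINK_BPS * SATA_ENCODING / 8 / 1e6   # ~150 MB/s

print(f"iSCSI over 1 GbE ceiling: ~{gbe_ceiling_mbs:.0f} MB/s")
print(f"SATA 1.5 Gb/s ceiling:    ~{sata1_ceiling_mbs:.0f} MB/s")
```

On top of the bandwidth cap, every I/O pays the network round-trip latency twice (to the target and to its sync partner), which hits SSD random I/O hardest.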
Is the bottleneck caused by Windows 2012 R2 with its disk repository being a VMDK file on the ESXi 5.5 SSD datastore? Is there a special way or tuning to make the disk behave as a flash disk?
Or is LSFS the bottleneck? Should I activate the write-back memory cache even on the SSD? Is this causing a punch-hole effect that does some damage?
I read all the posts in the forum, but they all talk about the L1 and L2 caching mechanisms.
Please note what I would like to get:
a FULL SSD hyperconverged VSAN,
not an SSD-enhanced hyperconverged “spindle” VSAN using an L2 caching mechanism.
Anyway, StarWind VSAN is a GREAT product, and, as I already mentioned above, the HA is perfect.
Regards,
Claudio