
Hyperconverged Full SSD 2 nodes for ESXi 5.5

Posted: Mon Jun 01, 2015 1:23 pm
by cguiotto
Recently I visited this page on the StarWind Software site:
https://www.starwindsoftware.com/starwi ... san-vmware
StarWind Software proposed a killer offer: a 2-node StarWind Virtual SAN for free. I immediately asked for and obtained a free 2-node license for Version 8
:idea: :idea:
The idea is to use two very old Dell R210 1U servers to test the StarWind software. Both R210s have 16 GB of RAM and just a SATA (1.5 Gb/s) controller with only 3 SATA ports. Each R210 is equipped with a 2 TB WD Red spindle disk on vmhba0 (HD 0), a Samsung SSD PRO 512 on vmhba33 (HD 1), and four 1 Gb network cards.

I was attracted by the fact that StarWind advertises its software as flash-friendly with LSFS (see the link above). Will this bring the two Dell R210s to a new life, forming a hyperconverged FULL SSD VSAN?

I followed the indications found here https://www.starwindsoftware.com/starwi ... re-vsphere step by step. Well… not really step by step, because the technical paper does not mention the use of VLANs and shows just 4 network cards per node, while the paragraph “22. Specify interfaces for synchronization and Heartbeat channels” shows 6 network cards in the StarWind VM node (or at least 6 IPs per node, but I preferred not to have network cards with multiple IPs).
Anyway, the StarWind Virtual SAN in the hyperconverged 2-node scenario was successfully built, with jumbo frames activated and the "DS1" and "DS2" LUNs presented as iSCSI targets.
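As a quick sketch, jumbo frames can be verified end-to-end from an ESXi shell (the IP address below is a placeholder for one of the Sync/iSCSI addresses, not a value from this setup):

```shell
# Verify that 9000-byte jumbo frames really pass end-to-end on the
# Sync/iSCSI networks. -d sets "don't fragment"; -s 8972 is 9000 bytes
# minus 28 bytes of IP/ICMP headers. If this fails while a plain
# vmkping succeeds, some switch port or NIC is not set to MTU 9000.
vmkping -d -s 8972 172.16.20.2
```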

Following the guide above, a DS1 HAimage1 file was created on the WD Red spindle as a 300 GB disk with write-back cache enabled, and a DS2 HAimage2 was created as a 300 GB disk on the Samsung SSD datastore with write-through cache activated. No L2 cache on either.

The result is two separate iSCSI data channels presented to the rest of the ESXi 5.5 infrastructure and two VMFS5 datastores named StarWindSpindle and StarWindSSD. ESXi 5.5 was set to “see” the StarWindSSD datastore as an “SSD” disk.
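For reference, tagging a device as SSD in ESXi 5.5 is done with a claim rule along these lines (a sketch: the naa.* identifier is a placeholder to be read from `esxcli storage core device list`, and the SATP name may differ for a given device):

```shell
# Add a claim rule marking the StarWind LUN behind StarWindSSD as SSD,
# then reclaim the device so the rule takes effect.
esxcli storage nmp satp rule add --satp VMW_SATP_DEFAULT_AA \
    --device naa.6000eb3xxxxxxxxx --option "enable_ssd"
esxcli storage core claiming reclaim -d naa.6000eb3xxxxxxxxx
```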

Round robin was also enabled on the iSCSI adapter of the VMware ESXi 5.5 hypervisors. VLANs were used to separate the iSCSI data and Sync lanes.
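The round robin setup can be sketched as follows (device identifier is a placeholder); lowering the per-path IOPS limit from the default 1000 to 1 is a common tuning for multi-NIC iSCSI, but the effect should be benchmarked rather than assumed:

```shell
# Switch the device's path selection policy to Round Robin, then make it
# alternate paths after every I/O instead of every 1000.
esxcli storage nmp device set --device naa.6000eb3xxxxxxxxx --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set \
    --device naa.6000eb3xxxxxxxxx --type iops --iops 1
```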

High Availability:
:D :D :D :D IT WORKS! GREAT! Stopping the StarWind VSAN service on one node changes almost nothing. I tried everything to crash the system, but the datastores remain intact.

Now, the performance: :roll: :roll:
Please note that I tried to build a FULL SSD high-availability datastore. The DS2 HAimage2, shown as the StarWindSSD datastore, was set up with the LSFS filesystem, in accordance with the “flash-friendly” suggestion by StarWind.

For the tests I used a VM “on top of” the StarWindSpindle and StarWindSSD datastores, with Iometer 1.1.0 as the benchmark.
Performance is not very different between the spindle and SSD datastores, while the same VM running directly on the SSD (without the StarWind iSCSI VSAN) is at least 10 times faster.

Now the question for the forum:

Why is performance similar on spindle and SSD storage? Have I missed something? :oops:
Networks and switches are correctly set to 1 Gb using jumbo frames at 9000. Why does a VM on the ESXi 5.5 SSD datastore run 10 times faster than the same VM on the datastore presented by StarWind over iSCSI?
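A back-of-the-envelope calculation (assumed round numbers, not measured values) already explains part of the gap: a single 1 Gb path caps out well below a local SATA SSD, before even counting the iSCSI round-trip latency on small-block I/O:

```shell
# Rough ceiling of one 1 GbE iSCSI path vs. an assumed local SATA SSD.
link_bps=1000000000                       # 1 Gbit/s link
wire_mbps=$((link_bps / 8 / 1000000))     # raw ceiling in MB/s
ssd_mbps=500                              # assumed local SSD sequential read
echo "1 GbE raw ceiling: ${wire_mbps} MB/s"
echo "local SSD vs. one GbE path: $((ssd_mbps / wire_mbps))x"
```

So even with round robin across paths and perfect efficiency, a network-backed datastore on 1 GbE cannot come close to local SSD sequential throughput.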

Is the bottleneck caused by Windows 2012 R2 with its disk repository kept as a VMDK file on the ESXi 5.5 SSD datastore? Is there a special way or tuning to speed up the disk as a flash disk?
Or is LSFS the bottleneck? Should I activate the write-back memory cache even on SSD? Is this causing a punch-hole effect that does some damage?

I read all the posts in the forum, but they all talk about the L1 and L2 caching mechanisms.
Please note what I would like to get:

a FULL SSD Hyperconverged VSAN,
not an SSD-enhanced Hyperconverged “spindle” VSAN using the L2 caching mechanism.

Anyway, StarWind VSAN is a GREAT product, and, as I already mentioned above, the HA is perfect.

Regards,
Claudio

Re: Hyperconverged Full SSD 2 nodes for ESXi 5.5

Posted: Tue Jun 02, 2015 12:18 pm
by Oles (staff)
Hello Cguiotto!

Thank you for such great feedback!

To understand which part is the bottleneck, please provide me with the following information:
- Which StarWind devices are used? (I am almost sure it's LSFS, just double-checking)
- How much L1 cache is used per device?
- Which StarWind build are you using?

Thank you.

Re: Hyperconverged Full SSD 2 nodes for ESXi 5.5

Posted: Tue Jun 02, 2015 1:49 pm
by cguiotto
Hello Oles,
the StarWind Management Console used is version 8.0.7929; all the plug-ins are version 8.0, with the exception of "Deduplicated Disk", which is version 5.8 (not used).

The amount of memory assigned to each node VM is just 4096 MB, as suggested in the technical paper.

The "spindle" DS1 image file is a thick disk, WRITE-BACK cache with L1 set to maximum (shows 1200 MB), NO L2 cache.
The "flash" DS2 image on top of the SSD disk is LSFS, NO deduplication, WRITE-THROUGH cache with L1 set to maximum (shows 1200 MB), NO L2 cache.

I also tried an L2-cached "spindle" version, creating the LSFS main image file on the "spindle" disk (E:) and the L2 cache on the "flash" disk (S:).
That is the layout of the image files on the primary node, BUT on the secondary node BOTH the main image file AND the L2 cache file are located on the same disk (E:), leaving the L2 cache on the "spindle" disk instead of the "flash" disk. I made many attempts to work around this, without success.

A "spindle" thick disk WITH L2 cache on the primary node does not even create any L2 disk on the secondary node.

This drove the decision to use a FULL SSD solution with the LSFS filesystem (at least the image file is always on the flash disk on both nodes).

Thank you for the fast answer
Claudio

Re: Hyperconverged Full SSD 2 nodes for ESXi 5.5

Posted: Fri Jun 05, 2015 11:34 am
by Jhon_Smith
Try setting a fixed amount of L1 cache (not set to maximum), and while performing benchmark tests on LSFS, set the alignment to at least a 4 KB boundary. This should help.
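To illustrate what a 4 KB boundary means (the offset value is arbitrary, just for the sketch): a request offset is aligned when it is an exact multiple of 4096, otherwise the device ends up doing read-modify-write on the underlying block:

```shell
# Round an arbitrary request offset down to the nearest 4 KiB boundary,
# as a benchmark tool should when "align to 4 KB" is set.
offset=123456
aligned=$(( offset / 4096 * 4096 ))
echo "offset $offset -> aligned $aligned"
```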

Re: Hyperconverged Full SSD 2 nodes for ESXi 5.5

Posted: Wed Jun 10, 2015 4:39 pm
by Oles (staff)
Please let me know if Jhon was right and your issue is fixed. Also, please let me know if you require any further assistance. Thank you.

Re: Hyperconverged Full SSD 2 nodes for ESXi 5.5

Posted: Tue Jul 07, 2015 4:49 pm
by cguiotto
Hi Oles,
I already tried Jhon Smith's suggestion. I also attended the webinar "Live Demo: StarWind 2-Node Hyper-Converged for VMware" on June 25.

The performance in the webinar was almost identical to my benchmarks. Unfortunately the speaker had some trouble setting up the lab and no time to answer my questions, but I found a way to work around the L2 cache misplacement in another post.

Besides… it is not what I was looking for.
L1 and L2 caching on SSD and in memory is a way to reduce the I/O blender effect while using spindle disks with reasonable performance.

Instead, I was looking for maximum performance in a FULL SSD hyperconverged solution, where I could run just a Win2003 VM at maximum speed and reliability.
My lab consists of two old and almost useless Dell R210s with only 16 GB of RAM, and maybe I am better off installing the free Hyper-V on both nodes, running the VM on an SSD, and setting up standard Hyper-V replication to the secondary node. No iSCSI, no datastores, no round robin, no jumbo frames, no VLANs… no complexity.

Probably I was trying to shoot a fly with a bazooka. :D

But I had a good time having the opportunity to test your software, set up the networks and iSCSI, set the round robin timeout, etcetera. I am 60, but I feel like a young nerd when I do these "high acrobatics".
Thus, the knowledge acquired is not lost, and I still have the VMs of both nodes… I plan to use them to "resurrect" two old HP DL160 G6s with 8-bay RAID 5 arrays and a Smart Array P800 controller. ESXi 5.5 does not "see" virtual disks larger than 1.99 TB with this controller…

See you in the next post

Re: Hyperconverged Full SSD 2 nodes for ESXi 5.5

Posted: Fri Jul 10, 2015 6:18 pm
by Oles (staff)
Hello Cguiotto!

I am glad that you achieved everything the way you needed, although it is unfortunate that it is without StarWind.
Please let me know if I can provide any further assistance as a StarWind engineer. Thank you.