VSAN-Free pass whole disk in HA

Software-based VM-centric and flash-friendly VM storage + free version


LearnVSan
Posts: 5
Joined: Sat May 17, 2025 10:13 am

Sat May 17, 2025 10:35 am

Firstly, thank you for making a free version available for home users. I am trying to set up my first Hyper-V cluster and have read up on the PowerShell scripts, but I am unsure how to proceed. Perhaps this thread will help other users too.

Using the Windows app/driver, not the VHA VM.
I have 2 hosts, each with 3 disks, that I want to use as follows:
- C/boot: will hold the quorum/witness as a file (C:\StarWind)
- E/HDD: I want to pass this through directly in HA (as I see in the GUI), not create an image file on it [to store data as a CSV later]
- V/SSD: I also want to pass this through directly in HA [to store VMs as a CSV later]

I think I got the script right for the witness/quorum based on reading this forum, but I'm not sure how to do the rest ;(

- Do I run "CreateHA_2.ps1" 3 times with the same failover/bmp/network/iSCSI parameters for the 3 disks?

- Should "$poolName" be the same or different for each disk? Does it matter?

- What changes are needed to pass an entire disk - "$createImage=$false"?
"$imagePath" as "My Computer\E" does not work - does it want the underlying "\\PhysicalDisk2" on each host?
"$size"=0?
"$sectorSize"=0?

Host1: MGMT=192.168.4.5, iSCSI/HB=172.16.10.5, SYNC=172.16.20.5
Host2: MGMT=192.168.4.6, iSCSI/HB=172.16.10.6, SYNC=172.16.20.6

Here's what I use for Quorum/Witness - does it look OK?

Code:

param($addr="192.168.4.5", $port=3261, $user="root", $password="starwind",
	$addr2="192.168.4.6", $port2=$port, $user2=$user, $password2=$password,
	$my_dev_name = "witness",
	$my_target_name = "witness",
	$my_img_path = "My Computer\C\StarWind",
	$my_pool_name = "pool1",
#common
	$initMethod="NotSynchronize",
	$size=1024,
	$sectorSize=4096,
# 0-heartbeat, 1-node majority
	$failover=0,
# 1-ram, 2-disk?
	$bmpType=1,
# 0,1-perf, 2-fast recovery - where failure journal on disk?
	$bmpStrategy=0,
#primary node
	$imagePath=$my_img_path,
	$imageName=$my_dev_name,
	$createImage=$true,
	$storageName="",
	$targetAlias=$my_target_name,
	$poolName=$my_pool_name,
	$syncSessionCount=1,
	$aluaOptimized=$true,
# Also "wt" and "wb"
	$cacheMode="none",
# Size in MB?
	$cacheSize=0,
# sync over the dedicated 172.16.20.x link; heartbeat over iSCSI (172.16.10.x) plus MGMT
	$syncInterface="#p2=172.16.20.6:3260",
	$hbInterface="#p2=172.16.10.6:3260,192.168.4.6:3260",
	$createTarget=$true,
	$bmpFolderPath="",
#secondary node
	$imagePath2=$my_img_path,
	$imageName2=$my_dev_name,
	$createImage2=$true,
	$storageName2="",
	$targetAlias2=$my_target_name,
	$poolName2=$my_pool_name,
	$syncSessionCount2=1,
	$aluaOptimized2=$false,
	$cacheMode2=$cacheMode,
	$cacheSize2=$cacheSize,
	$syncInterface2="#p1=172.16.20.5:3260",
	$hbInterface2="#p1=172.16.10.5:3260,192.168.4.5:3260",
	$createTarget2=$true,
	$bmpFolderPath2=""
	)
Now, how do I pass Drive E through directly?

Code:

param($addr="192.168.4.5", $port=3261, $user="root", $password="starwind",
	$addr2="192.168.4.6", $port2=$port, $user2=$user, $password2=$password,
	$my_dev_name = "",
	$my_target_name = "csv_e",
	$my_img_path = "My Computer\E",
	$my_pool_name = "pool2",
#common
	$initMethod="NotSynchronize",
	$size=0,
	$sectorSize=0,
# 0-heartbeat, 1-node majority
	$failover=0,
# 1-ram, 2-disk?
	$bmpType=1,
# 0,1-perf, 2-fast recovery - where failure journal on disk?
	$bmpStrategy=0,
#primary node
	$imagePath=$my_img_path,
	$imageName=$my_dev_name,
# false for direct pass-through?
	$createImage=$false,
	$storageName="",
	$targetAlias=$my_target_name,
	$poolName=$my_pool_name,
	$syncSessionCount=1,
	$aluaOptimized=$true,
# Also "wt" and "wb"
	$cacheMode="wt",
# Size in MB?
	$cacheSize=1024,
	$syncInterface="#p2=172.16.20.6:3260",
	$hbInterface="#p2=172.16.10.6:3260,192.168.4.6:3260",
	$createTarget=$true,
	$bmpFolderPath="",
#secondary node
	$imagePath2=$my_img_path,
	$imageName2=$my_dev_name,
# false for direct pass-through?
	$createImage2=$false,
	$storageName2="",
	$targetAlias2=$my_target_name,
	$poolName2=$my_pool_name,
	$syncSessionCount2=1,
	$aluaOptimized2=$false,
	$cacheMode2=$cacheMode,
	$cacheSize2=$cacheSize,
	$syncInterface2="#p1=172.16.20.5:3260",
	$hbInterface2="#p1=172.16.10.5:3260,192.168.4.5:3260",
	$createTarget2=$true,
	$bmpFolderPath2=""
	)
Many thanks!

Adrian
yaroslav (staff)
Staff
Posts: 3598
Joined: Mon Nov 18, 2019 11:11 am

Sun May 18, 2025 12:07 am

Welcome to StarWind Forum. Sadly, you cannot pass the disk through in the free version.
The witness must be on a different host.
You can see the sample scripts here: viewtopic.php?f=5&t=6852&p=37208&hilit=HINT8#p37208 - you can read more about the parameters there.
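A minimal sketch of what a per-device run could look like, assuming the sample CreateHA_2.ps1 takes the same parameters shown in the param() blocks above; the target name, size, and paths are illustrative only, and each run creates an .img file on the chosen drive (the free version cannot pass a raw disk through):

Code:

# illustrative values only - check the script's own comments for the size units;
# repeat with different -targetAlias/-imagePath/-poolName values for each drive
.\CreateHA_2.ps1 -targetAlias "csv_e" -imageName "csv_e" -poolName "pool2" `
	-imagePath "My Computer\E\StarWind" -imagePath2 "My Computer\E\StarWind" `
	-targetAlias2 "csv_e" -imageName2 "csv_e" -poolName2 "pool2" `
	-size 524288 -sectorSize 4096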
LearnVSan
Posts: 5
Joined: Sat May 17, 2025 10:13 am

Mon May 19, 2025 9:02 pm

OK, thanks - I got it working, with some issues.

- Can you move the .img and .swdsk.bak files to another drive later, if the same drive letter is assigned?

[i.e. stop the SW service, copy the image files to another disk, change the drive letter back to the original, restart SW?]
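A minimal sketch of that procedure, assuming the service is registered as "StarWindService" and purely hypothetical drive letters (old disk E:, new disk F:):

Code:

# stop StarWind so the .img/.swdsk files are not in use
Stop-Service -Name StarWindService
# copy the device files across, preserving the folder structure
robocopy E:\StarWind F:\StarWind *.img *.swdsk *.swdsk.bak /COPYALL
# swap the drive letters so the new disk ends up as E: again
Set-Partition -DriveLetter E -NewDriveLetter X
Set-Partition -DriveLetter F -NewDriveLetter E
Start-Service -Name StarWindService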

I ask because I cannot get tiered Storage Spaces to work with SW [Server 2025]. Any tiered disk just breaks on the Failover Cluster validation tests [CSV reservation test]: the disk goes offline [on whichever node happens to own it], SW/iSCSI goes down, and the whole node won't even shut down.

I've tried all combinations - ReFS/NTFS, 4K/64K cluster size, SW's own L1 cache none/wt/wb - nothing. Yet I can hit this tiered disk on either node with a disk benchmark (CrystalDiskMark, diskspd, a file copy) and there is no problem.

Without the SSD tier, read performance on the 3x HDD parity array is OK but writes are really poor, even with a 32K interleave and 64K cluster size. There are only 3 HDDs, so I can't do a mirror, and I cannot fit more. With the SSD tier and ReFS at 64K, the tiered disk's read/write performance was almost as high as the SSD's own.
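For context, a rough sketch of how such a tiered space can be built with the standard Storage Spaces cmdlets (Server 2016+ tier parameters); the pool/tier names, sizes, and drive letter are hypothetical, not the exact layout above:

Code:

# pool the eligible disks, then define an SSD tier and a 3-column parity HDD tier
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD -ResiliencySettingName Simple
New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD -ResiliencySettingName Parity -NumberOfColumns 3 -Interleave 32KB
# 64K ReFS volume spanning both tiers
New-Volume -StoragePoolFriendlyName "Pool1" -FriendlyName "Tiered" -FileSystem ReFS -AllocationUnitSize 65536 -DriveLetter T -StorageTierFriendlyNames "SSDTier","HDDTier" -StorageTierSizes 200GB,3TB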

- The SW GUI utility [the limited one in Free] does not scale on high-DPI displays; you have to set compatibility mode to "System" - otherwise you end up with tiny fonts.

Thanks!
yaroslav (staff)
Staff
Posts: 3598
Joined: Mon Nov 18, 2019 11:11 am

Tue May 20, 2025 1:38 am

Thanks for sharing your experience.
- Can you move the .img, .swdsk.bak to another drive later if the same drive letter is assigned?
Yes, just make sure the same folder structure is in place.
Any tiered disk just breaks on Failover Cluster validation tests [CSV reservation test] - disk goes offline [on whichever node happens to own it], SW/iSCSI goes off and whole node does not even turn off.
That's expected - MS Storage Spaces arrays die during the storage validation tests. Exclude storage from validation (that preserves VSAN on top).
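For example, with hypothetical node names, the storage category can be skipped entirely:

Code:

# run validation but leave out the storage tests, so the tiered disks are
# never put through the reservation tests that knock them offline
Test-Cluster -Node HOST1, HOST2 -Ignore "Storage"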
The SW GUI utility [the limited one in Free] does not scale with high DPI displays; you have to set compatibilty mode/System - otherwise you end up with tiny fonts.
Yes, sadly, that's a known issue.
LearnVSan
Posts: 5
Joined: Sat May 17, 2025 10:13 am

Wed May 21, 2025 5:54 pm

Many thanks for your continued help, appreciated.

If I can ask one last question - what do I need to modify to change the journal from the default RAM to disk when the image/HA device is already up? I think the creation options are "bmpType=2" (or is it 1?), "bmpStrategy=1" (write to disk when the partner is down), and "bmpFolderPath=My Computer\C\SwJournalHDD"?

I ask because while a pure RAM journal would be OK for an SSD disk/array, I don't think it's great for an HDD array - if both nodes go down it would take days to resync TBs of data [and in the meantime something will likely break completely]. Having a RAM journal that is stored to SSD when the partner goes down seems more appropriate?
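As a rough back-of-the-envelope check (hypothetical figures: an 8 TB device resyncing at a sustained ~150 MB/s):

Code:

# seconds for one full synchronization, then hours
8TB / 150MB          # ~55,900 s
8TB / 150MB / 3600   # ~15.5 hours, before any competing I/O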

I had a look in the .swdsk files, but there is only "<type_bitmap>RAM</type_bitmap>". What do I need to add for the strategy and path?

I'd rather not destroy everything to re-create it with the right journal options.
yaroslav (staff)
Staff
Posts: 3598
Joined: Mon Nov 18, 2019 11:11 am

Wed May 21, 2025 9:08 pm

You are always welcome :)
You can change the bitmap location fairly easily with the GUI: https://knowledgebase.starwindsoftware. ... a-devices/
Without the GUI, you need to alter the creation script. The bitmap type (bmpType) can be 1 - RAM or 2 - DISK (continuous).
For existing devices, you can use ModifySyncJournal.ps1.
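A purely hypothetical invocation - the real parameter names live in that script's own param() block, so check ModifySyncJournal.ps1 itself before running anything; the device name below is made up:

Code:

# assumed parameters, mirroring the creation-script conventions above
.\ModifySyncJournal.ps1 -addr 192.168.4.5 -port 3261 -user root -password starwind `
	-deviceName "csv_e" -bmpType 2 -bmpStrategy 1 -bmpFolderPath "My Computer\C\SwJournalHDD"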
LearnVSan
Posts: 5
Joined: Sat May 17, 2025 10:13 am

Wed May 21, 2025 9:42 pm

Ah - I did not see that script - thanks, you saved me from trashing the HDD array and re-copying everything. A good thing, too, as there are a few more fields in the .swdsk files.

To be honest, I think this type of journalling (bmpType=2/disk with bmpStrategy=1/on-failure) makes the most sense for high-capacity devices; from what I read/understand it is really still a RAM journal, and the disk/file is only used when the other node fails.

I might switch the HDD array to continuous journalling once I copy all the data, just in case.
yaroslav (staff)
Staff
Posts: 3598
Joined: Mon Nov 18, 2019 11:11 am

Wed May 21, 2025 10:14 pm

A failure journal might not switch over when something like a power outage happens.
RAM gives performance, while continuous, in turn, helps avoid a full synchronization.
If there is a write-back cache, though, there will always be a full synchronization: https://knowledgebase.starwindsoftware. ... may-start/

Choose wisely :D
LearnVSan
Posts: 5
Joined: Sat May 17, 2025 10:13 am

Wed May 21, 2025 10:28 pm

Ah - can I switch to a "write-through" L1 cache just by editing the .swdsk files, then? Stop the service, edit, restart?

Indeed, I made the caches "write-back" for both the HDD and the SSD.

I'm not sure how much worse the HDD array's performance will get with write-through, but I can't resync TBs of data - it's already taking days just to copy everything onto it.

Thanks!
yaroslav (staff)
Staff
Posts: 3598
Joined: Mon Nov 18, 2019 11:11 am

Thu May 22, 2025 6:57 am

Ah - can I switch to "write-thru" L1 cache just by editing the .swdsk files then? Stop service, edit, re-start?
Yes.
IMO, write-through does not help much, as it boosts only reads (which are already improved by reading from both StarWind VSAN devices).
Write-back boosts both writes and reads, yet the system becomes harder to handle, so I can't justify it. Still, sometimes the performance improvement makes it way too tempting.
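A minimal sketch of that edit-in-place route, assuming the service name "StarWindService" and a hypothetical file path - back up the header first and check the actual element names in your own .swdsk files:

Code:

# stop StarWind so the header is not in use, and keep a backup copy
Stop-Service -Name StarWindService
Copy-Item "E:\StarWind\csv_e.swdsk" "E:\StarWind\csv_e.swdsk.backup"
# change the cache mode value from write-back to write-through by hand -
# the exact tag name depends on the .swdsk version, so avoid a blind search/replace
notepad "E:\StarWind\csv_e.swdsk"
Start-Service -Name StarWindService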