
VSAN - Poor write performance

Posted: Wed Jun 04, 2025 12:43 am
by MrSquish
I've been playing around with the VSAN Free edition with Windows Server 2022 on two Dell R730XDs. I've been trying different configurations with the CVM: direct physical access vs. virtual drives, different iSCSI settings, multiple NICs vs. a single NIC for the sync channels, and different cache settings. When I run CrystalDiskMark benchmarks on a clustered volume hitting the iSCSI HA targets, the read performance is good, but my write performance is a fraction of what I get doing a benchmark directly against the volume on the local host.

For example, the write test shows 6864 MB/s when run directly against the drive with a 16MB test size, but hitting the clustered volume on the VSAN only yields around 750 MB/s.

The current config is eight 3.84TB 12G SAS SSDs in a RAID 5 on the PERC H730 RAID controller. (I have also tried RAID 10, as well as software RAID with the physical disks passed directly to the VM, with similar write performance issues.) I've tried all different kinds of cache settings on the RAID controller.

I have two 25GbE sync channels and two 25GbE iSCSI channels. All four are directly attached, with no switch between the two hosts. I tried with and without jumbo frames (without jumbo frames always performs worse).
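
In case it helps anyone, here is roughly how I set and verify jumbo frames on the direct-attached links (PowerShell sketch; the adapter name "Sync1" and the partner IP are placeholders for my setup, and the exact MTU value depends on the NIC driver):

Set-NetAdapterAdvancedProperty -Name "Sync1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014
Get-NetAdapterAdvancedProperty -Name "Sync1" -RegistryKeyword "*JumboPacket"
# verify end-to-end with a non-fragmented ping: 8972 = 9000 - 28 bytes of ICMP/IP headers
ping -f -l 8972 172.16.10.2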

My main question is: should I expect the VSAN write performance to be that much worse than hitting the volumes directly on the hosts?

Re: VSAN - Poor write performance

Posted: Wed Jun 04, 2025 7:49 am
by yaroslav (staff)
CrystalDiskMark is probably my least favorite software for performance tests. Use FIO or diskspd.
Did you benchmark the underlying storage? Also, could you please let me know how you have the storage connected to the VM?
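
For example, something like this against the clustered volume should give an honest sustained sequential-write number (just a sketch; the drive letter, file size, thread count, and duration are examples, and the test file should be bigger than any cache in the path):

diskspd.exe -c64G -b1M -t8 -o32 -w100 -d300 -Sh -L X:\testfile.dat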

p.s. Speaking of performance, you probably should try NVMe-oF. If that sounds interesting, drop a line to support@starwind.com to join our beta.
p.p.s. Try a Windows-based VSAN deployment and at least 2x loopback sessions.
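
By 2x loopback sessions I mean connecting the local iSCSI initiator to the local StarWind target over 127.0.0.1 more than once (PowerShell sketch; the IQN is an example, and MPIO must be installed and enabled):

New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1
# run the same connect twice to get two loopback sessions
Connect-IscsiTarget -NodeAddress "iqn.2008-08.com.starwindsoftware:node1-csv1" -TargetPortalAddress 127.0.0.1 -IsMultipathEnabled $true -IsPersistent $true
Connect-IscsiTarget -NodeAddress "iqn.2008-08.com.starwindsoftware:node1-csv1" -TargetPortalAddress 127.0.0.1 -IsMultipathEnabled $true -IsPersistent $true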

Re: VSAN - Poor write performance

Posted: Wed Jun 04, 2025 11:53 am
by MrSquish
I don't currently have any NVMe drives in these hosts. I assume I would need to add some in order to use NVMe-oF? I had considered getting the U.2 expander to add 4 NVMe drives, but haven't done it yet.

Currently, the storage is connected to the CVMs via two RAID volumes.
The 1st RAID volume is 8 SAS drives in a RAID 5 passed as a physical disk to the CVM.

The 2nd RAID volume is 8 SAS drives in a RAID 5 with a thick-provisioned virtual disk placed on it. Either way shows similar performance numbers when tested with CrystalDiskMark.

Re: VSAN - Poor write performance

Posted: Wed Jun 04, 2025 12:18 pm
by yaroslav (staff)
NVMe-oF does not require NVMe disks; it is a protocol that can present any storage over TCP or RDMA. Reach out to us for more details.
StarWind VSAN will act as a target that publishes the storage over NVMe-oF.
Yes, for a RAID 5 built out of SSDs, go with a Windows-based StarWind VSAN deployment. I think Hyper-V virtualization is messing with storage performance again here.

P.S. For Windows, you will need an NVMe-oF initiator (we also have one :D). Sure, you can use the one built into WS 2025, but it is quite a new thing in Windows, so I'd go with something more "mature".
p.p.s. I don't trust the ~6.8 GB/s CrystalDiskMark result: a RAID 5 out of SAS SSDs is unlikely to deliver that kind of performance (maybe during short tests, before the SSDs' transient cache area fills up). Use big files and long tests (see some sample methodology here: https://www.starwindsoftware.com/best-p ... practices/).
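
With fio, for instance, a sustained write test inside the CVM against the underlying storage could look like this (a sketch; the filename, size, and runtime are examples, pick a file much larger than the controller and SSD caches):

fio --name=seqwrite --filename=/mnt/test/fio.dat --rw=write --bs=1M --size=200G --ioengine=libaio --iodepth=32 --direct=1 --runtime=600 --time_based --group_reporting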

Re: VSAN - Poor write performance

Posted: Wed Jun 04, 2025 5:55 pm
by MrSquish
Sounds good to me! Request submitted to join the beta for NVMe-oF. Very interested in trying that out if it works with my existing hardware. I also have Windows Server 2022 on these hosts right now (which isn't technically supported by Dell for the R730s). I'm not ready to jump to Windows Server 2025 just yet anyway, so I'll use your initiator for that.

As for running the VSAN Windows service vs. the CVM: I did actually try the Windows service years ago when I got a 1-year license to play with it, but all the latest documentation on the free version kept pushing me toward the Linux CVM method with the latest release. I'll power off these VMs and try spinning up the Windows service version instead, and see how that does once I can try the other disk perf tools, so I have a baseline to compare against.

As for the performance numbers, I realize a larger test will yield more real-world numbers, and I did do larger random tests as well. I was mostly curious what the expected performance numbers should be when comparing VSAN clustered HA volumes against hitting the drives directly. Is it expected to be a lot slower? 50% slower due to network latency and dual writes? I didn't want to keep spinning my wheels on it if what I was seeing was normal.

Re: VSAN - Poor write performance

Posted: Wed Jun 04, 2025 8:57 pm
by yaroslav (staff)
Good luck with beta and your project.
The performance could drop by up to 50% due to the double write, but I often see better numbers.
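
As a rough sanity check (back-of-the-envelope, assuming your local 6864 MB/s figure is sustainable):

1 x 25 GbE ≈ 3.1 GB/s, so 2 sync links ≈ 6.2 GB/s of replication bandwidth
worst case 50% of the ~6.8 GB/s local result ≈ 3.4 GB/s expected over HA
the observed ~750 MB/s is well below both, so something else is the bottleneck
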
There is still documentation on Windows-based deployments, yet NVMe-oF is available only for the CVM; that's why, in the long run, we expect more CVM installations.