"supported" and "good idea" are both debatable.
I've been supported pretty well! For me the only issues have been with HA. The original 5.0 release had quite a few issues, which have been addressed now, and only one of them was virtualisation-related; that was cured by the move from StarPort to the MS iSCSI initiator for syncing between nodes. Now that that's sorted and auto-resync is done, I'm very confident about putting 5.5 into production and am subjecting it to torture tests.
If I recall correctly, the EULA mentions/used to mention running Starwind as a VM.
And if you get data corruption, well, in real life, you are always on your own. Always have a backup!
I ran 4.1 on "real" hardware, and have been running 4.2 for over a year as a VM. In all that time I've only had one issue: the starwind.exe service stopped abruptly while I was on holiday, and I was able to RDP in and restart it. No data corruption, but a few of the VMs using it for storage crashed. I can't say what caused the issue - it may have been a bug in StarWind, a Windows issue, or a virtualisation issue - but one glitch in over a year is pretty good going, and as I said before, no data was lost. I didn't even bother contacting support.
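As an aside, that sort of glitch is easy to guard against with a small watchdog running inside the guest. Here's a minimal Python sketch using the standard Windows `sc` command; the service name "StarWindService" is an assumption on my part - check what yours is actually called with `sc query` before using anything like this.

```python
import subprocess
import time

# Assumed service name - confirm the real one with: sc query state= all
SERVICE = "StarWindService"

def is_running(name: str) -> bool:
    """Ask the Windows Service Control Manager whether the service is running."""
    result = subprocess.run(["sc", "query", name], capture_output=True, text=True)
    return "RUNNING" in result.stdout

def watchdog(interval_seconds: int = 60) -> None:
    """Restart the service if it has stopped; run this from Task Scheduler at boot."""
    while True:
        if not is_running(SERVICE):
            print(f"{SERVICE} is not running - attempting restart")
            subprocess.run(["sc", "start", SERVICE], check=False)
        time.sleep(interval_seconds)

if __name__ == "__main__":
    watchdog()
```

Nothing clever, but it would have saved me the RDP session from the beach.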
There are lots of advantages to virtualising. In no particular order:
- rebooting a VM is usually quicker than rebooting a real server. If your host has RAID cards etc. that slow the BIOS down, and you only need to reboot the StarWind VM (for a Windows patch, maybe), then you have less downtime (unless you also need to patch the host!)
- backing up your StarWind install, and moving it to new hardware, is simpler. I basically have Windows + StarWind on a VHD, and all the StarWind img-based targets on pass-through disks
- I can control more easily how much RAM and CPU is allocated to StarWind, and "what if?" scenarios are much easier to test
- not tempted to run other roles in the starwind vm!
- if you don't have 10GbE hardware, but you do have iSCSI clients as VMs on the same box, they can talk to StarWind faster over the internal virtual switch than they could from separate boxes over 1 Gbit/sec (see the rough numbers sketched after this list)
- you can test new releases of starwind / windows on your production hardware (if you have spare capacity) without risking your production installation
- the performance hit is negligible (for me performance actually improved, but that's mainly down to better hardware)
- if you just want small, but really fast targets, you can run these as highly available VMs on a hyper-v cluster which you can live migrate. Larger installs aren't really suitable for live migration unless you want to use Starwind as a gateway to another SAN.
- a starwind VM can still have more hardware resources than some hardware SANs. E.g. the EMC AX150i was Celeron/P3 based.
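To put rough numbers on the same-box point above, here's a back-of-envelope calculation. It only counts Ethernet framing and IP/TCP headers (iSCSI PDU headers and digests would shave off a little more), and the header sizes are the standard ones rather than anything StarWind-specific:

```python
# Approximate useful payload you can push over a 1 Gbit/sec link,
# counting Ethernet framing (header + FCS + preamble + inter-frame gap)
# and IP/TCP headers. iSCSI PDU overhead would reduce it slightly more.

LINK_BITS_PER_SEC = 1_000_000_000

def payload_rate_mb_per_sec(mtu: int) -> float:
    """Approximate TCP payload rate in MB/sec at a given MTU on a 1 Gbit/sec wire."""
    ethernet_overhead = 14 + 4 + 8 + 12   # header + FCS + preamble + inter-frame gap
    ip_tcp_headers = 20 + 20
    payload = mtu - ip_tcp_headers
    wire_bytes = mtu + ethernet_overhead
    frames_per_sec = LINK_BITS_PER_SEC / 8 / wire_bytes
    return frames_per_sec * payload / 1e6

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{payload_rate_mb_per_sec(mtu):.0f} MB/sec")
# ~119 MB/sec at MTU 1500 and ~124 MB/sec at MTU 9000 - a hard ceiling that
# traffic between two VMs on the same host's virtual switch doesn't have.
```

In other words, a single 1 Gbit link tops out around 120 MB/sec no matter what you do, whereas guest-to-guest traffic on the same host is limited by CPU and memory bandwidth instead.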
There are some disadvantages:
- the weird problem with starport. This might impact mirroring, which still uses starport (not sure, I don't use starwind mirroring)
- if you need more than four CPU cores, you can't allocate them to a Hyper-V VM (VMware goes up to 8). So if you have a CPU bottleneck in your StarWind VM, you either have to get a faster CPU, spread the load across more VMs, or migrate to physical
- the same goes for RAM - I think the Hyper-V limit is 64GB per VM
Finally, do not attempt this with the 2008 version of Hyper-V. Only use R2. Why? R2's virtual switches & NICs support jumbo frames (a quick way to verify them end to end is sketched below).
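If you want to confirm jumbo frames actually work the whole way from an initiator VM to the StarWind VM (virtual NIC, virtual switch, physical NIC and any physical switch in between), the usual trick is a don't-fragment ping with a payload just under the jumbo MTU. A small Python wrapper around the Windows ping command, where the target address is just an example - substitute your StarWind VM's iSCSI IP:

```python
import subprocess

# End-to-end jumbo frame check from a Windows guest:
# ping with the "don't fragment" flag and a payload just under the jumbo MTU.
# 8972 = 9000 MTU - 20 (IP header) - 8 (ICMP header).
TARGET = "192.168.10.10"   # example address - use your StarWind VM's iSCSI IP
PAYLOAD = 8972

result = subprocess.run(
    ["ping", "-f", "-l", str(PAYLOAD), "-n", "2", TARGET],
    capture_output=True, text=True,
)
print(result.stdout)
if "Packet needs to be fragmented" in result.stdout:
    print("Jumbo frames are NOT working end to end on this path.")
elif "Reply from" in result.stdout:
    print(f"Jumbo frames look fine between here and {TARGET}.")
```

If any hop on the path has a smaller MTU, the ping fails with "Packet needs to be fragmented but DF set" and you know exactly where to start looking.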
HP LeftHand is available as a VMware virtual appliance, and a while ago they announced a port to Hyper-V - so the idea of virtualising your iSCSI SAN is not that weird or unique!