Public beta (bugs, reports, suggestions, features and requests)
Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)
-
Anatoly (staff)
- Staff
- Posts: 1675
- Joined: Tue Mar 01, 2011 8:28 am
-
Mon Nov 04, 2013 9:51 am
Hi Fred!
Thanks for using StarWind.
I think it would be useful for you to read through our Benchmarking Guide, which you can find at the link below - it shows how to benchmark the system properly and locate its bottleneck based on the testing results.
http://www.starwindsoftware.com/starwin ... ice-manual
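Not a substitute for the guide - just a minimal sketch of the kind of baseline measurement such testing usually starts with (sequential write throughput), run once against the local array and once against the iSCSI-mounted volume so the two numbers can be compared. The path, block size, and total size below are arbitrary examples, not values from the guide.

```python
# Minimal sequential-write throughput check (illustrative only).
import os
import time

PATH = r"D:\bench\test.bin"   # assumed test location on the volume under test
BLOCK = 1024 * 1024           # 1 MiB per write
COUNT = 1024                  # 1 GiB total

os.makedirs(os.path.dirname(PATH), exist_ok=True)
buf = os.urandom(BLOCK)

start = time.perf_counter()
with open(PATH, "wb", buffering=0) as f:
    for _ in range(COUNT):
        f.write(buf)
    os.fsync(f.fileno())      # make sure the data actually hit the disk
elapsed = time.perf_counter() - start

print(f"sequential write: {BLOCK * COUNT / elapsed / 2**20:.1f} MiB/s")
os.remove(PATH)
```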
thanks
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
-
fgiroux
- Posts: 3
- Joined: Thu Oct 31, 2013 10:53 pm
Tue Nov 05, 2013 3:27 pm
Hello!
Thanks for the link to the benchmarking document.
In the log, I have a huge count of a specific error: ImageFile_ScsiExec: SCSIOP (0x4D) is not supported.
I have no issues and assume the message is informational, but I would like to know what it means.
I have attached the full log in case it is needed.
Thanks!
Fred
-
Attachments
-
- 192.168.27.104-3261-service-starwind-20131031-161517.zip
- (191.95 KiB) Downloaded 3976 times
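For reference on the opcode itself: assuming the standard SCSI command set, 0x4D is LOG SENSE, a log/statistics query that many initiators issue and that a virtual disk may legitimately decline. A tiny lookup sketch (only a handful of well-known opcodes are listed):

```python
# A few entries from the standard SCSI opcode assignments, enough to
# decode the value quoted in the log message above.
SCSI_OPCODES = {
    0x00: "TEST UNIT READY",
    0x12: "INQUIRY",
    0x25: "READ CAPACITY(10)",
    0x28: "READ(10)",
    0x2A: "WRITE(10)",
    0x4D: "LOG SENSE",
}

print(f"0x4D -> {SCSI_OPCODES[0x4D]}")  # 0x4D -> LOG SENSE
```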
-
RaymondTH
- Posts: 3
- Joined: Sat Oct 05, 2013 6:08 pm
Sun Nov 10, 2013 5:41 pm
Haven't had any issues with beta 2 here. Just one request for the next version: could we have an easy way of moving the StarWind store to another drive on the server without having to remove and re-add all the replicated targets?
I tried editing the StarWind configuration to point at the new paths, but that broke all the HA drives.
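Purely as a sketch of the edit-the-config idea described here, and not a supported procedure: the config file location, its XML layout, and the "file" attribute below are all assumptions, and the HA metadata that broke in this case is not handled at all - which is exactly why a built-in move feature would help.

```python
# Hypothetical path rewrite in an XML config while the service is stopped.
import xml.etree.ElementTree as ET

CONFIG = r"C:\Program Files\StarWind Software\StarWind\StarWind.cfg"  # assumed location
OLD = r"D:\starwind-store"   # example: old store location
NEW = r"E:\starwind-store"   # example: new store location

tree = ET.parse(CONFIG)
for node in tree.iter():
    path = node.get("file")                      # assumed attribute name
    if path and path.lower().startswith(OLD.lower()):
        node.set("file", NEW + path[len(OLD):])
tree.write(CONFIG + ".patched")  # review by hand before replacing the original
```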
-
fbifido
- Posts: 125
- Joined: Thu Sep 05, 2013 7:33 am
Mon Nov 11, 2013 9:22 pm
Hi,
I have 3 LUNs:
1.6TB - lun1
1.5TB - lun2
150GB - lun3
but the StarWind service is using about 4.4 GB of memory - is this normal?

-
Bohdan (staff)
- Staff
- Posts: 435
- Joined: Wed May 23, 2007 12:58 pm
Tue Nov 12, 2013 8:48 am
Hi.
It depends on the cache settings you have specified for those LUNs.

It is not normal if caching is disabled, or if the sum of the specified cache sizes is significantly smaller than the memory being used.
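A rough sanity check along the lines of this answer: working set ≈ sum of per-LUN L1 cache sizes plus a baseline for the service itself. The cache sizes and overhead below are made-up illustrations, not the actual settings (which are not shown in the thread).

```python
GIB = 1024 ** 3

l1_cache_per_lun = {            # hypothetical per-LUN cache sizes
    "lun1 (1.6 TB)": 2 * GIB,
    "lun2 (1.5 TB)": 2 * GIB,
    "lun3 (150 GB)": 0.25 * GIB,
}
service_overhead = 0.2 * GIB    # assumed baseline for the service

expected = sum(l1_cache_per_lun.values()) + service_overhead
print(f"expected memory use: ~{expected / GIB:.1f} GiB")   # ~4.5 GiB here
observed = 4.4 * GIB
print("roughly in line with the cache settings" if observed <= expected * 1.1
      else "worth investigating")
```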
-
Ironwolf
- Posts: 59
- Joined: Fri Jul 13, 2012 4:20 pm
Tue Nov 12, 2013 5:08 pm
On a 2008R2 system, I experienced a memory problem when L2 cache was used. Everything seemed fine until the reboot; afterwards, the storage sizes I had assigned to L2 cache were consumed from RAM. For example, with a 100GB drive, 128k L1 cache, and 2GB L2 cache, an additional 2GB of RAM was in use after the reboot. This was the same system I was having the VSS hardware driver error on in my previous post; unfortunately, we have already dismantled the system and don't have any logs to share.
-
Bohdan (staff)
- Staff
- Posts: 435
- Joined: Wed May 23, 2007 12:58 pm
Tue Nov 12, 2013 5:18 pm
Ironwolf wrote:My previous post about the event viewer errors for the VSS driver was from a 2008R2 system.
On a 2012R2 load, I am now getting almost the same error
Do you have any backup software on the server which uses VSS?
-
Ironwolf
- Posts: 59
- Joined: Fri Jul 13, 2012 4:20 pm
Tue Nov 12, 2013 5:27 pm
Storage Pools and image files don't mix, but LSFS does work?
There is no direct feedback that the creation of an image file failed until you review the log file. It appears incompatible with Advanced Format drives, or drives that report 4K sector sizes, as Storage Pool disks do.
I feel Storage Pools will gain momentum in the next few years. This problem with the image file seems related to some of the problems I experienced with LSFS files not being portable/movable after a simulated OS failure. There will be a need to move files from physically failed RAIDs to new RAIDs, and it seems very tedious to copy the container files out and back in.
Check the log file: 8:15:53 is the most recent attempt to create an image file.
-
Attachments
-

- LSFS are fine, but the 3rd one an image files shows 0 and 0.PNG (11.18 KiB) Viewed 203501 times
-

- seems fine.PNG (17.66 KiB) Viewed 203500 times
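One quick way to check the suspicion above - whether the disks backing a Storage Pool report 4K sectors while ordinary disks report 512 - is the stock wmic tool; the sketch below just prints the reported sector sizes and the parsing is deliberately crude.

```python
# List local disks and the logical sector size each one reports.
# Disks (or Storage Spaces virtual disks) showing 4096 here are the ones
# an application sees as 4K-sector devices.
import subprocess

out = subprocess.check_output(
    ["wmic", "diskdrive", "get", "Index,Model,BytesPerSector"],
    text=True,
)
print(out)
for line in out.splitlines()[1:]:
    if "4096" in line:
        print("4K-sector disk:", " ".join(line.split()))
```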
-
Ironwolf
- Posts: 59
- Joined: Fri Jul 13, 2012 4:20 pm
Tue Nov 12, 2013 5:58 pm
No, I don't have any software that uses VSS. It's a clean load: HDD benchmarking tools, drivers, StarWind.
-
WardA
- Posts: 13
- Joined: Thu Feb 23, 2012 1:18 pm
Wed Nov 13, 2013 11:44 am
Hi,
I was wondering if the new Beta supports adding read-only access to targets in addition to read-write access to those same targets.
We run Veeam for backup, and one of the ways it can do backups is by direct SAN access. At the moment, if we want to do that, we have to grant read-write access to the IQN of the Veeam server on all of the ESX HA targets - which is fine and works, but leaves room for user error to cause disaster, for example if someone mounts one of the ESX LUNs directly on the server.
What we'd like to do is have all the ESXi servers have read-write access to those targets - based on their IQNs - and for the Veeam server to have read-only access to those targets - based on its IQN.
Is this on the roadmap? I couldn't see it as a configuration possibility on the Beta.
Thanks
-
anton (staff)
- Site Admin
- Posts: 4021
- Joined: Fri Jun 18, 2004 12:03 am
- Location: British Virgin Islands
-
Wed Nov 13, 2013 8:39 pm
It's not on the roadmap, as we've never had an official request to do this from VEEAM directly or from VEEAM users. I'd suggest you start a thread on the VEEAM forum with a poll / feature request so we can track the number of interested people. With a decent amount of interest, I don't see any issue implementing an IQN-based filter that triggers read-only LUN access.
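For illustration, a minimal sketch of what such an IQN-based filter could boil down to - the IQNs, the Access enum, and the two helper functions below are hypothetical, not StarWind's actual implementation.

```python
from enum import Enum

class Access(Enum):
    NONE = 0
    READ_ONLY = 1
    READ_WRITE = 2

# Hypothetical ACL: ESXi hosts get read-write, the Veeam proxy gets read-only.
ACL = {
    "iqn.1998-01.com.vmware:esx01": Access.READ_WRITE,
    "iqn.1998-01.com.vmware:esx02": Access.READ_WRITE,
    "iqn.1991-05.com.microsoft:veeam-proxy": Access.READ_ONLY,
}

def write_allowed(initiator_iqn: str) -> bool:
    """A write command is accepted only from read-write initiators."""
    return ACL.get(initiator_iqn, Access.NONE) is Access.READ_WRITE

def read_allowed(initiator_iqn: str) -> bool:
    """Read commands are accepted from any initiator listed in the ACL."""
    return ACL.get(initiator_iqn, Access.NONE) is not Access.NONE

print(write_allowed("iqn.1991-05.com.microsoft:veeam-proxy"))  # False
print(read_allowed("iqn.1991-05.com.microsoft:veeam-proxy"))   # True
```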
WardA wrote:Hi,
I was wondering if the new Beta supports adding read-only access to targets in addition to read-write access to those same targets.
We run Veeam for backup, and one of the ways it can do backups is by direct SAN access. At the moment, if we want to do that, we have to grant read-write access to the IQN of the Veeam server on all of the ESX HA targets - which is fine and works, but leaves room for user error to cause disaster, for example if someone mounts one of the ESX LUNs directly on the server.
What we'd like to do is have all the ESXi servers have read-write access to those targets - based on their IQNs - and for the Veeam server to have read-only access to those targets - based on its IQN.
Is this on the roadmap? I couldn't see it as a configuration possibility on the Beta.
Thanks
Regards,
Anton Kolomyeytsev
Chief Technology Officer & Chief Architect, StarWind Software

-
WardA
- Posts: 13
- Joined: Thu Feb 23, 2012 1:18 pm
Thu Nov 14, 2013 10:02 am
Hi Anton,
Thanks for replying. I doubt I would manage to get any response on the Veeam forum for this - 99% of the discussions about this over there essentially boil down to 'well, your SAN should support this, as HP does, NetApp does' etc etc.
Hopefully you'll consider it anyway!
Reference would be:
http://www.veeam.com/blog/using-the-isc ... -a-vm.html
Andy
anton (staff) wrote:It's not on the roadmap, as we've never had an official request to do this from VEEAM directly or from VEEAM users. I'd suggest you start a thread on the VEEAM forum with a poll / feature request so we can track the number of interested people. With a decent amount of interest, I don't see any issue implementing an IQN-based filter that triggers read-only LUN access.