Love your answer, we all got a good laugh here at the office :-) But in all seriousness, regarding the new version due out next month -- are there any technical changes that optimize for this scenario (2-node, iSCSI target on the same box as the iSCSI initiator), or is it more of a licensing-only ty...
You'll have double buffering for I/O (a waste of system memory), lots of CPU cycles lost because both target and initiator will be running on the same machine, bad latency, and so on. And from what you've told us, it's absolutely not clear what you really want to do. I mean - what task are you going to solve. I rea...
The Dell PowerEdge R510 is a great box for building a StarWind SAN, while still using "enterprise" hardware that will be fully supported by one vendor. Our recipe for this is: Dell PE R510 12-bay hot-swap drive chassis option w/2 internal (non-hot swap) drives (internal drives can be used ...
I understand your point. But, I've been watching these forums long enough to know that StarWind is not immune to this problem either -- that is, a new update/patch/version has introduced problems that did not exist beforehand. So, that is why my policy is always very conservative, and why I always e...
I think you misunderstand my point -- I was saying that it isn't necessarily a "best practice" to apply patches/fixes from a software vendor (not specifically StarWind) the moment they come out, especially if what you have running is running just fine. The old saying "If it ain't brok...
Sorry to disagree, but if you've been doing I.T. work for any length of time, you know that "upgrading for the sake of upgrading" is a bad habit to get into.
Do you always apply the latest Microsoft hotfix or service pack the day it comes out, too?
So you are saying that StarWind does not allow you to spread the iSCSI target traffic over multiple NICs via MPIO? If this is the case, then it seems like a huge performance limitation (HA does not even come into play here).
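For anyone following along, here is a toy sketch of what MPIO round-robin path selection does conceptually: each I/O goes down the next available path (one iSCSI session per NIC), so aggregate throughput can exceed a single NIC's line rate. This is just an illustration of the idea, not StarWind's or Microsoft's actual implementation; the path names are made up.

```python
from itertools import cycle

class RoundRobinMpio:
    """Toy round-robin multipath policy: successive I/Os alternate
    across the configured paths (e.g. one iSCSI session per NIC)."""

    def __init__(self, paths):
        self.paths = cycle(paths)  # endless rotation over the path list

    def dispatch(self, io):
        path = next(self.paths)    # pick the next path in rotation
        return (path, io)

mpio = RoundRobinMpio(["nic0", "nic1"])
print([mpio.dispatch(i)[0] for i in range(4)])  # ['nic0', 'nic1', 'nic0', 'nic1']
```

With only a single session/NIC on the target side there is nothing for the initiator's round-robin policy to rotate over, which is exactly the limitation being asked about.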
The system has been running VERY WELL with iSCSI using a failover configuration with preferred adapters set in ESX to spread load between the two iSCSI channels (a trick I posted last year). All running well with jumbo frames enabled on the StarWind NICs, the switch, and (I think by default) on the QLogic ada...
Ok, so the RAM cache is a FIFO-type system. How large a cache has StarWind been tested with? Does it have any "adaptive" smarts to only cache the most frequently read data?
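To make the question concrete, here is a minimal sketch of what a plain FIFO block cache does, and why it has no "adaptive" smarts: the oldest-inserted block is evicted first even if it is the most frequently read one. This is an assumption-laden toy, not StarWind's actual cache code; `lba` and `backing_read` are made-up names.

```python
from collections import OrderedDict

class FifoReadCache:
    """Toy FIFO block cache: eviction follows insertion order only,
    ignoring how often or how recently a block has been read."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # insertion order == eviction order

    def read(self, lba, backing_read):
        if lba in self.blocks:
            return self.blocks[lba]          # hit: served from RAM
        data = backing_read(lba)             # miss: fetch from disk
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # FIFO: drop oldest insertion
        self.blocks[lba] = data
        return data

cache = FifoReadCache(2)
disk = lambda lba: f"block-{lba}"
cache.read(1, disk); cache.read(2, disk)
cache.read(1, disk)   # a hit does NOT refresh block 1's position
cache.read(3, disk)   # evicts block 1 anyway (LRU would have kept it)
```

An LRU or frequency-based policy would have kept the hot block; that difference is exactly what the "adaptive smarts" question is about.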
So you are saying that as data is read from the StarWind server, it gets "mirrored" into the RAM cache in addition to getting sent on to the requesting iSCSI initiator?
So with write through, I understand that writes are committed "immediately". But, how exactly does this new RAM cache feature in 5.x benefit READ operations? A little more detail would be helpful.
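In case it helps the discussion, here is a toy sketch of how a write-through cache can still speed up reads: every write is committed to the backing store immediately (so nothing volatile is at risk), but the data also lands in RAM, and reads populate the cache on a miss, so repeated reads of the same block skip the disk. This is only an illustration under my own assumptions, not how StarWind 5.x actually implements it.

```python
class WriteThroughCache:
    """Toy write-through cache: writes hit the backing store before
    returning, but written/read blocks are also kept in RAM so that
    subsequent reads of the same block avoid disk I/O entirely."""

    def __init__(self):
        self.ram = {}
        self.disk_reads = 0  # count of reads that had to touch disk

    def write(self, lba, data, disk):
        disk[lba] = data      # committed immediately (write-through)
        self.ram[lba] = data  # also cached: a re-read is a RAM hit

    def read(self, lba, disk):
        if lba in self.ram:
            return self.ram[lba]  # served from RAM, no disk I/O
        self.disk_reads += 1
        data = disk[lba]
        self.ram[lba] = data      # cache on first read (miss fill)
        return data

disk = {7: "cold-block"}
cache = WriteThroughCache()
cache.write(1, "hot-block", disk)
cache.read(1, disk)               # RAM hit: write already cached it
print(cache.disk_reads)           # 0
cache.read(7, disk)               # first read of a cold block: disk
print(cache.disk_reads)           # 1
cache.read(7, disk)               # now cached, no further disk read
print(cache.disk_reads)           # 1
```

So the read benefit comes from recently written and recently read blocks being answered out of RAM, even though writes themselves see no latency improvement under write-through.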