1) Telling us how much RAM you've allocated for cache without telling us how much RAM the system has in total makes ZERO sense. The answer is "it depends". If you have 8 GB of RAM installed, allocating 4 GB for cache is OK. I'd even go to 6 GB, as the OS footprint itself is less than 500 MB in any case. But your system runs various apps besides the iSCSI target service itself, and you should not ignore them and their memory appetites! General rule of common sense: allocate ~60% of *free* *physical* memory for caching (quick sizing sketch below). Going higher raises your chances of pushing the machine into swap and getting performance degradation under heavier load. And don't drink and drive... I mean, you can mix an iSCSI target with something else on the same server, but I would not do it unless you're extremely low on budget.
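If you want to put an actual number on that 60% rule, here's a minimal sketch. It's my own illustration in plain Python using the psutil package (not anything StarWind ships), so treat the names as placeholders:

# Rough cache-sizing sketch for the ~60%-of-free-RAM rule of thumb.
# Assumes the psutil package is installed (pip install psutil).
import psutil

def suggested_cache_mb(fraction: float = 0.6) -> int:
    # Take a fraction of *currently available* physical RAM, return MB.
    available_bytes = psutil.virtual_memory().available
    return int(available_bytes * fraction / (1024 * 1024))

if __name__ == "__main__":
    print(f"Suggested write-back cache: ~{suggested_cache_mb()} MB")

Run it on the target box while your usual workload is up, not on a freshly booted machine, otherwise "available" memory will look misleadingly high.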
2) Using 4 GB of cache on the controller and 4 GB of system memory as a cache is a good idea! Maybe... Maybe not. The problem is you don't tell us two very important details. Number one: are you HA or non-HA? Number two: what are you going to do with this storage server? Run SQL or Exchange Server? Host shared folders? Be the destination for weekly backups? The devil is in the details (c)...

If you're HA and have a properly configured cluster, you don't need to think about anything. The cache is distributed and coherent; if one node dies (for any reason, including being taken down for ordinary maintenance), the surviving node will sync its cache to disk immediately and continue running in cacheless mode. Less performance, but safe. One question: do you have battery backup for your controller cache? You really should. If all of the above is true (HA and battery), you're OK. If you don't have independent power for the controller cache, or if you don't have HA, you're in trouble... Dust in the CPU cooler kills the CPU and your last ~8 GB of SQL Server transactions are lost, I guess once and forever. That could cost your company a lot of money. And it could cost you your job. So consider an HA configuration.

OK, say you don't have HA because you can allow your server some downtime. Say it's hosting shared folders for your sales department and it's OK to wait for that .DOC file until the server reboots. And it's OK if data is lost, because it's still hosted on somebody's PC. And that particular .DOC is not very important. Or your server hosts weekly backups from another server and it's absolutely OK to re-run the backup job after fixing errors on the file system. In these cases you CAN go with write-back cache and can skip HA.
3) You need to enable multi-path when you need multi-path. So if you expect to run some clustering software (ESX, Hyper-V, or an MS cluster), you need to enable it. If you plan to run a high-performance disk I/O app and you know for sure it can serialize transactions itself (manages write ordering, like SQL Server does, for example), you can enable multi-path and increase the number of sessions in the MS initiator options (I have the impression you're using Windows as a client). If you don't need anything from that list, just keep it disabled.
4) Windows initially shows you the write performance of a memory copy into the system cache. It has nothing to do with real bandwidth. Don't do what you're doing. Grab Intel's I/O Meter and run some tests against the mapped iSCSI storage. It should tell you the truth about what's going on (for a quick sanity check before you set it up, see the sketch below).
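I/O Meter is the right tool, but if you just want a quick cross-check before setting it up, a tiny script like this already shows the difference between the cached copy and the real write speed. Plain Python, and the drive letter and file name are made-up placeholders for your mapped iSCSI volume:

# Quick-and-dirty sequential write test against a mapped iSCSI volume.
# We time the write *including* a flush to the device, not just the memory
# copy into the Windows cache, which is the number Explorer shows you.
import os
import time

TARGET_FILE = r"E:\write_test.tmp"   # placeholder: your mapped iSCSI disk
BLOCK = b"\0" * (1024 * 1024)        # 1 MB per write
TOTAL_MB = 2048                      # write 2 GB total

start = time.time()
with open(TARGET_FILE, "wb", buffering=0) as f:
    for _ in range(TOTAL_MB):
        f.write(BLOCK)
    f.flush()
    os.fsync(f.fileno())             # don't stop the clock until data hits the target
elapsed = time.time() - start

print(f"Sequential write: {TOTAL_MB / elapsed:.1f} MB/sec")
os.remove(TARGET_FILE)

It's sequential-only and single-threaded, so use it only as a rough cross-check; I/O Meter gives you proper random/mixed workloads and queue depths.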
5) Wire speed for GbE is around 100 MB/sec (back-of-the-envelope math below). So after the cache is filled with data, the lazy writer starts pushing it to the disk (the remote iSCSI target in your case) and you see the real picture. 100 MB/sec for writes is OK. I'd expect a bit more (maybe 110-115 MB/sec), but it doesn't matter much.
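The arithmetic behind that number, if anyone wants to check it (the overhead percentages are rough estimates):

# Back-of-the-envelope ceiling for iSCSI over gigabit Ethernet.
# 1 Gbit/s raw, minus Ethernet/IP/TCP/iSCSI header overhead -- roughly
# 5-10% with a standard 1500-byte MTU; jumbo frames reduce it further.
raw_bytes_per_sec = 1_000_000_000 / 8            # 125 MB/sec on the wire
for overhead in (0.05, 0.10):
    usable = raw_bytes_per_sec * (1 - overhead) / 1_000_000
    print(f"{overhead:.0%} overhead -> ~{usable:.0f} MB/sec usable")

So anything in the 110-118 MB/sec range is effectively wire speed, and ~100 MB/sec measured through the whole file-copy stack is perfectly normal.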
6) 700 KB/sec is a very *BAD* result. You should again be getting wire speed of around 100 MB/sec. At this point you need to run I/O Meter against the mapped disk LOCALLY (on the target side) to find out what's wrong with the disk. ARECA cards are well known for their issues (Google a bit for the details), so I'd start there. If local I/O is OK, check the I/O Meter numbers over the network (a quick read-test sketch is below). If you get 100 MB/sec or more, ignore what Windows says, you're OK. If not, our engineers will keep working with you to find out what's broken.
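Same idea as the write sketch in point 4, turned around for reads. Run it once locally on the target box against the RAID volume, then once on the client against the mapped iSCSI disk, and compare. Again, just a rough stand-in for I/O Meter, and the path is a placeholder:

# Sequential read sanity check. Point it at a file noticeably bigger than
# the ARECA controller cache (several GB at least) so you measure the
# disks, not cache hits.
import time

SOURCE_FILE = r"E:\big_test_file.bin"   # placeholder: local RAID volume or mapped iSCSI disk
CHUNK = 1024 * 1024                     # read in 1 MB chunks

read_bytes = 0
start = time.time()
with open(SOURCE_FILE, "rb", buffering=0) as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        read_bytes += len(data)
elapsed = time.time() - start

print(f"Sequential read: {read_bytes / elapsed / 1_000_000:.1f} MB/sec")

If the local number is bad too, the problem is below iSCSI (controller, RAID config, drivers); if local is fine and the network number is bad, look at the NIC/switch/initiator side.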
oxyi wrote: Hi there,
I've tested all the links within my StarWind setup with ntttcpr, and every link has at least 975 MB/sec of throughput.
I have the Areca 1600ix-24 with 4 GB of cache. When I built the iSCSI target, I created it with write-back caching and 4000 MB of cache. Not sure if that's the correct amount of cache I should be putting down?
Question: while I am mounting the iSCSI target, how come I don't need to select "Enable multi-path"?
Anyway, I proceeded to copy a 4 GB file from local C: to the storage. Windows showed a speed of 1.5 GB/sec and finished the transfer within seconds, but in Task Manager, right after the file finished transferring, the network utilization was still at some percentage instead of idle. Was it just an illusion that the transfer had finished, while it was really still working in the background?
Second test: I copied a 50 GB file from local C: to the storage. I saw the speed drop all the way from 1.5 GB/sec to 100 MB/sec after a few minutes. Does that mean the cache is filling up, hence the speed drop?
During both tests, I tried to copy a 4 GB file back from the storage, and I would only get around 700 KB/sec. Why is that?
Thanks for answering my questions.