
LSFS/L2 Problem?

Posted: Mon Jun 23, 2014 5:35 pm
by kspare
Here's the scenario.

LSI 9266 controller, 18 WD Black SATA drives in a RAID 10 array with a 64K stripe, plus six 60 GB Samsung SSDs in a RAID 5 array (I know, you shouldn't run SSDs in RAID 5), also with a 64K stripe.

We have 64 GB of RAM in the server.

In Windows, the partitions for both the RAID 10 array and the L2 cache array are formatted with NTFS and 64K clusters.

In StarWind, the device is created as a 9 TB thin-provisioned (LSFS) drive, with the maximum memory allowed for caching (ended up at 45 GB) and 160 GB for L2 caching, using write-back.

We're connected via 10 GbE NICs through a Cisco Nexus 5020 switch.

So it's all high-performance gear here.
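
As a quick sanity check on that geometry (drive counts and stripe size taken from the description above, nothing StarWind-specific assumed):

    # Back-of-envelope stripe geometry for the two arrays described above.
    def full_stripe_kib(data_drives, stripe_kib):
        """Data carried by one full stripe across the data-bearing drives."""
        return data_drives * stripe_kib

    # RAID 10 over 18 drives: 9 mirrored pairs carry data.
    print("RAID 10 full stripe:", full_stripe_kib(18 // 2, 64), "KiB")   # 576 KiB

    # RAID 5 over 6 SSDs: 5 data drives plus 1 parity per stripe.
    print("RAID 5 full stripe: ", full_stripe_kib(6 - 1, 64), "KiB")     # 320 KiB

    # The 64K NTFS clusters match the 64K stripe element, so a cluster never
    # straddles two elements, but a full-stripe write still needs 576 KiB
    # (RAID 10) or 320 KiB (RAID 5) of contiguous data.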

We moved a VM onto the iSCSI store and are now moving it off because the performance was terrible.

While moving it off, the NICs on the SAN show 90 Mbps each (we use round robin).


On the local disks of the server we're moving to, we're seeing a write speed of 46,960 KB/s.

Why is it so slow? We get MUCH higher performance with a thick disk using CacheCade on the LSI controller.
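
To put those two figures in one unit (reading "46,960 KB/s" as decimal kilobytes per second; the two-NIC count is an assumption, since the post doesn't say how many ports are in the round-robin group):

    # Convert the quoted figures to decimal MB/s for comparison.
    nic_mbps = 90                      # per-NIC rate reported on the SAN
    nic_count = 2                      # assumption: two 10 GbE ports, round robin
    print("iSCSI wire rate: ~%.1f MB/s" % (nic_mbps * nic_count / 8))   # ~22.5 MB/s

    local_write_kb_s = 46_960          # local write speed quoted above
    print("Local writes:    ~%.0f MB/s" % (local_write_kb_s / 1000))    # ~47 MB/s

    # Both are under 2% of what two 10 GbE links (~2,500 MB/s combined)
    # can carry, so the wire itself is not the bottleneck.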

Re: LSFS/L2 Problem?

Posted: Mon Jun 23, 2014 9:41 pm
by anton (staff)
Do you have any actual numbers to share? Thin vs. thick, Iometer, 4 workers, 16 commands in queue, 100% read and 100% write, all random. Thanks!
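
For anyone without Iometer handy, a very rough Python approximation of the read half of that pattern is sketched below. It is not Iometer: synchronous threads stand in for outstanding commands, the OS cache is not bypassed, and the test-file path is a placeholder, so treat any numbers as indicative only.

    # Crude stand-in for "4 workers, 16 commands in queue, 100% random read":
    # WORKERS x QUEUE_DEPTH synchronous threads approximate that many
    # outstanding 4 KiB I/Os. Pre-create TEST_FILE (larger than RAM) on the
    # volume under test before running.
    import os, random, threading, time

    TEST_FILE = r"E:\iotest.bin"   # placeholder path
    BLOCK = 4 * 1024
    WORKERS, QUEUE_DEPTH, DURATION = 4, 16, 30

    total_ops = 0
    lock = threading.Lock()
    deadline = time.monotonic() + DURATION

    def reader():
        """Issue random-aligned 4 KiB reads until the deadline passes."""
        global total_ops
        blocks = os.path.getsize(TEST_FILE) // BLOCK
        done = 0
        with open(TEST_FILE, "rb", buffering=0) as f:
            while time.monotonic() < deadline:
                f.seek(random.randrange(blocks) * BLOCK)
                f.read(BLOCK)
                done += 1
        with lock:
            total_ops += done

    threads = [threading.Thread(target=reader) for _ in range(WORKERS * QUEUE_DEPTH)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("100%% random read: ~%d IOPS" % (total_ops / DURATION))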

Re: LSFS/L2 Problem?

Posted: Tue Jun 24, 2014 1:22 am
by kspare
I'll be honest, Anton: I don't care about Iometer. I was trying to move a thick disk from my StarWind SAN to an all-SSD datastore and the performance was beyond terrible: 2 hours to move 80 GB.

I've given up on LSFS; it doesn't work.

We're playing around with the L2 caching because I think it shows promise compared to the CacheCade option from LSI.

If you want to connect over Skype and take a look at my setup, by all means let me know.
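
For scale, the "2 hours for 80 GB" figure works out to (decimal units assumed):

    # Effective rate of the VM move described above.
    gb_moved, hours = 80, 2
    print("~%.1f MB/s effective" % (gb_moved * 1000 / (hours * 3600)))  # ~11.1 MB/s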

Re: LSFS/L2 Problem?

Posted: Tue Jun 24, 2014 8:37 am
by robnicholson
kspare wrote:I've given up on LSFS; it doesn't work.
To be honest, this is the bit of v8 that worries me the most. Is anyone on here using it in production yet?

Cheers, Rob.

Re: LSFS/L2 Problem?

Posted: Tue Jun 24, 2014 9:39 am
by anton (staff)
We need performance numbers as a starting point for troubleshooting the issue. It's like a doctor asking you over the phone about your body temperature and you telling him you don't care about that, you just don't want to be sick :)

Please PM or e-mail me your remote access details (RDP, Skype, etc.) and I'll put you in touch with the R&D and QA guys; they will be happy to help. What you're seeing is definitely NOT normal.
kspare wrote:I'll be honest, Anton: I don't care about Iometer. I was trying to move a thick disk from my StarWind SAN to an all-SSD datastore and the performance was beyond terrible: 2 hours to move 80 GB.

I've given up on LSFS; it doesn't work.

We're playing around with the L2 caching because I think it shows promise compared to the CacheCade option from LSI.

If you want to connect over Skype and take a look at my setup, by all means let me know.

Re: LSFS/L2 Problem?

Posted: Tue Jun 24, 2014 9:52 am
by anton (staff)
I would suggest spawning a dedicated thread for that. I'll check on my side what statistics we have to share.
robnicholson wrote:
kspare wrote:I've given up on LSFS; it doesn't work.
To be honest, this is the bit of v8 that worries me the most. Is anyone on here using it in production yet?

Cheers, Rob.

Re: LSFS/L2 Problem?

Posted: Tue Jun 24, 2014 1:20 pm
by kspare
The problem here, guys, is that you provide *no* documentation for recommended setups: RAID stripe sizes, RAID levels, etc.

I spend more time experimenting to see if I have it right than anything else, and at the end of the day, running CacheCade with RAID 5 or RAID 10 and presenting a physical disk to VMware (vs. a virtual disk) works the best. It's frustrating.

I'm using REAL-world scenarios. You can hype Iometer all you want, but when I can't get my data off my SAN because it's running so incredibly slowly, the product becomes useless and people are going to move on.

Re: LSFS/L2 Problem?

Posted: Tue Jun 24, 2014 3:06 pm
by kspare
I sent you a PM with my Skype information.

Re: LSFS/L2 Problem?

Posted: Tue Jun 24, 2014 3:50 pm
by anton (staff)
1) We'll update the best-practice scenarios ASAP. Thank you for pointing this out.

2) If you have issues you think aren't caused by your config, you're welcome to ask for support. You paid for it - use it. Engineers will jump on a remote session; it's their job to figure out what's wrong.

3) I just asked for numbers to get a feel for what's broken. If you're not comfortable with me being a proxy between you and the engineers, that's fine; I'll just assign a remote session immediately. NP.
kspare wrote:The problem here, guys, is that you provide *no* documentation for recommended setups: RAID stripe sizes, RAID levels, etc.

I spend more time experimenting to see if I have it right than anything else, and at the end of the day, running CacheCade with RAID 5 or RAID 10 and presenting a physical disk to VMware (vs. a virtual disk) works the best. It's frustrating.

I'm using REAL-world scenarios. You can hype Iometer all you want, but when I can't get my data off my SAN because it's running so incredibly slowly, the product becomes useless and people are going to move on.

Re: LSFS/L2 Problem?

Posted: Tue Jun 24, 2014 3:58 pm
by anton (staff)
I've put you in touch with the engineers; see your inbox. I'll make sure they get to it ASAP.
kspare wrote:I sent you a PM with my Skype information.

Re: LSFS/L2 Problem?

Posted: Tue Jun 24, 2014 9:31 pm
by kspare
I got in touch with one of your engineers; he was able to get on my system and identify a bug. So we're just waiting for a fix now.

Re: LSFS/L2 Problem?

Posted: Tue Jun 24, 2014 9:33 pm
by kspare
I would also like to say thanks. This is much better support than before: having them access the box directly to diagnose it made all the difference.

Re: LSFS/L2 Problem?

Posted: Tue Jun 24, 2014 10:51 pm
by anton (staff)
Thank you. Let's see how the story ends...