raw image on NTFS or raw device, performance difference

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

sunyucong
Posts: 43
Joined: Mon Sep 12, 2011 8:21 am

Sun Apr 29, 2012 7:59 am

Hi,

Can someone with real-world experience talk about the performance difference between running StarWind with a raw image on a big NTFS file system versus directly exposing a block device? In both cases it will be on a 12-disk RAID 10.

Thanks.
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Sun Apr 29, 2012 12:23 pm

No difference. We use NTFS (or any other file system we layer on top of) only to mark extents as "used". All requests are page-aligned (memory) and extent-aligned (disk), and we don't use the FS cache (we use our own, much more effective one). It's RECOMMENDED to use IMG files rather than raw devices because of the flexibility to move them.
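The page/extent alignment described above can be illustrated with a tiny sketch. The 4 KiB page and 64 KiB extent sizes here are common illustrative values, not StarWind's actual constants:

```python
# Minimal sketch of the alignment idea described above.
# PAGE and EXTENT sizes are illustrative assumptions, not StarWind's constants.
PAGE = 4 * 1024          # memory requests aligned to page boundaries
EXTENT = 64 * 1024       # disk requests aligned to extent boundaries

def aligned(offset: int, unit: int) -> bool:
    """True if a request offset falls exactly on a unit boundary."""
    return offset % unit == 0

def round_down(offset: int, unit: int) -> int:
    """Align an arbitrary offset down to the enclosing unit boundary."""
    return offset - (offset % unit)

print(aligned(192 * 1024, EXTENT))   # True: 192 KiB sits on a 64 KiB boundary
print(round_down(70_000, PAGE))      # 69632, i.e. 17 * 4096
```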
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

sunyucong
Posts: 43
Joined: Mon Sep 12, 2011 8:21 am

Tue May 01, 2012 8:39 pm

It sounds like you are saying one image for each client, is that right? In terms of VMware, is it better to make one image for each guest and use a raw device mapping, or should I use one image per disk array as a VMFS datastore?
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Tue May 01, 2012 10:18 pm

I never said you need LUN-per-VM... Good practice is to store ~20 VMs per LUN. Heavy workloads, e.g. SQL Server, require a dedicated LUN for optimal performance.
sunyucong wrote:It sounds like you are saying one image for each client, is that right? In terms of VMware, is it better to make one image for each guest and use a raw device mapping, or should I use one image per disk array as a VMFS datastore?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

sunyucong
Posts: 43
Joined: Mon Sep 12, 2011 8:21 am

Thu May 10, 2012 11:11 pm

The problem is, I want to store about 500 VMs on the array, possibly on a single LUN. And I think I should not divide the LUN, since that would just screw up I/O scheduling in ESXi. What do you think?
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Fri May 11, 2012 10:33 am

It's considered very bad practice to store that many VMs on a single LUN. Google a bit for the recommended settings, but AFAIR it's 20-30 maximum.
sunyucong wrote:The problem is, I want to store about 500 VMs on the array, possibly on a single LUN. And I think I should not divide the LUN, since that would just screw up I/O scheduling in ESXi. What do you think?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

sunyucong
Posts: 43
Joined: Mon Sep 12, 2011 8:21 am

Sat May 12, 2012 6:56 am

Can you enlighten me as to the reasons?
Is this because of a command queuing issue or a SCSI reservation conflict issue?
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Sat May 12, 2012 7:04 am

I'm not a hypervisor vendor, so I can only guess. I think it's because even with hardware-accelerated locking the whole thing puts too heavy a load on a single LUN.
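One way to see why a 20-30 VMs-per-LUN rule of thumb emerges is simple queue-depth arithmetic. This is a rough sketch; the 32-slot device queue depth and the per-VM outstanding I/O figure are illustrative assumptions, not StarWind or VMware numbers:

```python
# Back-of-envelope: how many VMs can share one LUN before its device
# queue saturates. All numbers below are illustrative assumptions.

DEVICE_QUEUE_DEPTH = 32   # assumed per-LUN queue depth on the hypervisor HBA
IO_PER_VM = 1.5           # assumed average outstanding I/Os per running VM

def max_vms_per_lun(queue_depth: int, outstanding_io_per_vm: float) -> int:
    """VMs a LUN can host before outstanding I/O exceeds its queue depth."""
    return int(queue_depth / outstanding_io_per_vm)

print(max_vms_per_lun(DEVICE_QUEUE_DEPTH, IO_PER_VM))  # 21, near the 20-30 rule of thumb
# 500 VMs at these assumptions would need ~750 queue slots on a single
# 32-slot LUN, so I/Os queue up and latency climbs.
```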
sunyucong wrote:Can you enlighten me as to the reasons?
Is this because of a command queuing issue or a SCSI reservation conflict issue?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
