EXTREME Home Lab with Starwind

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

georgep
Posts: 38
Joined: Thu Mar 24, 2011 1:25 am

Thu Mar 24, 2011 1:58 am

Hi everyone. I am building an "extreme home production live lab". The reason behind this is to always work at home with the technologies we also use every day at work as network/system admins.

Project goal: have 2 extremely fast HA SANs, on both random/sequential throughput and IOPS, for 3 ESXi hosts. When all gets done I expect to have about 50-80 VMs. I will keep updating this topic with progress, images, etc.
The end goal for me will be to have each SAN do write-back caching. Writes should first go to RAM (I hope I can have a 4GB RAM disk), then to a fast SSD, then to the Adaptec RAID controller cache, and finally to the RAID 5 array. I know 4GB of RAM cache will be a lot to lose in a power failure, but this is an HA environment, and it's for home use, protected by a UPS as well.
In the end I would rather risk a rare RAM-cache data disaster than worry about random r/w IOPS and have the SAN perform slowly.


My hardware (not complete yet):
SAN1: - 15 Hitachi 2TB 7200rpm HDDs in RAID 5 on an Adaptec 31605 controller
- Core 2 Duo 2.9GHz, 4GB DDR2 RAM
- 2 onboard NICs for HA and a quad-port Gb Intel NIC for iSCSI traffic (Intel E1G44ETBLK)

SAN2: - 10 Western Digital Green 2TB HDDs in RAID 5 on an Adaptec 31605 controller
- 2 onboard NICs for HA and a quad-port Gb Intel NIC for iSCSI traffic (Intel E1G44ETBLK)

ESX hosts:
24GB DDR3 each, with an Intel Xeon 2.6GHz quad core
2 onboard Gb Intel NICs for network traffic
Quad-port Gb Intel PCIe NIC for iSCSI traffic (Intel E1G44ETBLK Gigabit ET Quad Port Server)

Switches:
So far I have two Dell 2848s (redundant switches for the LAN) and, for now, one Dell 6248 (iSCSI traffic with jumbo frames enabled)

Will add more hardware as I go with this build. This will mimic a live production environment with 100 VMs or so, but it's my home live production environment, where I can work with live servers without worrying that I may destroy an important server during an upgrade, a cluster config, etc.

So once again, thanks to StarWind for giving me a temp licence. I might wait for 5.7 to start with, as I still need to do some more testing to get SAN1 ready for StarWind.



Testing so far:
1. SAN1 is performing really well in RAID 5, achieving 450MB/s on sequential reads and 300MB/s on sequential writes on the local platform.



Next steps:
1. Test SAN1 IOPS (Iometer, SQLIO)
2. Test SAN1 sequential r/w performance with NIC teaming through a Windows share. Copy movies from 2 Windows computers that each have a 4GB RAM disk, to max out the gigabit port.
3. Test SAN1 iSCSI sequential r/w performance through the Dell switches
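For step 1, a SQLIO run against the RAID 5 volume might look like the sketch below. The test-file path and the thread/queue settings are hypothetical, not from this thread; flags follow the SQLIO readme (-k operation, -f access pattern, -b block size in KB, -o outstanding I/Os, -t threads, -s seconds, -BN unbuffered, -LS latency stats):

```shell
:: Random 8K reads, 4 threads, 8 outstanding I/Os, 60 seconds, unbuffered
:: (D:\testfile.dat is a hypothetical pre-created test file on the array)
sqlio -kR -frandom -b8 -o8 -t4 -s60 -LS -BN D:\testfile.dat

:: Sequential 64K writes with the same thread/queue settings
sqlio -kW -fsequential -b64 -o8 -t4 -s60 -LS -BN D:\testfile.dat
```

Comparing the random 8K IOPS against the sequential 64K MB/s gives both of the numbers the project goal cares about from one tool.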

Will take baby steps to see where I lose and gain performance.


More updates/pictures/performance results will come.
Last edited by georgep on Tue Jul 26, 2011 2:15 pm, edited 1 time in total.
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am

Thu Mar 24, 2011 9:44 am

Great! We'll wait for your updates.
You'll be informed about the start of 5.7 beta testing. We are expecting it in the middle of April.
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
georgep
Posts: 38
Joined: Thu Mar 24, 2011 1:25 am

Thu Mar 24, 2011 2:43 pm

A few questions here:
1. When running an HA cluster, can I use a lot of RAM (4GB) for cache?
2. When deploying a StarWind HA solution, do I have to have same-size SAN volumes, or do I just have to make sure the second SAN has enough space to host the target LUNs from SAN1?
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am

Thu Mar 24, 2011 3:08 pm

1. Sure you can. The bigger the cache, the better the performance.
2. The *.img files that you create for HA should have the same size.
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
georgep
Posts: 38
Joined: Thu Mar 24, 2011 1:25 am

Thu Mar 24, 2011 3:30 pm

Perfect. Does StarWind HA come with an option to set RAM as the cache for the iSCSI target? Also, if I want to utilize and profit from an SSD as cache as well, is there an option to set that within StarWind?

Also, what is best to do with StarWind: one target with multiple LUNs, or multiple targets with a single LUN each?
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Thu Mar 24, 2011 4:12 pm

RAM is used for the Level 1 cache and SSD is used for Level 2.
georgep wrote:Perfect. Does StarWind HA come with an option to set RAM as the cache for the iSCSI target? Also, if I want to utilize and profit from an SSD as cache as well, is there an option to set that within StarWind?

Also, what is best to do with StarWind: one target with multiple LUNs, or multiple targets with a single LUN each?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

georgep
Posts: 38
Joined: Thu Mar 24, 2011 1:25 am

Fri Mar 25, 2011 7:44 pm

Does StarWind have support for VMware VAAI? Is there anything specific that has to be configured for StarWind to be able to do multipathing for iSCSI targets, or is all I need within the VMware host iSCSI software adapter config? Not sure if I need to enable MPIO on both ends, ESX and also StarWind.


Also, doesn't having StarWind create *.img files add an extra layer on top of NTFS/iSCSI/VMFS/VMDK, etc.? How much performance is lost just by doing that?
CyberNBD
Posts: 25
Joined: Fri Mar 25, 2011 10:56 pm

Fri Mar 25, 2011 11:14 pm

Nice to read that I am not the only one using enterprise-grade technology at home.

I am currently in the process of upgrading my home lab using a similar setup.
I have closed some deals on the first pieces of equipment and I am designing, testing and benchmarking some scenarios at the moment.

The final setup will include a Dell PowerEdge 2950 StarWind server connected to a Dell MD1000 SAN (8-disk RAID 50 15k SAS array + hot spares for FAST storage and a 4-disk RAID 5 7.5k SATA array + hot spare for LARGE files) and 3 PowerEdge 2950 ESXi 4.1 servers (two "production" servers which also hold important data and one lab server, mainly for testing and as an addition to the Cisco lab I have).

I will see if I can post updates to this or a new topic on a regular basis.

Seems I also have to talk to the StarWind guys on how to apply for a temporary license :lol:
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Sat Mar 26, 2011 8:50 pm

1) No native support for VAAI so far. We're trying to be as vendor-agnostic as possible. But VAAI for VMware and SMI-S for Hyper-V and Xen are coming...

2) Just follow the HOWTOs to configure MPIO for ESX / ESXi. Everything should be pretty much straightforward :)

3) This "extra layer" impact is negligible, as it's as thin as rhodium plating on gold.
georgep wrote:Does StarWind have support for VMware VAAI? Is there anything specific that has to be configured for StarWind to be able to do multipathing for iSCSI targets, or is all I need within the VMware host iSCSI software adapter config? Not sure if I need to enable MPIO on both ends, ESX and also StarWind.


Also, doesn't having StarWind create *.img files add an extra layer on top of NTFS/iSCSI/VMFS/VMDK, etc.? How much performance is lost just by doing that?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Sat Mar 26, 2011 8:52 pm

1) Wow! Did you rob a bank? Kidding... I'm just jealous :)

2) Sure! You're always welcome to write to sales@starwindsoftware.com telling us who you are, what you need and what for. We're neither cheap nor greedy.
CyberNBD wrote:Nice to read that I am not the only one using enterprise-grade technology at home.

I am currently in the process of upgrading my home lab using a similar setup.
I have closed some deals on the first pieces of equipment and I am designing, testing and benchmarking some scenarios at the moment.

The final setup will include a Dell PowerEdge 2950 StarWind server connected to a Dell MD1000 SAN (8-disk RAID 50 15k SAS array + hot spares for FAST storage and a 4-disk RAID 5 7.5k SATA array + hot spare for LARGE files) and 3 PowerEdge 2950 ESXi 4.1 servers (two "production" servers which also hold important data and one lab server, mainly for testing and as an addition to the Cisco lab I have).

I will see if I can post updates to this or a new topic on a regular basis.

Seems I also have to talk to the StarWind guys on how to apply for a temporary license :lol:
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

georgep
Posts: 38
Joined: Thu Mar 24, 2011 1:25 am

Mon Mar 28, 2011 2:09 pm

Hi Anton, thanks a lot for the feedback on VAAI. I know how to configure multipathing from the ESX side. I have never configured it from the Windows 2008 R2 side, but I got some info on how to do it. So it's necessary to add the MPIO feature in R2 for StarWind to have MPIO capability on iSCSI targets... right?

My plan is to try NIC teaming between StarWind and the Dell switch, and MPIO from the Dell switch to the ESX hosts. I will test this as well as MPIO on both ends and see which one offers better performance.
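On the ESX side, the multipathing setup mentioned above is mostly software iSCSI port binding. A sketch for ESX/ESXi 4.x, assuming vmk1/vmk2 are the iSCSI VMkernel ports and vmhba33 is the software iSCSI adapter (example names, not from this thread; check yours first):

```shell
# List VMkernel ports and storage adapters to find the real names
esxcfg-vmknic -l
esxcfg-scsidevs -a

# Bind each iSCSI VMkernel port to the software iSCSI adapter
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Verify the bindings
esxcli swiscsi nic list -d vmhba33
```

With both ports bound, each target shows up over two paths and the round-robin path policy can be selected per LUN in the vSphere client.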
Max (staff)
Staff
Posts: 533
Joined: Tue Apr 20, 2010 9:03 am

Mon Mar 28, 2011 3:56 pm

HA will not work if MPIO is not configured; enabling it is a matter of minutes. Just install the MPIO feature, open the MPIO configuration and tick the "Add support for iSCSI devices" checkbox.
As for the teaming - I've got better results in LACP mode (the one with the intelligent mix of FT and LA). We are also working on our own MCS implementation, which should be available within the 5.x branch (no NIC teaming involved)
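For reference, the two MPIO steps described here can also be scripted from an elevated command prompt on Windows Server 2008 R2; a sketch (the quoted device string is the standard Microsoft iSCSI bus type ID):

```shell
:: Install the MPIO feature (a reboot may be required)
dism /online /enable-feature /featurename:MultipathIo

:: Equivalent of ticking "Add support for iSCSI devices" in the MPIO control panel
:: (-n suppresses the automatic reboot; reboot manually afterwards)
mpclaim -n -i -d "MSFT2005iSCSIBusType_0x9"
```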
Max Kolomyeytsev
StarWind Software
georgep
Posts: 38
Joined: Thu Mar 24, 2011 1:25 am

Thu Apr 21, 2011 2:05 pm

Almost got all the hardware ready.
So here is the problem: SAN1 has a quad-port Intel ET NIC, each port on its own IP. From a Windows server with 2 onboard Intel NICs I can connect to the target and do the multipathing just fine. Now I purchased 4 quad-port Intel ET NICs for the other servers. When I try to use those 4 NICs on a server to connect to SAN1, I get a Target Error. I swapped the quad NIC with another one - same error.
I am wondering if it's StarWind or the NICs. Can I send you guys the logs from StarWind? I do have the latest drivers for the ET NICs on both SAN1 and the Windows server I am testing with.
Max (staff)
Staff
Posts: 533
Joined: Tue Apr 20, 2010 9:03 am

Thu Apr 21, 2011 3:52 pm

Can you tell me if that's the same target you used before while connecting with the 2 onboard NICs?
Also, I need to know if there is any CHAP or ACL configured.
Max Kolomyeytsev
StarWind Software
georgep
Posts: 38
Joined: Thu Mar 24, 2011 1:25 am

Thu Apr 21, 2011 4:53 pm

Same target, no CHAP, no ACLs... basic stuff, just to test performance...

Weird, eh?