Design 3-Node Stretch Cluster (Hyper-V, SMB)

ulrich.laaser
Posts: 4
Joined: Sat Feb 09, 2013 6:35 pm

Sun Feb 10, 2013 9:17 pm

Hello, I'm trying to find out if the following is suitable and whether you have any recommendations about my hardware design.

I want to set up a 3-node StarWind stretch cluster presenting HA storage to Hyper-V, Citrix VDI, and SMB clients. Because of a limited initial budget, these 3 nodes also have to host the infrastructure for Hyper-V and Citrix VDI (loopback configuration or virtualized). The primary AD infrastructure (DC, DHCP, DNS, PXE, and NTP) will be installed on a dedicated 4th server, monitoring and management on a 5th server. The secondary DC, DHCP, and DNS should be virtualized.

The implementation is a 2-step procedure:
1. Implement the virtualized AD infrastructure and basic Citrix VDI (5 standard users + 1 HDX 3D Pro user).
2. Grow Citrix VDI to 100 standard users + 10 HDX 3D Pro users.

2 nodes will be hosted in the "Main DC" in Building 1; the third node will be hosted in the "Secondary DC" in Building 2.

Each node has the following configuration:

1x E5-2660 (I need CPU power mostly for the VDI environment). Question 1: Must I prevent CPU contention and prioritize resources in case I install StarWind on bare metal?

4x 16GB DDR3 ECC registered RAM @ 1600MHz (an additional 64GB in step 2)

2x onboard Intel i350 1Gb NICs provided by the Supermicro motherboard (management and 1st heartbeat network)
2x dual-port Intel X540 10Gb NICs (2 ports for the sync channel, 2 ports for iSCSI and SMB traffic plus the 2nd and 3rd heartbeat networks)

2x 128GB Intel 520 SSD for the bare-metal OS and StarWind in a RAID 1 setup (Windows Server 2012 with Hyper-V or Hyper-V Server 2012)
1x 240GB Intel 520 SSD for the virtualized AD infrastructure servers
1x 240GB Intel 520 SSD for the virtualized Exchange Server DB
1x 480GB Intel 520 SSD for the virtualized file server
1x 480GB Intel 520 SSD for VDI storage
1x 3TB SATA disk for backup storage

I'm segmenting the disks per service because I want to be sure that in case of a disk failure only one specific service would be affected. And of course I know the design is a 3-node cluster, so up to 2 disks backing the corresponding LUN/share can fail; I mostly want to achieve low recovery times in case of a catastrophic failure.

The Gigabit NICs and 2 of the 10Gb ports of each server will be connected to 3 Cisco SG500X-48 switches in a stretched stack (2 switches in the main DC, 1 in the secondary DC, all switches interconnected via a 10Gb ring).

The last 2 10Gb ports will be used to interconnect all 3 nodes directly for the sync channel and heartbeat.
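Before bringing the cluster up, I can sanity-check those direct links with a simple TCP probe; a minimal sketch, where the peer addresses are hypothetical placeholders and 3260 is the standard iSCSI target port:

```python
# Minimal reachability probe for the sync/heartbeat interconnects
# (the node addresses below are hypothetical placeholders).
import socket

SYNC_PEERS = ["10.10.1.1", "10.10.1.2", "10.10.1.3"]  # direct 10Gb link IPs
ISCSI_PORT = 3260  # standard iSCSI target port

for peer in SYNC_PEERS:
    try:
        with socket.create_connection((peer, ISCSI_PORT), timeout=2):
            print(f"{peer}:{ISCSI_PORT} reachable")
    except OSError as err:
        print(f"{peer}:{ISCSI_PORT} UNREACHABLE ({err})")
```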

The Supermicro board offers an integrated LSI 2208 controller with 1GB of cache memory.
Question 2: I'm not sure if I should buy the BBU?
Question 3: Do I need a RAID controller at all (I'm not configuring RAID inside the LUNs)? Could I use an LSI HBA instead and connect the disks directly to a PCIe 2.0 x8 HBA controller rather than to the shared C602 controller?
Question 4: As I'm using SSDs, do I need a RAID controller with cache?


OK, that's the initial physical design. Well, almost: in case I have to go further, the 3 nodes will still have to serve all of the above (in a loopback configuration), and I will set up additional servers for Citrix VDI.


On the software side:

1. Install Windows Server 2012 with Hyper-V or Hyper-V Server 2012 on each node (using only the 2 OS disks). 1.1 Configure the Hyper-V cluster.
2. Install StarWind on the OS partition (parent partition) and prepare the HA LUNs, one LUN/disk for each service as mentioned before.
3. Create a virtualized Windows 2012 file server cluster spanned over the 3 nodes (finally, this virtual file server cluster will use these LUNs as VM storage for Hyper-V, Citrix, and other SMB clients).

Here is where I'm getting confused:
Question 5: Is step 3 necessary, or can I use the StarWind software alone to present and handle simultaneous access from the Hyper-V cluster, Citrix VDI, and SMB clients?

If I install the StarWind software directly on the parent partition, I'm not sure whether I run into the following configuration that Microsoft does not support:

"Loopback configurations (where the computer that is running Hyper-V is used as the file server for virtual machine storage) are not supported."

      http://technet.microsoft.com/en-us/libr ... 34187.aspx

Well, I hope I have been able to make myself understood, and I will wait for suggestions and recommendations.

Let me say I have been reading the forum for the last week, going back to comments from 2011, but I can't recall this 3-node nested setup being explained before, either here or in a technical whitepaper... :cry:
      Anatoly (staff)
      Staff
      Posts: 1675
      Joined: Tue Mar 01, 2011 8:28 am

      Wed Feb 13, 2013 9:30 am

OK, let's go through your questions one by one.
Question 1: Must I prevent CPU contention and prioritize resources in case I install StarWind on bare metal?
First of all, I do not understand what you mean by "install StarWind on bare metal"; our solution is Windows-based, so I will ask you to clarify this. As for the substance of the question: running a lot of roles on one server is a tricky, resource-consuming mission, and deciding which application is more important is the #1 step in such projects; otherwise you will have no plan and your system will be badly balanced.
Question 2: I'm not sure if I should buy the BBU?
If you want some extra time in case of a major electricity failure, then yes, you need it.
Question 3: Do I need a RAID controller at all (I'm not configuring RAID inside the LUNs)? Could I use an LSI HBA instead and connect the disks directly to a PCIe 2.0 x8 HBA controller rather than to the shared C602 controller?
Well, I'm a fan of having RAID. BTW, I'd like you to know that the recommended RAID levels for implementing HA are RAID 1, 0, or 10; RAID 5 and 6 are not recommended due to low write performance (BTW, the last two are not recommended for production use by LSI, the RAID vendor).
The performance of a RAID array directly depends on the stripe size used. There are no exact recommendations for which stripe size to use; it is a test-based choice. As a best practice, we recommend first setting the vendor-recommended value and running tests, then setting a bigger value and testing again, and finally setting a smaller value and testing once more. These 3 results should guide you to the optimal stripe size value. In some configurations a smaller stripe size like 4K or 8K gives better performance, and in other cases 64K, 128K, or even 256K will perform better.
The performance of the HA device will depend on the performance of the underlying RAID array. It's up to the customer to determine the optimal stripe size.
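As a rough illustration of that test loop, here is a minimal harness sketch; it assumes diskspd.exe (Microsoft's disk benchmark) is on the PATH, a T: test volume exists, and the array is re-created with each candidate stripe size on the controller between passes:

```python
# Stripe-size test harness sketch; diskspd.exe, the T: volume, and the
# candidate list are assumptions, not part of the original setup.
import subprocess

CANDIDATES = ["64K", "128K", "8K"]  # vendor-recommended first, then bigger, then smaller

for stripe in CANDIDATES:
    input(f"Re-create the RAID array with a {stripe} stripe size, then press Enter...")
    # Fixed workload across passes so only the stripe size changes:
    # 60s, 4 threads, 32 outstanding I/Os, random, 50% writes, 64K blocks.
    result = subprocess.run(
        ["diskspd.exe", "-b64K", "-d60", "-t4", "-o32", "-r", "-w50",
         "-c10G", r"T:\testfile.dat"],
        capture_output=True, text=True, check=True,
    )
    print(f"--- stripe {stripe} ---")
    print(result.stdout)
```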
Question 4: As I'm using SSDs, do I need a RAID controller with cache?
Cache means performance, so if you want to get great numbers, you need cache. But to be honest, I think that assigning plenty of RAM to the StarWind write-back cache should work in a similar way.
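To put rough numbers on that comparison, a back-of-the-envelope sketch follows; the burst rate and the RAM cache size are assumed figures for illustration, only the 1GB controller cache comes from this thread:

```python
# Burst-absorption comparison; the write rate and the RAM WB cache size
# are hypothetical figures, only the 1GB controller cache is from the thread.
burst_write_mb_s = 400      # assumed sustained write burst from the VMs
caches_gb = {"LSI 2208 onboard cache": 1, "StarWind RAM WB cache": 16}

for name, size_gb in caches_gb.items():
    seconds = size_gb * 1024 / burst_write_mb_s
    print(f"{name} ({size_gb} GB) absorbs ~{seconds:.1f}s of a "
          f"{burst_write_mb_s} MB/s burst")
```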
1. Install Windows Server 2012 with Hyper-V or Hyper-V Server 2012 on each node (using only the 2 OS disks). 1.1 Configure the Hyper-V cluster.
2. Install StarWind on the OS partition (parent partition) and prepare the HA LUNs, one LUN/disk for each service as mentioned before.
3. Create a virtualized Windows 2012 file server cluster spanned over the 3 nodes (finally, this virtual file server cluster will use these LUNs as VM storage for Hyper-V, Citrix, and other SMB clients).

Here is where I'm getting confused:
Question 5: Is step 3 necessary, or can I use the StarWind software alone to present and handle simultaneous access from the Hyper-V cluster, Citrix VDI, and SMB clients?
The answer is mostly "yes": you can use StarWind iSCSI SAN (not Native SAN) to provide shared storage for those apps. But I have one question. Am I correct in understanding your plan that you want to run Citrix VDI and StarWind on the same box?
      Best regards,
      Anatoly Vilchinsky
      Global Engineering and Support Manager
      www.starwind.com
      av@starwind.com
      ulrich.laaser
      Posts: 4
      Joined: Sat Feb 09, 2013 6:35 pm

      Thu Feb 14, 2013 2:26 am

1.) It means the first Windows OS is installed directly on the physical machine, without virtualization.
2.) OK, it's really not a problem since a BBU costs about $150...
3.) Well, that's all correct for spindle-based disks. But in my design I'm almost exclusively using SSDs, so I need to consider other things like TRIM and UNMAP support. Also, RAID is absolutely a requirement in a single-server setup, but in this scenario I'm using a 3-node "mirror". So in the best case I only have to be notified, replace the SSD, and perform some steps to reactivate it inside the virtual OS and then inside the StarWind iSCSI management. Resyncing from 2 nodes that also use Intel 520 SSDs (specced at roughly 500MB/s read/write) over about 2x 10Gb/s channels, it should be a matter of about 15 minutes to resync a whole 480GB SSD (see the quick calculation after this list). But I need to know whether it is really that easy (mostly the reassignment inside the StarWind Management Console).
4.) Fine, in this case I will try out a simple LSI 2208/LSI 2008 HBA and compare it with the full-featured LSI 2208 with 1GB BBU-backed cache... :)
5.) Yes... well, as described in the attached diagrams, VDI will run on its own assigned SSDs. In the first step I will implement VDI-in-a-Box with 10 users to see how it performs. If I can control boot storms and general CPU, RAM, and network spikes, I will go further. To guarantee the storage requirements of personalized desktops, I will test 4x 480GB in physical RAID 0 versus virtualized Storage Spaces on the file server cluster...
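To back up the estimate in point 3, here is a quick back-of-the-envelope calculation; it assumes the target SSD's sequential write speed, not the 2x 10Gb/s links, is the bottleneck:

```python
# Rough full-resync estimate for one HA device; the only inputs are the
# Intel 520 spec sheet figure and the link speeds mentioned above.
volume_gb = 480                  # size of the HA device to resync
ssd_write_mb_s = 500             # Intel 520 sequential write spec
link_mb_s = 2 * 10_000 / 8       # 2x 10Gb/s = ~2500 MB/s raw, so not the limit

bottleneck_mb_s = min(ssd_write_mb_s, link_mb_s)
seconds = volume_gb * 1000 / bottleneck_mb_s
print(f"Full resync of {volume_gb} GB: ~{seconds / 60:.0f} minutes")
# -> Full resync of 480 GB: ~16 minutes
```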

Well, as shown in the virtual server diagram, I want to use StarWind iSCSI SAN as virtual shared storage and use this storage in a virtualized file server. The shares will be used by the "physical" Hyper-V hosts and by SMB clients...

Last question on this subject: what is the technical difference between Native SAN for Hyper-V and iSCSI SAN? I couldn't find a document that shows installing Native SAN directly on a Hyper-V cluster. Maybe because this is not the approach?
Attachments:
HA diagram virtualization option 1.png (Virtual Server)
ha diagram physical.png (Physical Design)
      ulrich.laaser
      Posts: 4
      Joined: Sat Feb 09, 2013 6:35 pm

      Thu Feb 14, 2013 2:42 am

Here I submit a design that considers the possibility of installing StarWind Native SAN for Hyper-V directly on the parent OS.
Attachments:
HA diagram virtualization option 2.png (Starwind on Parent OS)
      jeddyatcc
      Posts: 49
      Joined: Wed Apr 25, 2012 11:52 pm

      Thu Feb 14, 2013 5:02 pm

This may seem like an odd question, but what program did you use to make those diagrams? My management is asking for something very similar so that they can understand our DR posture, lol.
      Anatoly (staff)
      Staff
      Posts: 1675
      Joined: Tue Mar 01, 2011 8:28 am

      Mon Feb 18, 2013 3:17 pm

3.) Well, that's all correct for spindle-based disks. But in my design I'm almost exclusively using SSDs, so I need to consider other things like TRIM and UNMAP support. Also, RAID is absolutely a requirement in a single-server setup, but in this scenario I'm using a 3-node "mirror". So in the best case I only have to be notified, replace the SSD, and perform some steps to reactivate it inside the virtual OS and then inside the StarWind iSCSI management. Resyncing from 2 nodes that also use Intel 520 SSDs (specced at roughly 500MB/s read/write) over about 2x 10Gb/s channels, it should be a matter of about 15 minutes to resync a whole 480GB SSD. But I need to know whether it is really that easy (mostly the reassignment inside the StarWind Management Console).
StarWind does support email notification, so you'll always be updated about the health status of your SAN if you want. As for the rest: we just store our virtual disks on the HDDs, SSDs, or arrays, and we see them as storage only, so as long as the storage is up, StarWind will work too.
What is the technical difference between Native SAN for Hyper-V and iSCSI SAN? I couldn't find a document that shows installing Native SAN directly on a Hyper-V cluster. Maybe because this is not the approach?
It's not that technical; these are mostly just different scenarios. Native SAN is aimed at scenarios where it is installed on the Hyper-V hosts and used by them only. You can do the same thing with iSCSI SAN, but in that case you will be able to connect to the StarWind HA storage with anything: Hyper-V, ESX, Citrix, etc. Well, and the Native SAN edition is cheaper :D
This may seem like an odd question, but what program did you use to make those diagrams?
I'd recommend you take a look at Visio; yes, it's from MS, but I really like it.
      Best regards,
      Anatoly Vilchinsky
      Global Engineering and Support Manager
      www.starwind.com
      av@starwind.com