I want to set up a 3-node StarWind stretched cluster presenting HA storage to Hyper-V, Citrix VDI, and SMB clients. Because of a limited initial budget, these 3 nodes also have to host the infrastructure for Hyper-V and Citrix VDI (loopback configuration or virtualized). The primary AD infrastructure (DC, DHCP, DNS, PXE, and NTP) will be installed on a dedicated 4th server, and monitoring and management on a 5th server. The secondary DC, DHCP, and DNS should be virtualized.
The implementation is a 2-step procedure.
Two nodes will be hosted in the "Main DC" in Building 1; the third node will be hosted in the "Secondary DC" in Building 2.
Each node has the following configuration:
1x E5-2660 (I need the CPU power mostly for the VDI environment). Question 1: Must I prevent CPU contention and prioritize resources if I install StarWind on bare metal? (See the sketch after this list.)
4x 16GB DDR3 ECC registered RAM at 1600MHz (an additional 64GB in Step 2)
2x onboard Intel i350 1GbE NICs provided by the Supermicro motherboard (management and 1st heartbeat network)
2x dual-port Intel X540 10GbE NICs (2 ports for the sync channel, 2 ports for iSCSI and SMB traffic plus the 2nd and 3rd heartbeat networks)
2x 128GB Intel 520 SSDs for the bare-metal OS and StarWind, in RAID1 (Windows Server 2012 with Hyper-V or Hyper-V Server 2012)
1x 240GB Intel 520 SSD for the virtualized AD infrastructure servers
1x 240GB Intel 520 SSD for the virtualized Exchange Server DB
1x 480GB Intel 520 SSD for the virtualized file server
1x 480GB Intel 520 SSD for VDI storage
1x 3TB SATA disk for backup storage
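On Question 1: if StarWind sits in the parent partition, the parent only gets the CPU the guests leave free, so the usual precaution is to cap or weight the guest VMs rather than reserve anything for the host directly. A minimal sketch with the Hyper-V cmdlets in Server 2012 (VM names and percentages are placeholders, not a tested recommendation):

```powershell
# Cap a VDI guest so the parent partition (where StarWind runs) keeps headroom.
# -Maximum caps the VM at a percentage of its assigned vCPUs;
# -RelativeWeight (1-10000, default 100) arbitrates between VMs under contention.
Set-VMProcessor -VMName "VDI-01" -Count 2 -Maximum 50 -RelativeWeight 100

# Give a virtualized infrastructure VM a guaranteed share instead:
# -Reserve guarantees the VM a percentage of its assigned vCPUs.
Set-VMProcessor -VMName "DC-02" -Count 2 -Reserve 25 -RelativeWeight 200
```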
I'm dedicating each disk to a specific service because I want to be sure that a disk failure affects only one service. And of course I know the design is a 3-node cluster, so up to 2 disks behind the corresponding LUN/share can fail; above all I want low recovery times in case of a catastrophic failure.
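As an illustration of the one-disk-per-service idea, the dedicated SSDs could be prepared with the Storage cmdlets built into Server 2012, with the StarWind image files later living on these volumes. Disk numbers here are assumptions; check Get-Disk on each node first:

```powershell
# Map each dedicated SSD to its service (disk numbers are placeholders -
# verify with Get-Disk on the real node before running).
$layout = @{ 2 = "AD-Infra"; 3 = "Exchange-DB"; 4 = "FileServer"; 5 = "VDI"; 6 = "Backup" }
foreach ($disk in $layout.Keys) {
    Initialize-Disk -Number $disk -PartitionStyle GPT
    New-Partition -DiskNumber $disk -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel $layout[$disk] -Confirm:$false
}
```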
The Gigabit NICs and 2 of the 10GbE ports on each server will be connected to 3 Cisco SG500X-48 switches in a stretched stack (2 switches in the Main DC, 1 in the Secondary DC, all switches interconnected via a 10GbE ring).
The remaining 2 10GbE ports will interconnect all 3 nodes directly for the sync channel and heartbeat.
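On those direct links, each role would typically get its own non-routed subnet, with the sync/heartbeat interfaces kept out of DNS so nothing client-facing ever resolves to them. A sketch for node 1 (interface aliases and subnets are made up for the example):

```powershell
# Node 1 example: one non-routed subnet per role (adjust per node).
New-NetIPAddress -InterfaceAlias "Sync1" -IPAddress 172.16.10.1 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Sync2" -IPAddress 172.16.20.1 -PrefixLength 24

# Keep the sync/heartbeat interfaces out of DNS so clients never resolve them.
Set-DnsClient -InterfaceAlias "Sync1","Sync2" -RegisterThisConnectionsAddress $false
```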
The Supermicro board offers an integrated LSI 2208 controller with 1GB of cache.
Question 2: I'm not sure if I should buy the BBU.
Question 3: Do I need a RAID controller at all (I'm not configuring RAID inside the LUNs)? Could I use an LSI HBA instead, so the disks connect almost directly to a PCIe 2.0 x8 HBA rather than to the shared C602 controller?
Question 4: Since I'm using SSDs, do I need a RAID controller with cache?
OK, that's the initial physical design. Well, almost: if I have to go further, the 3 nodes will still have to serve all of the above (in loopback configuration), and I will set up additional servers for Citrix VDI.
On the software side:
1. Install Windows Server 2012 with Hyper-V or Hyper-V Server 2012 on each node (using only the 2 OS disks). 1.1 Configure the Hyper-V cluster (see the sketch after this list).
2. Install StarWind on the OS partition (parent partition) and prepare the HA LUNs, one LUN/disk per service as mentioned before.
3. Create a virtualized Windows Server 2012 file-server cluster spanned across the 3 nodes. (This virtual file-server cluster will then present the LUNs as VM storage to Hyper-V, Citrix, and other SMB clients.)
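Here is a rough sketch of steps 1 and 2 as seen from the Hyper-V host side, using only the in-box FailoverClusters, MPIO, and iSCSI initiator cmdlets. Node names, IPs, and the target IQN are placeholders, and the StarWind HA devices themselves would still be created in the StarWind console first:

```powershell
# Step 1: roles and the Hyper-V cluster (run the feature install on every node).
Install-WindowsFeature Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools -Restart

Test-Cluster -Node "Node1","Node2","Node3"
New-Cluster -Name "HV-Cluster" -Node "Node1","Node2","Node3" -StaticAddress 10.0.0.50 -NoStorage

# Step 2: connect the StarWind HA targets over both iSCSI paths with MPIO.
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI
Enable-MSDSMAutomaticClaim -BusType iSCSI
New-IscsiTargetPortal -TargetPortalAddress "10.0.1.10"
New-IscsiTargetPortal -TargetPortalAddress "10.0.2.10"
Connect-IscsiTarget -NodeAddress "iqn.2008-08.com.starwindsoftware:node1-vdi" `
    -IsPersistent $true -IsMultipathEnabled $true

# Once the HA disk is online and clustered, promote it to a CSV for the VMs.
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```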
Here is where I'm getting confused:
Question 5: Is Step 3 necessary at all? Can I use the StarWind software alone to present and handle simultaneous access from the Hyper-V cluster, Citrix VDI, and SMB clients?
If I install the StarWind software directly in the parent partition, I'm not sure whether I end up in the constellation that Microsoft does not support:
Loopback configurations (where the computer that is running Hyper-V is used as the file server for virtual machine storage) are not supported.
http://technet.microsoft.com/en-us/libr ... 34187.aspx
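On Question 5: if the loopback restriction does force Step 3, the guest file-server cluster would publish the VM storage over SMB 3.0 roughly like this (share name, path, and group are assumptions; a Scale-Out File Server role on a CSV is what Hyper-V-over-SMB normally expects):

```powershell
# Inside the guest file-server cluster: publish VM storage over SMB 3.0.
# ContinuouslyAvailable gives the transparent failover Hyper-V needs.
New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\VMs" `
    -FullAccess "DOMAIN\Hyper-V-Hosts" -ContinuouslyAvailable $true

# Mirror the share permissions onto the NTFS ACLs of the folder.
Set-SmbPathAcl -ShareName "VMStore"
```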
Well, I hope I have made myself understood, and I will wait for suggestions and recommendations.
Let me say I have been reading the forum for the last week, back to posts from 2011, but I can't recall this 3-node nested setup being explained before, in a thread or in a technical whitepaper.
