We are currently stress testing your software for use in our QA environment and, eventually, in production. A question came up in our testing that may just stem from a configuration error on our part. Here is our setup:
SQL Cluster (10.160.0.44)
--Windows Cluster (10.160.0.43)
---- SQL01-NODE1 (10.160.0.40)
---- SQL01-NODE2 (10.160.0.41)
SAN01 <-------- iSCSI Initiator points to SAN01 ------ SQL01-NODE1 (10.160.0.40)
1- RAID01 (OS)
2- RAID10 (Share)
3- RAID10 (Share)
SAN02 <-------- iSCSI Initiator points to SAN02 ------ SQL01-NODE2 (10.160.0.41)
1- RAID01 (OS)
2- RAID10 (Share)
3- RAID10 (Share)
Following the StarWind documentation, we have HA set up between the two SANs. With this setup, when either server (SQL01-NODE1 or SQL01-NODE2) fails, the other takes over flawlessly. Because our application points to the virtual IP of the SQL cluster, 10.160.0.44, the failover transition is seamless and there is zero downtime. The issue is that when both SQL01-NODE1 and SAN02 fail, the SQL cluster becomes inaccessible. What type of setup is recommended to circumvent this kind of failure? Should the two SANs be put in their own Windows cluster as well? Are we mapping the available drives incorrectly?
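In the meantime, to sanity-check whether our one-node-to-one-SAN mapping is the culprit, we put together a quick Python sketch that we run on each SQL node to confirm it can reach both SAN portals on the default iSCSI port. The portal IPs below are placeholders (I haven't listed our real SAN addresses above). If NODE2 can only reach SAN02, that would explain why losing NODE1 and SAN02 together takes the whole cluster down:

```python
import socket

# Placeholder portal addresses -- substitute your actual SAN iSCSI portal IPs.
SAN_PORTALS = {
    "SAN01": "10.160.0.50",
    "SAN02": "10.160.0.51",
}
ISCSI_PORT = 3260  # default iSCSI target port


def portal_reachable(address: str, port: int = ISCSI_PORT, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the iSCSI portal succeeds."""
    try:
        with socket.create_connection((address, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Run on each SQL node. For the dual-failure scenario (NODE1 + SAN02) to be
    # survivable, every node needs a working path to BOTH SANs, not just the
    # one it was originally mapped to.
    for name, ip in SAN_PORTALS.items():
        status = "reachable" if portal_reachable(ip) else "UNREACHABLE"
        print(f"{name} ({ip}:{ISCSI_PORT}): {status}")
```

If both portals are reachable from both nodes, then my working assumption is that the fix is in the initiator mapping (each node logging into both SANs with MPIO) rather than the network, but I'd appreciate confirmation on the recommended layout.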