A RAID controller reconfiguration gone wrong required one of my two HA nodes to be rebuilt almost from bare metal (the OS and StarWind install survived, but the three data-storing RAID arrays were fried, oops).
Working from NODE2 (the surviving good node), I re-created the HA replication partners on NODE1 using the Replication Manager and did a full sync of the data. Everything seemed fine.
Fast forward to yesterday, when I went to get VMware reconnected to NODE1. A rescan of the iSCSI bus turned up... nothing.
* All paths to NODE1 are connected, but there is no I/O.
* NODE1's targets show sessions for all of my VM hosts.
I isolated one host and ditched the multipathing, connecting it only to NODE1.
A scan shows all three iSCSI disks, reading out as "STARWIND ISCSI Disk (eui.yyyyyyyyy)". They show up as BLANK disks.
A working VM host shows the same HA device as "ROCKET ISCSI DISK (eui.xxxxxxxx)".
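That display-name mismatch looks like the telling symptom: ESXi keys a LUN off the eui.* identifier it derives from the device identification the target reports, so if the two nodes hand out different eui values for what should be one HA device, ESXi treats them as two different disks, one of which appears blank. A minimal sketch of the comparison (the identifier strings are the placeholders from above, not real values; the esxcli command in the comment is what you would actually run on each host):

```shell
# Real command to run on each ESXi host (output abridged here):
#   esxcli storage core device list | grep -E 'eui\.|Display Name'
#
# Placeholder display strings as seen from the two hosts above:
node1_seen="STARWIND ISCSI Disk (eui.yyyyyyyyy)"
node2_seen="ROCKET ISCSI DISK (eui.xxxxxxxx)"

# Strip everything up to "eui." so only the identifiers are compared:
id1="${node1_seen##*eui.}"
id2="${node2_seen##*eui.}"

if [ "$id1" = "$id2" ]; then
    echo "same device: both paths point at one HA LUN"
else
    echo "different eui IDs: the nodes are presenting different devices"
fi
# prints "different eui IDs: the nodes are presenting different devices"
```

In a healthy HA pair both nodes must present an identical eui for the device, which is presumably why the matching serial-id in the "HA Image" section matters.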
NODE1 and NODE2 were originally v6 installs, upgraded to v8 recently. In the device view of an HA disk image on NODE2, there is an "HA Image" section with a serial-id and a virtual disk path pointing to the thick-provisioned disk image file. On NODE1, there is an "HA Image" section with the same serial-id, but the virtual disk points to "imagefile1"; a "STORAGE" section below defines "imagefile1" with the path to the virtual disk.
Two of the three disks on NODE1 were created using the replication node wizard. The third was created manually using "Add Device (Advanced)". All three show the same result.
I am effectively running off a single storage node, and with VMware thinking one half of a supposedly synchronized cluster is blank, that is a very bad thing.