After a scale down and scale back up (a complete node cycle), the volume replica sets now point at the wrong IPs and the volume cannot be brought up. Does anyone know why this happens?
kubectl exec -it portworx-k6dpr -n kube-system -- /opt/pwx/bin/pxctl volume inspect pvc-13524216-7ea4-11e7-98d9-025dab342ba7
Volume : 56796491782559141
Name : pvc-13524216-7ea4-11e7-98d9-025dab342ba7
Group : rmq_vg
Size : 8.0 GiB
Format : ext4
HA : 2
IO Priority : HIGH
Creation time : Apr 15 06:51:45 UTC 2020
Shared : yes
Status : up
State : Attached: 26b197b0-1846-43a6-996d-2f41d085e254 (IP HIDDEN FOR FORUM POST) ***.***.***.***
Device Path : /dev/pxd/pxd56796491782559141
Labels : app=rabbitmq-ha,group=rmq_vg,io_priority=high,namespace=customer-10042931-production,pvc=data-rmq-rabbitmq-ha-0,repl=2,shared=true
Reads : 0
Reads MS : 0
Bytes Read : 0
Writes : 0
Writes MS : 0
Bytes Written : 0
IOs in progress : 0
Bytes used : 0 B
Error in stats : Node(s) in replication sets are not online
Replica sets on nodes:
Set 0
Node : (IP HIDDEN FOR FORUM POST) ***.***.***.*** (Pool bc8590fa-5aac-4a2c-9ee8-7ab725a035d4 )
Node : (IP HIDDEN FOR FORUM POST) ***.***.***.*** (Pool 0959b11v-dea1-41cd-825f-c69307dc0194 )*
Replication Status : Not in quorum
Volume consumers :
- Name : rmq-rabbitmq-ha-0 (cb4161b7-82c6-11ea-95d9-025dab342c28) (Pod)
Namespace : customer-10042931-production
Running on : ***-***-***-***.**-******-*
Controlled by : rmq-rabbitmq-ha (StatefulSet)
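In case it helps, this is the minimal check I was planning to run to compare the stale replica-set IPs against what the cluster currently reports, assuming the standard pxctl status and pxctl cluster list commands are the right way to see the node IDs and IPs the cluster currently knows about (same Portworx pod as above):

# Cluster summary, including each node's ID and the IP it is currently registered with
kubectl exec -it portworx-k6dpr -n kube-system -- /opt/pwx/bin/pxctl status

# List cluster members, to match node IDs/IPs against the replica sets shown in the inspect output above
kubectl exec -it portworx-k6dpr -n kube-system -- /opt/pwx/bin/pxctl cluster list

After the node cycle, the IPs reported here no longer match the ones in the replica sets above, which is what I mean by the replica sets having the wrong IP.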
Can anyone shed light on how this is meant to work under Auto Scaling Groups?