Hello. What is the procedure for moving the internal KVDB processes from one set of nodes to another in Kubernetes or OpenShift? The only thing I was able to achieve is starting etcd on the target set by adding the `-kvdb_cluster_size "5"` argument to the Portworx DaemonSet, but there doesn't seem to be a way to stop it on the old nodes. One would think that labeling all target nodes with `px/metadata-node=true` and all old nodes with `px/metadata-node=false` would simply shut etcd down on the latter, but no luck. Removing `-kvdb_cluster_size`, which effectively sets it back to 3, didn't help either. I'd like to know whether there is a way to do this via Portworx itself before resorting to removing etcd endpoints via `etcdctl`.
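For reference, this is the label-based approach I tried, sketched as commands. The node names are placeholders, and `<portworx-pod>` stands in for any running Portworx pod; the membership check assumes `pxctl` is available at its usual path inside the pod:

```shell
# Mark the intended KVDB nodes (placeholder node names)
kubectl label nodes target-node-1 target-node-2 target-node-3 \
  px/metadata-node=true --overwrite

# Explicitly mark the old KVDB nodes as non-metadata nodes
kubectl label nodes old-node-1 old-node-2 old-node-3 \
  px/metadata-node=false --overwrite

# Inspect current KVDB membership from a Portworx pod
kubectl exec -n kube-system <portworx-pod> -- \
  /opt/pwx/bin/pxctl service kvdb members
```

Even after relabeling, the old members still show up in the `kvdb members` output, which is what prompted this question.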