Steps to move the internal KVDB (etcd) database to a specific or different disk within the same node

  1. Update the Portworx DaemonSet and make two changes:
    kubectl -n kube-system edit ds portworx
    First, add the following to the “args” list after the “-b” option (/dev/sdc is the disk in my environment, please update it to the one in yours):
    - -kvdb_dev
    - /dev/sdc
    Second, change the update strategy to “OnDelete”, as you want to control which pods are updated with the kvdb disk change above:
      type: OnDelete
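    For reference, here is a minimal sketch of how the edited DaemonSet spec might look after both changes (the field paths are standard Kubernetes; the surrounding container layout is illustrative, not copied from a real cluster):

      spec:
        updateStrategy:
          type: OnDelete
        template:
          spec:
            containers:
            - name: portworx
              args:
              - -b
              - -kvdb_dev
              - /dev/sdc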
  2. Check which nodes currently run the internal kvdb:
    pxctl sv kvdb members
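    If you are not logged in to a node, you can run pxctl through one of the Portworx pods instead. A sketch, assuming the DaemonSet pods carry the usual name=portworx label and that pxctl is at /opt/pwx/bin/pxctl inside the pod:

      PX_POD=$(kubectl -n kube-system get pods -l name=portworx -o jsonpath='{.items[0].metadata.name}')
      kubectl -n kube-system exec $PX_POD -- /opt/pwx/bin/pxctl sv kvdb members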

  3. Delete all Portworx pods that do not have the internal kvdb - delete one at a time and wait for them to restart
    kubectl -n kube-system delete pod portworx-xxxxx
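    One way to watch the replacement pod come back to 1/1 before moving on to the next one (again assuming the name=portworx label):

      kubectl -n kube-system get pods -l name=portworx -o wide -w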

  4. After the pod restarts and reaches 1/1 Ready status, you can check on the node that the /etc/pwx/config.json file changed and now has the kvdb disk:
    “kvdb_dev”: “/dev/sdc”,
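    A quick way to check this from a shell on the node is a simple grep against the file shown above:

      grep kvdb_dev /etc/pwx/config.json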

  5. Stop Portworx on one of the nodes that is currently an old kvdb member (<node-name> is a placeholder for your node):
    kubectl label node <node-name> px/service=stop --overwrite
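    You can confirm that Portworx went down on that node by checking its pod; the pod stays scheduled but should drop out of 1/1 Ready (a sketch, reusing the name=portworx label and the <node-name> placeholder):

      kubectl -n kube-system get pods -l name=portworx -o wide | grep <node-name>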

  6. After the node has been offline for 3 minutes, PX will move the kvdb to a different node. It should move it to one of the new nodes with a dedicated kvdb disk.
    Check if the move is complete using the “pxctl sv kvdb members” command
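    Rather than re-running the command by hand, you can poll it until the stopped node drops out of the member list (a sketch, reusing the PX_POD variable and pxctl path assumed in step 2):

      watch -n 30 "kubectl -n kube-system exec $PX_POD -- /opt/pwx/bin/pxctl sv kvdb members"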

  7. After the move is complete you can ssh to the new node and run “blkid” to confirm the kvdb is using the dedicated disk:
    /dev/sdc: LABEL=“kvdbvol” UUID=“20094302-6e08-4b9f-9dc7-c774a17f4e83” TYPE=“ext4”
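    Run from your workstation, that check might look like this (<new-node> is a placeholder; blkid usually needs root):

      ssh <new-node> sudo blkid /dev/sdc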

  8. Start PX again on the previous node where the internal kvdb was running before, and then delete its pod so it picks up the new DaemonSet configuration:
    kubectl label node <node-name> px/service=start --overwrite
    kubectl -n kube-system delete pod portworx-xxxx
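    Once the replacement pod is up, you can verify it actually picked up the new args from the DaemonSet; the output should include -kvdb_dev and /dev/sdc (a sketch; portworx-yyyy stands for the freshly created pod name):

      kubectl -n kube-system get pod portworx-yyyy -o jsonpath='{.spec.containers[0].args}'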

  9. Repeat steps 5 - 8 for the other two nodes that were old kvdb members

  10. After everything is up to date, edit the Portworx DaemonSet again and change the update strategy back to the default:
    type: RollingUpdate
    maxUnavailable: 1
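    If you prefer not to open an editor for this last change, the same update can be applied with a patch (a sketch; the JSON mirrors the fields above):

      kubectl -n kube-system patch ds portworx --type merge -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"maxUnavailable":1}}}}'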