Portworx failing to start, volume not found

Looking for some guidance. Kubernetes 1.23.16; three-node cluster running Ubuntu 20.04.5. I see a volume created on our Pure FlashArray, but it does not appear to match the disk that Portworx is failing to attach. Not sure of next steps. This was not the first array this cluster ran against, but the Portworx uninstallation process was followed between attempts.

kubectl describe pods -l name=portworx -n kube-system
Name:         dr-storage01-1d06456b-62c5-41af-abdf-582b6f31de8d-94gj6
Namespace:    kube-system
Priority:     0
Node:         dru-storage03aio1/10.200.99.65
Start Time:   Mon, 14 Aug 2023 16:46:40 -0500
Labels:       controller-revision-hash=c97877d67
              name=portworx
              operator.libopenstorage.org/driver=portworx
              operator.libopenstorage.org/name=dr-storage01-1d06456b-62c5-41af-abdf-582b6f31de8d
Annotations:  cluster-autoscaler.kubernetes.io/safe-to-evict: true
              kubernetes.io/psp: default-psp
              operator.libopenstorage.org/node-labels:
                {"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","cattle.io/creator":"norman","kubernetes.io/arch":"amd64","kubernetes.i...
Status:       Running
IP:           10.200.99.65
IPs:
  IP:           10.200.99.65
Controlled By:  StorageCluster/dr-storage01-1d06456b-62c5-41af-abdf-582b6f31de8d
Containers:
  portworx:
    Container ID:  docker://4eec551838c109eac14c82c95fda9fabc85d5047d21a404ba33c49c45a06b8ea
    Image:         docker.io/portworx/oci-monitor:3.0.0
    Image ID:      docker-pullable://portworx/oci-monitor@sha256:80a0586cf5b6b044120f673b7e6c224043ea7c22d7386a8742f6251c8330fe81
    Port:          <none>
    Host Port:     <none>
    Args:
      -c
      dr-storage01-1d06456b-62c5-41af-abdf-582b6f31de8d
      -x
      kubernetes
      -b
      -d
      ens160
      -m
      ens160
      -s
      size=150
      -max_storage_nodes_per_zone
      3
      -secret_type
      k8s
      -rt_opts
      default-io-profile=6
      --oem
      esse
    State:       Running
      Started:   Wed, 16 Aug 2023 12:26:11 -0500
    Last State:  Terminated
      Reason:    Error
      Message:   PX stopped working 5.9s ago.  Last status: Could not init boot manager  (error="error loading node identity: Cause: ProviderInternal Error: failed to attach volume a19502952c: not found")

      Exit Code:    2
      Started:      Wed, 16 Aug 2023 12:10:41 -0500
      Finished:     Wed, 16 Aug 2023 12:26:10 -0500
    Ready:          False
    Restart Count:  169
    Liveness:       http-get http://127.0.0.1:9001/status delay=840s timeout=1s period=30s #success=1 #failure=3
    Readiness:      http-get http://127.0.0.1:9015/health delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      NODE_NAME:                  (v1:spec.nodeName)
      PX_TEMPLATE_VERSION:       v4
      CSI_ENDPOINT:              unix:///var/lib/kubelet/plugins/pxd.portworx.com/csi.sock
      PURE_FLASHARRAY_SAN_TYPE:  ISCSI
      PX_NAMESPACE:              kube-system
      PX_SECRETS_NAMESPACE:      kube-system
    Mounts:
      /etc/ccm from ccm-phonehome-config (rw)
      /etc/crictl.yaml from crioconf (rw)
      /etc/pwx from etcpwx (rw)
      /etc/systemd/system from sysdmount (rw)
      /host_proc from procmount (rw)
      /opt/pwx from optpwx (rw)
      /run/containerd from containerddir (rw)
      /var/cores from diagsdump (rw)
      /var/lib/containerd from containerdvardir (rw)
      /var/lib/osd from varlibosd (rw)
      /var/log from journalmount2 (ro)
      /var/run/crio from criosock (rw)
      /var/run/dbus from dbusmount (rw)
      /var/run/docker.sock from dockersock (rw)
      /var/run/log from journalmount1 (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rzdhw (ro)
  csi-node-driver-registrar:
    Container ID:  docker://ce6d1273ef8c68c4dacc3f7e4ff5e6197cf8985d9a453149a2ea44ae5abca14c
    Image:         registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.2
    Image ID:      docker-pullable://registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:a13bff2ed69af0cf4270f0cf47bdedf75a56c095cd95b91195ae6c713a9b1845
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --csi-address=$(ADDRESS)
      --kubelet-registration-path=/var/lib/kubelet/plugins/pxd.portworx.com/csi.sock
    State:          Running
      Started:      Mon, 14 Aug 2023 16:46:42 -0500
    Ready:          True
    Restart Count:  0
    Environment:
      ADDRESS:         /csi/csi.sock
      KUBE_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /csi from csi-driver-path (rw)
      /registration from registration-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rzdhw (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  dockersock:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/docker.sock
    HostPathType:  
  containerddir:
    Type:          HostPath (bare host directory volume)
    Path:          /run/containerd
    HostPathType:  
  containerdvardir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/containerd
    HostPathType:  
  criosock:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/crio
    HostPathType:  
  crioconf:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/crictl.yaml
    HostPathType:  FileOrCreate
  optpwx:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/pwx
    HostPathType:  
  procmount:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:  
  sysdmount:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/systemd/system
    HostPathType:  
  dbusmount:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/dbus
    HostPathType:  
  varlibosd:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/osd
    HostPathType:  
  diagsdump:
    Type:          HostPath (bare host directory volume)
    Path:          /var/cores
    HostPathType:  
  etcpwx:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/pwx
    HostPathType:  
  journalmount1:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/log
    HostPathType:  
  journalmount2:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log
    HostPathType:  
  registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry
    HostPathType:  DirectoryOrCreate
  csi-driver-path:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/pxd.portworx.com
    HostPathType:  DirectoryOrCreate
  varcache:
    Type:          HostPath (bare host directory volume)
    Path:          /var/cache
    HostPathType:  
  timezone:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/timezone
    HostPathType:  
  localtime:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    HostPathType:  
  ccm-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      px-telemetry-config
    Optional:  false
  ccm-phonehome-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      px-telemetry-phonehome
    Optional:  false
  kube-api-access-rzdhw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason                              Age                    From      Message
  ----     ------                              ----                   ----      -------
  Warning  NodeStartFailure                    53m (x143 over 20h)    portworx  Failed to start Portworx: error loading node identity: Cause: ProviderInternal Error: drive set '45f0a579-3c92-49a9-9a30-8b44b924ffd7' is locked by other node 'dru-storage03aio4' which is not us, not locking...
  Normal   PortworxMonitorImagePullInProgress  48m                    portworx  Portworx image docker.io/portworx/px-essentials:3.0.0 pull and extraction in progress
  Normal   PortworxMonitorImagePullInProgress  33m                    portworx  Portworx image docker.io/portworx/px-essentials:3.0.0 pull and extraction in progress
  Warning  NodeStartFailure                    23m (x141 over 20h)    portworx  Failed to start Portworx: error loading node identity: Cause: ProviderInternal Error: drive set '45f0a579-3c92-49a9-9a30-8b44b924ffd7' is locked by other node 'dru-storage03aio3' which is not us, not locking...
  Normal   PortworxMonitorImagePullInProgress  17m                    portworx  Portworx image docker.io/portworx/px-essentials:3.0.0 pull and extraction in progress
  Warning  NodeStartFailure                    3m18s (x402 over 20h)  portworx  Failed to start Portworx: error loading node identity: Cause: ProviderInternal Error: failed to attach volume a19502952c: not found
  Normal   PortworxMonitorImagePullInProgress  2m12s                  portworx  Portworx image docker.io/portworx/px-essentials:3.0.0 pull and extraction in progress
  Warning  Unhealthy                           93s (x18295 over 43h)  kubelet   Readiness probe failed: HTTP probe failed with statuscode: 503


Name:         dr-storage01-1d06456b-62c5-41af-abdf-582b6f31de8d-fmf95
Namespace:    kube-system
Priority:     0
Node:         dru-storage03aio4/10.200.99.66
Start Time:   Mon, 14 Aug 2023 16:46:40 -0500
Labels:       controller-revision-hash=c97877d67
              name=portworx
              operator.libopenstorage.org/driver=portworx
              operator.libopenstorage.org/name=dr-storage01-1d06456b-62c5-41af-abdf-582b6f31de8d
Annotations:  cluster-autoscaler.kubernetes.io/safe-to-evict: true
              kubernetes.io/psp: default-psp
              operator.libopenstorage.org/node-labels:
                {"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","cattle.io/creator":"norman","kubernetes.io/arch":"amd64","kubernetes.i...
Status:       Running
IP:           10.200.99.66
IPs:
  IP:           10.200.99.66
Controlled By:  StorageCluster/dr-storage01-1d06456b-62c5-41af-abdf-582b6f31de8d
Containers:
  portworx:
    Container ID:  docker://eba10b58abc2fac3b867f85d379fea9744b304515500a65e17fdaca5c60a2c76
    Image:         docker.io/portworx/oci-monitor:3.0.0
    Image ID:      docker-pullable://portworx/oci-monitor@sha256:80a0586cf5b6b044120f673b7e6c224043ea7c22d7386a8742f6251c8330fe81
    Port:          <none>
    Host Port:     <none>
    Args:
      -c
      dr-storage01-1d06456b-62c5-41af-abdf-582b6f31de8d
      -x
      kubernetes
      -b
      -d
      ens160
      -m
      ens160
      -s
      size=150
      -max_storage_nodes_per_zone
      3
      -secret_type
      k8s
      -rt_opts
      default-io-profile=6
      --oem
      esse
    State:       Running
      Started:   Wed, 16 Aug 2023 12:26:11 -0500
    Last State:  Terminated
      Reason:    Error
      Message:   PX stopped working 8.2s ago.  Last status: Could not init boot manager  (error="error loading node identity: Cause: ProviderInternal Error: drive set '45f0a579-3c92-49a9-9a30-8b44b924ffd7' is locked by other node 'dru-storage03aio3' which is not us, not locking...")

      Exit Code:    2
      Started:      Wed, 16 Aug 2023 12:10:41 -0500
      Finished:     Wed, 16 Aug 2023 12:26:10 -0500
    Ready:          False
    Restart Count:  169
    Liveness:       http-get http://127.0.0.1:9001/status delay=840s timeout=1s period=30s #success=1 #failure=3
    Readiness:      http-get http://127.0.0.1:9015/health delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      NODE_NAME:                  (v1:spec.nodeName)
      PX_TEMPLATE_VERSION:       v4
      CSI_ENDPOINT:              unix:///var/lib/kubelet/plugins/pxd.portworx.com/csi.sock
      PURE_FLASHARRAY_SAN_TYPE:  ISCSI
      PX_NAMESPACE:              kube-system
      PX_SECRETS_NAMESPACE:      kube-system
    Mounts:
      /etc/ccm from ccm-phonehome-config (rw)
      /etc/crictl.yaml from crioconf (rw)
      /etc/pwx from etcpwx (rw)
      /etc/systemd/system from sysdmount (rw)
      /host_proc from procmount (rw)
      /opt/pwx from optpwx (rw)
      /run/containerd from containerddir (rw)
      /var/cores from diagsdump (rw)
      /var/lib/containerd from containerdvardir (rw)
      /var/lib/osd from varlibosd (rw)
      /var/log from journalmount2 (ro)
      /var/run/crio from criosock (rw)
      /var/run/dbus from dbusmount (rw)
      /var/run/docker.sock from dockersock (rw)
      /var/run/log from journalmount1 (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fmnvm (ro)
  csi-node-driver-registrar:
    Container ID:  docker://c265b73582f732ce0ae0c385fbc1f3d808a034506a7d1da607914914fd7f7165
    Image:         registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.2
    Image ID:      docker-pullable://registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:a13bff2ed69af0cf4270f0cf47bdedf75a56c095cd95b91195ae6c713a9b1845
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --csi-address=$(ADDRESS)
      --kubelet-registration-path=/var/lib/kubelet/plugins/pxd.portworx.com/csi.sock
    State:          Running
      Started:      Mon, 14 Aug 2023 16:46:41 -0500
    Ready:          True
    Restart Count:  0
    Environment:
      ADDRESS:         /csi/csi.sock
      KUBE_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /csi from csi-driver-path (rw)
      /registration from registration-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fmnvm (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  dockersock:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/docker.sock
    HostPathType:  
  containerddir:
    Type:          HostPath (bare host directory volume)
    Path:          /run/containerd
    HostPathType:  
  containerdvardir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/containerd
    HostPathType:  
  criosock:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/crio
    HostPathType:  
  crioconf:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/crictl.yaml
    HostPathType:  FileOrCreate
  optpwx:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/pwx
    HostPathType:  
  procmount:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:  
  sysdmount:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/systemd/system
    HostPathType:  
  dbusmount:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/dbus
    HostPathType:  
  varlibosd:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/osd
    HostPathType:  
  diagsdump:
    Type:          HostPath (bare host directory volume)
    Path:          /var/cores
    HostPathType:  
  etcpwx:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/pwx
    HostPathType:  
  journalmount1:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/log
    HostPathType:  
  journalmount2:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log
    HostPathType:  
  registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry
    HostPathType:  DirectoryOrCreate
  csi-driver-path:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/pxd.portworx.com
    HostPathType:  DirectoryOrCreate
  varcache:
    Type:          HostPath (bare host directory volume)
    Path:          /var/cache
    HostPathType:  
  timezone:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/timezone
    HostPathType:  
  localtime:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    HostPathType:  
  ccm-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      px-telemetry-config
    Optional:  false
  ccm-phonehome-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      px-telemetry-phonehome
    Optional:  false
  kube-api-access-fmnvm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason                              Age                    From      Message
  ----     ------                              ----                   ----      -------
  Normal   PortworxMonitorImagePullInProgress  48m                    portworx  Portworx image docker.io/portworx/px-essentials:3.0.0 pull and extraction in progress
  Normal   PortworxMonitorImagePullInProgress  33m                    portworx  Portworx image docker.io/portworx/px-essentials:3.0.0 pull and extraction in progress
  Normal   PortworxMonitorImagePullInProgress  17m                    portworx  Portworx image docker.io/portworx/px-essentials:3.0.0 pull and extraction in progress
  Warning  NodeStartFailure                    8m12s (x408 over 20h)  portworx  Failed to start Portworx: error loading node identity: Cause: ProviderInternal Error: failed to attach volume a19502952c: not found
  Warning  NodeStartFailure                    4m14s (x142 over 20h)  portworx  Failed to start Portworx: error loading node identity: Cause: ProviderInternal Error: drive set '45f0a579-3c92-49a9-9a30-8b44b924ffd7' is locked by other node 'dru-storage03aio1' which is not us, not locking...
  Normal   PortworxMonitorImagePullInProgress  2m12s                  portworx  Portworx image docker.io/portworx/px-essentials:3.0.0 pull and extraction in progress
  Warning  Unhealthy                           93s (x18285 over 43h)  kubelet   Readiness probe failed: HTTP probe failed with statuscode: 503


Name:         dr-storage01-1d06456b-62c5-41af-abdf-582b6f31de8d-s4q9p
Namespace:    kube-system
Priority:     0
Node:         dru-storage03aio3/10.200.99.64
Start Time:   Mon, 14 Aug 2023 16:46:39 -0500
Labels:       controller-revision-hash=c97877d67
              name=portworx
              operator.libopenstorage.org/driver=portworx
              operator.libopenstorage.org/name=dr-storage01-1d06456b-62c5-41af-abdf-582b6f31de8d
Annotations:  cluster-autoscaler.kubernetes.io/safe-to-evict: true
              kubernetes.io/psp: default-psp
              operator.libopenstorage.org/node-labels:
                {"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","cattle.io/creator":"norman","kubernetes.io/arch":"amd64","kubernetes.i...
Status:       Running
IP:           10.200.99.64
IPs:
  IP:           10.200.99.64
Controlled By:  StorageCluster/dr-storage01-1d06456b-62c5-41af-abdf-582b6f31de8d
Containers:
  portworx:
    Container ID:  docker://94e82bafa511162df4deb1731e42e5db680deabb36742dd6baee46598efb0e15
    Image:         docker.io/portworx/oci-monitor:3.0.0
    Image ID:      docker-pullable://portworx/oci-monitor@sha256:80a0586cf5b6b044120f673b7e6c224043ea7c22d7386a8742f6251c8330fe81
    Port:          <none>
    Host Port:     <none>
    Args:
      -c
      dr-storage01-1d06456b-62c5-41af-abdf-582b6f31de8d
      -x
      kubernetes
      -b
      -d
      ens160
      -m
      ens160
      -s
      size=150
      -max_storage_nodes_per_zone
      3
      -secret_type
      k8s
      -rt_opts
      default-io-profile=6
      --oem
      esse
    State:       Running
      Started:   Wed, 16 Aug 2023 12:21:09 -0500
    Last State:  Terminated
      Reason:    Error
      Message:   PX stopped working 500ms ago.  Last status: Could not init boot manager  (error="error loading node identity: Cause: ProviderInternal Error: drive set '45f0a579-3c92-49a9-9a30-8b44b924ffd7' is locked by other node 'dru-storage03aio1' which is not us, not locking...")

      Exit Code:    2
      Started:      Wed, 16 Aug 2023 12:06:09 -0500
      Finished:     Wed, 16 Aug 2023 12:21:09 -0500
    Ready:          False
    Restart Count:  174
    Liveness:       http-get http://127.0.0.1:9001/status delay=840s timeout=1s period=30s #success=1 #failure=3
    Readiness:      http-get http://127.0.0.1:9015/health delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      PX_NAMESPACE:              kube-system
      PX_SECRETS_NAMESPACE:      kube-system
      NODE_NAME:                  (v1:spec.nodeName)
      PX_TEMPLATE_VERSION:       v4
      CSI_ENDPOINT:              unix:///var/lib/kubelet/plugins/pxd.portworx.com/csi.sock
      PURE_FLASHARRAY_SAN_TYPE:  ISCSI
    Mounts:
      /etc/ccm from ccm-phonehome-config (rw)
      /etc/crictl.yaml from crioconf (rw)
      /etc/pwx from etcpwx (rw)
      /etc/systemd/system from sysdmount (rw)
      /host_proc from procmount (rw)
      /opt/pwx from optpwx (rw)
      /run/containerd from containerddir (rw)
      /var/cores from diagsdump (rw)
      /var/lib/containerd from containerdvardir (rw)
      /var/lib/osd from varlibosd (rw)
      /var/log from journalmount2 (ro)
      /var/run/crio from criosock (rw)
      /var/run/dbus from dbusmount (rw)
      /var/run/docker.sock from dockersock (rw)
      /var/run/log from journalmount1 (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c7mdx (ro)
  csi-node-driver-registrar:
    Container ID:  docker://7205eaeab5e1225602fa0077dd2a75f9f422837bcb29c305bb9dccbbeec350e3
    Image:         registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.2
    Image ID:      docker-pullable://registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:a13bff2ed69af0cf4270f0cf47bdedf75a56c095cd95b91195ae6c713a9b1845
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --csi-address=$(ADDRESS)
      --kubelet-registration-path=/var/lib/kubelet/plugins/pxd.portworx.com/csi.sock
    State:          Running
      Started:      Mon, 14 Aug 2023 16:46:40 -0500
    Ready:          True
    Restart Count:  0
    Environment:
      ADDRESS:         /csi/csi.sock
      KUBE_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /csi from csi-driver-path (rw)
      /registration from registration-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c7mdx (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  dockersock:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/docker.sock
    HostPathType:  
  containerddir:
    Type:          HostPath (bare host directory volume)
    Path:          /run/containerd
    HostPathType:  
  containerdvardir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/containerd
    HostPathType:  
  criosock:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/crio
    HostPathType:  
  crioconf:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/crictl.yaml
    HostPathType:  FileOrCreate
  optpwx:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/pwx
    HostPathType:  
  procmount:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:  
  sysdmount:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/systemd/system
    HostPathType:  
  dbusmount:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/dbus
    HostPathType:  
  varlibosd:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/osd
    HostPathType:  
  diagsdump:
    Type:          HostPath (bare host directory volume)
    Path:          /var/cores
    HostPathType:  
  etcpwx:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/pwx
    HostPathType:  
  journalmount1:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/log
    HostPathType:  
  journalmount2:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log
    HostPathType:  
  registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry
    HostPathType:  DirectoryOrCreate
  csi-driver-path:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/pxd.portworx.com
    HostPathType:  DirectoryOrCreate
  varcache:
    Type:          HostPath (bare host directory volume)
    Path:          /var/cache
    HostPathType:  
  timezone:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/timezone
    HostPathType:  
  localtime:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    HostPathType:  
  ccm-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      px-telemetry-config
    Optional:  false
  ccm-phonehome-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      px-telemetry-phonehome
    Optional:  false
  kube-api-access-c7mdx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason                              Age                    From      Message
  ----     ------                              ----                   ----      -------
  Normal   PortworxMonitorImagePullInProgress  52m                    portworx  Portworx image docker.io/portworx/px-essentials:3.0.0 pull and extraction in progress
  Normal   PortworxMonitorImagePullInProgress  37m                    portworx  Portworx image docker.io/portworx/px-essentials:3.0.0 pull and extraction in progress
  Normal   PortworxMonitorImagePullInProgress  22m                    portworx  Portworx image docker.io/portworx/px-essentials:3.0.0 pull and extraction in progress
  Warning  NodeStartFailure                    13m (x121 over 20h)    portworx  Failed to start Portworx: error loading node identity: Cause: ProviderInternal Error: drive set '45f0a579-3c92-49a9-9a30-8b44b924ffd7' is locked by other node 'dru-storage03aio1' which is not us, not locking...
  Warning  NodeStartFailure                    8m24s (x115 over 20h)  portworx  Failed to start Portworx: error loading node identity: Cause: ProviderInternal Error: drive set '45f0a579-3c92-49a9-9a30-8b44b924ffd7' is locked by other node 'dru-storage03aio4' which is not us, not locking...
  Normal   PortworxMonitorImagePullInProgress  7m13s                  portworx  Portworx image docker.io/portworx/px-essentials:3.0.0 pull and extraction in progress
  Warning  NodeStartFailure                    4m17s (x446 over 20h)  portworx  Failed to start Portworx: error loading node identity: Cause: ProviderInternal Error: failed to attach volume a19502952c: not found
  Warning  Unhealthy                           94s (x18309 over 43h)  kubelet   Readiness probe failed: HTTP probe failed with statuscode: 503

I would suggest we first look at the last part of your post:

..the Portworx uninstallation process was followed in between attempts

Could you examine closely what that process was? Because this message:

Failed to start Portworx: error loading node identity: Cause: ProviderInternal Error: drive set '45f0a579-3c92-49a9-9a30-8b44b924ffd7' is locked by other node 'dru-storage03aio1' which is not us, not locking

…usually points to an incomplete uninstall.

Simply deleting and recreating the StorageCluster is not sufficient: that removes the Portworx software from the nodes and stops the pods/services, but it deliberately leaves behind the node-pool and ConfigMap data so as not to cause data loss (by design).

The design intent is to make data recovery possible if the StorageCluster object is ever deleted accidentally. If that is not what you want, however, it can prevent a clean reinstall, since the block-device volumes PX created and the ConfigMaps (in kube-system) are left behind for manual recovery.
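If you want to see what was left behind, a quick sketch along these lines may help. The name patterns below ("px-", "clouddrive") are assumptions based on common Portworx naming conventions, so verify against what is actually in your cluster:

```shell
# List leftover Portworx ConfigMaps in kube-system after an uninstall.
# The grep pattern is a guess at the naming convention; adjust for your version.
if command -v kubectl >/dev/null 2>&1; then
  kubectl -n kube-system get configmap -o name | grep -Ei 'px-|clouddrive' || true
else
  echo "kubectl not found; run this from a machine with cluster access"
fi
```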

While you could clean up the ConfigMaps manually (judging by the errors posted, the cloud-drive ones sound like the likely culprit), the easiest approach, assuming there is nothing in the cluster you want to preserve, is to have Portworx completely wipe all remnants of itself. To do that, add the following to your StorageCluster before deleting it again:

spec:
  deleteStrategy:
    type: UninstallAndWipe

…then, after you delete the StorageCluster, the existing installation should be fully wiped from the system, and you can re-create the StorageCluster and install cleanly.
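For reference, the same change can be applied with `kubectl patch` instead of editing the YAML by hand. This is only a sketch: the cluster name is taken from the `kubectl describe` output above, so double-check it matches yours before running anything.

```shell
# Set the delete strategy to UninstallAndWipe, then delete the StorageCluster.
# WARNING: this wipes all Portworx data on the nodes; only do this if
# nothing on the cluster needs to be preserved.
PATCH='{"spec":{"deleteStrategy":{"type":"UninstallAndWipe"}}}'
if command -v kubectl >/dev/null 2>&1; then
  kubectl -n kube-system patch storagecluster \
    dr-storage01-1d06456b-62c5-41af-abdf-582b6f31de8d \
    --type merge -p "$PATCH"
  kubectl -n kube-system delete storagecluster \
    dr-storage01-1d06456b-62c5-41af-abdf-582b6f31de8d
fi
```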