How to recover from a possible volume not found error in Portworx Essentials

I am currently experiencing an issue where a pod in k8s will not initialize because the volume backing one of its PVCs appears not to be found in Portworx.
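
For reference, this is roughly how I have been cross-checking whether Portworx can still see the volume (a sketch; pxctl runs inside one of the Portworx pods, and the kube-system namespace and name=portworx label may differ depending on how Portworx was installed):

kubectl -n cp4i describe pvc user-home-pvc
kubectl get pv pvc-bd0bcf3b-a070-43d3-b757-064ecf29b04a -o yaml

# pick any Portworx pod and inspect the volume by the numeric ID from the PV spec
PX_POD=$(kubectl get pods -n kube-system -l name=portworx -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n kube-system "$PX_POD" -- /opt/pwx/bin/pxctl volume inspect 186200297439868709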

Storage class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-sharedv4-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: '2'
  sharedv4: 'true'
  sharedv4_svc_type: ClusterIP
reclaimPolicy: Delete
volumeBindingMode: Immediate

PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-bd0bcf3b-a070-43d3-b757-064ecf29b04a
  annotations:
    kubernetes.io/createdby: portworx-volume-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: 'yes'
    pv.kubernetes.io/provisioned-by: kubernetes.io/portworx-volume
status:
  phase: Bound
spec:
  capacity:
    storage: 10Gi
  portworxVolume:
    volumeID: '186200297439868709'
  accessModes:
    - ReadWriteMany
  claimRef:
    kind: PersistentVolumeClaim
    namespace: cp4i
    name: user-home-pvc
    uid: bd0bcf3b-a070-43d3-b757-064ecf29b04a
    apiVersion: v1
    resourceVersion: '100178102'
  persistentVolumeReclaimPolicy: Delete
  storageClassName: px-sharedv4-sc
  volumeMode: Filesystem

PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: user-home-pvc
  namespace: cp4i
  labels:
    app: 0010-infra
    app.kubernetes.io/component: user-home-pvc
    app.kubernetes.io/instance: 0010-infra
    app.kubernetes.io/managed-by: 0010-infra
    app.kubernetes.io/name: 0010-infra
    component: user-home-pvc
    release: 0010-infra
  annotations:
    helm.sh/resource-policy: keep
    pv.kubernetes.io/bind-completed: 'yes'
    pv.kubernetes.io/bound-by-controller: 'yes'
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/portworx-volume
    volume.kubernetes.io/storage-provisioner: kubernetes.io/portworx-volume
  ownerReferences:
    - apiVersion: zen.cpd.ibm.com/v1
      kind: ZenService
      name: iaf-zen-cpdservice
status:
  phase: Bound
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: pvc-bd0bcf3b-a070-43d3-b757-064ecf29b04a
  storageClassName: px-sharedv4-sc
  volumeMode: Filesystem

However, the logs from Autopilot seem to indicate that the volume cannot be located:

time="13-11-2024 18:10:19" level=error msg="Failed to get dependents while scraping all objects: rpc error: code = NotFound desc = Volume id pvc-bd0bcf3b-a070-43d3-b757-064ecf29b04a not found" file="engine.go:321" component=engine-v1 rule=:pvc-bd0bcf3b-a070-43d3-b757-064ecf29b04a=

And the px-cluster log shows an attempt to attach/mount that volume ID:

Nov 13 18:32:41 eai-oc-dev-49hs7-worker-southcentralus1-2j6x4 portworx[506903]: time="2024-11-13T18:32:41Z" level=info msg="Mount=VOLUME_ACTION_PARAM_NONE Attach=VOLUME_ACTION_PARAM_ON" file="volume.go:351" Driver=pxd ID=pvc-bd0bcf3b-a070-43d3-b757-064ecf29b04a Request=volumeSet component=openstorage/api/server
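
For completeness, this is roughly how I am looking at the pod and cluster side of that attach attempt (the pod name below is a placeholder, and pxctl status is only a general health check):

kubectl -n cp4i get events --sort-by=.lastTimestamp | grep -i -e mount -e attach
kubectl -n cp4i describe pod <zen-pod-name>
kubectl exec -n kube-system "$PX_POD" -- /opt/pwx/bin/pxctl status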

So I am trying to figure out why the IBM Zen pod is stuck initializing, and it appears to be tied to this specific PVC. Looking for help or ideas at this point. Thanks.

Update:
It appears that new Azure disks were created, but the data was not transferred to them. It is unclear what caused this. However, the old Azure disks were not deleted and are still present. Is there a way to point the Portworx cluster in Kubernetes back at the old Azure disks instead of the new ones? Possibly by updating the data in the px-cloud-drive config map to reference the old disks instead of the new?
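
If reattaching the old disks is a viable path, this is the kind of inspection I had in mind before changing anything (a sketch only; on my install the config map name carries a cluster-specific suffix, it appears to live in kube-system, and I am not sure hand-editing it is supported):

kubectl exec -n kube-system "$PX_POD" -- /opt/pwx/bin/pxctl clouddrive list
kubectl -n kube-system get configmap | grep px-cloud-drive
kubectl -n kube-system get configmap <px-cloud-drive-configmap-name> -o yaml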