Due to a conflict with a piece of software we use on our cluster, we weren’t able to use Stork for the Portworx installation. I’m testing cloudsnap backups right now; I can restore a volume to a new volume name, but at that point I’m not sure how to get it back into Kubernetes as a usable storage area.
We’re using the CSI provisioner to attach to our Pure FlashArray, if that helps with any commands to run to get things hooked up again.
Is there any way to do this without the use of Stork?
Thanks.
I think I actually got this to work as follows:
- restore the volume using pxctl cloudsnap restore
- create a PV and PVC to re-create the volume in Kubernetes. The PV definition looks similar to:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: pxd.portworx.com
    volume.kubernetes.io/provisioner-deletion-secret-name: ""
    volume.kubernetes.io/provisioner-deletion-secret-namespace: ""
  name: restoretest
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 50Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: restored-volume
    namespace: test-project1
  csi:
    driver: pxd.portworx.com
    fsType: ext4
    volumeAttributes:
      attached: ATTACH_STATE_INTERNAL_SWITCH
      error: ""
      parent: ""
      readonly: "false"
      secure: "false"
      shared: "true"
      sharedv4: "false"
      state: VOLUME_STATE_DETACHED
      storage.kubernetes.io/csiProvisionerIdentity: 1698839449476-4689-pxd.portworx.com
    volumeHandle: "288397840309906678"
  persistentVolumeReclaimPolicy: Delete
  storageClassName: portworx
  volumeMode: Filesystem
```
- the volumeHandle value should be replaced with the volume ID shown by pxctl volume list
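
The restore and lookup steps above can be sketched like this. The cloudsnap ID and volume name are placeholders, and flag names can vary between Portworx versions, so check pxctl cloudsnap restore --help on your cluster before running:

```shell
# List available cloudsnaps to find the ID of the backup to restore
pxctl cloudsnap list

# Restore the cloudsnap to a new local volume
# (exact flag names may differ by Portworx version)
pxctl cloudsnap restore --snap <cloudsnap-id>

# Note the volume ID of the restored volume -- this is the value
# that goes into the PV's volumeHandle field
pxctl volume list
```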
The PVC definition that gets created is pretty straightforward after that; just make sure the name, namespace, and storage size match what was in the PV definition.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: test-project1
  name: restored-volume
spec:
  storageClassName: portworx
  volumeName: restoretest
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
```
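
With both manifests saved (the file names below are just for illustration), wiring the restored volume back in is a matter of applying them and checking that the claim binds:

```shell
# Create the PV first, then the PVC that claims it
kubectl apply -f restoretest-pv.yaml
kubectl apply -f restored-volume-pvc.yaml

# The PVC should show STATUS "Bound" against the restoretest PV
kubectl get pvc restored-volume -n test-project1
kubectl get pv restoretest
```

Once the PVC is Bound, it can be mounted by pods in test-project1 like any other claim.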