Service not restored

Hi,

I am trying to restore my WordPress deployment. Everything works except the Service, which is not restored.
I can't find any logs that would help me understand why.
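A quick way to see which resources actually came back after the restore (a sketch; the wordpress namespace is just an example):

kubectl get svc,deploy,pvc,secret -n wordpress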

Thanks

Hi,
What is the Kubernetes version you are using? There is a known issue with k8s 1.20.0.
Please let us know the k8s version on your setup.
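For example, one way to grab it (a minimal sketch):

kubectl version --short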
Thanks,
~Siva

Hi Siva,

The version used is 1.20.4.

Thanks

@pbouffet how are you taking the backup and restoring? Can you provide the high-level steps you are performing, along with the Stork and Portworx versions? You can also describe the objects and collect the Portworx & Stork logs…
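For example, the restore status and the reason for skipped resources can usually be seen by describing the Stork objects (a sketch, assuming ApplicationBackup/ApplicationRestore objects are used; the names are placeholders):

kubectl describe applicationbackup <backup-name> -n <app-namespace>
kubectl describe applicationrestore <restore-name> -n <app-namespace>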

Hi @sensre,

I back up an app (WordPress/MySQL for example).
I delete everything (Service, Deployment, PV, PVC, Secret) with kubectl delete.
Then I restore everything, but each time the result of the restore is partial: “Volumes were restored successfully. Some existing resources were not replaced”.
The only resources that are not restored are the Services.
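In shell terms the workflow looks roughly like this (a sketch with placeholder names, assuming Stork ApplicationBackup/ApplicationRestore objects drive the backup and restore):

kubectl apply -f wordpress-backup.yaml        # hypothetical ApplicationBackup manifest
kubectl delete svc,deploy,pvc,secret --all -n wordpress
kubectl delete pv <wordpress-pv-names>        # placeholders for the released PVs
kubectl apply -f wordpress-restore.yaml       # hypothetical ApplicationRestore manifest
kubectl get applicationrestore -n wordpress   # reports the partial-restore message above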

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS                                             IMAGES                                                                                                                 SELECTOR
autopilot                1/1     1            1           36d   autopilot                                              portworx/autopilot:1.3.0                                                                                               name=autopilot,tier=control-plane
coredns                  2/2     2            2           37d   coredns                                                k8s.gcr.io/coredns:1.7.0                                                                                               k8s-app=kube-dns
portworx-operator        1/1     1            1           36d   portworx-operator                                      portworx/px-operator:1.4.2                                                                                             name=portworx-operator
px-csi-ext               3/3     3            3           36d   csi-external-provisioner,csi-snapshotter,csi-resizer   quay.io/openstorage/csi-provisioner:v1.6.0-1,quay.io/k8scsi/csi-snapshotter:v2.1.0,quay.io/k8scsi/csi-resizer:v0.5.0   app=px-csi-driver
px-prometheus-operator   1/1     1            1           36d   px-prometheus-operator                                 quay.io/coreos/prometheus-operator:v0.34.0                                                                             k8s-app=px-prometheus-operator
stork                    3/3     3            3           36d   stork                                                  openstorage/stork:2.6.2                                                                                                name=stork,tier=control-plane
stork-scheduler          3/3     3            3           36d   stork-scheduler                                        k8s.gcr.io/kube-scheduler-amd64:v1.20.4                                                                                component=scheduler,tier=control-plane

I can’t get the logs from Portworx with this command: kubectl logs -n kube-system -l name=portworx --tail=99999

error: a container name must be specified for pod px-ha-cluster-28fcf802-9518-4232-9b47-64e95b334ada-7xjgv, choose one of: [portworx csi-node-driver-registrar]

And we are limited in the number of characters per post.

Understood. It looks like you are hitting the known issue that @Sivakumar_Subramani mentioned before. This issue has been resolved in stork:2.6.3, which will be released (GA) in a couple of weeks or so. I would also recommend that you upgrade to portworx/px-operator:1.4.4.

To get the Portworx logs, you need to run this command: kubectl logs -n kube-system -c portworx -l name=portworx --tail=99999
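The Stork logs can be collected the same way, based on the name=stork selector shown in your deployment list (add -c stork if kubectl asks for a container name):

kubectl logs -n kube-system -l name=stork --tail=99999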

We will notify you once Stork v2.6.3 is released. Thanks for your understanding.

Thank you for the command.

You will find the logs here:
Portworx logs

I will take a look at the logs. As I mentioned before, we will keep you posted on the stork:2.6.3 release date ASAP. Thanks.

@pbouffet, Stork 2.6.3 is now GA released. This release has the fix for your issue. You can upgrade and verify.
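A minimal sketch of one way to bump the image, assuming Stork runs as the stork Deployment in kube-system shown earlier (if the Portworx Operator manages Stork, the image is normally updated through the operator/StorageCluster spec instead):

kubectl -n kube-system set image deployment/stork stork=openstorage/stork:2.6.3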
Thanks,
~Siva

Thanks Siva. We will test it.