The portworx-pvc-controller's role is to handle PVC create and delete requests from Kubernetes and forward them to the Portworx API. On most clusters, this is handled by the built-in controller manager that ships with Kubernetes, because Portworx has a native driver in Kubernetes. An additional deployment of portworx-pvc-controller is needed in either of the following two cases:
- When the built-in controller manager is running on an isolated network: This is mostly seen in hosted installs like GKE, AKS, etc., where the master nodes run on a different network. We have also seen on-prem OpenShift 3.11 installs place the master on an isolated network. Since the built-in controller manager runs on the master nodes, it cannot reach the Portworx API to create and delete volumes, and hence the PVC controller must be deployed on the worker nodes.
- When Portworx is not installed in the kube-system namespace: By default, the PVC controller only looks for the portworx-service in the kube-system namespace. So if Portworx is not installed in the kube-system namespace, the PVC controller needs to be additionally deployed on the worker nodes. When running on the worker nodes, it will also attempt to talk to Portworx locally on the node where it's running.
Note: Change the Kubernetes version in the URL to match your cluster's Kubernetes version: https://install.portworx.com/?comp=pvc-controller&kbver=1.17.4
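As a minimal sketch of the note above, the snippet below builds the spec URL from a version variable; `KBVER` is an assumption here, and you would set it to your cluster's actual Kubernetes version (e.g. from `kubectl version`):

```shell
# Hypothetical example: substitute your cluster's Kubernetes version
# into the pvc-controller spec URL from the note above.
KBVER="1.17.4"   # assumption: replace with your cluster's k8s version
SPEC_URL="https://install.portworx.com/?comp=pvc-controller&kbver=${KBVER}"
echo "${SPEC_URL}"

# To deploy, fetch the spec and apply it (requires kubectl and
# network access to install.portworx.com):
# curl -fsSL "${SPEC_URL}" | kubectl apply -f -
```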
Let me know how it goes. Thanks!