Why and when you need the Portworx PVC controller to be deployed
The Portworx PVC controller's role is to handle PVC create and delete requests from Kubernetes and forward them to the Portworx API.
On most clusters, this is handled by the built-in controller manager that ships with Kubernetes, because Portworx has a native in-tree driver in Kubernetes.
An additional deployment of portworx-pvc-controller is needed in either of the two cases below (a deployment sketch follows the list):
1. When the built-in controller manager is running on an isolated network: This is most common in hosted installs such as GKE, AKS, and similar managed offerings, where the master nodes run on a separate network. We have also seen on-prem OpenShift 3.11 installs place the masters on an isolated network. Because the built-in controller manager runs on the master nodes, it cannot reach the Portworx API to create and delete volumes, so the PVC controller needs to be deployed on the worker nodes instead.
2. When Portworx is not installed in the kube-system namespace: By default, the PVC controller only looks for the portworx-service in the kube-system namespace. So if Portworx is not installed in kube-system, the controller needs to be additionally deployed on the worker nodes. When running on the worker nodes, it will also attempt to talk to Portworx locally on the node where it is running.
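If you land in either of these cases and deploy Portworx via the Operator, below is a minimal sketch of what explicitly enabling the PVC controller on a StorageCluster might look like. It assumes the Operator's portworx.io/pvc-controller annotation; the cluster name, namespace, and image version are illustrative.

```yaml
# Minimal sketch (assumption: the Operator honors the
# portworx.io/pvc-controller annotation on the StorageCluster).
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster                      # illustrative name
  namespace: portworx                   # example: a non-kube-system namespace
  annotations:
    # Ask the Operator to also deploy portworx-pvc-controller pods
    # on the worker nodes.
    portworx.io/pvc-controller: "true"
spec:
  image: portworx/oci-monitor:2.13.0    # illustrative version
```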
Point #2 has since been updated and is no longer true (it is no longer necessarily a reason to run the portworx-pvc-controller).
Instead, when Portworx is deployed in a non-kube-system namespace via the Operator (now the recommended method, as deploying via DaemonSet is deprecated and no longer recommended), the Operator will automatically create "portworx-proxy" pods in the kube-system namespace. This is needed because the in-tree Portworx driver (bundled in the Kubernetes source code) is hard-coded to expect the portworx-service in kube-system; this was hard-coded a while back, and the in-tree Portworx driver is no longer modifiable. NOTE: the in-tree driver is also deprecated as of Kubernetes 1.25 and will be removed in 1.29, so we now recommend everyone use the CSI driver over the in-tree driver. You can check the "provisioner" field of your StorageClass(es): a value of kubernetes.io/portworx-volume indicates the in-tree driver, while pxd.portworx.com indicates the CSI driver.
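For reference, here is a sketch of how the two provisioner values typically appear in StorageClass definitions; the StorageClass names and the repl parameter are illustrative. You can list the provisioners in use with kubectl get storageclass.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-intree-sc                          # illustrative name
provisioner: kubernetes.io/portworx-volume    # in-tree driver (deprecated)
parameters:
  repl: "2"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-csi-sc                             # illustrative name
provisioner: pxd.portworx.com                 # CSI driver (recommended)
parameters:
  repl: "2"
```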
In these environments, if you are using only the CSI driver (no StorageClasses exist or will exist with the kubernetes.io/portworx-volume provisioner), you can disable these proxy pods via the StorageCluster annotation portworx.io/portworx-proxy: "false" when deploying via the Operator (Operator 1.5.2 or later is required for this annotation).
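As a sketch, the annotation is set on the StorageCluster itself; everything besides the annotation below (name, namespace, image) is illustrative.

```yaml
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster                      # illustrative name
  namespace: portworx                   # example: a non-kube-system namespace
  annotations:
    # Only safe when no StorageClass uses the in-tree provisioner:
    # skip creating the portworx-proxy pods in kube-system.
    portworx.io/portworx-proxy: "false"
spec:
  image: portworx/oci-monitor:2.13.0    # illustrative version
```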