My home lab consists of three workers with storage (Portworx currently functional) and two control-plane nodes.
In reviewing the Storage Cluster document, it appears an initial deployment supports this type of configuration, per this bullet point:

> Portworx with node specific overrides. Use different devices or no devices on different set of nodes.
I tried removing the lines marked with `>>` below from the operator's StorageCluster spec:

```yaml
placement:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        [truncated]
>>      - key: node-role.kubernetes.io/control-plane
>>        operator: DoesNotExist
        [truncated]
>>      - key: node-role.kubernetes.io/control-plane
>>        operator: Exists
```
which resulted in crash-looping portworx-pvc-controller pods on the control-plane nodes:

```
"command failed" err="failed to create listener: failed to listen on 0.0.0.0:10257: listen tcp 0.0.0.0:10257: bind: address already in use"
```

because that is the port kube-controller-manager is already using. I added those lines back into the StorageCluster config and removed the nodes from the cluster.
Before I irreparably damage the cluster, I am curious: if I follow the steps below, will they add the control-plane nodes to the cluster as storageless nodes?
(I'm using info from the Storage Cluster docs (link above) and the post Installing Portworx using Operator with Node / Storage specifics.)
- Label the control-plane nodes:

  ```shell
  kubectl label nodes node4 node5 px/storage=storageless
  ```
- Edit the operator's StorageCluster to add a `nodes:` section at the same level as my "generic" `storage:` key (both directly under `spec:`):

  ```yaml
  nodes:
  - selector:
      labelSelector:
        matchLabels:
          px/storage: "storageless"
    storage:
      devices:
  ```
- Contingent upon step two working, or at least appearing to work, remove the original two blocks of text noted with `>>` earlier.
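Taken together, the steps above would amount to a spec along these lines. This is only a sketch: the cluster name, namespace, and the `/dev/sdb` device are placeholders of mine, not values from my actual config, and the exact field layout should be checked against the StorageCluster CRD reference. The intent is that the `nodes:` entry matches the `px/storage=storageless` label from step one and overrides the cluster-level storage spec with an empty device list, while the unlabeled workers keep the generic devices:

```yaml
# Sketch only -- name, namespace, and device path are illustrative placeholders.
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster          # placeholder
  namespace: portworx       # placeholder
spec:
  storage:
    devices:
    - /dev/sdb              # placeholder generic device for the storage workers
  nodes:
  - selector:
      labelSelector:
        matchLabels:
          px/storage: "storageless"
    storage:
      devices: []           # no devices -> matching nodes join as storageless
```

If this works as intended, `kubectl get storagenodes -n <px-namespace>` should show the two control-plane nodes joining without backing storage, which would be a reasonable sanity check before removing the `>>` placement blocks.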