Scaling Portworx to new worker nodes in the cluster

Hi, I have a Red Hat OpenShift cluster with three worker nodes, and I installed Portworx with block storage attached to each of those worker nodes. I have now added three more worker nodes. How do I expand Portworx to these new worker nodes?

By default, Portworx is deployed automatically on newly added nodes in the cluster, since it runs as a DaemonSet. Was that not the case in your env?
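To confirm the DaemonSet has scheduled a Portworx pod on each new node, something like the following should work. The namespace and label are assumptions; depending on how you installed it, Portworx may live in `kube-system`, `portworx`, or another namespace, with pods labeled `name=portworx`:

```shell
# List Portworx pods along with the node each one is scheduled on.
# Adjust the namespace (-n) and label selector to match your install.
oc -n kube-system get pods -l name=portworx -o wide
```

With six workers you would expect six Running pods, one per node, in the NODE column.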

Does that mean it will also add the block storage of the new nodes into the SDS storage pool as additional capacity? Or is there anything I need to do?

Yes, if you used the cloud drive (ASG) option to provision the block storage to the nodes before. In that case Portworx will provision the storage and add the new nodes to the Portworx storage pool.
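One way to confirm the new capacity actually joined the pool is the cluster summary, which you can run from any node where Portworx is installed (on a default install, `pxctl` lives at `/opt/pwx/bin/pxctl`):

```shell
# Cluster-wide summary: lists every node, its status, and the
# capacity it contributes to the storage pool.
/opt/pwx/bin/pxctl status

# On cloud drive installs, show the drives Portworx provisioned per node.
/opt/pwx/bin/pxctl clouddrive list
</```>
```

If the new nodes appear in the node list with a non-zero capacity, their disks were added to the pool.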

How do I verify that Portworx is working on the other nodes? Also, is there an admin page of sorts for Portworx?

Regarding storage: I added the block storage and installed Portworx on the old nodes manually. I am not sure about the ASG option; what is it, and how do I verify whether I'm using it?
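If you installed via the Portworx Operator, one way to tell whether cloud drives are in use is to inspect the StorageCluster spec (the namespace here is an assumption; use whichever one your install lives in). A spec with a `cloudStorage` section is using cloud drives; one listing devices under `storage.devices` is using pre-attached disks like yours:

```shell
# Dump the StorageCluster spec and look for cloudStorage vs. storage.devices
oc -n kube-system get storagecluster -o yaml
```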

Hi, I get this error:

```
6-10T01:45:11Z" level=warning msg="503 Node status not OK (STATUS_INIT)" Driver="Cluster API" ID=nodeHealth Request="Cluster API"
time="2021-06-10T01:45:21Z" level=warning msg="Could not retrieve PX node status" error="Node status not OK (STATUS_INIT)\n"
@trust4ever-zone3-wrk1-new portworx[23443]: time="2021-06-10T01:45:21Z" level=warning msg="503 Node status not OK (STATUS_INIT)" Driver="Cluster API" ID=nodeHealth Request="Cluster API"
time="2021-06-10T01:45:31Z" level=warning msg="Could not retrieve PX node status" error="Node status not OK (STATUS_INIT)\n"
@trust4ever-zone3-wrk1-new portworx[23443]: time="2021-06-10T01:45:31Z" level=warning msg="503 Node status not OK (STATUS_INIT)" Driver="Cluster API" ID=nodeHealth Request="Cluster API"
time="2021-06-10T01:45:41Z" level=warning msg="Could not retrieve PX node status" error="Node status not OK (STATUS_INIT)\n"
@trust4ever-zone3-wrk1-new portworx[23443]: time="2021-06-10T01:45:41Z" level=warning msg="503 Node status not OK (STATUS_INIT)" Driver="Cluster API" ID=nodeHealth Request="Cluster API"
```
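`STATUS_INIT` typically indicates the node is still initializing and has not yet joined the cluster. A couple of things worth checking on the affected node while it is in that state (paths assume a default on-host install):

```shell
# Inspect the recent Portworx service logs on the node to see
# what initialization is waiting on (KVDB, drives, network, etc.)
journalctl -u portworx --no-pager -n 200

# Re-check whether the node has finished joining the cluster
/opt/pwx/bin/pxctl status
```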

If your nodes are running in any of these environments, AWS, GCP, or VMware vSphere, you can use the cloud drive option, where Portworx provisions the drives for you and manages them itself. For automatic disk provisioning, all you need to do is grant the required privileges in the respective environment.

To verify the Portworx status on the nodes, you can check using the commands below:

/opt/pwx/bin/pxctl status or
systemctl status portworx
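To run that check across every worker from one place rather than SSHing into each node, `oc debug` can be used. This is a sketch under the assumption that you can schedule debug pods on the workers and that `pxctl` is at its default path on the host:

```shell
# Run pxctl status on each worker node via a debug pod chrooted to the host
for node in $(oc get nodes -l node-role.kubernetes.io/worker -o name); do
  echo "=== $node ==="
  oc debug "$node" -- chroot /host /opt/pwx/bin/pxctl status >/dev/null 2>&1 \
    && echo "PX OK" || echo "PX not ready"
done
```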

Hi, something happened and my Portworx installation disappeared, so I had to reinstall. Is there a way I can track the history of installs, crashes, and reinstalls to find the root cause? I would really appreciate that.
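A few places that often hold that history, though how far back you can look depends entirely on your log and event retention (the namespace and grep patterns below are assumptions, not a definitive procedure):

```shell
# Recent Kubernetes events touching Portworx pods
# (note: event retention is short, often around an hour by default)
oc -n kube-system get events --sort-by=.lastTimestamp | grep -i portworx

# On a node: past Portworx service starts, stops, and crashes from the journal
journalctl -u portworx --no-pager | grep -iE 'starting|stopped|panic|fatal'
```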

Are these nodes running in any cloud env? And did you perform an upgrade, or was the node down and replaced by an upgrade activity?