How to sync Portworx volumes

Hi All

I have a four-worker-node Kubernetes cluster running Portworx (and may add more nodes later), but I find that the amount of used volume capacity differs among the nodes.

Below is the output of /opt/pwx/bin/pxctl status:
IP ID SchedulerNodeName Auth StorageNode Used Capacity Status StorageStatus Version Kernel OS
Disabled Yes 14 GiB 120 GiB Online Up 5.4.0-90-generic Ubuntu 20.04.3 LTS
Disabled Yes 37 GiB 120 GiB Online Up 5.4.0-90-generic Ubuntu 20.04.2 LTS
Disabled Yes 37 GiB 120 GiB Online Up 5.4.0-90-generic Ubuntu 20.04.2 LTS
Disabled Yes 8.0 GiB 120 GiB Online Up (This node) 5.4.0-90-generic Ubuntu 20.04.3 LTS
Global Storage Pool
Total Used : 96 GiB
Total Capacity : 480 GiB

This imbalance may affect which node a pod's service ends up on. Could you give me some tips on how to balance usage across the nodes? For example, every node should show 37 GiB used in this case.

Thanks in advance.

I am sure you have moved on, since I just noticed this question. But maybe a better understanding of how Portworx PVCs are created might help.
Once you deploy Portworx into a cluster, each node forms a storage pool from the available disks provided to it. In your cluster, that is 120 GiB of disk per node to be used by Portworx. A small amount of node capacity is used internally for journaling, for the KVDB (if you didn't dedicate a specific disk to it), and for metadata.
I see two nodes with 37 GiB used. If this is an application you have deployed and you know that you have written roughly 37 GiB to it, then I would suggest checking the storageClass used. It is likely configured with parameters.repl: "2". This means that Portworx will synchronously write two copies of each volume to the Portworx StorageCluster; in this case, they landed on the two nodes showing 37 GiB used.
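For reference, such a StorageClass might look like the following. This is an illustrative sketch, not your actual manifest: the name px-repl2 is made up, and pxd.portworx.com is the Portworx CSI provisioner name (older in-tree setups use kubernetes.io/portworx-volume instead).

```yaml
# Hypothetical StorageClass showing where the repl parameter lives.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-repl2            # example name, not from the original post
provisioner: pxd.portworx.com
parameters:
  repl: "2"                 # Portworx keeps two synchronous replicas of each volume
```

With repl: "2", every write to a PVC from this class is committed to two nodes, which is why two of your nodes show the same 37 GiB of usage.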
The maximum supported replication factor is 3, so you will likely never see an even distribution of capacity used across the disks until you have a great deal more data and volumes on the system.

If you use the pxctl command, you can issue /opt/pwx/bin/pxctl volume list to show a list of user-created Portworx volumes. It will also show you the number of replicas in the HA column of the output. From here, you can copy any of the Volume IDs and use pxctl volume inspect <volume_id> to get more details, including which nodes in the cluster hold the replicas of the volume.
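Concretely, the two commands look like this. The output shape is illustrative; volume IDs, names, and node placements will differ on your cluster:

```shell
# List user-created volumes; the HA column shows the replication factor.
/opt/pwx/bin/pxctl volume list

# Inspect a specific volume (ID copied from the list above) to see,
# among other details, which nodes hold its replica sets.
/opt/pwx/bin/pxctl volume inspect <volume_id>
```

Comparing the replica node list from the inspect output against your pxctl status output should account for why some nodes show more usage than others.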

I hope this helps you understand what you are seeing in your deployment, and I hope others reading this find it helpful as well.