NFS port setup questions

In https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/create-pvcs/open-nfs-ports/
I found an inconsistency in the port numbers for nfsd, lockd, and mountd.
I am using CentOS and trying to set this up manually.
Here is the statement:
SharedV4 volumes communicate over the following ports:

  • portmapper: 111 (default on all Linux distributions)
  • nfs service: 2049 (default on all Linux distributions)
  • mountd: 20048 (depends on the Linux distribution)

But in the CentOS config, the mountd port is 9025:

  • LOCKD_TCPPORT=9023
  • LOCKD_UDPPORT=9024
  • MOUNTD_PORT=9025
  • STATD_PORT=9026

Later on, it mentions port 9023 for nfs:
Open /etc/systemd/system/nfs-server.service in a text editor and, under the [Service] section, add the --port 9023 value to the ExecStart=/usr/sbin/rpc.nfsd key

But 9023 is used as the lockd port, and according to the statement above, nfs needs to use port 2049.
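
For reference, here is how one can check which ports these services are actually registered on, by querying the portmapper (assuming rpcbind is running on the node):

rpcinfo -p | egrep 'nfs|mountd|nlockmgr|status'

In the output, nlockmgr is the lockd service and status is statd.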

I would really appreciate a diagram with the port numbering per service.
Also, please update the iptables rules if needed.

Hello @kaitaklam

Each Linux distribution has its own way of setting up the NFS ports, and this doc tries to cover all of the distributions on a single page.

Essentially, you only need to refer to the CentOS section and ignore the subsequent Debian section, which uses a slightly different port range.

Since you are on CentOS, your NFS port will already be configured at 2049. The rest of the ports can be set to the following values:

LOCKD_TCPPORT=9023
LOCKD_UDPPORT=9024
MOUNTD_PORT=9025
STATD_PORT=9026
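
On CentOS 7 these variables typically live in /etc/sysconfig/nfs. After editing that file, restart the NFS services so they re-register on the new ports (a sketch, assuming the stock nfs-utils systemd units):

# Regenerate the NFS environment files, then restart the server
systemctl restart nfs-config
systemctl restart nfs-server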

Finally, you can use the following commands to open the ports:

iptables -I INPUT -p tcp -m multiport --dports 111,2049,9023,9025,9026 -j ACCEPT
iptables -I OUTPUT -p tcp -m multiport --dports 111,2049,9023,9025,9026 -j ACCEPT
iptables -I INPUT -p udp -m udp --dport 9024 -j ACCEPT
iptables -I OUTPUT -p udp -m udp --dport 9024 -j ACCEPT
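
Note that rules inserted with iptables -I do not survive a reboot. On CentOS, assuming the iptables-services package is in use (rather than firewalld), the running rule set can be persisted with:

service iptables save

You can then list the active rules with iptables -L INPUT -n to confirm the ACCEPT entries are in place.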

Thanks, I will try.
By the way, due to my limited internet bandwidth, pulling the images took more than 15 minutes and the pull restarted. As a result, Portworx didn't start up completely. Any suggestions?

kube-system   portworx-2lwxv                             0/1     Running    1          25m
kube-system   portworx-7dsxb                             0/1     Running    1          25m

 Events:
  Type     Reason                             Age                   From                        Message
  ----     ------                             ----                  ----                        -------
  Normal   Scheduled                          32m                   default-scheduler           Successfully assigned kube-system/portworx-2lwxv to ip-10-230-13-253
  Normal   Pulling                            31m                   kubelet, ip-10-230-13-253   Pulling image "portworx/oci-monitor:2.5.7"
  Normal   Pulled                             31m                   kubelet, ip-10-230-13-253   Successfully pulled image "portworx/oci-monitor:2.5.7" in 6.288140949s
  Normal   Created                            31m                   kubelet, ip-10-230-13-253   Created container portworx
  Normal   Started                            31m                   kubelet, ip-10-230-13-253   Started container portworx
  Normal   PortworxMonitorImagePullInPrgress  31m                   portworx, ip-10-230-13-253  Portworx image portworx/px-essentials:2.5.7 pull and extraction in progress
  Normal   PortworxMonitorImagePullInPrgress  16m                   portworx, ip-10-230-13-253  Portworx image portworx/px-essentials:2.5.7 pull and extraction in progress
  Warning  Unhealthy                          112s (x180 over 31m)  kubelet, ip-10-230-13-253   Readiness probe failed: HTTP probe failed with statuscode: 503
  Normal   PortworxMonitorImagePullInPrgress  42s                   portworx, ip-10-230-13-253  Portworx image portworx/px-essentials:2.5.7 pull and extraction in progress

I tried appending the following kubelet config values, but it didn't solve the issue:

cat << EOF >> /var/lib/kubelet/config.yaml
runtime-request-timeout: 10h
serialize-image-pulls: false
image-pull-progress-deadline: 120m
EOF

sudo systemctl daemon-reload
sudo systemctl restart kubelet.service
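
One caveat (an assumption on my side, based on the KubeletConfiguration format): /var/lib/kubelet/config.yaml expects camelCase KubeletConfiguration fields, while the dash-separated names above are kubelet command-line flags, so appending them to the YAML file likely has no effect. A sketch of the equivalent config.yaml entries:

# KubeletConfiguration fields use camelCase, not CLI flag names
serializeImagePulls: false
runtimeRequestTimeout: 10h
# --image-pull-progress-deadline is a CLI flag (docker runtime only) with no
# config.yaml equivalent; it has to be passed in the kubelet unit's arguments.

Alternatively, pre-pulling the large images on each node (e.g. docker pull portworx/px-essentials:2.5.7) sidesteps the pull deadline entirely.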