Portworx Essentials/Enterprise on OKD 4.x/Fedora CoreOS

May I request that Portworx Essentials/Enterprise support OKD 4.x/Fedora CoreOS? Thanks.

Currently, Portworx Essentials requires a Kubernetes-based control plane, since its unique ID is passed in as a Secret. This should be fine: OKD, the community distribution of OpenShift, is Kubernetes-based, and so should be supported. Our support for CoreOS is also already in place.
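For context, on any Kubernetes-derived distro that ID is supplied the same way. A minimal sketch (the secret name, key name, namespace, and placeholder ID here are assumptions based on a typical Essentials setup; substitute the values from your own Essentials account):

```shell
# Sketch: supplying the Portworx Essentials unique ID as a Kubernetes Secret
# on OKD, using oc instead of kubectl. Names/values are placeholders.
oc -n kube-system create secret generic px-essential \
  --from-literal=px-essen-user-id='00000000-0000-0000-0000-000000000000'
```

Since OKD exposes the same API as Kubernetes, nothing OKD-specific should be needed for this part.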

Hi,

I tested PX Essentials on CodeReady Containers and it works like a charm after changing the node affinity and the pod counts. However, I couldn't make it work on OKD 4.5.

The following pods fail to start: portworx-api, px-cluster-*, px-lighthouse.

I added a dedicated disk, but the StorageNode does not come up and there is no error message.

Has anybody managed to deploy a StorageCluster on OKD?

Cheers

More on the issue, obtained by connecting to the target node:

CMD >> /var/opt/pwx/bin/px-runc run --name portworx --oci /var/opt/pwx/oci

Result :
[…]
INFO[0001] Switched '/var/opt/pwx/oci' to PRIVATE mount propagation
INFO[0001] Found 2 usable runc binaries: /var/opt/pwx/bin/runc and /var/opt/pwx/bin/runc-fb
INFO[0001] Detected kernel release 5.6.19-300
INFO[0001] Exec: ["/var/opt/pwx/bin/runc" "run" "-b" "/var/opt/pwx/oci" "--no-new-keyring" "portworx"]
Executing with arguments: -A -b -c px-cluster-d1b4a388-48cb-41cd-8eb3-9c507047024b -f -marketplace_name OperatorHub -r 17001 -secret_type k8s -x kubernetes
Installed pxctl...
Sat Oct 10 13:10:46 UTC 2020 : Running version 2.6.1.2-669fb0c on Linux master-1.cluster.okd4.local 5.6.19-300.fc32.x86_64 #1 SMP Wed Jun 17 16:10:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Version: Linux version 5.6.19-300.fc32.x86_64 (mockbuild@bkernel01.iad2.fedoraproject.org) (gcc version 10.1.1 20200507 (Red Hat 10.1.1-1) (GCC)) #1 SMP Wed Jun 17 16:10:48 UTC 2020
sed: can't read /etc/mdadm/mdadm.conf: No such file or directory

OK, I finally managed to make it work by running 'oc debug' on the node and then starting it by hand:

sh-5.0# /var/opt/pwx/bin/px-runc run --name portworx --oci /var/opt/pwx/oci

I would assume this might be a Fedora CoreOS issue, but I'm not sure.

@besn0847, do you see any warning event on the StorageCluster or StorageNode object?
Also, can you check portworx-operator and px-cluster-* pod logs for errors?
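For anyone following along, those checks amount to something like the following (the kube-system namespace and the `name=portworx` pod label are assumptions; adjust to your install):

```shell
# Operator's view of the cluster and its nodes
oc -n kube-system describe storagecluster
oc -n kube-system describe storagenodes

# Any warning events in the namespace
oc -n kube-system get events --field-selector type=Warning

# Operator and per-node pod logs
oc -n kube-system logs deployment/portworx-operator
oc -n kube-system logs -l name=portworx --tail=100
```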

<Events from px-cluster.*>
Readiness probe failed: HTTP probe failed with statuscode: 503

<Logs from px-cluster.*>
time="2020-10-13T05:19:47Z" level=warning msg="Could not retrieve PX node status" error="Get http://127.0.0.1:17001/v1/cluster/nodehealth: dial tcp 127.0.0.1:17001: connect: connection refused"
time="2020-10-13T05:19:57Z" level=warning msg="Could not retrieve PX node status" error="Get http://127.0.0.1:17001/v1/cluster/nodehealth: dial tcp 127.0.0.1:17001: connect: connection refused"

time="13-10-2020 05:21:12" level=debug msg="Marking old pods for deletion" file="update.go:72"
time="13-10-2020 05:21:12" level=debug msg="Nodes needing storage pods for storage cluster px-cluster-d1b4a388-48cb-41cd-8eb3-9c507047024b: [], creating 0" file="storagecluster.go:548"
time="13-10-2020 05:21:12" level=debug msg="Pods to delete for storage cluster px-cluster-d1b4a388-48cb-41cd-8eb3-9c507047024b: [], deleting 0" file="storagecluster.go:600"
time="13-10-2020 05:21:17" level=warning msg="error connecting to GRPC server [172.30.252.64:9020]: Connection timed out" file="status.go:55"

And to finish, there are no events in the cluster operator for the StorageCluster and ClusterNode objects.
They are stuck in the phase: Initializing.

What's weird is that if I connect with 'oc debug node/my-worker-1', stop the portworx service, and launch portworx manually, it fails the first 2 times but generally succeeds on the 3rd.

That’s quite unusual.
Could you get the portworx logs from one of the failing portworx nodes?

journalctl -lu portworx*

Hi,

I got these errors while trying to re-install Portworx on an OKD cluster.

time="2020-10-13T18:16:22Z" level=warning msg="Detected invalid security context on host's /opt/pwx (attempting to fix it)" ls-out="drwxr-xr-x. 2 root root system_u:object_r:var_t:s0 6 Oct 13 18:08 /opt/pwx\n"
time="2020-10-13T18:16:22Z" level=info msg="> run-host: /bin/sh -c d=$(readlink -m /opt/pwx) ; semanage fcontext -a -t usr_t $d'(/.*)?' ; restorecon -Rv $d/"
>> /bin/sh: semanage: command not found
time="2020-10-13T18:24:32Z" level=info msg="> Installing nfs-server.service@Fedora ..."
time="2020-10-13T18:24:32Z" level=info msg="> run-host: /bin/sh -c 'yum clean all && exec yum makecache'"
/bin/sh: yum: command not found
time="2020-10-13T18:24:32Z" level=error msg="Error running \"/bin/sh -c 'yum clean all && exec yum makecache'\" command" error="exit status 127"
time="2020-10-13T18:24:32Z" level=error msg="Could not enable NFS service" error="Could not install NFS service: Command 'yum clean all && exec yum makecache' failed: exit status 127"
[root@okd1w1 bin]# systemctl status portworx
● portworx.service - Portworx OCI Container
     Loaded: loaded (/etc/systemd/system/portworx.service; enabled; vendor preset: disabled)
     Active: activating (auto-restart) (Result: exit-code) since Tue 2020-10-13 18:30:57 UTC; 3s ago
TriggeredBy: ● portworx.socket
       Docs: https://docs.portworx.com/runc
    Process: 2586574 ExecStartPre=/bin/sh -c /var/opt/pwx/bin/runc delete -f portworx || true (code=exited, status=0/SUCCESS)
    Process: 2586587 ExecStart=/var/opt/pwx/bin/px-runc run --name portworx (code=exited, status=203/EXEC)
   Main PID: 2586587 (code=exited, status=203/EXEC)
        CPU: 23ms
[root@okd1w1 pwx]# ls -ldZ /etc/pwx/
drwx------. 2 root root system_u:object_r:etc_t:s0 6 Sep 28 16:16 /etc/pwx/
[root@okd1w1 pwx]# ls -ldZ /var/opt/pwx/
drwxr-xr-x. 2 root root system_u:object_r:var_t:s0 6 Oct 13 18:08 /var/opt/pwx/
[root@okd1w1 pwx]# ls -ldZ /opt/pwx/
drwxr-xr-x. 2 root root system_u:object_r:var_t:s0 6 Oct 13 18:08 /opt/pwx/
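The `semanage: command not found` failure above happens because Fedora CoreOS doesn't ship policycoreutils-python-utils. Two possible workarounds, neither an official fix: apply the label directly with chcon (the `usr_t` type comes from the semanage command px-runc itself tried to run; chcon changes do not survive a full SELinux relabel), or layer the tooling onto the host:

```shell
# Apply the SELinux type px-runc wanted, without semanage.
# Not persistent across a filesystem relabel (restorecon/autorelabel).
chcon -R -t usr_t /opt/pwx /var/opt/pwx

# Alternatively, layer the missing tooling onto the FCOS host
# (requires a reboot to take effect):
rpm-ostree install policycoreutils-python-utils
```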

So I tried running the systemd service's command manually, and got an error related to the NFS service and a missing /run/docker/plugins directory.

[root@okd1w1 bin]# /var/opt/pwx/bin/px-runc run --name portworx
INFO[0000] Rootfs found at /var/opt/pwx/oci/rootfs      
INFO[0000] PX binaries found at /var/opt/pwx/bin/px-runc 
INFO[0000] Initializing as version 2.6.1.2-669fb0c (OCI) 
INFO[0000] SPEC READ [2eaf46c2a9ab0de0424d4b0945be4b32  /var/opt/pwx/oci/config.json] 
INFO[0000] Enabling Sharedv4 NFS support ...            
INFO[0000] Setting up NFS service                       
INFO[0000] > Initialized service controls via DBus{type:dbus,svc:nfs-server.service,id:0xc4204b4660} 
INFO[0000] > Service nfs-server.service not running     
INFO[0000] > Service nfs-server.service not installed   
WARN[0000] NFS Install still in cooldown for another 5m36s 
ERRO[0000] Could not enable NFS service                  error="NFS install skipped (in cooldown due to previous failures)"
INFO[0000] Checking mountpoints for following shared directories: [/var/lib/kubelet /var/lib/osd] 
INFO[0000] Found following mountpoints for shared dirs: map[/:{isMP=T,Opts=shared:1} /var/lib/osd:{isMP=f,Opts=shared:3,Parent=/var} /var/lib/kubelet:{isMP=f,Opts=shared:3,Parent=/var} /var:{isMP=T,Opts=shared:3,Parent=/}] 
...
INFO[0000] Switched '/var/opt/pwx/oci' to PRIVATE mount propagation 
INFO[0000] Found 2 usable runc binaries: /var/opt/pwx/bin/runc and /var/opt/pwx/bin/runc-fb 
INFO[0000] Detected kernel release 5.6.19-300           
INFO[0000] Exec: ["/var/opt/pwx/bin/runc" "run" "-b" "/var/opt/pwx/oci" "--no-new-keyring" "portworx"] 
Executing with arguments: -b -c okd1-cluster -j auto -marketplace_name OperatorHub -r 17001 -s /dev/sdc -secret_type k8s -x kubernetes 
Installed pxctl...
Tue Oct 13 18:28:56 UTC 2020 : Running version 2.6.1.2-669fb0c on Linux okd1w1.xxx 5.6.19-300.fc32.x86_64 #1 SMP Wed Jun 17 16:10:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Version: Linux version 5.6.19-300.fc32.x86_64 (mockbuild@bkernel01.iad2.fedoraproject.org) (gcc version 10.1.1 20200507 (Red Hat 10.1.1-1) (GCC)) #1 SMP Wed Jun 17 16:10:48 UTC 2020
time="2020-10-13T18:28:56Z" level=error msg="Cannot listen on UNIX socket: listen unix /run/docker/plugins/pxd.sock: bind: no such file or directory"
Error: failed to listen on pxd.sock
Tue Oct 13 18:28:56 UTC 2020 partprobe begin
Tue Oct 13 18:28:56 UTC 2020 partprobe end
sed: can't read /etc/mdadm/mdadm.conf: No such file or directory

I ran mkdir /run/docker/plugins, then re-ran the systemd command; I got the output below with no error message, but the process still terminates.

[root@okd1w1 bin]# /var/opt/pwx/bin/px-runc run --name portworx
INFO[0000] Rootfs found at /var/opt/pwx/oci/rootfs      
INFO[0000] PX binaries found at /var/opt/pwx/bin/px-runc 
INFO[0000] Initializing as version 2.6.1.2-669fb0c (OCI) 
INFO[0000] SPEC READ [2eaf46c2a9ab0de0424d4b0945be4b32  /var/opt/pwx/oci/config.json] 
INFO[0000] Enabling Sharedv4 NFS support ...            
INFO[0000] Setting up NFS service                       
INFO[0000] > Initialized service controls via DBus{type:dbus,svc:nfs-server.service,id:0xc4202259a0} 
INFO[0000] > Service nfs-server.service not running     
INFO[0000] > Service nfs-server.service not installed   
WARN[0000] NFS Install still in cooldown for another 36s 
ERRO[0000] Could not enable NFS service                  error="NFS install skipped (in cooldown due to previous failures)"
INFO[0000] Checking mountpoints for following shared directories: [/var/lib/kubelet /var/lib/osd] 
INFO[0000] Found following mountpoints for shared dirs: map[/var/lib/kubelet:{isMP=f,Opts=shared:3,Parent=/var} /var:{isMP=T,Opts=shared:3,Parent=/} /:{isMP=T,Opts=shared:1} /var/lib/osd:{isMP=f,Opts=shared:3,Parent=/var}] 
...
INFO[0000] Exec: ["/var/opt/pwx/bin/runc" "run" "-b" "/var/opt/pwx/oci" "--no-new-keyring" "portworx"] 
Executing with arguments: -b -c okd1-cluster -j auto -marketplace_name OperatorHub -r 17001 -s /dev/sdc -secret_type k8s -x kubernetes 
Installed pxctl...
Tue Oct 13 18:33:56 UTC 2020 : Running version 2.6.1.2-669fb0c on Linux okd1w1.xxx 5.6.19-300.fc32.x86_64 #1 SMP Wed Jun 17 16:10:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Version: Linux version 5.6.19-300.fc32.x86_64 (mockbuild@bkernel01.iad2.fedoraproject.org) (gcc version 10.1.1 20200507 (Red Hat 10.1.1-1) (GCC)) #1 SMP Wed Jun 17 16:10:48 UTC 2020
mapping: 
Setting portmap: 9001
Tue Oct 13 18:33:56 UTC 2020 partprobe begin
Tue Oct 13 18:33:56 UTC 2020 partprobe end
sed: can't read /etc/mdadm/mdadm.conf: No such file or directory
checking /hostusr/src/kernels/5.6.19-300.fc32.x86_64
checking /hostusr/src/linux-headers-5.6.19-300.fc32.x86_64
checking /usr/src/kernels/5.6.19-300.fc32.x86_64
checking /usr/src/linux-headers-5.6.19-300.fc32.x86_64
checking /usr/src/linux-5.6.19-300-obj/x86_64/fc32.x86_64
checking /hostusr/src/linux-5.6.19-300-obj/x86_64/fc32.x86_64
checking /lib/modules/5.6.19-300.fc32.x86_64/build
checking /usr/src/linux
checking /var/lib/osd/pxfs/kernel_headers/usr/src/kernels/5.6.19-300.fc32.x86_64
checking /var/lib/osd/pxfs/kernel_headers/usr/src/linux-headers-5.6.19-300.fc32.x86_64
checking /var/lib/osd/pxfs/kernel_headers/usr/src/linux-5.6.19-300-obj/x86_64/fc32.x86_64
checking local archive, please wait...
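One caveat with the mkdir workaround: /run is a tmpfs, so the directory disappears on reboot. A tmpfiles.d entry (a sketch; the file name is arbitrary) would recreate it at every boot:

```
# /etc/tmpfiles.d/pxd-plugins.conf
# type  path                 mode  uid   gid   age
d       /run/docker/plugins  0755  root  root  -
```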

Here is the status of the Portworx service:

[root@okd1w1 bin]# systemctl status portworx
● portworx.service - Portworx OCI Container
     Loaded: loaded (/etc/systemd/system/portworx.service; enabled; vendor preset: disabled)
     Active: activating (auto-restart) (Result: exit-code) since Tue 2020-10-13 18:53:35 UTC; 2s ago
TriggeredBy: ● portworx.socket
       Docs: https://docs.portworx.com/runc
    Process: 2650720 ExecStartPre=/bin/sh -c /var/opt/pwx/bin/runc delete -f portworx || true (code=exited, status=0/SUCCESS)
    Process: 2650729 ExecStart=/var/opt/pwx/bin/px-runc run --name portworx --oci /var/opt/pwx/oci (code=exited, status=203/EXEC)
   Main PID: 2650729 (code=exited, status=203/EXEC)
        CPU: 19ms

Oct 13 18:53:35 okd1w1.xxx systemd[1]: portworx.service: Main process exited, code=exited, status=203/EXEC
Oct 13 18:53:35 okd1w1.xxx systemd[1]: portworx.service: Failed with result 'exit-code'.
[root@okd1w1 bin]# journalctl -fu portworx*
-- Logs begin at Mon 2020-10-05 13:46:48 UTC. --
Oct 13 18:54:45 okd1w1.xxx systemd[1]: Started Portworx OCI Container.
Oct 13 18:54:45 okd1w1.xxx systemd[2654226]: portworx.service: Failed to execute command: Permission denied
Oct 13 18:54:45 okd1w1.xxx systemd[2654226]: portworx.service: Failed at step EXEC spawning /var/opt/pwx/bin/px-runc: Permission denied
Oct 13 18:54:45 okd1w1.xxx systemd[1]: portworx.service: Main process exited, code=exited, status=203/EXEC
Oct 13 18:54:45 okd1w1.xxx systemd[1]: portworx.service: Failed with result 'exit-code'.
Oct 13 18:54:45 okd1w1.xxx systemd[1]: Stopping Portworx FIFO logging reader...
Oct 13 18:54:45 okd1w1.xxx systemd[1]: portworx-output.service: Succeeded.
Oct 13 18:54:45 okd1w1.xxx systemd[1]: Stopped Portworx FIFO logging reader.
Oct 13 18:54:45 okd1w1.xxx systemd[1]: portworx.socket: Succeeded.
Oct 13 18:54:45 okd1w1.xxx systemd[1]: Closed Portworx logging FIFO.
Oct 13 18:54:50 okd1w1.xxx systemd[1]: portworx.service: Scheduled restart job, restart counter is at 348.
Oct 13 18:54:50 okd1w1.xxx systemd[1]: Stopped Portworx OCI Container.
Oct 13 18:54:50 okd1w1.xxx systemd[1]: Listening on Portworx logging FIFO.
Oct 13 18:54:50 okd1w1.xxx systemd[1]: Started Portworx FIFO logging reader.

-- Logs begin at Sun 2020-10-11 07:38:38 UTC, end at Wed 2020-10-14 13:58:55 UTC. --
Oct 14 13:58:51 worker-1.cluster.okd4.local systemd[1]: Closed Portworx logging FIFO.
Oct 14 13:58:51 worker-1.cluster.okd4.local systemd[1]: portworx.socket: Succeeded.
Oct 14 13:58:51 worker-1.cluster.okd4.local systemd[1]: Stopped Portworx FIFO logging reader.
Oct 14 13:58:51 worker-1.cluster.okd4.local systemd[1]: portworx-output.service: Succeeded.
Oct 14 13:58:51 worker-1.cluster.okd4.local systemd[1]: Stopping Portworx FIFO logging reader...
Oct 14 13:58:51 worker-1.cluster.okd4.local systemd[1]: portworx.service: Failed with result 'exit-code'.
Oct 14 13:58:51 worker-1.cluster.okd4.local systemd[1]: portworx.service: Main process exited, code=exited, status=203/EXEC
Oct 14 13:58:51 worker-1.cluster.okd4.local systemd[2620908]: portworx.service: Failed at step EXEC spawning /var/opt/pwx/bin/px-runc: Permission denied
Oct 14 13:58:51 worker-1.cluster.okd4.local systemd[2620908]: portworx.service: Failed to execute command: Permission denied
Oct 14 13:58:51 worker-1.cluster.okd4.local systemd[1]: Started Portworx OCI Container.
Oct 14 13:58:51 worker-1.cluster.okd4.local systemd[1]: Starting Portworx OCI Container...
Oct 14 13:58:51 worker-1.cluster.okd4.local systemd[1]: Started Portworx FIFO logging reader.
Oct 14 13:58:51 worker-1.cluster.okd4.local systemd[1]: Listening on Portworx logging FIFO.
Oct 14 13:58:51 worker-1.cluster.okd4.local systemd[1]: Stopped Portworx OCI Container.
Oct 14 13:58:51 worker-1.cluster.okd4.local systemd[1]: portworx.service: Scheduled restart job, restart counter is at 382.
Oct 14 13:58:46 worker-1.cluster.okd4.local systemd[1]: Closed Portworx logging FIFO.
Oct 14 13:58:46 worker-1.cluster.okd4.local systemd[1]: portworx.socket: Succeeded.
Oct 14 13:58:46 worker-1.cluster.okd4.local systemd[1]: Stopped Portworx FIFO logging reader.
Oct 14 13:58:46 worker-1.cluster.okd4.local systemd[1]: portworx-output.service: Succeeded.
Oct 14 13:58:46 worker-1.cluster.okd4.local systemd[1]: Stopping Portworx FIFO logging reader...
Oct 14 13:58:46 worker-1.cluster.okd4.local systemd[1]: portworx.service: Failed with result 'exit-code'.
Oct 14 13:58:46 worker-1.cluster.okd4.local systemd[1]: portworx.service: Main process exited, code=exited, status=203/EXEC
Oct 14 13:58:46 worker-1.cluster.okd4.local systemd[2620723]: portworx.service: Failed at step EXEC spawning /var/opt/pwx/bin/px-runc: Permission denied
Oct 14 13:58:46 worker-1.cluster.okd4.local systemd[2620723]: portworx.service: Failed to execute command: Permission denied
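A note on the pattern above: status=203/EXEC with "Permission denied" from systemd, while the same binary runs fine from a root shell, is the classic signature of an SELinux exec denial against the unit. Checking the audit log would confirm or rule that out (commands assume audit messages land in the journal, as they do on Fedora CoreOS; `setenforce 0` is only a temporary diagnostic, not a fix):

```shell
# Look for recent SELinux AVC denials mentioning px-runc
journalctl _TRANSPORT=audit --since "1 hour ago" | grep -i 'avc.*px-runc'

# Diagnostic only: if the service starts in permissive mode,
# SELinux labeling is the culprit.
setenforce 0
systemctl restart portworx
```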

More details when running locally:

[root@worker-1 ~]# systemctl stop portworx
[root@worker-1 etc]# /var/opt/pwx/bin/px-runc run --name portworx --oci /var/opt/pwx/oci
INFO[0000] Rootfs found at /var/opt/pwx/oci/rootfs
INFO[0000] PX binaries found at /var/opt/pwx/bin/px-runc
INFO[0000] Initializing as version 2.6.1.2-669fb0c (OCI)
INFO[0000] SPEC READ [967b01b1144b4a933e5db74e465fe35f /var/opt/pwx/oci/config.json]
INFO[0000] Enabling Sharedv4 NFS support ...
INFO[0000] Setting up NFS service
INFO[0000] > Initialized service controls via DBus{type:dbus,svc:nfs-server.service,id:0xc42028bb20}
INFO[0000] > Service nfs-server.service not running
INFO[0000] > Service nfs-server.service not installed
WARN[0000] NFS Install still in cooldown for another 3m45s
ERRO[0000] Could not enable NFS service error="NFS install skipped (in cooldown due to previous failures)"
INFO[0000] Found original px-runc arguments: -c px-cluster-d1b4a388-48cb-41cd-8eb3-9c507047024b -x kubernetes -b -A -f -secret_type k8s -r 17001 -marketplace_name OperatorHub -v /var/lib/kubelet:/var/lib/kubelet:shared -v /opt/pwx/oci/mounts/etc/hosts:/etc/hosts -v /opt/pwx/oci/mounts/tmp/px-termination-log:/tmp/px-termination-log -v /var/cores:/var/cores -v /var/run/dbus:/var/run/dbus -v /opt/pwx/oci/mounts/var/run/secrets/kubernetes.io/serviceaccount:/var/run/secrets/kubernetes.io/serviceaccount -e HOME=/root -e HOSTNAME=worker-1.cluster.okd4.local -e KUBERNETES_PORT=tcp://172.30.0.1:443 -e KUBERNETES_PORT_443_TCP=tcp://172.30.0.1:443 -e KUBERNETES_PORT_443_TCP_ADDR=172.30.0.1 -e KUBERNETES_PORT_443_TCP_PORT=443 -e KUBERNETES_PORT_443_TCP_PROTO=tcp -e KUBERNETES_SERVICE_HOST=172.30.0.1 -e KUBERNETES_SERVICE_PORT=443 -e KUBERNETES_SERVICE_PORT_HTTPS=443 -e NSS_SDB_USE_CACHE=no -e PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin -e PORTWORX_API_PORT=tcp://172.30.26.110:9001 -e PORTWORX_API_PORT_9001_TCP=tcp://172.30.26.110:9001 -e PORTWORX_API_PORT_9001_TCP_ADDR=172.30.26.110 -e PORTWORX_API_PORT_9001_TCP_PORT=9001 -e PORTWORX_API_PORT_9001_TCP_PROTO=tcp -e PORTWORX_API_PORT_9020_TCP=tcp://172.30.26.110:9020 -e PORTWORX_API_PORT_9020_TCP_ADDR=172.30.26.110 -e PORTWORX_API_PORT_9020_TCP_PORT=9020 -e PORTWORX_API_PORT_9020_TCP_PROTO=tcp -e PORTWORX_API_PORT_9021_TCP=tcp://172.30.26.110:9021 -e PORTWORX_API_PORT_9021_TCP_ADDR=172.30.26.110 -e PORTWORX_API_PORT_9021_TCP_PORT=9021 -e PORTWORX_API_PORT_9021_TCP_PROTO=tcp -e PORTWORX_API_SERVICE_HOST=172.30.26.110 -e PORTWORX_API_SERVICE_PORT=9001 -e PORTWORX_API_SERVICE_PORT_PX_API=9001 -e PORTWORX_API_SERVICE_PORT_PX_REST_GATEWAY=9021 -e PORTWORX_API_SERVICE_PORT_PX_SDK=9020 -e PORTWORX_OPERATOR_METRICS_PORT=tcp://172.30.170.33:8999 -e PORTWORX_OPERATOR_METRICS_PORT_8999_TCP=tcp://172.30.170.33:8999 -e PORTWORX_OPERATOR_METRICS_PORT_8999_TCP_ADDR=172.30.170.33 -e 
PORTWORX_OPERATOR_METRICS_PORT_8999_TCP_PORT=8999 -e PORTWORX_OPERATOR_METRICS_PORT_8999_TCP_PROTO=tcp -e PORTWORX_OPERATOR_METRICS_SERVICE_HOST=172.30.170.33 -e PORTWORX_OPERATOR_METRICS_SERVICE_PORT=8999 -e PORTWORX_OPERATOR_METRICS_SERVICE_PORT_METRICS=8999 -e PORTWORX_SERVICE_PORT=tcp://172.30.131.17:9001 -e PORTWORX_SERVICE_PORT_9001_TCP=tcp://172.30.131.17:9001 -e PORTWORX_SERVICE_PORT_9001_TCP_ADDR=172.30.131.17 -e PORTWORX_SERVICE_PORT_9001_TCP_PORT=9001 -e PORTWORX_SERVICE_PORT_9001_TCP_PROTO=tcp -e PORTWORX_SERVICE_PORT_9019_TCP=tcp://172.30.131.17:9019 -e PORTWORX_SERVICE_PORT_9019_TCP_ADDR=172.30.131.17 -e PORTWORX_SERVICE_PORT_9019_TCP_PORT=9019 -e PORTWORX_SERVICE_PORT_9019_TCP_PROTO=tcp -e PORTWORX_SERVICE_PORT_9020_TCP=tcp://172.30.131.17:9020 -e PORTWORX_SERVICE_PORT_9020_TCP_ADDR=172.30.131.17 -e PORTWORX_SERVICE_PORT_9020_TCP_PORT=9020 -e PORTWORX_SERVICE_PORT_9020_TCP_PROTO=tcp -e PORTWORX_SERVICE_PORT_9021_TCP=tcp://172.30.131.17:9021 -e PORTWORX_SERVICE_PORT_9021_TCP_ADDR=172.30.131.17 -e PORTWORX_SERVICE_PORT_9021_TCP_PORT=9021 -e PORTWORX_SERVICE_PORT_9021_TCP_PROTO=tcp -e PORTWORX_SERVICE_SERVICE_HOST=172.30.131.17 -e PORTWORX_SERVICE_SERVICE_PORT=9001 -e PORTWORX_SERVICE_SERVICE_PORT_PX_API=9001 -e PORTWORX_SERVICE_SERVICE_PORT_PX_KVDB=9019 -e PORTWORX_SERVICE_SERVICE_PORT_PX_REST_GATEWAY=9021 -e PORTWORX_SERVICE_SERVICE_PORT_PX_SDK=9020 -e PX_LIGHTHOUSE_PORT=tcp://172.30.195.230:80 -e PX_LIGHTHOUSE_PORT_443_TCP=tcp://172.30.195.230:443 -e PX_LIGHTHOUSE_PORT_443_TCP_ADDR=172.30.195.230 -e PX_LIGHTHOUSE_PORT_443_TCP_PORT=443 -e PX_LIGHTHOUSE_PORT_443_TCP_PROTO=tcp -e PX_LIGHTHOUSE_PORT_80_TCP=tcp://172.30.195.230:80 -e PX_LIGHTHOUSE_PORT_80_TCP_ADDR=172.30.195.230 -e PX_LIGHTHOUSE_PORT_80_TCP_PORT=80 -e PX_LIGHTHOUSE_PORT_80_TCP_PROTO=tcp -e PX_LIGHTHOUSE_SERVICE_HOST=172.30.195.230 -e PX_LIGHTHOUSE_SERVICE_PORT=80 -e PX_LIGHTHOUSE_SERVICE_PORT_HTTP=80 -e PX_LIGHTHOUSE_SERVICE_PORT_HTTPS=443 -e PX_NAMESPACE=kube-system -e 
PX_SECRETS_NAMESPACE=kube-system -e PX_TEMPLATE_VERSION=v4 -e STORK_SERVICE_PORT=tcp://172.30.176.223:8099 -e STORK_SERVICE_PORT_443_TCP=tcp://172.30.176.223:443 -e STORK_SERVICE_PORT_443_TCP_ADDR=172.30.176.223 -e STORK_SERVICE_PORT_443_TCP_PORT=443 -e STORK_SERVICE_PORT_443_TCP_PROTO=tcp -e STORK_SERVICE_PORT_8099_TCP=tcp://172.30.176.223:8099 -e STORK_SERVICE_PORT_8099_TCP_ADDR=172.30.176.223 -e STORK_SERVICE_PORT_8099_TCP_PORT=8099 -e STORK_SERVICE_PORT_8099_TCP_PROTO=tcp -e STORK_SERVICE_SERVICE_HOST=172.30.176.223 -e STORK_SERVICE_SERVICE_PORT=8099 -e STORK_SERVICE_SERVICE_PORT_EXTENDER=8099 -e STORK_SERVICE_SERVICE_PORT_WEBHOOK=443 -e TERM=xterm -e container=oci -e PX_IMAGE=portworx/px-essentials:2.6.1.2 -e CONTAINER_RUNTIME=cri/cri-o -e PX_IMAGE_DIGEST=sha256:7b6f96e76c3594bc6f3144d8088382e911c65da057c2be37ddde0d917b1cf333 -e KUBELET_DIR=/var/lib/kubelet
INFO[0000] Rootfs found at /var/opt/pwx/oci/rootfs
INFO[0000] PX binaries found at /var/opt/pwx/bin/px-runc
INFO[0000] Initializing as version 2.6.1.2-669fb0c (OCI)
INFO[0000] Enabling Sharedv4 NFS support ...
INFO[0000] Setting up NFS service
INFO[0000] > Initialized service controls via DBus{type:dbus,svc:nfs-server.service,id:0xc420469b20}
INFO[0000] > Service nfs-server.service not running
INFO[0000] > Service nfs-server.service not installed
WARN[0000] NFS Install still in cooldown for another 3m45s
ERRO[0000] Could not enable NFS service error="NFS install skipped (in cooldown due to previous failures)"
INFO[0000] Fixing docker.sock mount:
INFO[0000] > Removing mount for /var/run/docker.sock:/var/run/docker.sock:[rbind rprivate]
INFO[0000] > Adding mount for /run:/var/host_run:[bind rprivate]
INFO[0000] > Soft-link /var/opt/pwx/oci/rootfs/run/docker.sock -> /var/host_run/docker.sock already exists
INFO[0000] Checking mountpoints for following shared directories: [/var/lib/kubelet /var/lib/origin /var/lib/osd]
INFO[0000] Found following mountpoints for shared dirs: map[/:{isMP=T,Opts=shared:1} /var/lib/origin:{isMP=f,Opts=shared:3,Parent=/var} /var/lib/osd:{isMP=f,Opts=shared:3,Parent=/var} /var/lib/kubelet:{isMP=f,Opts=shared:3,Parent=/var} /var:{isMP=T,Opts=shared:3,Parent=/}]
INFO[0000] SPEC UPDATED [758e2bd3515afb4442abc8a3a9af5a2f /var/opt/pwx/oci/config.json]
INFO[0000] > Updated mounts: add{/var/lib/origin:/var/lib/origin:shared}
INFO[0000] > Updated env: add{PX_SHARED=/var/lib/kubelet:shared:3;/var/lib/origin:shared:3;/var/lib/osd:shared:3 SHARED_v4_INSTALLATION_FAILURE_MSG=NFS install skipped (in cooldown due to previous failures)} rm{PX_SHARED=/var/lib/kubelet:shared:3;/var/lib/osd:shared:3 SHARED_v4_INSTALLATION_FAILURE_MSG=Could not install NFS service: Command 'yum clean all && exec yum makecache' failed: exit status 127}
WARN[0000] Could not link /var/opt/pwx/bin/pxctl to /usr/bin error="symlink pxctl to /usr/bin/pxctl failed: symlink /var/opt/pwx/bin/pxctl /usr/bin/pxctl: read-only file system"
INFO[0000] PX-RunC arguments: -A -b -c px-cluster-d1b4a388-48cb-41cd-8eb3-9c507047024b -f -marketplace_name OperatorHub -r 17001 -secret_type k8s -x kubernetes
INFO[0000] PX-RunC mounts: /dev:/dev /opt/pwx/oci/mounts/etc/hosts:/etc/hosts /etc/iscsi:/etc/iscsi /etc/mdadm:/etc/mdadm /etc/nvme:/etc/nvme /etc/nvmet:/etc/nvmet /etc/pwx:/etc/pwx /etc/resolv.conf:/etc/resolv.conf:ro /etc/target:/etc/target /var/opt/pwx/bin:/export_bin /proc:/hostproc /lib/modules:/lib/modules proc:/proc:nosuid,noexec,nodev /run/docker:/run/docker /run/lock/iscsi:/run/lock/iscsi /run/log/journal:/run/log/journal:ro /run/lvm:/run/lvm /run/mdadm:/run/mdadm /run/udev:/run/udev sysfs:/sys:nosuid,noexec,nodev cgroup:/sys/fs/cgroup:nosuid,noexec,nodev /opt/pwx/oci/mounts/tmp/px-termination-log:/tmp/px-termination-log /usr/src:/usr/src /var/cores:/var/cores /run:/var/host_run:bind /var/lib/iscsi:/var/lib/iscsi /var/lib/kubelet:/var/lib/kubelet:shared /var/lib/origin:/var/lib/origin:shared /var/lib/osd:/var/lib/osd:shared /var/lock/iscsi:/var/lock/iscsi /var/log/journal:/var/log/journal:ro /var/run/dbus:/var/run/dbus /opt/pwx/oci/mounts/var/run/secrets/kubernetes.io/serviceaccount:/var/run/secrets/kubernetes.io/serviceaccount
INFO[0000] PX-RunC env: CONTAINER_RUNTIME=cri/cri-o GOMAXPROCS=64 GOTRACEBACK=crash HOME=/root HOSTNAME=worker-1.cluster.okd4.local KUBELET_DIR=/var/lib/kubelet KUBERNETES_PORT=tcp://172.30.0.1:443 KUBERNETES_PORT_443_TCP=tcp://172.30.0.1:443 KUBERNETES_PORT_443_TCP_ADDR=172.30.0.1 KUBERNETES_PORT_443_TCP_PORT=443 KUBERNETES_PORT_443_TCP_PROTO=tcp KUBERNETES_SERVICE_HOST=172.30.0.1 KUBERNETES_SERVICE_PORT=443 KUBERNETES_SERVICE_PORT_HTTPS=443 LVM_USE_HOST=1 NFS_SERVICE=nfs-server.service NSS_SDB_USE_CACHE=no PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin PORTWORX_API_PORT=tcp://172.30.26.110:9001 PORTWORX_API_PORT_9001_TCP=tcp://172.30.26.110:9001 PORTWORX_API_PORT_9001_TCP_ADDR=172.30.26.110 PORTWORX_API_PORT_9001_TCP_PORT=9001 PORTWORX_API_PORT_9001_TCP_PROTO=tcp PORTWORX_API_PORT_9020_TCP=tcp://172.30.26.110:9020 PORTWORX_API_PORT_9020_TCP_ADDR=172.30.26.110 PORTWORX_API_PORT_9020_TCP_PORT=9020 PORTWORX_API_PORT_9020_TCP_PROTO=tcp PORTWORX_API_PORT_9021_TCP=tcp://172.30.26.110:9021 PORTWORX_API_PORT_9021_TCP_ADDR=172.30.26.110 PORTWORX_API_PORT_9021_TCP_PORT=9021 PORTWORX_API_PORT_9021_TCP_PROTO=tcp PORTWORX_API_SERVICE_HOST=172.30.26.110 PORTWORX_API_SERVICE_PORT=9001 PORTWORX_API_SERVICE_PORT_PX_API=9001 PORTWORX_API_SERVICE_PORT_PX_REST_GATEWAY=9021 PORTWORX_API_SERVICE_PORT_PX_SDK=9020 PORTWORX_OPERATOR_METRICS_PORT=tcp://172.30.170.33:8999 PORTWORX_OPERATOR_METRICS_PORT_8999_TCP=tcp://172.30.170.33:8999 PORTWORX_OPERATOR_METRICS_PORT_8999_TCP_ADDR=172.30.170.33 PORTWORX_OPERATOR_METRICS_PORT_8999_TCP_PORT=8999 PORTWORX_OPERATOR_METRICS_PORT_8999_TCP_PROTO=tcp PORTWORX_OPERATOR_METRICS_SERVICE_HOST=172.30.170.33 PORTWORX_OPERATOR_METRICS_SERVICE_PORT=8999 PORTWORX_OPERATOR_METRICS_SERVICE_PORT_METRICS=8999 PORTWORX_SERVICE_PORT=tcp://172.30.131.17:9001 PORTWORX_SERVICE_PORT_9001_TCP=tcp://172.30.131.17:9001 PORTWORX_SERVICE_PORT_9001_TCP_ADDR=172.30.131.17 PORTWORX_SERVICE_PORT_9001_TCP_PORT=9001 PORTWORX_SERVICE_PORT_9001_TCP_PROTO=tcp 
PORTWORX_SERVICE_PORT_9019_TCP=tcp://172.30.131.17:9019 PORTWORX_SERVICE_PORT_9019_TCP_ADDR=172.30.131.17 PORTWORX_SERVICE_PORT_9019_TCP_PORT=9019 PORTWORX_SERVICE_PORT_9019_TCP_PROTO=tcp PORTWORX_SERVICE_PORT_9020_TCP=tcp://172.30.131.17:9020 PORTWORX_SERVICE_PORT_9020_TCP_ADDR=172.30.131.17 PORTWORX_SERVICE_PORT_9020_TCP_PORT=9020 PORTWORX_SERVICE_PORT_9020_TCP_PROTO=tcp PORTWORX_SERVICE_PORT_9021_TCP=tcp://172.30.131.17:9021 PORTWORX_SERVICE_PORT_9021_TCP_ADDR=172.30.131.17 PORTWORX_SERVICE_PORT_9021_TCP_PORT=9021 PORTWORX_SERVICE_PORT_9021_TCP_PROTO=tcp PORTWORX_SERVICE_SERVICE_HOST=172.30.131.17 PORTWORX_SERVICE_SERVICE_PORT=9001 PORTWORX_SERVICE_SERVICE_PORT_PX_API=9001 PORTWORX_SERVICE_SERVICE_PORT_PX_KVDB=9019 PORTWORX_SERVICE_SERVICE_PORT_PX_REST_GATEWAY=9021 PORTWORX_SERVICE_SERVICE_PORT_PX_SDK=9020 PX_IMAGE=px-essentials:2.6.1.2 PX_IMAGE_DIGEST=sha256:7b6f96e76c3594bc6f3144d8088382e911c65da057c2be37ddde0d917b1cf333 PX_LIGHTHOUSE_PORT=tcp://172.30.195.230:80 PX_LIGHTHOUSE_PORT_443_TCP=tcp://172.30.195.230:443 PX_LIGHTHOUSE_PORT_443_TCP_ADDR=172.30.195.230 PX_LIGHTHOUSE_PORT_443_TCP_PORT=443 PX_LIGHTHOUSE_PORT_443_TCP_PROTO=tcp PX_LIGHTHOUSE_PORT_80_TCP=tcp://172.30.195.230:80 PX_LIGHTHOUSE_PORT_80_TCP_ADDR=172.30.195.230 PX_LIGHTHOUSE_PORT_80_TCP_PORT=80 PX_LIGHTHOUSE_PORT_80_TCP_PROTO=tcp PX_LIGHTHOUSE_SERVICE_HOST=172.30.195.230 PX_LIGHTHOUSE_SERVICE_PORT=80 PX_LIGHTHOUSE_SERVICE_PORT_HTTP=80 PX_LIGHTHOUSE_SERVICE_PORT_HTTPS=443 PX_LOGLEVEL=info PX_NAMESPACE=kube-system PX_RUNC=true PX_SECRETS_NAMESPACE=kube-system PX_SHARED=/var/lib/kubelet:shared:3;/var/lib/origin:shared:3;/var/lib/osd:shared:3 PX_TEMPLATE_VERSION=v4 PX_VERSION=2.6.1.2-669fb0c ‘SHARED_v4_INSTALLATION_FAILURE_MSG=NFS install skipped (in cooldown due to previous failures)’ STORK_SERVICE_PORT=tcp://172.30.176.223:8099 STORK_SERVICE_PORT_443_TCP=tcp://172.30.176.223:443 STORK_SERVICE_PORT_443_TCP_ADDR=172.30.176.223 STORK_SERVICE_PORT_443_TCP_PORT=443 STORK_SERVICE_PORT_443_TCP_PROTO=tcp 
STORK_SERVICE_PORT_8099_TCP=tcp://172.30.176.223:8099 STORK_SERVICE_PORT_8099_TCP_ADDR=172.30.176.223 STORK_SERVICE_PORT_8099_TCP_PORT=8099 STORK_SERVICE_PORT_8099_TCP_PROTO=tcp STORK_SERVICE_SERVICE_HOST=172.30.176.223 STORK_SERVICE_SERVICE_PORT=8099 STORK_SERVICE_SERVICE_PORT_EXTENDER=8099 STORK_SERVICE_SERVICE_PORT_WEBHOOK=443 TERM=xterm container=oci
INFO[0000] Service reinitialization requested. Restarting the service...
[root@worker-1 etc]# /var/opt/pwx/bin/px-runc run --name portworx --oci /var/opt/pwx/oci
INFO[0000] Rootfs found at /var/opt/pwx/oci/rootfs
INFO[0000] PX binaries found at /var/opt/pwx/bin/px-runc
INFO[0000] Initializing as version 2.6.1.2-669fb0c (OCI)
INFO[0000] SPEC READ [758e2bd3515afb4442abc8a3a9af5a2f /var/opt/pwx/oci/config.json]
INFO[0000] Enabling Sharedv4 NFS support ...
INFO[0000] Setting up NFS service
INFO[0000] > Initialized service controls via DBus{type:dbus,svc:nfs-server.service,id:0xc4201c16e0}
INFO[0000] > Service nfs-server.service not running
INFO[0000] > Service nfs-server.service not installed
WARN[0000] NFS Install still in cooldown for another 3m42s
ERRO[0000] Could not enable NFS service error="NFS install skipped (in cooldown due to previous failures)"
INFO[0000] Checking mountpoints for following shared directories: [/var/lib/kubelet /var/lib/origin /var/lib/osd]
INFO[0000] Found following mountpoints for shared dirs: map[/var/lib/kubelet:{isMP=f,Opts=shared:3,Parent=/var} /var:{isMP=T,Opts=shared:3,Parent=/} /:{isMP=T,Opts=shared:1} /var/lib/origin:{isMP=f,Opts=shared:3,Parent=/var} /var/lib/osd:{isMP=f,Opts=shared:3,Parent=/var}]
INFO[0000] PX-RunC arguments: -A -b -c px-cluster-d1b4a388-48cb-41cd-8eb3-9c507047024b -f -marketplace_name OperatorHub -r 17001 -secret_type k8s -x kubernetes
INFO[0000] PX-RunC mounts: /dev:/dev /opt/pwx/oci/mounts/etc/hosts:/etc/hosts /etc/iscsi:/etc/iscsi /etc/mdadm:/etc/mdadm /etc/nvme:/etc/nvme /etc/nvmet:/etc/nvmet /etc/pwx:/etc/pwx /etc/resolv.conf:/etc/resolv.conf:ro /etc/target:/etc/target /var/opt/pwx/bin:/export_bin /proc:/hostproc /lib/modules:/lib/modules proc:/proc:nosuid,noexec,nodev /run/docker:/run/docker /run/lock/iscsi:/run/lock/iscsi /run/log/journal:/run/log/journal:ro /run/lvm:/run/lvm /run/mdadm:/run/mdadm /run/udev:/run/udev sysfs:/sys:nosuid,noexec,nodev cgroup:/sys/fs/cgroup:nosuid,noexec,nodev /opt/pwx/oci/mounts/tmp/px-termination-log:/tmp/px-termination-log /usr/src:/usr/src /var/cores:/var/cores /run:/var/host_run:bind /var/lib/iscsi:/var/lib/iscsi /var/lib/kubelet:/var/lib/kubelet:shared /var/lib/origin:/var/lib/origin:shared /var/lib/osd:/var/lib/osd:shared /var/lock/iscsi:/var/lock/iscsi /var/log/journal:/var/log/journal:ro /var/run/dbus:/var/run/dbus /opt/pwx/oci/mounts/var/run/secrets/kubernetes.io/serviceaccount:/var/run/secrets/kubernetes.io/serviceaccount
INFO[0000] PX-RunC env: CONTAINER_RUNTIME=cri/cri-o GOMAXPROCS=64 GOTRACEBACK=crash HOME=/root HOSTNAME=worker-1.cluster.okd4.local KUBELET_DIR=/var/lib/kubelet KUBERNETES_PORT=tcp://172.30.0.1:443 KUBERNETES_PORT_443_TCP=tcp://172.30.0.1:443 KUBERNETES_PORT_443_TCP_ADDR=172.30.0.1 KUBERNETES_PORT_443_TCP_PORT=443 KUBERNETES_PORT_443_TCP_PROTO=tcp KUBERNETES_SERVICE_HOST=172.30.0.1 KUBERNETES_SERVICE_PORT=443 KUBERNETES_SERVICE_PORT_HTTPS=443 LVM_USE_HOST=1 NFS_SERVICE=nfs-server.service NSS_SDB_USE_CACHE=no PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin PORTWORX_API_PORT=tcp://172.30.26.110:9001 PORTWORX_API_PORT_9001_TCP=tcp://172.30.26.110:9001 PORTWORX_API_PORT_9001_TCP_ADDR=172.30.26.110 PORTWORX_API_PORT_9001_TCP_PORT=9001 PORTWORX_API_PORT_9001_TCP_PROTO=tcp PORTWORX_API_PORT_9020_TCP=tcp://172.30.26.110:9020 PORTWORX_API_PORT_9020_TCP_ADDR=172.30.26.110 PORTWORX_API_PORT_9020_TCP_PORT=9020 PORTWORX_API_PORT_9020_TCP_PROTO=tcp PORTWORX_API_PORT_9021_TCP=tcp://172.30.26.110:9021 PORTWORX_API_PORT_9021_TCP_ADDR=172.30.26.110 PORTWORX_API_PORT_9021_TCP_PORT=9021 PORTWORX_API_PORT_9021_TCP_PROTO=tcp PORTWORX_API_SERVICE_HOST=172.30.26.110 PORTWORX_API_SERVICE_PORT=9001 PORTWORX_API_SERVICE_PORT_PX_API=9001 PORTWORX_API_SERVICE_PORT_PX_REST_GATEWAY=9021 PORTWORX_API_SERVICE_PORT_PX_SDK=9020 PORTWORX_OPERATOR_METRICS_PORT=tcp://172.30.170.33:8999 PORTWORX_OPERATOR_METRICS_PORT_8999_TCP=tcp://172.30.170.33:8999 PORTWORX_OPERATOR_METRICS_PORT_8999_TCP_ADDR=172.30.170.33 PORTWORX_OPERATOR_METRICS_PORT_8999_TCP_PORT=8999 PORTWORX_OPERATOR_METRICS_PORT_8999_TCP_PROTO=tcp PORTWORX_OPERATOR_METRICS_SERVICE_HOST=172.30.170.33 PORTWORX_OPERATOR_METRICS_SERVICE_PORT=8999 PORTWORX_OPERATOR_METRICS_SERVICE_PORT_METRICS=8999 PORTWORX_SERVICE_PORT=tcp://172.30.131.17:9001 PORTWORX_SERVICE_PORT_9001_TCP=tcp://172.30.131.17:9001 PORTWORX_SERVICE_PORT_9001_TCP_ADDR=172.30.131.17 PORTWORX_SERVICE_PORT_9001_TCP_PORT=9001 PORTWORX_SERVICE_PORT_9001_TCP_PROTO=tcp 
PORTWORX_SERVICE_PORT_9019_TCP=tcp://172.30.131.17:9019 PORTWORX_SERVICE_PORT_9019_TCP_ADDR=172.30.131.17 PORTWORX_SERVICE_PORT_9019_TCP_PORT=9019 PORTWORX_SERVICE_PORT_9019_TCP_PROTO=tcp PORTWORX_SERVICE_PORT_9020_TCP=tcp://172.30.131.17:9020 PORTWORX_SERVICE_PORT_9020_TCP_ADDR=172.30.131.17 PORTWORX_SERVICE_PORT_9020_TCP_PORT=9020 PORTWORX_SERVICE_PORT_9020_TCP_PROTO=tcp PORTWORX_SERVICE_PORT_9021_TCP=tcp://172.30.131.17:9021 PORTWORX_SERVICE_PORT_9021_TCP_ADDR=172.30.131.17 PORTWORX_SERVICE_PORT_9021_TCP_PORT=9021 PORTWORX_SERVICE_PORT_9021_TCP_PROTO=tcp PORTWORX_SERVICE_SERVICE_HOST=172.30.131.17 PORTWORX_SERVICE_SERVICE_PORT=9001 PORTWORX_SERVICE_SERVICE_PORT_PX_API=9001 PORTWORX_SERVICE_SERVICE_PORT_PX_KVDB=9019 PORTWORX_SERVICE_SERVICE_PORT_PX_REST_GATEWAY=9021 PORTWORX_SERVICE_SERVICE_PORT_PX_SDK=9020 PX_IMAGE=px-essentials:2.6.1.2 PX_IMAGE_DIGEST=sha256:7b6f96e76c3594bc6f3144d8088382e911c65da057c2be37ddde0d917b1cf333 PX_LIGHTHOUSE_PORT=tcp://172.30.195.230:80 PX_LIGHTHOUSE_PORT_443_TCP=tcp://172.30.195.230:443 PX_LIGHTHOUSE_PORT_443_TCP_ADDR=172.30.195.230 PX_LIGHTHOUSE_PORT_443_TCP_PORT=443 PX_LIGHTHOUSE_PORT_443_TCP_PROTO=tcp PX_LIGHTHOUSE_PORT_80_TCP=tcp://172.30.195.230:80 PX_LIGHTHOUSE_PORT_80_TCP_ADDR=172.30.195.230 PX_LIGHTHOUSE_PORT_80_TCP_PORT=80 PX_LIGHTHOUSE_PORT_80_TCP_PROTO=tcp PX_LIGHTHOUSE_SERVICE_HOST=172.30.195.230 PX_LIGHTHOUSE_SERVICE_PORT=80 PX_LIGHTHOUSE_SERVICE_PORT_HTTP=80 PX_LIGHTHOUSE_SERVICE_PORT_HTTPS=443 PX_LOGLEVEL=info PX_NAMESPACE=kube-system PX_RUNC=true PX_SECRETS_NAMESPACE=kube-system PX_SHARED=/var/lib/kubelet:shared:3;/var/lib/origin:shared:3;/var/lib/osd:shared:3 PX_TEMPLATE_VERSION=v4 PX_VERSION=2.6.1.2-669fb0c 'SHARED_v4_INSTALLATION_FAILURE_MSG=NFS install skipped (in cooldown due to previous failures)' STORK_SERVICE_PORT=tcp://172.30.176.223:8099 STORK_SERVICE_PORT_443_TCP=tcp://172.30.176.223:443 STORK_SERVICE_PORT_443_TCP_ADDR=172.30.176.223 STORK_SERVICE_PORT_443_TCP_PORT=443 STORK_SERVICE_PORT_443_TCP_PROTO=tcp
STORK_SERVICE_PORT_8099_TCP=tcp://172.30.176.223:8099 STORK_SERVICE_PORT_8099_TCP_ADDR=172.30.176.223 STORK_SERVICE_PORT_8099_TCP_PORT=8099 STORK_SERVICE_PORT_8099_TCP_PROTO=tcp STORK_SERVICE_SERVICE_HOST=172.30.176.223 STORK_SERVICE_SERVICE_PORT=8099 STORK_SERVICE_SERVICE_PORT_EXTENDER=8099 STORK_SERVICE_SERVICE_PORT_WEBHOOK=443 TERM=xterm container=oci
INFO[0000] Switched '/var/opt/pwx/oci' to PRIVATE mount propagation
INFO[0000] Found 2 usable runc binaries: /var/opt/pwx/bin/runc and /var/opt/pwx/bin/runc-fb
INFO[0000] Detected kernel release 5.6.19-300
INFO[0000] Exec: ["/var/opt/pwx/bin/runc" "run" "-b" "/var/opt/pwx/oci" "--no-new-keyring" "portworx"]
Executing with arguments: -A -b -c px-cluster-d1b4a388-48cb-41cd-8eb3-9c507047024b -f -marketplace_name OperatorHub -r 17001 -secret_type k8s -x kubernetes
Installed pxctl...
Wed Oct 14 19:22:06 UTC 2020 : Running version 2.6.1.2-669fb0c on Linux worker-1.cluster.okd4.local 5.6.19-300.fc32.x86_64 #1 SMP Wed Jun 17 16:10:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Version: Linux version 5.6.19-300.fc32.x86_64 () (gcc version 10.1.1 20200507 (Red Hat 10.1.1-1) (GCC)) #1 SMP Wed Jun 17 16:10:48 UTC 2020
time="2020-10-14T19:22:06Z" level=error msg="Cannot listen on UNIX socket: listen unix /run/docker/plugins/pxd.sock: bind: no such file or directory"
Error: failed to listen on pxd.sock
sed: can't read /etc/mdadm/mdadm.conf: No such file or directory
checking /hostusr/src/kernels/5.6.19-300.fc32.x86_64
checking /hostusr/src/linux-headers-5.6.19-300.fc32.x86_64
checking /usr/src/kernels/5.6.19-300.fc32.x86_64
checking /usr/src/linux-headers-5.6.19-300.fc32.x86_64
checking /usr/src/linux-5.6.19-300-obj/x86_64/fc32.x86_64
checking /hostusr/src/linux-5.6.19-300-obj/x86_64/fc32.x86_64
checking /lib/modules/5.6.19-300.fc32.x86_64/build
checking /usr/src/linux
checking /var/lib/osd/pxfs/kernel_headers/usr/src/kernels/5.6.19-300.fc32.x86_64
checking /var/lib/osd/pxfs/kernel_headers/usr/src/linux-headers-5.6.19-300.fc32.x86_64
checking /var/lib/osd/pxfs/kernel_headers/usr/src/linux-5.6.19-300-obj/x86_64/fc32.x86_64
checking local archive, please wait…
Found: x86_64/5.6.19-300.fc32.x86_64
use partitions and all available disks.
Using cluster: px-cluster-d1b4a388-48cb-41cd-8eb3-9c507047024b
Port range start: 17001
Using scheduler: kubernetes
Warning: skipping device: /dev/sda2. Failed size check.
Warning: skipping device: /dev/sda3. Failed size check.
Warning: skipping device: /dev/sda1. Failed size check.
Device is in use: /dev/sda4, skipping…
Using storage device: /dev/sdb


****** Checking mdraid0 layout path for null **************


Wed Oct 14 19:22:52 UTC 2020 device scan start
Scanning for Btrfs filesystems
Wed Oct 14 19:22:52 UTC 2020 device scan finish
Warning: Dependency for filesystem does not exist.
Checking sysfs mount…
sysfs on /sys/firmware type sysfs (ro,relatime,seclabel)
sysfs mounted read-only. remounting…
mapping:
Setting portmap: 17001
“bootstrap”: true,
2020-10-14 19:22:54,961 CRIT Supervisor running as root (no user in config file)
2020-10-14 19:22:54,967 INFO supervisord started with pid 1
2020-10-14 19:22:56,021 INFO spawned: 'reboot-diags' with pid 417
2020-10-14 19:22:56,028 INFO spawned: 'px-nfs' with pid 418
2020-10-14 19:22:56,035 INFO spawned: 'relayd' with pid 419
2020-10-14 19:22:56,038 INFO spawned: 'cron' with pid 420
2020-10-14 19:22:56,042 INFO spawned: 'px-etcd' with pid 421
2020-10-14 19:22:56,052 INFO spawned: 'lttng' with pid 422
2020-10-14 19:22:56,055 INFO spawned: 'exec' with pid 423
2020-10-14 19:22:56,060 INFO spawned: 'cache_flush' with pid 424
2020-10-14 19:22:56,067 INFO spawned: 'px-diag' with pid 425
2020-10-14 19:22:56,074 INFO spawned: 'px-healthmon' with pid 426
2020-10-14 19:22:56,077 INFO spawned: 'pxdaemon' with pid 427
2020-10-14 19:22:56,087 INFO spawned: 'px-ns' with pid 428
2020-10-14 19:22:56,090 INFO spawned: 'px_event_listener' with pid 429
2020-10-14 19:22:56,092 INFO exited: reboot-diags (exit status 0; expected)
2020-10-14 19:22:56,092 INFO exited: px-nfs (exit status 0; expected)
2020-10-14 19:22:56,103 INFO success: cache_flush entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2020-10-14 19:22:56,136 INFO exited: cache_flush (exit status 0; expected)
Tracefile cleanup: Tracing disabled, remove all previous traces…
Clean out lttng tmpfs location: …
2020-10-14 19:22:57,354 INFO success: relayd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-10-14 19:22:57,354 INFO success: cron entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-10-14 19:22:57,354 INFO success: px-etcd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-10-14 19:22:57,354 INFO success: lttng entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-10-14 19:22:57,354 INFO success: exec entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-10-14 19:22:57,355 INFO success: px-diag entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-10-14 19:22:57,355 INFO success: px-healthmon entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-10-14 19:22:57,355 INFO success: px-ns entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-10-14 19:22:57,355 INFO success: px_event_listener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
time="2020-10-14T19:22:57Z" level=info msg="px-ns Starting..."
time="2020-10-14T19:22:57Z" level=info msg="InitPxClient No authentication enabled"
Installed NS trace handler for SIGHUP
Installed NS sig-handler for SIGUSR1
Installed NS sig-handler for SIGUSR2
Starting NS server
2020-10-14 19:23:01,883 INFO success: pxdaemon entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
Tracing is disabled, not starting trace processes.
PXPROCS[INFO]: Started px-storage with pid 500
bash: connect: Connection refused
bash: /dev/tcp/localhost/17006: Connection refused
PXPROCS[INFO]: px-storage not started yet…sleeping
PXPROCS[INFO]: Started px with pid 513
PXPROCS[INFO]: Started watchdog with pid 514
2020-10-14_19:23:10: PX-Watchdog: Starting watcher
2020-10-14_19:23:10: PX-Watchdog: Waiting for px process to start
2020-10-14_19:23:10: PX-Watchdog: (pid 513): Begin monitoring
time="2020-10-14T19:23:13Z" level=info msg="Registering [kernel] as a volume driver"
time="2020-10-14T19:23:13Z" level=info msg="Registered the Usage based Metering Agent..."
time="2020-10-14T19:23:13Z" level=info msg="Setting log level to info(4)"
time="2020-10-14T19:23:13Z" level=info msg="read config from env var" func=init package=boot
time="2020-10-14T19:23:13Z" level=error msg="Cannot listen on UNIX socket: listen unix /run/docker/plugins/pxd.sock: bind: no such file or directory"
time="2020-10-14T19:23:13Z" level=warning msg="Failed to start pxd-dummy: failed to listen on pxd.sock, ingnoring and continuing..."
time="2020-10-14T19:23:13Z" level=info msg="read config from config.json" func=init package=boot
time="2020-10-14T19:23:13Z" level=info msg="Alerts initialized successfully for this cluster"
time="2020-10-14T19:23:13Z" level=info msg="Node is not yet initialized" func=setNodeInfo package=boot
time="2020-10-14T19:23:13Z" level=info msg="Generated a new NodeID: 557ae087-2944-4113-ad43-d519d1837b5a"
time="2020-10-14T19:23:13Z" level=info msg="Using GW interface device:[eth0]..."
time="2020-10-14T19:23:13Z" level=info msg="Detected Machine Hardware Type as: vmware (Virtual Machine)"
time="2020-10-14T19:23:13Z" level=info msg="Bootstrapping internal kvdb service." fn=kv-store.New id=557ae087-2944-4113-ad43-d519d1837b5a
time="2020-10-14T19:23:14Z" level=info msg="Starting kvdb on this node..." fn=kvdb-provisioner.ProvisionKvdbWithoutLock id=557ae087-2944-4113-ad43-d519d1837b5a
time="2020-10-14T19:23:14Z" level=warning msg="PX-CACHE: cache_blksize param() parsing failed Unit parse error: , ignored."
time="2020-10-14T19:23:20Z" level=info msg="Made 1 pools"
time="2020-10-14T19:23:20Z" level=info msg="Benchmarking drive /dev/sdb"
time="2020-10-14T19:23:33Z" level=info msg="Storage pool WriteThroughput 43075000"
time="2020-10-14T19:23:33Z" level=info msg="Mounting metadata pool: Cos:MEDIUM RaidLevel:\"raid0\" uuid:\"2a9e34e2-eee1-48b6-ba01-b2ffa4624256\""
time="2020-10-14T19:23:34Z" level=info msg="HAL:Created volume:/var/.px/0/.reserve"
time="2020-10-14T19:23:34Z" level=info msg="Initializing journal: /var/.px/0/log"
time="2020-10-14T19:23:34Z" level=info msg="Mounting metadata pool: Cos:MEDIUM RaidLevel:\"raid0\" uuid:\"2a9e34e2-eee1-48b6-ba01-b2ffa4624256\""
time="2020-10-14T19:23:34Z" level=info msg="HAL:Created volume:/var/.px/0/.metadata"
time="2020-10-14T19:23:34Z" level=info msg="Applying labels to Pool 0"
time="2020-10-14T19:23:34Z" level=info msg="Sync pxpool=0,mdpoolid=0,initinprogress,mdvol labels to Pool 0"
time="2020-10-14T19:23:34Z" level=info msg="Created Metadata Volume: /var/.px/0/.metadata"
time="2020-10-14T19:23:36Z" level=info msg="Node (10.0.0.21) joining kvdb cluster." fn=kv-store.Init id=557ae087-2944-4113-ad43-d519d1837b5a
time="2020-10-14T19:23:37Z" level=info msg="Setting up internal kvdb with following parameters: " fn=kv-utils.StartKvdb id=557ae087-2944-4113-ad43-d519d1837b5a
time="2020-10-14T19:23:37Z" level=info msg="Initial Cluster Settings: map[557ae087-2944-4113-ad43-d519d1837b5a:[http://portworx-1.internal.kvdb:17015]]" fn=kv-utils.StartKvdb id=557ae087-2944-4113-ad43-d519d1837b5a
time="2020-10-14T19:23:37Z" level=info msg="Kvdb IP: 10.0.0.21 Kvdb PeerPort: 17015 ClientPort: 17016" fn=kv-utils.StartKvdb id=557ae087-2944-4113-ad43-d519d1837b5a
time="2020-10-14T19:23:37Z" level=info msg="Kvdb Name: 557ae087-2944-4113-ad43-d519d1837b5a" fn=kv-utils.StartKvdb id=557ae087-2944-4113-ad43-d519d1837b5a
time="2020-10-14T19:23:37Z" level=info msg="Kvdb Cluster State: new" fn=kv-utils.StartKvdb id=557ae087-2944-4113-ad43-d519d1837b5a
time="2020-10-14T19:23:37Z" level=info msg="Kvdb Peer Domain Name: portworx-1.internal.kvdb" fn=kv-utils.StartKvdb id=557ae087-2944-4113-ad43-d519d1837b5a
time="2020-10-14T19:23:38Z" level=info msg="Successfully started kvdb on this node." fn=kvdb-provisioner.ProvisionKvdbWithoutLock id=557ae087-2944-4113-ad43-d519d1837b5a
time="2020-10-14T19:23:38Z" level=info msg="Registered auditor for kvdb-response"
time="2020-10-14T19:23:38Z" level=info msg="Registered auditor for kvdb-limits"
time="2020-10-14T19:23:38Z" level=info msg="created kv instance" func=initKv package=boot
time="2020-10-14T19:23:38Z" level=info msg="Setting lock timeout to: 3m0s"
time="2020-10-14T19:23:38Z" level=info msg="creating kvdb metrics wrapper"
time="2020-10-14T19:23:38Z" level=info msg="initialized internal kvdb" func=init package=boot
time="2020-10-14T19:23:38Z" level=info msg="initialized osdconfig manager" func=init package=boot
time="2020-10-14T19:23:38Z" level=info msg="pushed config data to kvdb" func=InitAndBoot package=boot

So, to make it work:
1/ Stop the service.
2/ Run the command line twice: /var/opt/pwx/bin/px-runc run --name portworx --oci /var/opt/pwx/oci
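The two steps above can be sketched as a small wrapper. This is a hypothetical helper, not part of the Portworx tooling: the `portworx` unit name and the retry-once behavior are assumptions based on the observation that the first `px-runc` invocation dies on the pxd.sock bind error while the second succeeds.

```shell
#!/bin/bash
# Hypothetical workaround sketch -- not an official Portworx procedure.
# run_twice runs a command, and if the first invocation fails, retries it
# exactly once, mirroring the "run the command line twice" step above.
run_twice() {
  "$@" && return 0
  echo "first attempt failed, retrying once..." >&2
  "$@"
}

# On the node (e.g. via 'oc debug node/<node>' then 'chroot /host'):
#   systemctl stop portworx    # 1/ stop the service
#   run_twice /var/opt/pwx/bin/px-runc run --name portworx --oci /var/opt/pwx/oci
```

The second invocation is what actually stays up, so a wrapper like this keeps the manual workaround to a single command per node.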

I re-installed OKD 4.5 from scratch with the October release. Same issue.

Looks like I am in the same boat.