Failed to install Portworx

Hi Guys

I’m trying to install Portworx on Ubuntu 18, but I was not able to get the Portworx pods running:

kube-system Warning Unhealthy portworx-api-98njb Readiness probe failed: Get dial tcp connect: connection refused 2 minutes ago
kube-system Warning Unhealthy portworx-api-66zhb Readiness probe failed: Get dial tcp connect: connection refused

Here is the portworx log

From the logs it looks like the disk you provisioned is already in use:

@k8s-master portworx[6628]: Device is in use: /dev/vdb1, skipping...
@k8s-master portworx[6628]: Head node:
@k8s-master portworx[6628]: No disks specified, starting as a storage-less node.

Portworx needs an additional raw (unformatted) disk; you cannot use your root disk.
Can you share your lsblk and blkid output from the worker node?

Hello @ryzam, thank you for posting on the Portworx forums! The logs you shared were useful.

PX failed to come up because you provided /dev/vdb1 as a disk, but that disk appears to be in use.

These log lines indicate this:

Input arguments: /px-oci-mon -c px-cluster-0126654e-078b-475a-90ac-f4043abf2ac0 -s /dev/vdb1 -secret_type k8s -j auto -b --oem esse -x kubernetes"
@k8s-master portworx[6628]: Device is in use: /dev/vdb1, skipping...

The disk you provide to Portworx needs to be unmounted and should not have a filesystem on it. Can you upload the output of lsblk and blkid from this node, so we can confirm whether /dev/vdb1 is in use by something else on the system?
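To make that check concrete, here is a minimal sketch of reading a blkid line: a partition whose blkid output includes a ` TYPE=` field already carries a filesystem signature and is not usable as a raw Portworx disk. The `has_filesystem` helper and the sample lines below are hypothetical illustrations, not Portworx tooling or output captured from this cluster.

```shell
#!/bin/sh
# Hypothetical helper: a blkid line containing a " TYPE=" field means the
# partition already has a filesystem signature (ext4, xfs, ...).
# The leading space avoids matching the unrelated PTTYPE= field.
has_filesystem() {
  echo "$1" | grep -q ' TYPE='
}

# Illustrative blkid lines, not captured from this cluster:
formatted='/dev/vdb1: UUID="11111111-2222-3333-4444-555555555555" TYPE="ext4" PARTUUID="87654321-01"'
raw='/dev/vdc: PTUUID="12ab34cd" PTTYPE="dos"'

has_filesystem "$formatted" && echo "formatted - not usable as-is"
has_filesystem "$raw"       || echo "looks raw"
```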

Here is the lsblk result

loop0 7:0 0 9.3M 1 loop /snap/helm3/4
loop1 7:1 0 55M 1 loop /snap/core18/1705
loop2 7:2 0 9.4M 1 loop /snap/helm3/5
loop3 7:3 0 10M 1 loop /snap/helm/220
loop4 7:4 0 93.9M 1 loop /snap/core/9066
loop5 7:5 0 93.8M 1 loop /snap/core/8935
loop6 7:6 0 10M 1 loop /snap/helm/232
loop7 7:7 0 55M 1 loop /snap/core18/1754
vda 252:0 0 35G 0 disk
├─vda1 252:1 0 1M 0 part
└─vda2 252:2 0 35G 0 part /
vdb 252:16 0 10G 0 disk
└─vdb1 252:17 0 10G 0 part /disk2

/dev/vdb1: UUID="c786c8dd-7730-4d99-a398-72d14eff293c" TYPE="ext4" PARTUUID="87431ee2-01"

vdb 252:16 0 10G 0 disk
└─vdb1 252:17 0 10G 0 part /disk2

I think this is the disk you are giving Portworx as the backing disk; it must not be formatted.
Make sure the disk is not formatted, delete the partitions, and clean up any leftover signatures on the disk with wipefs -a /dev/vdb. Also, if you can provision a 50G disk, that would be good.

Before proceeding, clean up the Portworx install, and once you’re ready with the above changes, apply your spec file again.

Here is the cleanup command: curl -fsL | bash
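The wipe step can be sketched as follows. This is a hedged outline, not a verified procedure: it assumes /dev/vdb is the disk dedicated to Portworx, that its only partition is mounted at /disk2 (as in the lsblk output above), and that an /etc/fstab entry exists for that mount. These commands destroy all data on that disk.

```shell
# Destructive: only run against the disk dedicated to Portworx.
sudo umount /disk2                    # stop using the ext4 partition
sudo sed -i '\,/disk2,d' /etc/fstab   # remove its fstab entry so it stays unmounted after reboot
sudo wipefs -a /dev/vdb               # erase the partition table and filesystem signatures
sudo blkid /dev/vdb                   # prints nothing once the disk is raw
lsblk /dev/vdb                        # should show a bare disk with no vdb1 partition
```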

ok, let me try … will let you know … tq guys

Here are the latest logs after wiping /dev/vdb.

Can you share the output of the following:
1. Describe the pod and share the output.
2. lsblk and blkid output.
3. Log in to one of the worker nodes, collect journalctl -lu portworx* > node.logs, and share it.
4. The spec file used for installation. The output above shows oci-mon:2.4. Did you try to install the 2.4 version of Portworx?

I’m using rancher to install portworx

  1. Pod output

  2. lsblk & blkid

  3. node.logs is too big

Can you zip and share the logs? From the previous logs it looks like you were installing 2.5, but from the Rancher UI I see 2.4. Did you change anything?

Hi @sanjay.naikwadi, sorry for late reply.

Here is the zip log from 3 worker nodes.

I installed Portworx from the Rancher 2 UI using the Catalog.


What is the Rancher version, and what is the chart version?

Honestly, I’m a bit confused about which is the best way to install Portworx on a Kubernetes cluster running on premises:

  1. Install portworx in Rancher Catalog
  2. Install Portworx using Spec Generator provided in Portworx Portal
  3. Install using Helm

Hi Ryzam,

We support all the methods you mentioned; setup via the spec generator is the most up to date and the easiest to use. We also have many users installing via the Rancher catalog.

Make sure you have an additional disk on each worker node; those disks must not be formatted with any filesystem. Leave them raw.

You can read more about the minimum requirements:

Let me know when you would be available; let’s have a quick sync-up on this.

Is it possible for me to give you access to our PoC server, so that you can pinpoint the exact problem? I already tested three of the options, none successfully.

Hi Ryzam,

Send the access details to my email ID: If you’re online now, we can also do a quick call.

Already emailed you.

It looks like the previous install was not cleaned up properly. Below are the steps I followed on your cluster.

  1. Cleaned up the install:
    curl -fsL | bash

  2. Generated the spec file.

Then I applied the spec file from the Rancher shell; I used the shell for both commands.

Try running pxctl status on a worker node and let me know if you see the cluster status as up and running.
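For reference, a healthy node’s pxctl status output begins with an operational banner, so the check can be scripted. Assumptions here: the standard OCI install path /opt/pwx/bin/pxctl and the usual “PX is operational” banner text; the sample line is illustrative, not output captured from this cluster.

```shell
# Assumed standard install path; on a worker node you would run:
#   sudo /opt/pwx/bin/pxctl status
# Illustrative first line of a healthy node's output:
sample='Status: PX is operational'

# Scripted health check against the status output:
if echo "$sample" | grep -q 'PX is operational'; then
  echo 'node reports healthy'
fi
```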

Thank you @sanjay.naikwadi :+1: