Error loading node identity: Could not find any storage device(s) with PX footprint

Hi,

I'm getting the error in the topic title while trying to perform a basic air-gapped installation on RKE.
Versions are:

  • Kubernetes 1.19.6
  • Portworx 2.6.0 (PX-Essentials)

I've generated the specs on PX-Central using Stork/CSI and deployed with the DaemonSet (not the Operator).
I'm using LVM logical volumes for both the KVDB and data volumes.
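For reference, this is roughly how the two logical volumes were set up on the worker (the physical volumes and sizing here are assumptions for this PoC; only the VG/LV names, which match the device-mapper paths shown below, come from my actual setup):

# assumption: /dev/sde and /dev/sdf are the disks dedicated to Portworx
pvcreate /dev/sde /dev/sdf
vgcreate pxdatavg /dev/sde
vgcreate pxkvdbvg /dev/sdf
lvcreate -n pxdatalv -l 100%FREE pxdatavg   # data volume (~80 GiB)
lvcreate -n pxkvdblv -l 100%FREE pxkvdbvg   # dedicated internal KVDB volume (~24 GiB)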

During the installation I get the following error in the portworx container of the Portworx pods:

error loading node identity: Could not find any storage device(s) with PX footprint

After investigation, it seems my LVM logical volumes are not visible inside the container:

[root@kube01 ~]# k exec -it portworx-zr8bl -- blkid
Defaulting container name to portworx.
Use 'kubectl describe pod/portworx-zr8bl -n kube-system' to see all of the containers in this pod.
/dev/sda: UUID="4b50c2aa-0e83-4435-9515-20f3e4801ddc" TYPE="ext4"
/dev/sdc: UUID="f955e927-291c-4925-a933-53bd43ffb7f5" TYPE="ext4"
/dev/sdd: UUID="ac3ab5ea-0005-4faf-ab64-ad25239e96b8" TYPE="ext4"
/dev/sde: UUID="yH8bHz-7BIT-RuGS-oap9-2Acp-YQbu-E3wALi" TYPE="LVM2_member"
/dev/sdf: UUID="xFPnQV-3kdH-oNoT-Djm6-nO2O-MhoL-56ZNyN" TYPE="LVM2_member"
/dev/sdg1: SEC_TYPE="msdos" UUID="993E-8CDF" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="77fced7e-50ec-4998-85a1-7ce8dbe732a5"
/dev/sdg2: UUID="11dd7918-6054-404c-8b82-f951e06ca08c" TYPE="xfs" PARTUUID="04705f73-3356-42f7-ae87-b2d070a9faae"
/dev/sdg3: UUID="HA4ojR-JrV5-LmLR-7dCb-2OM3-cLHg-ekIm27" TYPE="LVM2_member" PARTUUID="9c093791-5b50-46b2-aa33-d9bc1d62f2e5"
/dev/mapper/ol-root: UUID="d86a4f86-ed0b-41be-afd6-d0dff65967f0" TYPE="xfs"
/dev/mapper/ol-swap: UUID="7e058b4b-4eed-4ab7-b8db-3285de27f8ee" TYPE="swap"
/dev/mapper/ol-dockerlv: UUID="2e7fcafd-1be6-40bc-aae2-1bc3e0ee6964" TYPE="xfs"

This is expected, because there is no filesystem on those logical volumes, so blkid reports no UUID for them.

I created one; the device then appeared in my container, and I got another error message complaining that there is a filesystem on the KVDB volume. I wiped it (wipefs -a) on my kube worker and it worked:
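In practice the workaround on the worker looked roughly like this (the filesystem type is arbitrary; I only needed a signature so that the device shows up inside the container):

# assumption: ext4 used only to create a signature, any filesystem type would do
mkfs.ext4 /dev/mapper/pxdatavg-pxdatalv
mkfs.ext4 /dev/mapper/pxkvdbvg-pxkvdblv
# Portworx then complained about the existing filesystem on the KVDB volume,
# so wipe that signature again before Portworx initializes the device
wipefs -a /dev/mapper/pxkvdbvg-pxkvdblv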

[root@kube01 ~]# k exec -it portworx-9282s -- blkid
Defaulting container name to portworx.
Use 'kubectl describe pod/portworx-9282s -n kube-system' to see all of the containers in this pod.
/dev/sda: UUID="4b50c2aa-0e83-4435-9515-20f3e4801ddc" TYPE="ext4"
/dev/sde: UUID="6p2h0d-1WFN-fy09-MApK-pydi-Bfcc-y4AN59" TYPE="LVM2_member"
/dev/sdd: UUID="ac3ab5ea-0005-4faf-ab64-ad25239e96b8" TYPE="ext4"
/dev/sdc: UUID="f955e927-291c-4925-a933-53bd43ffb7f5" TYPE="ext4"
/dev/sdf: UUID="9TT5bF-OTJI-Bg4O-Orpp-JO66-zxPg-Tc92Fu" TYPE="LVM2_member"
/dev/sdg1: SEC_TYPE="msdos" UUID="8D38-5E4A" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="d22d0c01-3c9a-4d24-88f8-661f724337ac"
/dev/sdg2: UUID="98fc1fe0-eac6-42f5-9bf2-bd5dd8495704" TYPE="xfs" PARTUUID="f4dae261-5ad3-43a9-9f33-af9f98eb8b0e"
/dev/sdg3: UUID="h67DWe-Ro9z-IdyV-MuPq-YA92-8wr0-1ZPOM8" TYPE="LVM2_member" PARTUUID="a2bf258b-8953-40b9-879c-47ff5cc6a0e6"
/dev/mapper/ol-root: UUID="16634ec2-2948-442a-b9d1-98f6ea01135e" TYPE="xfs"
/dev/mapper/ol-swap: UUID="5ebc7318-23f1-4127-b7bf-41737d4c16dc" TYPE="swap"
/dev/mapper/ol-dockerlv: UUID="d91febfb-e512-4de5-8c23-80746f4bf7d4" TYPE="xfs"
/dev/mapper/pxdatavg-pxdatalv: LABEL="pxpool=0,mdpoolid=0,mdvol,u=af3cd7f6-134d-42ea-9755-b46cd825ee41,i=c1e512ef-75f3-4330-ba71-a6ef1800f737,n=0" UUID="b9121cc1-1ac1-435a-ae61-9fd1ba5ccf90" UUID_SUB="92dacafd-bbad-4d02-843a-449b9c13796e" TYPE="btrfs"
/dev/mapper/pxkvdbvg-pxkvdblv: LABEL="kvdbvol" UUID="bc8e6d00-a072-4e7e-bba0-c4ad1b814a58" TYPE="xfs"

After that, my PX node status is fine (note: this is a PoC, so volume sizes don't matter):

[root@kube01 ~]# /opt/pwx/bin/pxctl status
Status: PX is operational
License: PX-Essential (ERROR: License is expired, PX-Central server not reachable)
Node ID: c1e512ef-75f3-4330-ba71-a6ef1800f737
IP: 10.126.25.220
Local Storage Pool: 1 pool
POOL IO_PRIORITY RAID_LEVEL USABLE USED STATUS ZONE REGION
0 HIGH raid0 80 GiB 6.0 GiB Online default default
Local Storage Devices: 1 device
Device Path Media Type Size Last-Scan
0:1 /dev/mapper/pxdatavg-pxdatalv STORAGE_MEDIUM_MAGNETIC 80 GiB 21 Jan 22 09:23 CET
total - 80 GiB
Cache Devices:
* No cache devices
Kvdb Device:
Device Path Size
/dev/mapper/pxkvdbvg-pxkvdblv 24 GiB
* Internal kvdb on this node is using this dedicated kvdb device to store its data.
Cluster Summary
Cluster ID: px-cluster-523fd839-bd9e-4203-8e7e-3a60e176c7c7
Cluster UUID: af3cd7f6-134d-42ea-9755-b46cd825ee41
Scheduler: kubernetes
Nodes: 1 node(s) with storage (1 online)
IP ID SchedulerNodeName StorageNode Used Capacity Status StorageStatus Version Kernel OS
10.126.25.220 c1e512ef-75f3-4330-ba71-a6ef1800f737 kube01 Yes 6.0 GiB 80 GiB Online Up (This node) 2.6.0.2-d505d8d 3.10.0-1160.el7.x86_64 Oracle Linux Server 7.9
Warnings:
WARNING: Persistent journald logging is not enabled on this node.
Global Storage Pool
Total Used : 6.0 GiB
Total Capacity : 80 GiB

Could you help me figure out what I am missing, please? Are we supposed to create a filesystem on the LVM logical volumes first?