Hi. I’ve successfully deployed Portworx 2.8 via the Operator to my on-premises Kubernetes cluster (Ubuntu 20.04 VMs), using the vSphere cloud drive feature to create VMDKs for the storage cluster nodes. Everything appears to be running and the “board is green,” so to speak.
When I attempt to create a PVC that specifies a Portworx storage class, I receive the following error:
Warning ProvisioningFailed 78s (x11 over 9m28s) persistentvolume-controller Failed to provision volume with StorageClass "px-db": rpc error: code = Internal desc = Unable to get default policy details rpc error: code = Internal desc = Unable to retrive default policy details: KVDB connection failed, either node has networking issues or KVDB members are down or KVDB cluster is unhealthy. All operations (get/update/delete) are unavailable.
The PVC stays in a Pending state forever. When this happens, I don’t see anything in the logs of the Portworx containers, just the error from Kubernetes above.
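For reference, the PVC I’m creating is essentially the following (the name and size here are illustrative placeholders; only the px-db StorageClass reference matches my actual setup):

```yaml
# Illustrative PVC; metadata.name and the requested size are placeholders,
# but storageClassName is the px-db class from the error above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: px-db-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: px-db
  resources:
    requests:
      storage: 2Gi
```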
My KVDB members output looks healthy to me:
Kvdb Cluster Members:
ID                                    PEER URLs                               CLIENT URLs                LEADER  HEALTHY  DBSIZE
f9f4f929-4a25-4382-838e-3219cbb5c3f3  [http://portworx-1.internal.kvdb:9018]  [http://10.44.58.27:9019]  true    true     224 KiB
cb3c76d8-b9e8-4026-ad8a-c4c4fe05bba4  [http://portworx-2.internal.kvdb:9018]  [http://10.44.58.26:9019]  false   true     208 KiB
2e793acb-9257-4de5-ad23-b40ca9173303  [http://portworx-3.internal.kvdb:9018]  [http://10.44.58.25:9019]  false   true     208 KiB
My cluster status (pxctl status output):
Status: PX is operational
Telemetry: Healthy
License: PX-Essential (lease renewal in 23h, 48m)
Node ID: f9f4f929-4a25-4382-838e-3219cbb5c3f3
IP: 10.44.58.27
Local Storage Pool: 1 pool
POOL  IO_PRIORITY  RAID_LEVEL  USABLE   USED    STATUS  ZONE     REGION
0     HIGH         raid0       200 GiB  10 GiB  Online  default  default
Local Storage Devices: 1 device
Device Path Media Type Size Last-Scan
0:1 /dev/sdb STORAGE_MEDIUM_MAGNETIC 200 GiB 08 Nov 21 22:33 UTC
total - 200 GiB
Cache Devices:
* No cache devices
Kvdb Device:
Device Path Size
/dev/sdc 32 GiB
* Internal kvdb on this node is using this dedicated kvdb device to store its data.
Cluster Summary
Cluster ID: px-cluster-cf1
Cluster UUID: ****
Scheduler: kubernetes
Nodes: 3 node(s) with storage (3 online)
IP ID SchedulerNodeName Auth StorageNode Used Capacity Status StorageStatus Version Kernel OS
10.44.58.27 f9f4f929-4a25-4382-838e-3219cbb5c3f3 ****** Disabled Yes 10 GiB 200 GiB Online Up (This node) 2.8.0.0-1ef62f8 5.4.0-89-generic Ubuntu 20.04.3 LTS
10.44.58.26 cb3c76d8-b9e8-4026-ad8a-c4c4fe05bba4 ***** Disabled Yes 10 GiB 200 GiB Online Up 2.8.0.0-1ef62f8 5.4.0-89-generic Ubuntu 20.04.3 LTS
10.44.58.25 2e793acb-9257-4de5-ad23-b40ca9173303 ***** Disabled Yes 10 GiB 200 GiB Online Up 2.8.0.0-1ef62f8 5.4.0-89-generic Ubuntu 20.04.3 LTS
Global Storage Pool
Total Used : 30 GiB
Total Capacity : 600 GiB
Any ideas?
Thanks,
Jeremy