PX on self-managed k8s on GCP cloud

Using Portworx Essentials to test Portworx on a self-managed Kubernetes cluster on GCP.

My use case is an internal KVDB with dynamic provisioning of disks. I'm using a CSI (pxd.portworx.com) storage class for dynamic provisioning of volumes for the apps, and everything works fine.
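For reference, a minimal CSI storage class of the kind described above might look like the sketch below; the class name and the `repl` parameter are illustrative assumptions, not taken from the original setup:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-csi-db             # hypothetical name
provisioner: pxd.portworx.com # Portworx CSI provisioner
parameters:
  repl: "2"                   # example replication factor
allowVolumeExpansion: true
```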

What I noticed is that, in addition to disks of the type I specified (pd-standard) in the DaemonSet, additional disks of type pd-ssd are being provisioned in the cloud. I'd like to understand more about these additional disks. Could someone share some insights?
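For context, the disk type is typically set through the Portworx container arguments in the DaemonSet. A sketch of the relevant fragment, assuming the `-s` cloud-drive spec syntax with an illustrative cluster name and size:

```yaml
# Fragment of the Portworx DaemonSet container spec (values are illustrative)
args:
  ["-c", "px-cluster-example",        # hypothetical cluster name
   "-s", "type=pd-standard,size=150", # cloud drive spec: disk type and size
   "-b"]                              # use the internal KVDB
```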

I've attached a screenshot that shows the disk information from the cloud.


When Portworx uses the internal KVDB, it will use SSD-type disks. This is because the internal KVDB runs etcd, and etcd recommends SSDs as backing storage for ideal performance and operation. If you click on an individual disk, it will likely have a data or metadata tag to indicate whether it is used for KVDB metadata or application data.
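One way to confirm this from the GCP side is to inspect the labels on one of the extra disks; the disk name and zone below are placeholders:

```shell
# Show only the labels of a given disk; the KVDB drive is typically
# labeled so it can be distinguished from application-data drives.
gcloud compute disks describe px-kvdb-disk-example \
    --zone=us-central1-a \
    --format="yaml(labels)"
```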


Thanks for explaining, Ryan.