Portworx on OpenShift

Hi,

I am trying to install Portworx on OpenShift 3.11+.
While generating the spec file for Portworx, we get an option to choose which etcd to use.
During my initial testing I used the built-in Portworx etcd, but now I would like to use the OpenShift-managed etcd.
I do not know the pros/cons of each option. Can someone help me understand this better?
The questions I have are:

  1. What are the pros/cons of using the etcd provided by Portworx?
  2. What are the pros/cons of using the etcd provided by OpenShift?

Also, in the inventory file for installing OpenShift there is an option, openshift_logging_es_pvc_storage_class_name, to set the storage class used for storing OpenShift logs. I would like to use my portworx-sc for this. Currently we are using GlusterFS for storage and want to move to Portworx soon. What issues could changing this value cause? As I understand it, since GlusterFS can be managed by OpenShift, its storage class can be used by OpenShift logging and will be available right away. With Portworx, however, we install Portworx after installing OpenShift, so portworx-sc will not be available before Portworx is installed, and that may create an issue. How can this be resolved? The setting I mean is shown below.
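This is roughly what I am planning to put in the inventory (assuming the usual [OSEv3:vars] section; portworx-sc is just the name I intend to use):

```ini
[OSEv3:vars]
# Storage class for the Elasticsearch PVCs that hold OpenShift logs
openshift_logging_es_pvc_storage_class_name=portworx-sc
```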

Hi bharatbhavsar7,

It is not recommended to share the Kubernetes etcd; the reason is that that etcd is already heavily loaded with Kubernetes operations. There are a lot of Kubernetes components that watch and update that etcd.
You can set up an external etcd for Portworx if you prefer; this doc lists various approaches for installing an external etcd cluster and provides recommendations on best practices: https://docs.portworx.com/reference/knowledge-base/etcd/

We have a doc that covers our internal KVDB in depth; you can read all about it here: https://docs.portworx.com/concepts/internal-kvdb/

Regarding your storage class question: to be able to create and use a Portworx StorageClass, Portworx first needs to be installed on your cluster so that you can consume Portworx volumes; there is no way around it.
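For reference, a minimal Portworx StorageClass looks something like this (the name and replication factor here are just examples):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-sc
provisioner: kubernetes.io/portworx-volume   # in-tree Portworx provisioner
parameters:
  repl: "2"                                  # number of replicas for each volume
```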
I need to dig more into the logging question you are asking. In the meantime, if you have further questions, we would be happy to assist you.

Thanks @Luay_Alem
Going through the doc you shared definitely helped me understand the KVDB managed by Portworx better.

I have one more question related to this. When generating spec.yaml, it is advised to use a separate device for the KVDB metadata storage. On our system, we have a cluster of 8 nodes, each with 4 devices attached. What if we do not specify a separate device for this, or specify one of these devices, which would also be used for Portworx storage?

For example, if I have 4 devices:

  1. /dev/vda
  2. /dev/vdb
  3. /dev/vdc
  4. /dev/vdd

And I specify all 4 devices for Portworx, and also specify one of these 4 for metadata, will that be okay? Or can a device that is being used for metadata not be used for Portworx storage?

Each device we attach to a VM has a capacity of 2.6T.

Hi @Luay_Alem

Also, related to the cluster name that we pass as an argument in spec.yaml: do we have to generate spec.yaml every time we create a new cluster?
Or can we use the same YAML for creating clusters on different systems, assuming the other arguments/details are the same?

For example, if I have 2 OpenShift clusters with 6 nodes each, can I use the same YAML to deploy Portworx on both clusters?

Basically, I am trying to understand the purpose of the UUID that is generated for the cluster name.

It is recommended to have a separate metadata device, especially in production; a sketch of how this is specified follows below. This is primarily for 2 reasons:

  1. Eliminating IO contention between storage IOs and metadata IOs by having separate devices for storage and metadata
  2. Fault isolation: If the metadata device fails, you don’t lose your storage data and vice versa
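As a rough illustration, the generated DaemonSet spec typically passes the devices as container arguments; the device paths below are placeholders mapped to your example (three devices for storage, one dedicated to KVDB metadata), and your generated spec may look slightly different:

```yaml
containers:
  - name: portworx
    args:
      # "-s" adds a device to the storage pool, "-b" enables the internal KVDB,
      # and "-kvdb_dev" dedicates a device to KVDB metadata.
      ["-c", "px-cluster-<uuid>", "-x", "kubernetes", "-b",
       "-s", "/dev/vda", "-s", "/dev/vdb", "-s", "/dev/vdc",
       "-kvdb_dev", "/dev/vdd"]
```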

It is recommended to generate the spec for every cluster install. If all the clusters have the same configuration, you may re-use the spec, provided that you change the clusterID. The cluster name/cluster ID is a unique identifier for your Portworx cluster, and it may be associated with your Portworx license.

Thanks @sathya
Regarding the unique ID, what if I generate it locally and then update the YAML with that?
If that is okay, which method should be used?
As I am using Python, should I use any specific method to generate it?
uuid.uuid1() generates an ID based on the hostname and the current system time,
whereas uuid.uuid4() generates a random ID.

The reason is that I am automating the process of installing Portworx on OpenShift clusters, which means generating this spec file through the tool each time is not possible. My cluster configs are going to be the same, so I can reuse the spec file, but this UUID is something of a blocker to automating the process.
Please advise.

Yes, you can update the YAML before applying it, and either of the methods should be fine.
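As a rough sketch of that kind of automation (assuming the cluster ID appears as the value following "-c" in the container args of the generated spec; the file names and the px-cluster prefix are just placeholders):

```python
import re
import uuid

SPEC_TEMPLATE = "px-spec.yaml"        # spec generated once from the spec generator
SPEC_OUT = "px-spec-cluster1.yaml"    # per-cluster copy to apply

# uuid4() gives a random ID and avoids embedding the hostname/time like uuid1()
cluster_id = "px-cluster-{}".format(uuid.uuid4())

with open(SPEC_TEMPLATE) as f:
    spec = f.read()

# Replace the value that follows the "-c" argument in the container args.
# This assumes the args are written as a JSON-style list, e.g. ["-c", "px-cluster-...", ...];
# adjust the pattern if your generated spec uses a block-style YAML list instead.
spec = re.sub(r'("-c",\s*")[^"]+(")', r'\g<1>{}\g<2>'.format(cluster_id), spec)

with open(SPEC_OUT, "w") as f:
    f.write(spec)

print("Wrote spec with cluster ID:", cluster_id)
```

Either uuid1() or uuid4() would work here; uuid4() is used in the sketch simply because it does not embed host details.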