I keep running into this issue installing PX on GKE from the marketplace:
failed to get resource type: failed to get the resource REST mapping for GroupVersionKind(stork.libopenstorage.org/v1alpha1, Kind=Migration): no matches for kind "Migration" in version "stork.libopenstorage.org/v1alpha1"
at github…
@phyxavier
Can you upload the output of the following commands here?
List all stork pods
kubectl get pods --all-namespaces -l name=stork
Get logs from stork pods
kubectl logs -l name=stork -n kube-system
If your stork pods are not in kube-system, change kube-system to the pods' namespace in the above command.
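The two steps above can be combined into a short sketch that discovers the stork namespace automatically (assuming the pods carry the name=stork label used above; kube-system is only a fallback):

```shell
# Sketch: find the namespace the stork pods run in (assumes the name=stork label).
NS=$(kubectl get pods --all-namespaces -l name=stork \
      -o jsonpath='{.items[0].metadata.namespace}' 2>/dev/null || true)
# Pull logs from every stork pod in that namespace (falls back to kube-system).
kubectl logs -l name=stork -n "${NS:-kube-system}" 2>/dev/null || true
```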
Hi @phyxavier
I tested launching Portworx from the marketplace and didn't have any issues.
Here are my setup details:
1 - Using Ubuntu OS instead of COS
2 - Enabled the following scope: "Allow all access to API"
3 - N1-S4 instances selected (3-node cluster)
4 - Default GKE version 1.13.11 used
5 - Portworx installed in the default namespace
I was able to deploy successfully and did not see the errors above.
Can you let me know where you saw this error? Was it during installation? Can you share which options you selected when creating the GKE cluster?
Regards
Sanjay
The describe output shows the following:
Events:
Type Reason Age From Message
Normal Pulling 42m (x62 over 7h32m) kubelet, gke-werkstorage-default-pool-2c8f45a9-j98r pulling image "gcr.io/cloud-marketplace/portworx-public/portworx-enterprise/lh-config-sync@sha256:0e1b7b3aa2e1f5ee2c49b321a9eb3019532c249f618b6d5cb76762eae9ef0ac1"
Warning BackOff 2m32s (x1453 over 7h27m) kubelet, gke-werkstorage-default-pool-2c8f45a9-j98r Back-off restarting failed container
Hi @phyxavier
Can you attach the logs from the portworx pod (portworx-8dtgq)?
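For reference, a sketch of how those logs could be pulled (pod name taken from the thread; the namespace is assumed to be default since that is where this install landed — adjust if yours differs):

```shell
# Current logs from the failing portworx pod (pod name from the thread; assumed namespace).
kubectl logs portworx-8dtgq -n default 2>/dev/null || true
# Logs from the previous (crashed) run, often where the real error lives.
kubectl logs portworx-8dtgq -n default --previous 2>/dev/null || true
```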
Regards
Sanjay
{
"textPayload": "@gke-werkstorage-default-pool-2c8f45a9-kd01 portworx[21682]: 2019-12-13_19:53:01: PX-Watchdog: (pid 31945): PX REST server died or did not started. return code 7. Timeout 480\n",
"insertId": "9fn3ugfbq9a5a",
"resource": {
"type": "container",
"labels": {
"zone": "us-central1-a",
"pod_id": "portworx-swqzz",
"container_name": "portworx",
"namespace_id": "default",
"instance_id": "5669083157607910552"
}
},
"timestamp": "2019-12-13T19:53:02.308532913Z",
"severity": "INFO",
"labels": {
"container.googleapis.com/pod_name": "portworx-swqzz",
"container.googleapis.com/stream": "stdout",
"container.googleapis.com/namespace_name": "default",
"compute.googleapis.com/resource_name": "gke-werkstorage-default-pool-2c8f45a9-kd01"
},
"logName": "projects/zeroclusterf/logs/portworx",
"receiveTimestamp": "2019-12-13T19:53:04.152606717Z"
},
{
"textPayload": "time=\"2019-12-13T19:53:01Z\" level=warning msg=\"Could not retrieve PX node status\" error=\"Get http://127.0.0.1:9001/v1/cluster/nodehealth: dial tcp 127.0.0.1:9001: connect: connection refused\"\n",
"insertId": "g3gwfmfadchfe",
"resource": {
"type": "container",
"labels": {
"container_name": "portworx",
"namespace_id": "default",
"instance_id": "7794876890050628760",
"zone": "us-central1-a",
"pod_id": "portworx-8dtgq"
}
},
"timestamp": "2019-12-13T19:53:01.854483106Z",
"severity": "ERROR",
"labels": {
"compute.googleapis.com/resource_name": "gke-werkstorage-default-pool-2c8f45a9-j98r",
"container.googleapis.com/pod_name": "portworx-8dtgq",
"container.googleapis.com/stream": "stderr",
"container.googleapis.com/namespace_name": "default"
},
"logName": "projects/zeroclusterf/logs/portworx",
"receiveTimestamp": "2019-12-13T19:53:03.888935160Z"
},
I believe the real issue is the init containers:
{
"textPayload": "time=\"2019-12-13T19:57:45Z\" level=fatal msg=\"Error initializing lighthouse config. timed out performing task\"\n",
"insertId": "yp1yvsfbkl3qs",
"resource": {
"type": "container",
"labels": {
"zone": "us-central1-a",
"pod_id": "px-lighthouse-6956876b8c-s9hkv",
"container_name": "config-init",
"namespace_id": "default",
"instance_id": "7794876890050628760"
}
},
"timestamp": "2019-12-13T19:57:45.107168588Z",
"severity": "ERROR",
"labels": {
"container.googleapis.com/pod_name": "px-lighthouse-6956876b8c-s9hkv",
"container.googleapis.com/stream": "stderr",
"container.googleapis.com/namespace_name": "default",
"compute.googleapis.com/resource_name": "gke-werkstorage-default-pool-2c8f45a9-j98r"
},
"logName": "projects/zeroclusterf/logs/config-init",
"receiveTimestamp": "2019-12-13T19:57:50.605441492Z"
},
{
"textPayload": "2019/12/13 19:57:36 Get http://portworx-service:9001/config: dial tcp 10.4.25.189:9001: connect: connection refused Next retry in: 10s\n",
"insertId": "1b68kb2f7myv41",
"resource": {
"type": "container",
"labels": {
"zone": "us-central1-a",
"pod_id": "px-lighthouse-6956876b8c-s9hkv",
"container_name": "config-init",
"namespace_id": "default",
"instance_id": "7794876890050628760"
}
},
"timestamp": "2019-12-13T19:57:36.577590011Z",
"severity": "ERROR",
"labels": {
"compute.googleapis.com/resource_name": "gke-werkstorage-default-pool-2c8f45a9-j98r",
"container.googleapis.com/pod_name": "px-lighthouse-6956876b8c-s9hkv",
"container.googleapis.com/stream": "stderr",
"container.googleapis.com/namespace_name": "default"
},
"logName": "projects/zeroclusterf/logs/config-init",
"receiveTimestamp": "2019-12-13T19:57:43.495371480Z"
},
Can you share the logs from the pod I mentioned?
kubectl get pods --all-namespaces -l name=stork
NAMESPACE NAME READY STATUS RESTARTS AGE
default stork-68cc4c8d5f-gmbch 1/1 Running 1 14h
default stork-68cc4c8d5f-mbxks 1/1 Running 1 14h
default stork-68cc4c8d5f-rhd4b 1/1 Running 0 14h
kubectl logs -l name=stork -n kube-system
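Note: the listing above shows the stork pods in the default namespace, so the kube-system log command returns nothing; a version adjusted for this cluster might look like:

```shell
# The stork pods here live in the default namespace, not kube-system.
kubectl logs -l name=stork -n default 2>/dev/null || true
```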
Here's the result of kubectl logs portworx-8dtgq:
portworx.log
time="2019-12-13T21:39:39Z" level=info msg="Input arguments: /px-oci-mon -s type=pd-ssd,size=500 -b -c portworx-cluster -secret_type k8s -x kubernetes -metadata type=pd-ssd,size=64 -max_storage_nodes_per_zone 3"
time="2019-12-13T21:39:39Z" level=info msg="Updated arguments: /px-oci-mon -s type=pd-ssd,size=500 -b -c portworx-cluster -secret_type k8s -x kubernetes -metadata type=pd-ssd,size=64 -max_storage_nodes_per_zone 3"
time="2019-12-13T21:39:39Z" level=info msg="Service handler initialized via as DBus{type:dbus,svc:portworx.service,id:0xc420183300}"
time="2019-12-13T21:39:39Z" level=info msg="Activating REST server"
time="2019-12-13T21:39:39Z" level=info msg="Negotiated Docker API version: 1.32"
time="2019-12-13T21:39:39Z" level=info msg="Using Docker as container handler"
time="2019-12-13T21:39:39Z" level=info msg="Detected NetworkMode container:d1a67f3e0746c660db2aad94c1d8b8ff38ce272f7b41fa636f6d1bfc671c0225 -> polling d1a67f3e0746 for network settings"
time="2019-12-13T21:39:39Z" level=info msg="Detected HostNetwork setting - will track portworx status via REST"
time="2019-12-13T21:39:39Z" level=info msg="Requested PX-OCI from portworx/px-enterprise:2.0.3.3 via env. variable"
time="2019-12-13T21:39:39Z" level=info msg="Preparing to download Portworx image..."
This file has been truncated.
Hi,
Is this a fresh install? It looks like you have tried the installation multiple times.
Are these the complete logs?