Hi all,
I have 2 internal K8s clusters (built with K3s).
There is 1 default StorageClass backed by csi-driver-nfs, and I have 1 QNAP NAS device as the NFS server.
I installed K10 with Helm 3.
I installed Portworx Backup with Helm 3, and Stork via a YAML deployment.
I added both K8s clusters to Portworx Central and added an AWS S3 account.
I can back up a simple nginx app (with a 50 MB PV for logs) from cluster A and restore it to cluster B successfully.
I created pre-exec and post-exec rules for MySQL following this doc: https://docs.portworx.com/portworx-backup-on-prem/use-px-backup/backup-stateful-applications/mysql
When I try to back up an application with a MySQL deployment and volume, offloading the CSI snapshots to the backup location, the backup job fails with this error:
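In case it helps, this is roughly what my two rules boil down to, written out as Stork-style Rule resources (a sketch: the rule names match my backup JSON below, the pod selector matches my deployment, and the commands follow the linked doc; the exact stored format may differ since I created them through the PX-Backup UI):

```yaml
# Sketch of the pre-exec rule: quiesce MySQL and hold the read lock
# in the background until the snapshot is taken.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: Rule
metadata:
  name: backup-mysql-pre-exec-rules
rules:
  - podSelector:
      app: mysql
    actions:
      - type: command
        background: true
        runInSinglePod: true
        value: mysql --user=root --password=$MYSQL_ROOT_PASSWORD -Bse 'flush tables with read lock;system ${WAIT_CMD};'
---
# Sketch of the post-exec rule: release the read lock after the snapshot.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: Rule
metadata:
  name: backup-mysql-post-exec-rules
rules:
  - podSelector:
      app: mysql
    actions:
      - type: command
        value: mysql --user=root --password=$MYSQL_ROOT_PASSWORD -Bse 'flush tables;'
```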
BACKUP STATUS - FAILED
Backup failed at stage Final for volume:
Here is the JSON detail:
{
  "metadata": {
    "name": "ocs-inventory-backup",
    "uid": "eff5403d-ed4b-4120-937b-63defb028716",
    "org_id": "default",
    "create_time": {
      "seconds": 1697543519,
      "nanos": 197123059
    },
    "last_update_time": {
      "seconds": 1697543936,
      "nanos": 284655573
    },
    "create_time_in_sec": 1697543519,
    "ownership": {
      "owner": "086209a4-22a9-4cdc-90bd-2eca3c079968",
      "groups": [
        {
          "id": "px-admin-group",
          "access": 3
        }
      ],
      "public": {}
    },
    "DSTYPE": "BACKUP",
    "DSNAMESPACE": "ocs-inventory",
    "DSSTATUSTYPE": "FAILED",
    "_DSSIZE": 0
  },
  "backup_info": {
    "cluster": "k3s-hq",
    "namespaces": [
      "ocs-inventory"
    ],
    "status": {
      "status": 4,
      "reason": "Backup failed at stage Final for volume: "
    },
    "volumes": [
      {
        "name": "pvc-4ff12ffb-e929-4862-8500-078d13041d1c",
        "namespace": "ocs-inventory",
        "pvc": "ocs-pvc",
        "status": {
          "status": 4,
          "reason": "Backup failed at stage Final for volume: "
        },
        "driver_name": "kdmp",
        "storage_class": "storageclass-csi-nfs-sgnnas03",
        "pvc_id": "4ff12ffb-e929-4862-8500-078d13041d1c",
        "provisioner": "nfs.csi.k8s.io"
      }
    ],
    "stage": 6,
    "pre_exec_rule": "backup-mysql-pre-exec-rules",
    "post_exec_rule": "backup-mysql-post-exec-rules",
    "cr_name": "ocs-inventory-backup-eff5403",
    "csi_snapshot_class_name": "csi-nfs-snapclass",
    "backup_location_ref": {
      "name": "s3-hq",
      "uid": "1529338f-d48a-439e-8232-0f1b7250f1f6"
    },
    "pre_exec_rule_ref": {
      "name": "backup-mysql-pre-exec-rules",
      "uid": "c1d2f157-2615-4e29-a238-c8201ee9dbb9"
    },
    "post_exec_rule_ref": {
      "name": "backup-mysql-post-exec-rules",
      "uid": "0e6f9f05-5fe6-4c3b-99fd-6fedf7b38bff"
    },
    "cloud_credential_ref": {},
    "backup_type": {
      "type": 1
    },
    "cr_uid": "08a57899-ab0f-4004-b1bc-f17703e39f8c",
    "user_backupshare_access": 3,
    "cluster_ref": {
      "name": "k3s-hq",
      "uid": "0b787809-71cf-4bbf-ac6d-24db7129730c"
    },
    "target_namespace": "ocs-inventory",
    "counts": {
      "volumes": 1,
      "resources": 0,
      "pxd": 0,
      "aws": 0,
      "azure": 0,
      "gce": 0,
      "csi": 0,
      "kdmp": 1,
      "statCounts": []
    }
  },
  "backup_selected": false,
  "isResourcesBackupInProgress": false,
  "isDeletionInProgress": false,
  "isDeletionPending": false,
  "isFailed": true
}
The MySQL deployment is simple:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: ocs-inventory
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      serviceAccountName: ocs-sa
      containers:
        - image: mysql:5.7
          name: mysql
          envFrom:
            - configMapRef:
                name: ocs-cm
            - secretRef:
                name: ocs-config
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-pv
              mountPath: /var/lib/mysql
              subPath: mysqldata
            - name: mysql-config
              mountPath: /etc/my.cnf
              subPath: my.cnf
              readOnly: true
      volumes:
        - name: mysql-pv
          persistentVolumeClaim:
            claimName: ocs-pvc
        - name: mysql-config
          configMap:
            name: mysql-cm
How can I troubleshoot this? Please give me some advice. Thank you very much.
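For reference, these are the checks I plan to run next, since the volume uses the kdmp driver (a sketch; I'm assuming Stork runs in kube-system and that the ApplicationBackup CR and KDMP job pods live in the application namespace, which may differ in other setups):

```shell
# List the KDMP DataExport objects created for the failed volume
kubectl get dataexports.kdmp.portworx.com -A

# Describe the backup CR that PX-Backup created for this job
# (cr_name from the JSON above)
kubectl describe applicationbackups.stork.libopenstorage.org \
  -n ocs-inventory ocs-inventory-backup-eff5403

# Look for the KDMP job pods that copy the PVC data to S3
kubectl get pods -n ocs-inventory | grep -i kdmp

# Stork logs often contain the real error behind "failed at stage Final"
kubectl logs -n kube-system -l name=stork --tail=200
```

If anyone knows which of these (or other logs) usually reveals the cause of a KDMP offload failure on an NFS-backed CSI volume, please point me there.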