
Error Logs:
time="2023-03-25T17:57:03Z" level=info msg="crunchy-pgbackrest starts"
time="2023-03-25T17:57:03Z" level=info msg="debug flag set to false"
time="2023-03-25T17:57:03Z" level=info msg="backrest backup command requested"
time="2023-03-25T17:57:03Z" level=info msg="backrest command will be executed for both local and s3 storage"
time="2023-03-25T17:57:03Z" level=info msg="command to execute is [pgbackrest backup --stanza=db --type=full --repo1-retention-full=2 --db-host=10.128.2.56 --db-path=/pgdata/dev-eng-pg-cluster-repl3 ; pgbackrest backup --stanza=db --type=full --repo1-retention-full=2 --db-host=10.128.2.56 --db-path=/pgdata/dev-eng-pg-cluster-repl3 --repo1-type=s3 --no-repo1-s3-verify-tls]"
time="2023-03-25T17:57:17Z" level=info msg="output="
time="2023-03-25T17:57:17Z" level=info msg="stderr=[ERROR: [037]: backup command requires option: repo1-s3-bucket\n]"
time="2023-03-25T17:57:17Z" level=fatal msg="command terminated with exit code 37"
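For context, pgBackRest exit code 37 corresponds to error 037 (a required option is missing): the S3 leg of the command above never received `repo1-s3-bucket`. Outside the operator, these repo options would normally be supplied in pgbackrest.conf, roughly like this (a sketch only; the values are placeholders, not taken from this cluster — the operator normally injects them via `PGBACKREST_REPO1_*` environment variables instead):

```ini
; /etc/pgbackrest/pgbackrest.conf -- minimal sketch of the S3 repo options
; values below are placeholders
[global]
repo1-type=s3
repo1-s3-bucket=my-backup-bucket
repo1-s3-endpoint=s3.amazonaws.com
repo1-s3-region=us-west-2
```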

Thanks in advance!

Thanks for the reply. I have tried using different forms of the bucket name with no luck; hopefully you can see something I am missing. Below is my latest attempt at the CR section:

    storages:
      s3-us-west:
        type: s3
        bucket: my-backup-bucket
        region: us-west-2
        endpointUrl: s3.amazonaws.com
        verifyTLS: true
        uriStyle: path
    schedule:
      - name: "s3-backup"
        schedule: "*/3 * * * *"
        keep: 2
        type: full
        storage: s3-us-west
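For completeness, the 1.x operator also expects the S3 credentials in a per-cluster secret; judging from the `secretKeyRef` entries in the pod spec later in this thread, it is named `<cluster>-backrest-repo-config` with `aws-s3-key` / `aws-s3-key-secret` keys. A sketch (key names inferred from that pod spec; values are placeholders):

```yaml
# Sketch: S3 credentials secret that the backrest pods reference.
# Key names taken from the pod spec's secretKeyRef entries; values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: dev-eng-pg-cluster-backrest-repo-config
  namespace: pgo
type: Opaque
stringData:
  aws-s3-key: AKIAEXAMPLE             # placeholder access key ID
  aws-s3-key-secret: wJalrEXAMPLEKEY  # placeholder secret access key
```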

Contents of schedule configmap that is created:
{"version":"","name":"s3-backup","created":"0001-01-01T00:00:00Z","schedule":"*/3 * * * *","namespace":"pgo","type":"pgbackrest","cluster":"dev-eng-pg-cluster","pgbackrest":{"deployment":"dev-eng-pg-cluster","label":"","container":"database","type":"full","storageType":"s3","options":"--repo1-retention-full=2"},"policy":{"secret":"","name":"","imagePrefix":"","imageTag":"","database":""}}

Error log:
Wed Apr 5 13:42:01 UTC 2023 INFO: Image mode found: pgbackrest
Wed Apr 5 13:42:01 UTC 2023 INFO: Starting in 'pgbackrest' mode
time="2023-04-05T13:42:01Z" level=info msg="crunchy-pgbackrest starts"
time="2023-04-05T13:42:01Z" level=info msg="debug flag set to false"
time="2023-04-05T13:42:01Z" level=info msg="backrest backup command requested"
time="2023-04-05T13:42:01Z" level=info msg="backrest command will be executed for both local and s3 storage"
time="2023-04-05T13:42:01Z" level=info msg="command to execute is [pgbackrest backup --stanza=db --type=full --repo1-retention-full=2 --db-host=10.129.2.13 --db-path=/pgdata/dev-eng-pg-cluster-repl2 ; pgbackrest backup --stanza=db --type=full --repo1-retention-full=2 --db-host=10.129.2.13 --db-path=/pgdata/dev-eng-pg-cluster-repl2 --repo1-type=s3 --no-repo1-s3-verify-tls]"
time="2023-04-05T13:42:14Z" level=info msg="output="
time="2023-04-05T13:42:14Z" level=info msg="stderr=[ERROR: [037]: backup command requires option: repo1-s3-bucket\n]"
time="2023-04-05T13:42:14Z" level=fatal msg="command terminated with exit code 37"

Interesting, so if we need to change storages, the cluster needs to be recreated? I am assuming schedules can be changed at any time? That is good to know. Before I do that, where would an S3 bucket prefix go in the configuration? I didn't see a prefix field as an option, so can it be part of the bucket name field, i.e. my-bucket-name/folder1/folder2?

Thanks for the help…

Well, that is a major design limitation of the 1.x pg operator family, and one of the reasons pg operator 2.0.0 (beta) came out: 2.0.0 allows changing backup settings.
If you need to offset the repo root inside your bucket, feel free to use the repoPath option. It lets you add intermediate folders beneath the bucket name.
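In CR terms that would look roughly like the sketch below (the placement of repoPath alongside storages under the backup section is an assumption based on the 1.x CR excerpt earlier in this thread; verify it against your CR version):

```yaml
# Sketch only: repoPath offsets the repo root inside the bucket,
# so backups land under s3://my-backup-bucket/dev-eng/postgres-cluster/...
backup:
  repoPath: /dev-eng/postgres-cluster
  storages:
    s3-us-west:
      type: s3
      bucket: my-backup-bucket
      region: us-west-2
```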

Thanks again for the reply; it looks like we are getting closer. I specified the repoPath /dev/postgres_cluster, and during cluster creation the S3 bucket does appear to be getting updated with data. But eventually the pgbackrest pod errors out with the error below. In AWS, both of the files flagged below as missing do actually exist. Any thoughts?

Wed Apr 5 16:45:29 UTC 2023 INFO: Image mode found: pgbackrest
Wed Apr 5 16:45:29 UTC 2023 INFO: Starting in 'pgbackrest' mode
time="2023-04-05T16:45:30Z" level=info msg="crunchy-pgbackrest starts"
time="2023-04-05T16:45:30Z" level=info msg="debug flag set to false"
time="2023-04-05T16:45:30Z" level=info msg="backrest backup command requested"
time="2023-04-05T16:45:30Z" level=info msg="backrest command will be executed for both local and s3 storage"
time="2023-04-05T16:45:30Z" level=info msg="command to execute is [pgbackrest backup --type=full --db-host=10.129.2.152 --db-path=/pgdata/dev-pg-cluster ; pgbackrest backup --type=full --db-host=10.129.2.152 --db-path=/pgdata/dev-pg-cluster --repo1-type=s3]"
time="2023-04-05T16:46:38Z" level=info msg="output="
time="2023-04-05T16:46:38Z" level=info msg="stderr=[WARN: option 'repo1-retention-full' is not set for 'repo1-retention-full-type=count', the repository may run out of space\n HINT: to retain full backups indefinitely (without warning), set option 'repo1-retention-full' to the maximum.\nERROR: [055]: unable to load info file '/dev/postgres-cluster/backup/db/backup.info' or '/dev/postgres-cluster/backup/db/backup.info.copy':\n FileMissingError: unable to open missing file '/dev/postgres-cluster/backup/db/backup.info' for read\n FileMissingError: unable to open missing file '/dev/postgres-cluster/backup/db/backup.info.copy' for read\n HINT: backup.info cannot be opened and is required to perform a backup.\n HINT: has a stanza-create been performed?\nWARN: option 'repo1-retention-full' is not set for 'repo1-retention-full-type=count', the repository may run out of space\n HINT: to retain full backups indefinitely (without warning), set option 'repo1-retention-…
time="2023-04-05T16:46:38Z" level=fatal msg="command terminated with exit code 82"

Yes, the stanza created just fine and it looks like it sent a backup to AWS but then that error.

I decided to move to version 1.4 of the operator to see if it helps, but now I am getting permission issues, as I am installing on OpenShift.

2023-04-05 18:47:55,658 INFO: No PostgreSQL configuration items changed, nothing to reload.
2023-04-05 18:47:55,663 INFO: Lock owner: None; I am dev-eng-pg-cluster-99665545b-qvv65
2023-04-05 18:47:55,727 INFO: trying to bootstrap a new cluster
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf-8".
The default text search configuration will be set to "english".
Data page checksums are enabled.
creating directory /pgdata/dev-eng-pg-cluster … initdb: error: could not create directory "/pgdata/dev-eng-pg-cluster": Permission denied
pg_ctl: database system initialization failed
2023-04-05 18:47:55,796 INFO: removing initialize key after failed attempt to bootstrap the cluster

annotations: k8s.ovn.org/pod-networks: >- {"default":{"ip_addresses":["10.129.2.169/23"],"mac_address":"0a:58:0a:81:02:a9","gateway_ips":["10.129.2.1"],"ip_address":"10.129.2.169/23","gateway_ip":"10.129.2.1"}} k8s.v1.cni.cncf.io/network-status: |- "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.129.2.169" "mac": "0a:58:0a:81:02:a9", "default": true, "dns": {} k8s.v1.cni.cncf.io/networks-status: |- "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.129.2.169" "mac": "0a:58:0a:81:02:a9", "default": true, "dns": {} keep-backups: 'true' keep-data: 'true' openshift.io/scc: anyuid status: >- {"conn_url":"postgres://10.129.2.169:5432/postgres","api_url":"http://10.129.2.169:8009/patroni","state":"stopped","role":"uninitialized","version":"2.1.4"} resourceVersion: '32456817' name: dev-eng-pg-cluster-99665545b-6txbx uid: f377f440-0f7e-45e3-ad39-854b7ac96f15 creationTimestamp: '2023-04-05T19:08:58Z' managedFields: - manager: Go-http-client operation: Update apiVersion: v1 time: '2023-04-05T19:08:58Z' fieldsType: FieldsV1 fieldsV1: 'f:metadata': 'f:labels': 'f:service-name': {} - manager: kube-controller-manager operation: Update apiVersion: v1 time: '2023-04-05T19:08:58Z' fieldsType: FieldsV1 fieldsV1: 'f:metadata': 'f:annotations': .: {} 'f:keep-backups': {} 'f:keep-data': {} 'f:generateName': {} 'f:labels': 'f:pod-template-hash': {} 'f:pgo-pg-database': {} 'f:pg-pod-anti-affinity': {} 'f:crunchy-pgha-scope': {} .: {} 'f:pgo-version': {} 'f:pgouser': {} 'f:pg-cluster': {} 'f:vendor': {} 'f:name': {} 'f:deployment-name': {} 'f:ownerReferences': .: {} 'k:{"uid":"1dcd828f-1630-46af-a3b6-611580ffb292"}': {} 'f:spec': 'f:volumes': 'k:{"name":"sshd"}': .: {} 'f:name': {} 'f:secret': .: {} 'f:defaultMode': {} 'f:secretName': {} 'k:{"name":"pgdata"}': .: {} 'f:name': {} 'f:persistentVolumeClaim': .: {} 'f:claimName': {} 'k:{"name":"podinfo"}': .: {} 'f:downwardAPI': .: {} 'f:defaultMode': {} 'f:items': {} 'f:name': {} 'k:{"name":"primary-volume"}': .: {} 
'f:name': {} 'f:secret': .: {} 'f:defaultMode': {} 'f:secretName': {} 'k:{"name":"tmp"}': .: {} 'f:emptyDir': .: {} 'f:medium': {} 'f:sizeLimit': {} 'f:name': {} 'k:{"name":"ssh-config"}': .: {} 'f:name': {} 'f:secret': .: {} 'f:defaultMode': {} 'f:items': {} 'f:secretName': {} 'k:{"name":"tls-replication"}': .: {} 'f:emptyDir': .: {} 'f:medium': {} 'f:sizeLimit': {} 'f:name': {} .: {} 'k:{"name":"user-volume"}': .: {} 'f:name': {} 'f:secret': .: {} 'f:defaultMode': {} 'f:secretName': {} 'k:{"name":"pgbackrest-config"}': .: {} 'f:name': {} 'f:projected': .: {} 'f:defaultMode': {} 'f:sources': {} 'k:{"name":"root-volume"}': .: {} 'f:name': {} 'f:secret': .: {} 'f:defaultMode': {} 'f:secretName': {} 'k:{"name":"dshm"}': .: {} 'f:emptyDir': .: {} 'f:medium': {} 'f:name': {} 'k:{"name":"report"}': .: {} 'f:emptyDir': .: {} 'f:medium': {} 'f:sizeLimit': {} 'f:name': {} 'k:{"name":"tls-server"}': .: {} 'f:name': {} 'f:projected': .: {} 'f:defaultMode': {} 'f:sources': {} 'k:{"name":"pgconf-volume"}': .: {} 'f:name': {} 'f:projected': .: {} 'f:defaultMode': {} 'f:sources': {} 'f:containers': 'k:{"name":"database"}': 'f:image': {} 'f:volumeMounts': 'k:{"mountPath":"/pgconf"}': .: {} 'f:mountPath': {} 'f:name': {} 'k:{"mountPath":"/pgconf/pguser"}': .: {} 'f:mountPath': {} 'f:name': {} 'k:{"mountPath":"/sshd"}': .: {} 'f:mountPath': {} 'f:name': {} 'f:readOnly': {} 'k:{"mountPath":"/tmp"}': .: {} 'f:mountPath': {} 'f:name': {} 'k:{"mountPath":"/etc/podinfo"}': .: {} 'f:mountPath': {} 'f:name': {} 'k:{"mountPath":"/pgdata"}': .: {} 'f:mountPath': {} 'f:name': {} 'k:{"mountPath":"/etc/pgbackrest/conf.d"}': .: {} 'f:mountPath': {} 'f:name': {} 'k:{"mountPath":"/dev/shm"}': .: {} 'f:mountPath': {} 'f:name': {} .: {} 'k:{"mountPath":"/pgconf/tls-replication"}': .: {} 'f:mountPath': {} 'f:name': {} 'k:{"mountPath":"/pgconf/tls"}': .: {} 'f:mountPath': {} 'f:name': {} 'k:{"mountPath":"/pgconf/pgreplicator"}': .: {} 'f:mountPath': {} 'f:name': {} 
'k:{"mountPath":"/pgconf/pgsuper"}': .: {} 'f:mountPath': {} 'f:name': {} 'k:{"mountPath":"/etc/ssh"}': .: {} 'f:mountPath': {} 'f:name': {} 'f:readOnly': {} 'f:terminationMessagePolicy': {} .: {} 'f:resources': .: {} 'f:limits': .: {} 'f:cpu': {} 'f:memory': {} 'f:requests': .: {} 'f:cpu': {} 'f:memory': {} 'f:livenessProbe': .: {} 'f:exec': .: {} 'f:command': {} 'f:failureThreshold': {} 'f:initialDelaySeconds': {} 'f:periodSeconds': {} 'f:successThreshold': {} 'f:timeoutSeconds': {} 'f:env': 'k:{"name":"PGBACKREST_REPO1_S3_BUCKET"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGBACKREST_REPO1_HOST_CMD"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PATRONI_SCOPE"}': .: {} 'f:name': {} 'f:valueFrom': .: {} 'f:fieldRef': {} 'k:{"name":"PGHA_STANDBY"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGBACKREST_PG1_SOCKET_PATH"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGHOST"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGBACKREST_REPO1_PATH"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGBACKREST_PG1_PORT"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PATRONI_KUBERNETES_LABELS"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGBACKREST_REPO1_S3_KEY_SECRET"}': .: {} 'f:name': {} 'f:valueFrom': .: {} 'f:secretKeyRef': {} 'k:{"name":"LD_PRELOAD"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"MODE"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGBACKREST_REPO1_HOST"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGHA_REPLICA_REINIT_ON_START_FAIL"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGHA_PGBACKREST_S3_VERIFY_TLS"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGHA_SYNC_REPLICATION"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGBACKREST_REPO1_S3_REGION"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PATRONI_KUBERNETES_NAMESPACE"}': .: {} 'f:name': {} 'f:valueFrom': .: {} 'f:fieldRef': {} 'k:{"name":"ENABLE_SSHD"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGBACKREST_REPO1_S3_ENDPOINT"}': .: {} 'f:name': {} 'f:value': {} 
'k:{"name":"PGHA_USER"}': .: {} 'f:name': {} 'f:value': {} .: {} 'k:{"name":"BACKREST_SKIP_CREATE_STANZA"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGHA_INIT"}': .: {} 'f:name': {} 'f:valueFrom': .: {} 'f:configMapKeyRef': {} 'k:{"name":"PGHA_TLS_ONLY"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PATRONI_POSTGRESQL_DATA_DIR"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGHA_PGBACKREST_LOCAL_S3_STORAGE"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGHA_TLS_ENABLED"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGHA_PGBACKREST_LOCAL_GCS_STORAGE"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGHA_PGBACKREST"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGHA_PASSWORD_TYPE"}': .: {} 'f:name': {} 'k:{"name":"PGBACKREST_STANZA"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"NSS_WRAPPER_GROUP"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PATRONI_LOG_LEVEL"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PATRONI_KUBERNETES_SCOPE_LABEL"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGBACKREST_REPO1_TYPE"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGHA_DATABASE"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGBACKREST_REPO1_S3_CA_FILE"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGBACKREST_DB_PATH"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGBACKREST_REPO1_S3_URI_STYLE"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGHA_PG_PORT"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGBACKREST_REPO1_S3_KEY"}': .: {} 'f:name': {} 'f:valueFrom': .: {} 'f:secretKeyRef': {} 'k:{"name":"NSS_WRAPPER_PASSWD"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGBACKREST_LOG_PATH"}': .: {} 'f:name': {} 'f:value': {} 'f:readinessProbe': .: {} 'f:exec': .: {} 'f:command': {} 'f:failureThreshold': {} 'f:initialDelaySeconds': {} 'f:periodSeconds': {} 'f:successThreshold': {} 'f:timeoutSeconds': {} 'f:securityContext': .: {} 'f:allowPrivilegeEscalation': {} 'f:privileged': {} 'f:readOnlyRootFilesystem': {} 'f:terminationMessagePath': {} 
'f:imagePullPolicy': {} 'f:ports': .: {} 'k:{"containerPort":5432,"protocol":"TCP"}': .: {} 'f:containerPort': {} 'f:name': {} 'f:protocol': {} 'k:{"containerPort":8009,"protocol":"TCP"}': .: {} 'f:containerPort': {} 'f:name': {} 'f:protocol': {} 'f:name': {} 'f:dnsPolicy': {} 'f:serviceAccount': {} 'f:restartPolicy': {} 'f:schedulerName': {} 'f:terminationGracePeriodSeconds': {} 'f:serviceAccountName': {} 'f:enableServiceLinks': {} 'f:securityContext': .: {} 'f:supplementalGroups': {} 'f:affinity': .: {} 'f:podAntiAffinity': .: {} 'f:preferredDuringSchedulingIgnoredDuringExecution': {} - manager: dev-eng-qccd4-master-1.novalocal operation: Update apiVersion: v1 time: '2023-04-05T19:08:59Z' fieldsType: FieldsV1 fieldsV1: 'f:metadata': 'f:annotations': 'f:k8s.ovn.org/pod-networks': {} - manager: multus operation: Update apiVersion: v1 time: '2023-04-05T19:09:12Z' fieldsType: FieldsV1 fieldsV1: 'f:metadata': 'f:annotations': 'f:k8s.v1.cni.cncf.io/network-status': {} 'f:k8s.v1.cni.cncf.io/networks-status': {} subresource: status - manager: Patroni operation: Update apiVersion: v1 time: '2023-04-05T19:09:13Z' fieldsType: FieldsV1 fieldsV1: 'f:metadata': 'f:annotations': 'f:status': {} - manager: kubelet operation: Update apiVersion: v1 time: '2023-04-05T19:09:18Z' fieldsType: FieldsV1 fieldsV1: 'f:status': 'f:conditions': 'k:{"type":"ContainersReady"}': .: {} 'f:lastProbeTime': {} 'f:lastTransitionTime': {} 'f:message': {} 'f:reason': {} 'f:status': {} 'f:type': {} 'k:{"type":"Initialized"}': .: {} 'f:lastProbeTime': {} 'f:lastTransitionTime': {} 'f:status': {} 'f:type': {} 'k:{"type":"Ready"}': .: {} 'f:lastProbeTime': {} 'f:lastTransitionTime': {} 'f:message': {} 'f:reason': {} 'f:status': {} 'f:type': {} 'f:containerStatuses': {} 'f:hostIP': {} 'f:phase': {} 'f:podIP': {} 'f:podIPs': .: {} 'k:{"ip":"10.129.2.169"}': .: {} 'f:ip': {} 'f:startTime': {} subresource: status namespace: pgo ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: 
dev-eng-pg-cluster-99665545b uid: 1dcd828f-1630-46af-a3b6-611580ffb292 controller: true blockOwnerDeletion: true labels: pgouser: admin pgo-version: 1.4.0 service-name: dev-eng-pg-cluster pg-cluster: dev-eng-pg-cluster vendor: crunchydata name: dev-eng-pg-cluster deployment-name: dev-eng-pg-cluster pg-pod-anti-affinity: preferred crunchy-pgha-scope: dev-eng-pg-cluster pod-template-hash: 99665545b pgo-pg-database: 'true' spec: restartPolicy: Always serviceAccountName: pgo-pg imagePullSecrets: - name: postgres-operator-dockercfg-f7xj9 - name: pgo-pg-dockercfg-dmhw6 priority: 0 schedulerName: default-scheduler enableServiceLinks: true affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 podAffinityTerm: labelSelector: matchExpressions: - key: vendor operator: In values: - crunchydata - key: pg-pod-anti-affinity operator: Exists - key: pg-cluster operator: In values: - dev-eng-pg-cluster topologyKey: kubernetes.io/hostname terminationGracePeriodSeconds: 30 preemptionPolicy: PreemptLowerPriority nodeName: dev-eng-qccd4-worker-0-s6qxn securityContext: seLinuxOptions: level: 's0:c30,c25' supplementalGroups: - 1001 containers: - resources: limits: cpu: '6' memory: 12Gi requests: cpu: '1' memory: 256Mi readinessProbe: exec: command: - /opt/crunchy/bin/postgres-ha/health/pgha-readiness.sh initialDelaySeconds: 15 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 terminationMessagePath: /dev/termination-log name: database livenessProbe: exec: command: - /opt/crunchy/bin/postgres-ha/health/pgha-liveness.sh initialDelaySeconds: 30 timeoutSeconds: 10 periodSeconds: 15 successThreshold: 1 failureThreshold: 3 - name: MODE value: postgres - name: PGHA_PG_PORT value: '5432' - name: PGHA_USER value: postgres - name: PGHA_INIT valueFrom: configMapKeyRef: name: dev-eng-pg-cluster-pgha-config key: init - name: PATRONI_POSTGRESQL_DATA_DIR value: /pgdata/dev-eng-pg-cluster - name: PGBACKREST_REPO1_S3_BUCKET value: 
hart-okd-backups - name: PGBACKREST_REPO1_S3_ENDPOINT value: s3.us-west-2.amazonaws.com - name: PGBACKREST_REPO1_S3_REGION value: us-west-2 - name: PGBACKREST_REPO1_S3_KEY valueFrom: secretKeyRef: name: dev-eng-pg-cluster-backrest-repo-config key: aws-s3-key - name: PGBACKREST_REPO1_S3_KEY_SECRET valueFrom: secretKeyRef: name: dev-eng-pg-cluster-backrest-repo-config key: aws-s3-key-secret - name: PGBACKREST_REPO1_S3_CA_FILE value: /sshd/aws-s3-ca.crt - name: PGBACKREST_REPO1_HOST_CMD value: /usr/local/bin/archive-push-s3.sh - name: PGBACKREST_REPO1_S3_URI_STYLE value: path - name: PGHA_PGBACKREST_S3_VERIFY_TLS value: 'false' - name: PGBACKREST_STANZA value: db - name: PGBACKREST_REPO1_HOST value: dev-eng-pg-cluster-backrest-shared-repo - name: BACKREST_SKIP_CREATE_STANZA value: 'true' - name: PGHA_PGBACKREST value: 'true' - name: PGBACKREST_REPO1_PATH value: /dev-eng/postgres-cluster/ - name: PGBACKREST_DB_PATH value: /pgdata/dev-eng-pg-cluster - name: ENABLE_SSHD value: 'true' - name: PGBACKREST_LOG_PATH value: /tmp - name: PGBACKREST_PG1_SOCKET_PATH value: /tmp - name: PGBACKREST_PG1_PORT value: '5432' - name: PGBACKREST_REPO1_TYPE value: posix - name: PGHA_PGBACKREST_LOCAL_S3_STORAGE value: 'true' - name: PGHA_PGBACKREST_LOCAL_GCS_STORAGE value: 'false' - name: PGHA_DATABASE value: pgdb - name: PGHA_REPLICA_REINIT_ON_START_FAIL value: 'true' - name: PGHA_SYNC_REPLICATION value: 'false' - name: PGHA_TLS_ENABLED value: 'true' - name: PGHA_TLS_ONLY value: 'false' - name: PGHA_PASSWORD_TYPE - name: PGHA_STANDBY value: 'false' - name: PATRONI_KUBERNETES_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: PATRONI_KUBERNETES_SCOPE_LABEL value: crunchy-pgha-scope - name: PATRONI_SCOPE valueFrom: fieldRef: apiVersion: v1 fieldPath: 'metadata.labels[''crunchy-pgha-scope'']' - name: PATRONI_KUBERNETES_LABELS value: '{vendor: "crunchydata"}' - name: PATRONI_LOG_LEVEL value: INFO - name: PGHOST value: /tmp - name: LD_PRELOAD value: 
/usr/lib64/libnss_wrapper.so - name: NSS_WRAPPER_PASSWD value: /tmp/nss_wrapper/postgres/passwd - name: NSS_WRAPPER_GROUP value: /tmp/nss_wrapper/postgres/group securityContext: capabilities: drop: - MKNOD privileged: false readOnlyRootFilesystem: true allowPrivilegeEscalation: false ports: - name: postgres containerPort: 5432 protocol: TCP - name: patroni containerPort: 8009 protocol: TCP imagePullPolicy: IfNotPresent volumeMounts: - name: pgdata mountPath: /pgdata - name: user-volume mountPath: /pgconf/pguser - name: primary-volume mountPath: /pgconf/pgreplicator - name: root-volume mountPath: /pgconf/pgsuper - name: tls-server mountPath: /pgconf/tls - name: tls-replication mountPath: /pgconf/tls-replication - name: sshd readOnly: true mountPath: /sshd - name: ssh-config readOnly: true mountPath: /etc/ssh - name: pgconf-volume mountPath: /pgconf - name: dshm mountPath: /dev/shm - name: pgbackrest-config mountPath: /etc/pgbackrest/conf.d - name: podinfo mountPath: /etc/podinfo - name: tmp mountPath: /tmp - name: kube-api-access-dzkl8 readOnly: true mountPath: /var/run/secrets/kubernetes.io/serviceaccount terminationMessagePolicy: File image: 'percona/percona-postgresql-operator:1.4.0-ppg14-postgres-ha' serviceAccount: pgo-pg volumes: - name: pgdata persistentVolumeClaim: claimName: dev-eng-pg-cluster - name: user-volume secret: secretName: dev-eng-pg-cluster-pguser-secret defaultMode: 420 - name: primary-volume secret: secretName: dev-eng-pg-cluster-primaryuser-secret defaultMode: 420 - name: sshd secret: secretName: dev-eng-pg-cluster-backrest-repo-config defaultMode: 420 - name: ssh-config secret: secretName: dev-eng-pg-cluster-backrest-repo-config items: - key: config path: ssh_config defaultMode: 420 - name: root-volume secret: secretName: dev-eng-pg-cluster-postgres-secret defaultMode: 420 - name: tls-server projected: sources: - secret: name: dev-eng-pg-cluster-ssl-keypair - secret: name: dev-eng-pg-cluster-replication-ssl-keypair items: - key: tls.key path: 
tls-replication.key - key: tls.crt path: tls-replication.crt - secret: name: dev-eng-pg-cluster-ssl-ca defaultMode: 288 - name: tls-replication emptyDir: medium: Memory sizeLimit: 2Mi - name: report emptyDir: medium: Memory sizeLimit: 64Mi - name: dshm emptyDir: medium: Memory - name: tmp emptyDir: medium: Memory sizeLimit: 16Mi - name: pgbackrest-config projected: sources: - configMap: name: dev-eng-pg-cluster-config-backrest optional: true - secret: name: dev-eng-pg-cluster-config-backrest optional: true defaultMode: 420 - name: pgconf-volume projected: sources: - configMap: name: dev-eng-pg-cluster-pgha-config optional: true defaultMode: 420 - name: podinfo downwardAPI: items: - path: cpu_limit resourceFieldRef: containerName: database resource: limits.cpu divisor: 1m - path: cpu_request resourceFieldRef: containerName: database resource: requests.cpu divisor: 1m - path: mem_limit resourceFieldRef: containerName: database resource: limits.memory divisor: '0' - path: mem_request resourceFieldRef: containerName: database resource: requests.memory divisor: '0' - path: labels fieldRef: apiVersion: v1 fieldPath: metadata.labels - path: annotations fieldRef: apiVersion: v1 fieldPath: metadata.annotations defaultMode: 420 - name: kube-api-access-dzkl8 projected: sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: name: kube-root-ca.crt items: - key: ca.crt path: ca.crt - downwardAPI: items: - path: namespace fieldRef: apiVersion: v1 fieldPath: metadata.namespace - configMap: name: openshift-service-ca.crt items: - key: service-ca.crt path: service-ca.crt defaultMode: 420 dnsPolicy: ClusterFirst tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300 - key: node.kubernetes.io/memory-pressure operator: Exists effect: NoSchedule status: phase: Running conditions: - type: Initialized status: 'True' 
lastProbeTime: null lastTransitionTime: '2023-04-05T19:08:59Z' - type: Ready status: 'False' lastProbeTime: null lastTransitionTime: '2023-04-05T19:08:59Z' reason: ContainersNotReady message: 'containers with unready status: [database]' - type: ContainersReady status: 'False' lastProbeTime: null lastTransitionTime: '2023-04-05T19:08:59Z' reason: ContainersNotReady message: 'containers with unready status: [database]' - type: PodScheduled status: 'True' lastProbeTime: null lastTransitionTime: '2023-04-05T19:08:59Z' hostIP: 10.0.1.154 podIP: 10.129.2.169 podIPs: - ip: 10.129.2.169 startTime: '2023-04-05T19:08:59Z' containerStatuses: - restartCount: 1 started: false ready: false name: database state: waiting: reason: CrashLoopBackOff message: >- back-off 10s restarting failed container=database pod=dev-eng-pg-cluster-99665545b-6txbx_pgo(f377f440-0f7e-45e3-ad39-854b7ac96f15) imageID: >- docker.io/percona/percona-postgresql-operator@sha256:93b69b07914f1ed43812013b7bfc75692369bc39fc24349853584165fc192e45 image: 'docker.io/percona/percona-postgresql-operator:1.4.0-ppg14-postgres-ha' lastState: terminated: exitCode: 0 reason: Completed startedAt: '2023-04-05T19:09:15Z' finishedAt: '2023-04-05T19:09:17Z' containerID: >- cri-o://4d110307223888180138d2a48e3d4804953f7180921490e9e55eb6cd2d58558c containerID: 'cri-o://4d110307223888180138d2a48e3d4804953f7180921490e9e55eb6cd2d58558c' qosClass: Burstable

It looks like you need to set disable_fsgroup: false, as stated in the Install on OpenShift section of the Percona Operator for PostgreSQL installation docs.
Please reinstall the operator and try to redeploy the cluster.

Hello,
If you are referring to the following line, I made sure to do that initially. Just in case, I blew away the namespace and started over again, with the same issue.

sed -i '/disable_auto_failover: "false"/a \ \ \ \ disable_fsgroup: "false"' deploy/operator.yaml
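For reference, that sed appends an indented `disable_fsgroup: "false"` line immediately after the `disable_auto_failover` line. You can sanity-check that the substitution behaves as intended on a throwaway copy rather than the real deploy/operator.yaml (GNU sed assumed; the scratch path is arbitrary):

```shell
# Reproduce the edit on a scratch file instead of deploy/operator.yaml
printf '    disable_auto_failover: "false"\n' > /tmp/operator-excerpt.yaml
# Each "\ " escapes one leading space so the appended line keeps 4-space indentation
sed -i '/disable_auto_failover: "false"/a \ \ \ \ disable_fsgroup: "false"' /tmp/operator-excerpt.yaml
cat /tmp/operator-excerpt.yaml
```

After running it, the scratch file should contain both lines at the same indentation, which is what the operator deployment expects.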

kind: Pod
apiVersion: v1
metadata:
  generateName: dev-eng-pg-cluster-99665545b-
  annotations:
    k8s.ovn.org/pod-networks: >-
      {"default":{"ip_addresses":["10.129.2.174/23"],"mac_address":"0a:58:0a:81:02:ae","gateway_ips":["10.129.2.1"],"ip_address":"10.129.2.174/23","gateway_ip":"10.129.2.1"}}
    k8s.v1.cni.cncf.io/network-status: |-
          "name": "ovn-kubernetes",
          "interface": "eth0",
          "ips": [
              "10.129.2.174"
          "mac": "0a:58:0a:81:02:ae",
          "default": true,
          "dns": {}
    k8s.v1.cni.cncf.io/networks-status: |-
          "name": "ovn-kubernetes",
          "interface": "eth0",
          "ips": [
              "10.129.2.174"
          "mac": "0a:58:0a:81:02:ae",
          "default": true,
          "dns": {}
    keep-backups: 'true'
    keep-data: 'true'
    openshift.io/scc: anyuid
    status: >-
      {"conn_url":"postgres://10.129.2.174:5432/postgres","api_url":"http://10.129.2.174:8009/patroni","state":"stopped","role":"uninitialized","version":"2.1.4"}
  resourceVersion: '32470037'
  name: dev-eng-pg-cluster-99665545b-hp4gv
  uid: c8946c7b-b216-41ca-b64d-5fe83fa4aeda
  creationTimestamp: '2023-04-05T19:37:24Z'
  managedFields:
    - manager: Go-http-client
      operation: Update
      apiVersion: v1
      time: '2023-04-05T19:37:24Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:labels':
            'f:service-name': {}
    - manager: kube-controller-manager
      operation: Update
      apiVersion: v1
      time: '2023-04-05T19:37:24Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:keep-backups': {}
            'f:keep-data': {}
          'f:generateName': {}
          'f:labels':
            'f:pod-template-hash': {}
            'f:pgo-pg-database': {}
            'f:pg-pod-anti-affinity': {}
            'f:crunchy-pgha-scope': {}
            .: {}
            'f:pgo-version': {}
            'f:pgouser': {}
            'f:pg-cluster': {}
            'f:vendor': {}
            'f:name': {}
            'f:deployment-name': {}
          'f:ownerReferences':
            .: {}
            'k:{"uid":"9d53aff3-3453-4ea2-af77-f5d3e8db26ed"}': {}
        'f:spec':
          'f:volumes':
            'k:{"name":"sshd"}':
              .: {}
              'f:name': {}
              'f:secret':
                .: {}
                'f:defaultMode': {}
                'f:secretName': {}
            'k:{"name":"pgdata"}':
              .: {}
              'f:name': {}
              'f:persistentVolumeClaim':
                .: {}
                'f:claimName': {}
            'k:{"name":"podinfo"}':
              .: {}
              'f:downwardAPI':
                .: {}
                'f:defaultMode': {}
                'f:items': {}
              'f:name': {}
            'k:{"name":"primary-volume"}':
              .: {}
              'f:name': {}
              'f:secret':
                .: {}
                'f:defaultMode': {}
                'f:secretName': {}
            'k:{"name":"tmp"}':
              .: {}
              'f:emptyDir':
                .: {}
                'f:medium': {}
                'f:sizeLimit': {}
              'f:name': {}
            'k:{"name":"ssh-config"}':
              .: {}
              'f:name': {}
              'f:secret':
                .: {}
                'f:defaultMode': {}
                'f:items': {}
                'f:secretName': {}
            'k:{"name":"tls-replication"}':
              .: {}
              'f:emptyDir':
                .: {}
                'f:medium': {}
                'f:sizeLimit': {}
              'f:name': {}
            .: {}
            'k:{"name":"user-volume"}':
              .: {}
              'f:name': {}
              'f:secret':
                .: {}
                'f:defaultMode': {}
                'f:secretName': {}
            'k:{"name":"pgbackrest-config"}':
              .: {}
              'f:name': {}
              'f:projected':
                .: {}
                'f:defaultMode': {}
                'f:sources': {}
            'k:{"name":"root-volume"}':
              .: {}
              'f:name': {}
              'f:secret':
                .: {}
                'f:defaultMode': {}
                'f:secretName': {}
            'k:{"name":"dshm"}':
              .: {}
              'f:emptyDir':
                .: {}
                'f:medium': {}
              'f:name': {}
            'k:{"name":"report"}':
              .: {}
              'f:emptyDir':
                .: {}
                'f:medium': {}
                'f:sizeLimit': {}
              'f:name': {}
            'k:{"name":"tls-server"}':
              .: {}
              'f:name': {}
              'f:projected':
                .: {}
                'f:defaultMode': {}
                'f:sources': {}
            'k:{"name":"pgconf-volume"}':
              .: {}
              'f:name': {}
              'f:projected':
                .: {}
                'f:defaultMode': {}
                'f:sources': {}
          'f:containers':
            'k:{"name":"database"}':
              'f:image': {}
              'f:volumeMounts':
                'k:{"mountPath":"/pgconf"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                'k:{"mountPath":"/pgconf/pguser"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                'k:{"mountPath":"/sshd"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                  'f:readOnly': {}
                'k:{"mountPath":"/tmp"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                'k:{"mountPath":"/etc/podinfo"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                'k:{"mountPath":"/pgdata"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                'k:{"mountPath":"/etc/pgbackrest/conf.d"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                'k:{"mountPath":"/dev/shm"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                .: {}
                'k:{"mountPath":"/pgconf/tls-replication"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                'k:{"mountPath":"/pgconf/tls"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                'k:{"mountPath":"/pgconf/pgreplicator"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                'k:{"mountPath":"/pgconf/pgsuper"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                'k:{"mountPath":"/etc/ssh"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                  'f:readOnly': {}
              'f:terminationMessagePolicy': {}
              .: {}
              'f:resources':
                .: {}
                'f:limits':
                  .: {}
                  'f:cpu': {}
                  'f:memory': {}
                'f:requests':
                  .: {}
                  'f:cpu': {}
                  'f:memory': {}
              'f:livenessProbe':
                .: {}
                'f:exec':
                  .: {}
                  'f:command': {}
                'f:failureThreshold': {}
                'f:initialDelaySeconds': {}
                'f:periodSeconds': {}
                'f:successThreshold': {}
                'f:timeoutSeconds': {}
              'f:env':
                'k:{"name":"PGBACKREST_REPO1_S3_BUCKET"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGBACKREST_REPO1_HOST_CMD"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PATRONI_SCOPE"}':
                  .: {}
                  'f:name': {}
                  'f:valueFrom':
                    .: {}
                    'f:fieldRef': {}
                'k:{"name":"PGHA_STANDBY"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGBACKREST_PG1_SOCKET_PATH"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGHOST"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGBACKREST_REPO1_PATH"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGBACKREST_PG1_PORT"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PATRONI_KUBERNETES_LABELS"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGBACKREST_REPO1_S3_KEY_SECRET"}':
                  .: {}
                  'f:name': {}
                  'f:valueFrom':
                    .: {}
                    'f:secretKeyRef': {}
                'k:{"name":"LD_PRELOAD"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"MODE"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGBACKREST_REPO1_HOST"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGHA_REPLICA_REINIT_ON_START_FAIL"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGHA_PGBACKREST_S3_VERIFY_TLS"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGHA_SYNC_REPLICATION"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGBACKREST_REPO1_S3_REGION"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PATRONI_KUBERNETES_NAMESPACE"}':
                  .: {}
                  'f:name': {}
                  'f:valueFrom':
                    .: {}
                    'f:fieldRef': {}
                'k:{"name":"ENABLE_SSHD"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGBACKREST_REPO1_S3_ENDPOINT"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGHA_USER"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                .: {}
                'k:{"name":"BACKREST_SKIP_CREATE_STANZA"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGHA_INIT"}':
                  .: {}
                  'f:name': {}
                  'f:valueFrom':
                    .: {}
                    'f:configMapKeyRef': {}
                'k:{"name":"PGHA_TLS_ONLY"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PATRONI_POSTGRESQL_DATA_DIR"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGHA_PGBACKREST_LOCAL_S3_STORAGE"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGHA_TLS_ENABLED"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGHA_PGBACKREST_LOCAL_GCS_STORAGE"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGHA_PGBACKREST"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGHA_PASSWORD_TYPE"}':
                  .: {}
                  'f:name': {}
                'k:{"name":"PGBACKREST_STANZA"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"NSS_WRAPPER_GROUP"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PATRONI_LOG_LEVEL"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PATRONI_KUBERNETES_SCOPE_LABEL"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGBACKREST_REPO1_TYPE"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGHA_DATABASE"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGBACKREST_REPO1_S3_CA_FILE"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGBACKREST_DB_PATH"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGBACKREST_REPO1_S3_URI_STYLE"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGHA_PG_PORT"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGBACKREST_REPO1_S3_KEY"}':
                  .: {}
                  'f:name': {}
                  'f:valueFrom':
                    .: {}
                    'f:secretKeyRef': {}
                'k:{"name":"NSS_WRAPPER_PASSWD"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"PGBACKREST_LOG_PATH"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
              'f:readinessProbe':
                .: {}
                'f:exec':
                  .: {}
                  'f:command': {}
                'f:failureThreshold': {}
                'f:initialDelaySeconds': {}
                'f:periodSeconds': {}
                'f:successThreshold': {}
                'f:timeoutSeconds': {}
              'f:securityContext':
                .: {}
                'f:allowPrivilegeEscalation': {}
                'f:privileged': {}
                'f:readOnlyRootFilesystem': {}
              'f:terminationMessagePath': {}
              'f:imagePullPolicy': {}
              'f:ports':
                .: {}
                'k:{"containerPort":5432,"protocol":"TCP"}':
                  .: {}
                  'f:containerPort': {}
                  'f:name': {}
                  'f:protocol': {}
                'k:{"containerPort":8009,"protocol":"TCP"}':
                  .: {}
                  'f:containerPort': {}
                  'f:name': {}
                  'f:protocol': {}
              'f:name': {}
          'f:dnsPolicy': {}
          'f:serviceAccount': {}
          'f:restartPolicy': {}
          'f:schedulerName': {}
          'f:terminationGracePeriodSeconds': {}
          'f:serviceAccountName': {}
          'f:enableServiceLinks': {}
          'f:securityContext':
            .: {}
            'f:supplementalGroups': {}
          'f:affinity':
            .: {}
            'f:podAntiAffinity':
              .: {}
              'f:preferredDuringSchedulingIgnoredDuringExecution': {}
    - manager: dev-eng-qccd4-master-1.novalocal
      operation: Update
      apiVersion: v1
      time: '2023-04-05T19:37:25Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            'f:k8s.ovn.org/pod-networks': {}
    - manager: multus
      operation: Update
      apiVersion: v1
      time: '2023-04-05T19:37:35Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            'f:k8s.v1.cni.cncf.io/network-status': {}
            'f:k8s.v1.cni.cncf.io/networks-status': {}
      subresource: status
    - manager: Patroni
      operation: Update
      apiVersion: v1
      time: '2023-04-05T19:37:37Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            'f:status': {}
    - manager: kubelet
      operation: Update
      apiVersion: v1
      time: '2023-04-05T19:37:41Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:status':
          'f:conditions':
            'k:{"type":"ContainersReady"}':
              .: {}
              'f:lastProbeTime': {}
              'f:lastTransitionTime': {}
              'f:message': {}
              'f:reason': {}
              'f:status': {}
              'f:type': {}
            'k:{"type":"Initialized"}':
              .: {}
              'f:lastProbeTime': {}
              'f:lastTransitionTime': {}
              'f:status': {}
              'f:type': {}
            'k:{"type":"Ready"}':
              .: {}
              'f:lastProbeTime': {}
              'f:lastTransitionTime': {}
              'f:message': {}
              'f:reason': {}
              'f:status': {}
              'f:type': {}
          'f:containerStatuses': {}
          'f:hostIP': {}
          'f:phase': {}
          'f:podIP': {}
          'f:podIPs':
            .: {}
            'k:{"ip":"10.129.2.174"}':
              .: {}
              'f:ip': {}
          'f:startTime': {}
      subresource: status
  namespace: pgo
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: dev-eng-pg-cluster-99665545b
      uid: 9d53aff3-3453-4ea2-af77-f5d3e8db26ed
      controller: true
      blockOwnerDeletion: true
  labels:
    pgouser: admin
    pgo-version: 1.4.0
    service-name: dev-eng-pg-cluster
    pg-cluster: dev-eng-pg-cluster
    vendor: crunchydata
    name: dev-eng-pg-cluster
    deployment-name: dev-eng-pg-cluster
    pg-pod-anti-affinity: preferred
    crunchy-pgha-scope: dev-eng-pg-cluster
    pod-template-hash: 99665545b
    pgo-pg-database: 'true'
spec:
  restartPolicy: Always
  serviceAccountName: pgo-pg
  imagePullSecrets:
    - name: postgres-operator-dockercfg-5k8t8
    - name: pgo-pg-dockercfg-5mm5p
  priority: 0
  schedulerName: default-scheduler
  enableServiceLinks: true
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          podAffinityTerm:
            labelSelector:
              matchExpressions:
                - key: vendor
                  operator: In
                  values:
                    - crunchydata
                - key: pg-pod-anti-affinity
                  operator: Exists
                - key: pg-cluster
                  operator: In
                  values:
                    - dev-eng-pg-cluster
            topologyKey: kubernetes.io/hostname
  terminationGracePeriodSeconds: 30
  preemptionPolicy: PreemptLowerPriority
  nodeName: dev-eng-qccd4-worker-0-s6qxn
  securityContext:
    seLinuxOptions:
      level: 's0:c27,c4'
    supplementalGroups:
      - 1001
  containers:
    - resources:
        limits:
          cpu: '6'
          memory: 12Gi
        requests:
          cpu: '1'
          memory: 256Mi
      readinessProbe:
        exec:
          command:
            - /opt/crunchy/bin/postgres-ha/health/pgha-readiness.sh
        initialDelaySeconds: 15
        timeoutSeconds: 1
        periodSeconds: 10
        successThreshold: 1
        failureThreshold: 3
      terminationMessagePath: /dev/termination-log
      name: database
      livenessProbe:
        exec:
          command:
            - /opt/crunchy/bin/postgres-ha/health/pgha-liveness.sh
        initialDelaySeconds: 30
        timeoutSeconds: 10
        periodSeconds: 15
        successThreshold: 1
        failureThreshold: 3
      env:
        - name: MODE
          value: postgres
        - name: PGHA_PG_PORT
          value: '5432'
        - name: PGHA_USER
          value: postgres
        - name: PGHA_INIT
          valueFrom:
            configMapKeyRef:
              name: dev-eng-pg-cluster-pgha-config
              key: init
        - name: PATRONI_POSTGRESQL_DATA_DIR
          value: /pgdata/dev-eng-pg-cluster
        - name: PGBACKREST_REPO1_S3_BUCKET
          value: hart-okd-backups
        - name: PGBACKREST_REPO1_S3_ENDPOINT
          value: s3.us-west-2.amazonaws.com
        - name: PGBACKREST_REPO1_S3_REGION
          value: us-west-2
        - name: PGBACKREST_REPO1_S3_KEY
          valueFrom:
            secretKeyRef:
              name: dev-eng-pg-cluster-backrest-repo-config
              key: aws-s3-key
        - name: PGBACKREST_REPO1_S3_KEY_SECRET
          valueFrom:
            secretKeyRef:
              name: dev-eng-pg-cluster-backrest-repo-config
              key: aws-s3-key-secret
        - name: PGBACKREST_REPO1_S3_CA_FILE
          value: /sshd/aws-s3-ca.crt
        - name: PGBACKREST_REPO1_HOST_CMD
          value: /usr/local/bin/archive-push-s3.sh
        - name: PGBACKREST_REPO1_S3_URI_STYLE
          value: path
        - name: PGHA_PGBACKREST_S3_VERIFY_TLS
          value: 'false'
        - name: PGBACKREST_STANZA
          value: db
        - name: PGBACKREST_REPO1_HOST
          value: dev-eng-pg-cluster-backrest-shared-repo
        - name: BACKREST_SKIP_CREATE_STANZA
          value: 'true'
        - name: PGHA_PGBACKREST
          value: 'true'
        - name: PGBACKREST_REPO1_PATH
          value: /dev-eng/postgres-cluster/
        - name: PGBACKREST_DB_PATH
          value: /pgdata/dev-eng-pg-cluster
        - name: ENABLE_SSHD
          value: 'true'
        - name: PGBACKREST_LOG_PATH
          value: /tmp
        - name: PGBACKREST_PG1_SOCKET_PATH
          value: /tmp
        - name: PGBACKREST_PG1_PORT
          value: '5432'
        - name: PGBACKREST_REPO1_TYPE
          value: posix
        - name: PGHA_PGBACKREST_LOCAL_S3_STORAGE
          value: 'true'
        - name: PGHA_PGBACKREST_LOCAL_GCS_STORAGE
          value: 'false'
        - name: PGHA_DATABASE
          value: pgdb
        - name: PGHA_REPLICA_REINIT_ON_START_FAIL
          value: 'true'
        - name: PGHA_SYNC_REPLICATION
          value: 'false'
        - name: PGHA_TLS_ENABLED
          value: 'true'
        - name: PGHA_TLS_ONLY
          value: 'false'
        - name: PGHA_PASSWORD_TYPE
        - name: PGHA_STANDBY
          value: 'false'
        - name: PATRONI_KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: PATRONI_KUBERNETES_SCOPE_LABEL
          value: crunchy-pgha-scope
        - name: PATRONI_SCOPE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: 'metadata.labels[''crunchy-pgha-scope'']'
        - name: PATRONI_KUBERNETES_LABELS
          value: '{vendor: "crunchydata"}'
        - name: PATRONI_LOG_LEVEL
          value: INFO
        - name: PGHOST
          value: /tmp
        - name: LD_PRELOAD
          value: /usr/lib64/libnss_wrapper.so
        - name: NSS_WRAPPER_PASSWD
          value: /tmp/nss_wrapper/postgres/passwd
        - name: NSS_WRAPPER_GROUP
          value: /tmp/nss_wrapper/postgres/group
      securityContext:
        capabilities:
          drop:
            - MKNOD
        privileged: false
        readOnlyRootFilesystem: true
        allowPrivilegeEscalation: false
      ports:
        - name: postgres
          containerPort: 5432
          protocol: TCP
        - name: patroni
          containerPort: 8009
          protocol: TCP
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: pgdata
          mountPath: /pgdata
        - name: user-volume
          mountPath: /pgconf/pguser
        - name: primary-volume
          mountPath: /pgconf/pgreplicator
        - name: root-volume
          mountPath: /pgconf/pgsuper
        - name: tls-server
          mountPath: /pgconf/tls
        - name: tls-replication
          mountPath: /pgconf/tls-replication
        - name: sshd
          readOnly: true
          mountPath: /sshd
        - name: ssh-config
          readOnly: true
          mountPath: /etc/ssh
        - name: pgconf-volume
          mountPath: /pgconf
        - name: dshm
          mountPath: /dev/shm
        - name: pgbackrest-config
          mountPath: /etc/pgbackrest/conf.d
        - name: podinfo
          mountPath: /etc/podinfo
        - name: tmp
          mountPath: /tmp
        - name: kube-api-access-4g9zx
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      terminationMessagePolicy: File
      image: 'percona/percona-postgresql-operator:1.4.0-ppg14-postgres-ha'
  serviceAccount: pgo-pg
  volumes:
    - name: pgdata
      persistentVolumeClaim:
        claimName: dev-eng-pg-cluster
    - name: user-volume
      secret:
        secretName: dev-eng-pg-cluster-pguser-secret
        defaultMode: 420
    - name: primary-volume
      secret:
        secretName: dev-eng-pg-cluster-primaryuser-secret
        defaultMode: 420
    - name: sshd
      secret:
        secretName: dev-eng-pg-cluster-backrest-repo-config
        defaultMode: 420
    - name: ssh-config
      secret:
        secretName: dev-eng-pg-cluster-backrest-repo-config
        items:
          - key: config
            path: ssh_config
        defaultMode: 420
    - name: root-volume
      secret:
        secretName: dev-eng-pg-cluster-postgres-secret
        defaultMode: 420
    - name: tls-server
      projected:
        sources:
          - secret:
              name: dev-eng-pg-cluster-ssl-keypair
          - secret:
              name: dev-eng-pg-cluster-replication-ssl-keypair
              items:
                - key: tls.key
                  path: tls-replication.key
                - key: tls.crt
                  path: tls-replication.crt
          - secret:
              name: dev-eng-pg-cluster-ssl-ca
        defaultMode: 288
    - name: tls-replication
      emptyDir:
        medium: Memory
        sizeLimit: 2Mi
    - name: report
      emptyDir:
        medium: Memory
        sizeLimit: 64Mi
    - name: dshm
      emptyDir:
        medium: Memory
    - name: tmp
      emptyDir:
        medium: Memory
        sizeLimit: 16Mi
    - name: pgbackrest-config
      projected:
        sources:
          - configMap:
              name: dev-eng-pg-cluster-config-backrest
              optional: true
          - secret:
              name: dev-eng-pg-cluster-config-backrest
              optional: true
        defaultMode: 420
    - name: pgconf-volume
      projected:
        sources:
          - configMap:
              name: dev-eng-pg-cluster-pgha-config
              optional: true
        defaultMode: 420
    - name: podinfo
      downwardAPI:
        items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: database
              resource: limits.cpu
              divisor: 1m
          - path: cpu_request
            resourceFieldRef:
              containerName: database
              resource: requests.cpu
              divisor: 1m
          - path: mem_limit
            resourceFieldRef:
              containerName: database
              resource: limits.memory
              divisor: '0'
          - path: mem_request
            resourceFieldRef:
              containerName: database
              resource: requests.memory
              divisor: '0'
          - path: labels
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
          - path: annotations
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
        defaultMode: 420
    - name: kube-api-access-4g9zx
      projected:
        sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              name: kube-root-ca.crt
              items:
                - key: ca.crt
                  path: ca.crt
          - downwardAPI:
              items:
                - path: namespace
                  fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.namespace
          - configMap:
              name: openshift-service-ca.crt
              items:
                - key: service-ca.crt
                  path: service-ca.crt
        defaultMode: 420
  dnsPolicy: ClusterFirst
  tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/memory-pressure
      operator: Exists
      effect: NoSchedule
status:
  phase: Running
  conditions:
    - type: Initialized
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2023-04-05T19:37:25Z'
    - type: Ready
      status: 'False'
      lastProbeTime: null
      lastTransitionTime: '2023-04-05T19:37:25Z'
      reason: ContainersNotReady
      message: 'containers with unready status: [database]'
    - type: ContainersReady
      status: 'False'
      lastProbeTime: null
      lastTransitionTime: '2023-04-05T19:37:25Z'
      reason: ContainersNotReady
      message: 'containers with unready status: [database]'
    - type: PodScheduled
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2023-04-05T19:37:25Z'
  hostIP: 10.0.1.154
  podIP: 10.129.2.174
  podIPs:
    - ip: 10.129.2.174
  startTime: '2023-04-05T19:37:25Z'
  containerStatuses:
    - restartCount: 1
      started: false
      ready: false
      name: database
      state:
        waiting:
          reason: CrashLoopBackOff
          message: >-
            back-off 10s restarting failed container=database
            pod=dev-eng-pg-cluster-99665545b-hp4gv_pgo(c8946c7b-b216-41ca-b64d-5fe83fa4aeda)
      imageID: >-
        docker.io/percona/percona-postgresql-operator@sha256:93b69b07914f1ed43812013b7bfc75692369bc39fc24349853584165fc192e45
      image: 'docker.io/percona/percona-postgresql-operator:1.4.0-ppg14-postgres-ha'
      lastState:
        terminated:
          exitCode: 0
          reason: Completed
          startedAt: '2023-04-05T19:37:38Z'
          finishedAt: '2023-04-05T19:37:39Z'
          containerID: >-
            cri-o://368f50cf719489ffec8a5ca06acc2e576104156f996ade78aa6a4e24a7d5f06d
      containerID: 'cri-o://368f50cf719489ffec8a5ca06acc2e576104156f996ade78aa6a4e24a7d5f06d'
  qosClass: Burstable
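To compare what the container actually sees against the CR, I've been filtering the pgBackRest-related variables out of an env dump. This is just a sketch; the inline `env_dump` below is sample data, and the live `kubectl` line (with my pod name) is commented out:

```shell
# Keep only the pgBackRest-related variables from an env dump so the
# repo/bucket settings are easy to compare against the CR.
env_dump='PGBACKREST_REPO1_S3_BUCKET=hart-okd-backups
PGBACKREST_REPO1_TYPE=posix
PGHA_USER=postgres'

# Select lines starting with PGBACKREST_ and print them.
filtered=$(printf '%s\n' "$env_dump" | grep '^PGBACKREST_')
printf '%s\n' "$filtered"

# Live version (pod name is from my cluster; adjust as needed):
# kubectl -n pgo exec dev-eng-pg-cluster-99665545b-hp4gv -c database -- env | grep '^PGBACKREST_'
```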
              

I also attached what the operator.yaml file looks like after running the sed command in the instructions:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: pgo-deployer-sa
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pgo-deployer-cr
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
      - list
      - create
      - patch
      - delete
  - apiGroups:
      - ''
    resources:
      - pods
    verbs:
      - list
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - list
      - get
      - create
      - delete
      - patch
  - apiGroups:
      - ''
    resources:
      - configmaps
      - services
      - persistentvolumeclaims
    verbs:
      - get
      - create
      - delete
      - list
      - patch
  - apiGroups:
      - ''
    resources:
      - serviceaccounts
    verbs:
      - get
      - create
      - delete
      - patch
      - list
  - apiGroups:
      - apps
      - extensions
    resources:
      - deployments
      - replicasets
    verbs:
      - get
      - list
      - watch
      - create
      - delete
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    verbs:
      - get
      - create
      - delete
      - patch
  - apiGroups:
      - rbac.authorization.k8s.io
    resources:
      - clusterroles
      - clusterrolebindings
      - roles
      - rolebindings
    verbs:
      - get
      - create
      - delete
      - bind
      - escalate
  - apiGroups:
      - rbac.authorization.k8s.io
    resources:
      - roles
    verbs:
      - create
      - delete
  - apiGroups:
      - batch
    resources:
      - jobs
    verbs:
      - delete
      - list
  - apiGroups:
      - pg.percona.com
    resources:
      - perconapgclusters
      - perconapgclusters/status
      - pgclusters
      - pgreplicas
      - pgpolicies
      - pgtasks
    verbs:
      - delete
      - list
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: pgo-deployer-cm
data:
  values.yaml: |-
    archive_mode: "true"
    archive_timeout: "60"
    ccp_image_pull_secret: ""
    ccp_image_pull_secret_manifest: ""
    create_rbac: "true"
    delete_operator_namespace: "false"
    delete_watched_namespaces: "false"
    disable_telemetry: "false"
    namespace: "pgo"
    namespace_mode: "disabled"
    pgo_image_prefix: "percona/percona-postgresql-operator"
    pgo_image_pull_policy: "Always"
    pgo_image_pull_secret: ""
    pgo_image_pull_secret_manifest: ""
    pgo_image_tag: "1.4.0"
    pgo_operator_namespace: "pgo"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pgo-deployer-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pgo-deployer-cr
subjects:
  - kind: ServiceAccount
    name: pgo-deployer-sa
    namespace: pgo
---
apiVersion: batch/v1
kind: Job
metadata:
  name: pgo-deploy
spec:
  backoffLimit: 0
  template:
    metadata:
      name: pgo-deploy
    spec:
      serviceAccountName: pgo-deployer-sa
      restartPolicy: Never
      containers:
        - name: pgo-deploy
          image: percona/percona-postgresql-operator:1.4.0-pgo-deployer
          imagePullPolicy: Always
          env:
            - name: DEPLOY_ACTION
              value: install
          volumeMounts:
            - name: deployer-conf
              mountPath: "/conf"
      volumes:
        - name: deployer-conf
          configMap:
            name: pgo-deployer-cm

Once I added disable_fsgroup manually, the install appears to run normally.
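For anyone else on OpenShift hitting the same install failure, this is the kind of line I added to `values.yaml` in the pgo-deployer-cm ConfigMap (the key and value are what worked for me; confirm against the operator docs for your version):

```yaml
data:
  values.yaml: |-
    # ...existing values from the ConfigMap above...
    disable_fsgroup: "true"
```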

Now we are back to the backup error:

Wed Apr 5 20:26:45 UTC 2023 INFO: Image mode found: pgbackrest
Wed Apr 5 20:26:45 UTC 2023 INFO: Starting in 'pgbackrest' mode
time="2023-04-05T20:26:45Z" level=info msg="crunchy-pgbackrest starts"
time="2023-04-05T20:26:45Z" level=info msg="debug flag set to false"
time="2023-04-05T20:26:45Z" level=info msg="backrest backup command requested"
time="2023-04-05T20:26:45Z" level=info msg="backrest command will be executed for both local and s3 storage"
time="2023-04-05T20:26:45Z" level=info msg="command to execute is [pgbackrest backup --type=full --db-host=10.129.2.175 --db-path=/pgdata/dev-eng-pg-cluster ; pgbackrest backup --type=full --db-host=10.129.2.175 --db-path=/pgdata/dev-eng-pg-cluster --repo1-type=s3 --no-repo1-s3-verify-tls]"
time="2023-04-05T20:29:34Z" level=info msg="output="
time="2023-04-05T20:29:34Z" level=info msg="stderr=[WARN: option 'repo1-retention-full' is not set for 'repo1-retention-full-type=count', the repository may run out of space\n HINT: to retain full backups indefinitely (without warning), set option 'repo1-retention-full' to the maximum.\nERROR: [055]: unable to load info file '/dev-eng/postgres-cluster/backup/db/backup.info' or '/dev-eng/postgres-cluster/backup/db/backup.info.copy':\n FileMissingError: unable to open missing file '/dev-eng/postgres-cluster/backup/db/backup.info' for read\n FileMissingError: unable to open missing file '/dev-eng/postgres-cluster/backup/db/backup.info.copy' for read\n HINT: backup.info cannot be opened and is required to perform a backup.\n HINT: has a stanza-create been performed?\nWARN: option 'repo1-retention-full' is not set for 'repo1-retention-full-type=count', the repository may run out of space\n HINT: to retain full backups indefinitely (without warning), set option 'repo1-retention-…
time="2023-04-05T20:29:34Z" level=fatal msg="command terminated with exit code 82"
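Following the "has a stanza-create been performed?" hint, this is the sketch I'm using to build the manual stanza-create command (the stanza and repo path are from my cluster; the script only prints the command, and the actual `kubectl exec` invocation is commented out):

```shell
# Build the pgbackrest stanza-create command for my stanza and repo path,
# then print it for review before running it anywhere.
STANZA=db
REPO_PATH=/dev-eng/postgres-cluster
CMD="pgbackrest --stanza=${STANZA} --repo1-path=${REPO_PATH} stanza-create"
echo "$CMD"

# To actually run it, exec into the shared repo pod, e.g.:
# kubectl -n pgo exec deploy/dev-eng-pg-cluster-backrest-shared-repo -- $CMD
```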

It looks like the culprit is setting a value for repoPath. While having /dev-eng/postgres-cluster/ in repoPath directs the backups to the right S3 bucket prefix (my-bucket-name/dev-eng/postgres-cluster/), it also causes the FileMissingError shown above.
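For context, here is roughly how repoPath sits in my CR alongside the storages block posted earlier (I'm reproducing this from memory, so treat the exact field placement as my best understanding of the 1.x CR layout):

```yaml
spec:
  backup:
    repoPath: /dev-eng/postgres-cluster/
    storages:
      s3-us-west:
        type: s3
        bucket: my-backup-bucket
        region: us-west-2
        endpointUrl: s3.amazonaws.com
```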

Is this a known issue by chance? Is there a different way to specify the bucket prefix other than repoPath?

Thanks…