Is there an existing issue already for this bug?
- I have searched for an existing issue, and could not find anything. I believe this is a new bug.
I have read the troubleshooting guide
- I have read the troubleshooting guide and I think this is a new bug.
I am running a supported version of CloudNativePG
- I have read the troubleshooting guide and I think this is a new bug.
Contact Details
No response
Version
1.28 (latest patch)
What version of Kubernetes are you using?
1.34
What is your Kubernetes environment?
Other
How did you install the operator?
Helm
What happened?
We encountered an issue when attempting to upgrade a PostgreSQL cluster to a new major version after performing a restore from a backup. The primary node upgrades successfully, but some replica nodes fail during the upgrade process.
Steps to reproduce:
- Deploy a CloudNativePG cluster (example configuration provided below) with PostgreSQL version 17.7-system-trixie.
- Perform a manual backup using the barman-cloud plugin (see the example Backup manifest after this list).
- Upgrade the source cluster to version 18.1-system-trixie.
- Restore the cluster to a new instance (postgresql-grafana-1) using the backup.
- After the restore succeeds, attempt to upgrade the restored cluster to PostgreSQL version 18.1-system-trixie.
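For reference, the manual backup in step 2 was requested as a Backup resource. The following is only a minimal sketch, assuming the plugin backup method; the metadata.name is illustrative and not taken from the actual environment:

apiVersion: postgresql.cnpg.io/v1
kind: Backup
metadata:
  name: postgresql-grafana-manual   # illustrative name
spec:
  method: plugin
  pluginConfiguration:
    name: barman-cloud.cloudnative-pg.io
  cluster:
    name: postgresql-grafana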
Observed behavior:
- The primary node (postgresql-grafana-1-1) upgrades successfully.
- The replica node (postgresql-grafana-1-4) fails during the upgrade, even though the join step reports success.
Cluster resource
*source cluster*
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgresql-grafana
spec:
  description: "Grafana postgresql"
  # initial deployment
  imageName: ghcr.io/cloudnative-pg/postgresql:17.7-system-trixie
  # upgraded to this image in step 3
  # imageName: ghcr.io/cloudnative-pg/postgresql:18.1-system-trixie
  imagePullPolicy: IfNotPresent
  instances: 3
  startDelay: 300
  stopDelay: 300
  resources:
    requests:
      memory: 512Mi
      cpu: 100m
    limits:
      memory: 2Gi
      cpu: 2
  affinity:
    enablePodAntiAffinity: true
    podAntiAffinityType: required
    topologyKey: kubernetes.io/hostname
  enablePDB: true
  storage:
    storageClass: storage-ssd
    size: 3Gi
  primaryUpdateStrategy: unsupervised
  postgresql:
    parameters:
      shared_buffers: 512MB
      effective_cache_size: 2GB
      work_mem: 8MB
      maintenance_work_mem: 128MB
      random_page_cost: "1.1"
      effective_io_concurrency: "200"
      max_wal_size: 2GB
      min_wal_size: 512MB
      checkpoint_timeout: 15min
      checkpoint_completion_target: "0.9"
      wal_compression: "on"
      max_worker_processes: "4"
      max_parallel_workers_per_gather: "1"
      max_parallel_workers: "2"
      max_connections: "100"
      password_encryption: scram-sha-256
    synchronous:
      dataDurability: required
      failoverQuorum: true
      method: any
      number: 1
    pg_hba:
      - host all all 10.244.0.0/16 md5
  enableSuperuserAccess: true
  superuserSecret:
    name: postgresql-grafana-superadmin-creds
  bootstrap:
    initdb:
      database: grafana
      owner: grafana
      secret:
        name: postgresql-grafana-owner-creds
  plugins:
    - name: barman-cloud.cloudnative-pg.io
      isWALArchiver: true
      parameters:
        barmanObjectName: postgresql-grafana
        serverName: postgresql-grafana
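The barmanObjectName: postgresql-grafana parameter refers to an ObjectStore resource defined by the barman-cloud plugin; its manifest is not part of this report. For context, an assumed sketch of its shape is shown below, with all paths, endpoints, and secret names as placeholders:

apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: postgresql-grafana
spec:
  configuration:
    # placeholder destination and endpoint; the real values are not included in this report
    destinationPath: s3://example-bucket/postgresql-grafana
    endpointURL: https://s3.example.local
    s3Credentials:
      accessKeyId:
        name: backup-creds        # placeholder secret name
        key: ACCESS_KEY_ID
      secretAccessKey:
        name: backup-creds
        key: ACCESS_SECRET_KEY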
*restore cluster*
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgresql-grafana-1
spec:
  description: "Grafana postgresql"
  # on restore
  imageName: ghcr.io/cloudnative-pg/postgresql:17.7-system-trixie
  # after the restore succeeds, switch to this image for the upgrade
  # imageName: ghcr.io/cloudnative-pg/postgresql:18.1-system-trixie
  imagePullPolicy: IfNotPresent
  instances: 3
  startDelay: 300
  stopDelay: 300
  resources:
    requests:
      memory: 512Mi
      cpu: 100m
    limits:
      memory: 2Gi
      cpu: 2
  affinity:
    enablePodAntiAffinity: true
    podAntiAffinityType: required
    topologyKey: kubernetes.io/hostname
  enablePDB: true
  storage:
    storageClass: storage-ssd
    size: 3Gi
  primaryUpdateStrategy: unsupervised
  postgresql:
    parameters:
      shared_buffers: 512MB
      effective_cache_size: 2GB
      work_mem: 8MB
      maintenance_work_mem: 128MB
      random_page_cost: "1.1"
      effective_io_concurrency: "200"
      max_wal_size: 2GB
      min_wal_size: 512MB
      checkpoint_timeout: 15min
      checkpoint_completion_target: "0.9"
      wal_compression: "on"
      max_slot_wal_keep_size: "-1"
      max_worker_processes: "4"
      max_parallel_workers_per_gather: "1"
      max_parallel_workers: "2"
      max_connections: "100"
      password_encryption: scram-sha-256
    synchronous:
      dataDurability: required
      failoverQuorum: true
      method: any
      number: 1
    pg_hba:
      - host all all 10.244.0.0/16 md5
  enableSuperuserAccess: true
  superuserSecret:
    name: postgresql-grafana-superadmin-creds
  bootstrap:
    recovery:
      source: origin
      recoveryTarget:
        targetTime: "2026-01-16 05:11:00.00000+00"
  externalClusters:
    - name: origin
      plugin:
        name: barman-cloud.cloudnative-pg.io
        parameters:
          barmanObjectName: postgresql-grafana
          serverName: postgresql-grafana
  plugins:
    - name: barman-cloud.cloudnative-pg.io
      isWALArchiver: true
      parameters:
        barmanObjectName: postgresql-grafana
        serverName: postgresql-grafana-1
Relevant log output
{"level":"info","ts":"2026-01-17T06:52:57.35089833Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:52:57.349 UTC","user_name":"postgres","database_name":"grafana","process_id":"111","connection_from":"[local]","session_id":"696b31c9.6f","session_line_num":"1","session_start_time":"2026-01-17 06:52:57 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"error","ts":"2026-01-17T06:52:57.351027615Z","msg":"Error collecting user query","query":"pg_stat_archiver","logging_pod":"postgresql-grafana-1-4","targetDatabase":"grafana","error":"failed to connect to `user=postgres database=grafana`: /controller/run/.s.PGSQL.5432 (/controller/run): server error: FATAL: the database system is starting up (SQLSTATE 57P03)","stacktrace":"github.com/cloudnative-pg/machinery/pkg/log.(*logger).Error\n\tpkg/mod/github.com/cloudnative-pg/machinery@v0.3.1/pkg/log/log.go:125\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres/metrics.(*QueriesCollector).createMetricsFromUserQueries\n\tpkg/management/postgres/metrics/collector.go:191\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres/metrics.(*QueriesCollector).Update\n\tpkg/management/postgres/metrics/collector.go:100\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres/webserver/metricserver.(*Exporter).updateMetricsFromQueries\n\tpkg/management/postgres/webserver/metricserver/pg_collector.go:350\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres/webserver/metricserver.(*Exporter).Collect\n\tpkg/management/postgres/webserver/metricserver/pg_collector.go:334\ngithub.com/prometheus/client_golang/prometheus.(*Registry).Gather.func1\n\tpkg/mod/github.com/prometheus/client_golang@v1.23.2/prometheus/registry.go:456"}
{"level":"info","ts":"2026-01-17T06:52:57.354116578Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:52:57.353 UTC","user_name":"postgres","database_name":"grafana","process_id":"112","connection_from":"[local]","session_id":"696b31c9.70","session_line_num":"1","session_start_time":"2026-01-17 06:52:57 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"error","ts":"2026-01-17T06:52:57.356059078Z","msg":"Error collecting user query","query":"pg_replication_slots","logging_pod":"postgresql-grafana-1-4","targetDatabase":"grafana","error":"failed to connect to `user=postgres database=grafana`: /controller/run/.s.PGSQL.5432 (/controller/run): server error: FATAL: the database system is starting up (SQLSTATE 57P03)","stacktrace":"github.com/cloudnative-pg/machinery/pkg/log.(*logger).Error\n\tpkg/mod/github.com/cloudnative-pg/machinery@v0.3.1/pkg/log/log.go:125\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres/metrics.(*QueriesCollector).createMetricsFromUserQueries\n\tpkg/management/postgres/metrics/collector.go:191\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres/metrics.(*QueriesCollector).Update\n\tpkg/management/postgres/metrics/collector.go:100\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres/webserver/metricserver.(*Exporter).updateMetricsFromQueries\n\tpkg/management/postgres/webserver/metricserver/pg_collector.go:350\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres/webserver/metricserver.(*Exporter).Collect\n\tpkg/management/postgres/webserver/metricserver/pg_collector.go:334\ngithub.com/prometheus/client_golang/prometheus.(*Registry).Gather.func1\n\tpkg/mod/github.com/prometheus/client_golang@v1.23.2/prometheus/registry.go:456"}
{"level":"info","ts":"2026-01-17T06:52:57.356396455Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:52:57.355 UTC","user_name":"postgres","database_name":"grafana","process_id":"113","connection_from":"[local]","session_id":"696b31c9.71","session_line_num":"1","session_start_time":"2026-01-17 06:52:57 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2026-01-17T06:52:57.358545659Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:52:57.358 UTC","user_name":"postgres","database_name":"grafana","process_id":"114","connection_from":"[local]","session_id":"696b31c9.72","session_line_num":"1","session_start_time":"2026-01-17 06:52:57 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"error","ts":"2026-01-17T06:52:57.358628125Z","msg":"Error collecting user query","query":"backends_waiting","logging_pod":"postgresql-grafana-1-4","targetDatabase":"grafana","error":"failed to connect to `user=postgres database=grafana`: /controller/run/.s.PGSQL.5432 (/controller/run): server error: FATAL: the database system is starting up (SQLSTATE 57P03)","stacktrace":"github.com/cloudnative-pg/machinery/pkg/log.(*logger).Error\n\tpkg/mod/github.com/cloudnative-pg/machinery@v0.3.1/pkg/log/log.go:125\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres/metrics.(*QueriesCollector).createMetricsFromUserQueries\n\tpkg/management/postgres/metrics/collector.go:191\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres/metrics.(*QueriesCollector).Update\n\tpkg/management/postgres/metrics/collector.go:100\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres/webserver/metricserver.(*Exporter).updateMetricsFromQueries\n\tpkg/management/postgres/webserver/metricserver/pg_collector.go:350\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres/webserver/metricserver.(*Exporter).Collect\n\tpkg/management/postgres/webserver/metricserver/pg_collector.go:334\ngithub.com/prometheus/client_golang/prometheus.(*Registry).Gather.func1\n\tpkg/mod/github.com/prometheus/client_golang@v1.23.2/prometheus/registry.go:456"}
{"level":"info","ts":"2026-01-17T06:52:57.448720424Z","msg":"Instance is still down, will retry in 1 second","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","controller":"instance-cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"postgresql-grafana-1","namespace":"example-postgresqls"},"namespace":"example-postgresqls","name":"postgresql-grafana-1","reconcileID":"f30b81df-7af9-479f-ac2b-70ae22ae3170","instance":"postgresql-grafana-1-4","cluster":"postgresql-grafana-1","namespace":"example-postgresqls","logging_pod":"postgresql-grafana-1-4"}
{"level":"info","ts":"2026-01-17T06:52:57.452985711Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:52:57.447 UTC","user_name":"postgres","database_name":"postgres","process_id":"121","connection_from":"[local]","session_id":"696b31c9.79","session_line_num":"1","session_start_time":"2026-01-17 06:52:57 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2026-01-17T06:52:58.090825656Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:52:58.090 UTC","user_name":"postgres","database_name":"postgres","process_id":"122","connection_from":"[local]","session_id":"696b31ca.7a","session_line_num":"1","session_start_time":"2026-01-17 06:52:58 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2026-01-17T06:52:58.290010471Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:52:58.289 UTC","user_name":"postgres","database_name":"postgres","process_id":"123","connection_from":"[local]","session_id":"696b31ca.7b","session_line_num":"1","session_start_time":"2026-01-17 06:52:58 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2026-01-17T06:52:58.305306818Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:52:58.305 UTC","user_name":"postgres","database_name":"postgres","process_id":"124","connection_from":"[local]","session_id":"696b31ca.7c","session_line_num":"1","session_start_time":"2026-01-17 06:52:58 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2026-01-17T06:52:58.369862017Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:52:58.367 UTC","user_name":"postgres","database_name":"postgres","process_id":"125","connection_from":"[local]","session_id":"696b31ca.7d","session_line_num":"1","session_start_time":"2026-01-17 06:52:58 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2026-01-17T06:52:58.562902177Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:52:58.562 UTC","user_name":"postgres","database_name":"postgres","process_id":"127","connection_from":"[local]","session_id":"696b31ca.7f","session_line_num":"1","session_start_time":"2026-01-17 06:52:58 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2026-01-17T06:52:58.563193943Z","msg":"Instance is still down, will retry in 1 second","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","controller":"instance-cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"postgresql-grafana-1","namespace":"example-postgresqls"},"namespace":"example-postgresqls","name":"postgresql-grafana-1","reconcileID":"a4c5a0c0-4b02-4a94-a371-4c6ec8c43ebd","instance":"postgresql-grafana-1-4","cluster":"postgresql-grafana-1","namespace":"example-postgresqls","logging_pod":"postgresql-grafana-1-4"}
{"level":"info","ts":"2026-01-17T06:52:58.632197971Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:52:58.631 UTC","user_name":"postgres","database_name":"postgres","process_id":"128","connection_from":"[local]","session_id":"696b31ca.80","session_line_num":"1","session_start_time":"2026-01-17 06:52:58 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2026-01-17T06:52:59.676328001Z","msg":"Instance is still down, will retry in 1 second","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","controller":"instance-cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"postgresql-grafana-1","namespace":"example-postgresqls"},"namespace":"example-postgresqls","name":"postgresql-grafana-1","reconcileID":"c029ea03-296a-4035-9b49-756e9063f504","instance":"postgresql-grafana-1-4","cluster":"postgresql-grafana-1","namespace":"example-postgresqls","logging_pod":"postgresql-grafana-1-4"}
{"level":"info","ts":"2026-01-17T06:52:59.676308177Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:52:59.674 UTC","user_name":"postgres","database_name":"postgres","process_id":"130","connection_from":"[local]","session_id":"696b31cb.82","session_line_num":"1","session_start_time":"2026-01-17 06:52:59 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2026-01-17T06:52:59.90925141Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:52:59.908 UTC","user_name":"postgres","database_name":"postgres","process_id":"131","connection_from":"[local]","session_id":"696b31cb.83","session_line_num":"1","session_start_time":"2026-01-17 06:52:59 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2026-01-17T06:53:00.109568184Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:53:00.109 UTC","user_name":"postgres","database_name":"postgres","process_id":"132","connection_from":"[local]","session_id":"696b31cc.84","session_line_num":"1","session_start_time":"2026-01-17 06:53:00 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2026-01-17T06:53:00.124691169Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:53:00.124 UTC","user_name":"postgres","database_name":"postgres","process_id":"133","connection_from":"[local]","session_id":"696b31cc.85","session_line_num":"1","session_start_time":"2026-01-17 06:53:00 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2026-01-17T06:53:00.174966932Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:53:00.174 UTC","process_id":"24","session_id":"696b31be.18","session_line_num":"5","session_start_time":"2026-01-17 06:52:46 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"restored log file \"000000010000000000000013\" from archive","backend_type":"startup","query_id":"0"}}
{"level":"info","ts":"2026-01-17T06:53:00.185080587Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:53:00.183 UTC","user_name":"postgres","database_name":"postgres","process_id":"134","connection_from":"[local]","session_id":"696b31cc.86","session_line_num":"1","session_start_time":"2026-01-17 06:53:00 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2026-01-17T06:53:00.258110671Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:53:00.257 UTC","process_id":"24","session_id":"696b31be.18","session_line_num":"6","session_start_time":"2026-01-17 06:52:46 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"entering standby mode","backend_type":"startup","query_id":"0"}}
{"level":"info","ts":"2026-01-17T06:53:00.258178898Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:53:00.257 UTC","process_id":"24","session_id":"696b31be.18","session_line_num":"7","session_start_time":"2026-01-17 06:52:46 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"XX000","message":"requested timeline 2 is not a child of this server's history","detail":"Latest checkpoint in file \"backup_label\" is at 0/13000080 on timeline 1, but in the history of the requested timeline, the server forked off from that timeline at 0/70226C8.","backend_type":"startup","query_id":"0"}}
{"level":"info","ts":"2026-01-17T06:53:00.262935014Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:53:00.262 UTC","process_id":"17","session_id":"696b31be.11","session_line_num":"6","session_start_time":"2026-01-17 06:52:46 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"startup process (PID 24) exited with exit code 1","backend_type":"postmaster","query_id":"0"}}
{"level":"info","ts":"2026-01-17T06:53:00.263006187Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:53:00.262 UTC","process_id":"17","session_id":"696b31be.11","session_line_num":"7","session_start_time":"2026-01-17 06:52:46 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"aborting startup due to startup process failure","backend_type":"postmaster","query_id":"0"}}
{"level":"info","ts":"2026-01-17T06:53:00.26557041Z","logger":"postgres","msg":"record","logging_pod":"postgresql-grafana-1-4","record":{"log_time":"2026-01-17 06:53:00.265 UTC","process_id":"17","session_id":"696b31be.11","session_line_num":"8","session_start_time":"2026-01-17 06:52:46 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"database system is shut down","backend_type":"postmaster","query_id":"0"}}
{"level":"info","ts":"2026-01-17T06:53:00.278967185Z","msg":"postmaster exited","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","postmasterExitStatus":"exit status 1","postMasterPID":17}
{"level":"info","ts":"2026-01-17T06:53:00.279025929Z","msg":"Extracting pg_controldata information","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","reason":"postmaster has exited"}
{"level":"info","ts":"2026-01-17T06:53:00.281424535Z","logger":"pg_controldata","msg":"pg_control version number: 1800\nCatalog version number: 202506291\nDatabase system identifier: 7596216200390782991\nDatabase cluster state: shut down in recovery\npg_control last modified: Sat Jan 17 06:44:45 2026\nLatest checkpoint location: 0/13000080\nLatest checkpoint's REDO location: 0/13000028\nLatest checkpoint's REDO WAL file: 000000010000000000000013\nLatest checkpoint's TimeLineID: 1\nLatest checkpoint's PrevTimeLineID: 1\nLatest checkpoint's full_page_writes: on\nLatest checkpoint's NextXID: 0:3168\nLatest checkpoint's NextOID: 24597\nLatest checkpoint's NextMultiXactId: 1\nLatest checkpoint's NextMultiOffset: 0\nLatest checkpoint's oldestXID: 730\nLatest checkpoint's oldestXID's DB: 1\nLatest checkpoint's oldestActiveXID: 3167\nLatest checkpoint's oldestMultiXid: 1\nLatest checkpoint's oldestMulti's DB: 4\nLatest checkpoint's oldestCommitTsXid:0\nLatest checkpoint's newestCommitTsXid:0\nTime of latest checkpoint: Sat Jan 17 06:40:27 2026\nFake LSN counter for unlogged rels: 0/3E8\nMinimum recovery ending location: 0/0\nMin recovery ending loc's timeline: 0\nBackup start location: 0/0\nBackup end location: 0/0\nEnd-of-backup record required: no\nwal_level setting: logical\nwal_log_hints setting: on\nmax_connections setting: 100\nmax_worker_processes setting: 4\nmax_wal_senders setting: 10\nmax_prepared_xacts setting: 0\nmax_locks_per_xact setting: 64\ntrack_commit_timestamp setting: off\nMaximum data alignment: 8\nDatabase block size: 8192\nBlocks per segment of large relation: 131072\nWAL block size: 8192\nBytes per WAL segment: 16777216\nMaximum length of identifiers: 64\nMaximum columns in an index: 32\nMaximum size of a TOAST chunk: 1996\nSize of a large-object chunk: 2048\nDate/time type storage: 64-bit integers\nFloat8 argument passing: by value\nData page checksum version: 0\nDefault char data signedness: signed\nMock authentication nonce: 23dda52eb2f4a0002233cdc924fb9443b40a12c252f69d8884785ea5972175dc\n","pipe":"stdout","logging_pod":"postgresql-grafana-1-4"}
{"level":"error","ts":"2026-01-17T06:53:00.281618012Z","msg":"PostgreSQL process exited with errors","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","error":"exit status 1","stacktrace":"github.com/cloudnative-pg/machinery/pkg/log.(*logger).Error\n\tpkg/mod/github.com/cloudnative-pg/machinery@v0.3.1/pkg/log/log.go:125\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/instance/run/lifecycle.(*PostgresLifecycle).Start.func1\n\tinternal/cmd/manager/instance/run/lifecycle/lifecycle.go:108\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/instance/run/lifecycle.(*PostgresLifecycle).Start\n\tinternal/cmd/manager/instance/run/lifecycle/lifecycle.go:116\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.22.4/pkg/manager/runnable_group.go:260"}
{"level":"info","ts":"2026-01-17T06:53:00.28192231Z","msg":"Stopping and waiting for non leader election runnables","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4"}
{"level":"error","ts":"2026-01-17T06:53:00.281997142Z","msg":"error received after stop sequence was engaged","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","error":"exit status 1","stacktrace":"sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.22.4/pkg/manager/internal.go:517"}
{"level":"info","ts":"2026-01-17T06:53:00.282030875Z","msg":"Stopping and waiting for warmup runnables","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4"}
{"level":"info","ts":"2026-01-17T06:53:00.28208261Z","msg":"Stopping and waiting for leader election runnables","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4"}
{"level":"info","ts":"2026-01-17T06:53:00.282610022Z","msg":"Webserver exited","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","address":":9187"}
{"level":"info","ts":"2026-01-17T06:53:00.282648023Z","msg":"Shutdown signal received, waiting for all workers to finish","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","controller":"instance-subscription","controllerGroup":"postgresql.cnpg.io","controllerKind":"Subscription"}
{"level":"info","ts":"2026-01-17T06:53:00.282655814Z","msg":"Shutdown signal received, waiting for all workers to finish","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","controller":"instance-external-server","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster"}
{"level":"info","ts":"2026-01-17T06:53:00.282660839Z","msg":"Shutdown signal received, waiting for all workers to finish","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","controller":"instance-tablespaces","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster"}
{"level":"info","ts":"2026-01-17T06:53:00.28266555Z","msg":"Shutdown signal received, waiting for all workers to finish","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","controller":"instance-publication","controllerGroup":"postgresql.cnpg.io","controllerKind":"Publication"}
{"level":"info","ts":"2026-01-17T06:53:00.282669921Z","msg":"Shutdown signal received, waiting for all workers to finish","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","controller":"instance-database","controllerGroup":"postgresql.cnpg.io","controllerKind":"Database"}
{"level":"info","ts":"2026-01-17T06:53:00.282674503Z","msg":"Shutdown signal received, waiting for all workers to finish","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","controller":"instance-cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster"}
{"level":"info","ts":"2026-01-17T06:53:00.282706332Z","msg":"All workers finished","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","controller":"instance-external-server","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster"}
{"level":"info","ts":"2026-01-17T06:53:00.282715889Z","msg":"All workers finished","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","controller":"instance-tablespaces","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster"}
{"level":"info","ts":"2026-01-17T06:53:00.282724398Z","msg":"Exited log pipe","logger":"instance-manager","fileName":"/controller/log/postgres","logging_pod":"postgresql-grafana-1-4"}
{"level":"info","ts":"2026-01-17T06:53:00.282831918Z","logger":"roles_reconciler","msg":"Terminated RoleSynchronizer loop","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4"}
{"level":"info","ts":"2026-01-17T06:53:00.282859035Z","msg":"All workers finished","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","controller":"instance-subscription","controllerGroup":"postgresql.cnpg.io","controllerKind":"Subscription"}
{"level":"info","ts":"2026-01-17T06:53:00.28287131Z","msg":"Exited log pipe","logger":"instance-manager","fileName":"/controller/log/postgres.json","logging_pod":"postgresql-grafana-1-4"}
{"level":"info","ts":"2026-01-17T06:53:00.282886159Z","msg":"All workers finished","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","controller":"instance-cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster"}
{"level":"info","ts":"2026-01-17T06:53:00.282890695Z","msg":"All workers finished","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","controller":"instance-publication","controllerGroup":"postgresql.cnpg.io","controllerKind":"Publication"}
{"level":"info","ts":"2026-01-17T06:53:00.282895005Z","msg":"All workers finished","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","controller":"instance-database","controllerGroup":"postgresql.cnpg.io","controllerKind":"Database"}
{"level":"info","ts":"2026-01-17T06:53:00.282903164Z","msg":"Exited log pipe","logger":"instance-manager","fileName":"/controller/log/postgres.csv","logging_pod":"postgresql-grafana-1-4"}
{"level":"info","ts":"2026-01-17T06:53:00.283209927Z","msg":"Webserver exited","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","address":":8000"}
{"level":"info","ts":"2026-01-17T06:53:03.407946892Z","msg":"Webserver exited","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4","address":"localhost:8010"}
{"level":"info","ts":"2026-01-17T06:53:03.408034331Z","msg":"Stopping and waiting for caches","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4"}
{"level":"info","ts":"2026-01-17T06:53:03.408334849Z","msg":"Stopping and waiting for webhooks","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4"}
{"level":"info","ts":"2026-01-17T06:53:03.408427429Z","msg":"Stopping and waiting for HTTP servers","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4"}
{"level":"info","ts":"2026-01-17T06:53:03.408452397Z","msg":"Wait completed, proceeding to shutdown the manager","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4"}
{"level":"info","ts":"2026-01-17T06:53:03.408504093Z","msg":"Checking for free disk space for WALs after PostgreSQL finished","logger":"instance-manager","logging_pod":"postgresql-grafana-1-4"}Code of Conduct
- I agree to follow this project's Code of Conduct