Postgres image with the PostGIS and pg_auto_failover extensions that allows running a high-availability Postgres cluster with query load balancing.

# How to use this image

The expected configuration is at least two `worker` nodes with one `monitor` node. The container is expected to have one or more peer configuration containers that implement query load balancing (activated using `AUTOCONFIG_LOCAL_PEER` and/or `AUTOCONFIG_REMOTE_PEERS`). One such implementation is [docker-pgpool](/SW/docker-pgpool), which implements read-only/read-write splitting and load balancing using `pgpool`. Another is [docker-pgtraefik](/SW/docker-pgtraefik), which load balances all traffic to the primary node. The configuration peers act as servers and this image connects to them. They can run either locally, communicating over a socket, or remotely over HTTP.

# Environment variables

## General

- `AUTOCONFIG_LOG_LEVEL` Log level of the application that manages the execution (one of `trace`, `debug`, `info`, `warn`, `error`, `fatal`). Default `info`. Optional.
- `AUTOCONFIG_MODE` Determines the role of the container. Either
  - empty/not set: the container behaves like the plain Postgres image
  - `monitor`: this container becomes a monitor (only one container should have this set)
  - `postgres`: this container becomes a worker (usually at least two containers have this)

  No default. Required.
- `AUTOCONFIG_FORMATION` Name of the pg_auto_failover formation. Default `default`. Optional.
- `AUTOCONFIG_MONITOR_HOST` Hostname of the monitor node. Default none. Required.
- `AUTOCONFIG_MONITOR_PORT` Postgres port on the monitor node. Default `PGPORT`, or `5432` if that is not set. Optional.
- `AUTOCONFIG_LOCAL_PEER` True if the configuration peer container runs locally and is reachable over a socket. Default `false`. Optional.
- `AUTOCONFIG_SOCKET` Path to the socket of the locally running peer configuration container. Default `/var/run/pg_autoconfig.sock`. Optional.
- `AUTOCONFIG_REMOTE_PEERS` Comma-separated hostnames of remote configuration peers. Each hostname has the format `hostname[:port]` (default port 5420). Default none. Optional.

## Security

- `AUTOCONFIG_MONITOR_PASSWORD` or `AUTOCONFIG_MONITOR_PASSWORD_FILE` Password (resp. path to a file with the password) that is set on the monitor node for the user `autoctl_node`; worker nodes use it to report their status in pg_auto_failover. Default none. Required.
- `AUTOCONFIG_REPLICATION_PASSWORD` or `AUTOCONFIG_REPLICATION_PASSWORD_FILE` Password (resp. path to a file with the password) that is set on the worker nodes for the user `pgautofailover_replicator`, which allows other nodes to fetch state from the current primary node.
- `POSTGRES_USERNAME` Postgres admin user name.
- `POSTGRES_PASSWORD` or `POSTGRES_PASSWORD_FILE` Password (resp. path to a file with the password) that is set for the Postgres admin user.
- `AUTOCONFIG_LINK_HBA_CONF` Path to a `pg_hba.conf` file that is linked to after the Postgres storage gets initialized (Postgres refuses to initialize a non-empty directory, so `pg_hba.conf` cannot simply be bind-mounted from the Docker host). Default none. Optional.

Using `AUTOCONFIG_LINK_HBA_CONF`, this image can optionally deploy the Postgres file `pg_hba.conf`, but it does not modify it. If you do not provide one, `pg_auto_failover` modifies the one you already have. However, the following rules are required for `pg_auto_failover` to work.

On the monitor:

- `local all all`
- `host pg_auto_failover autoctl_node scram-sha-256` (for each worker)

On the workers:

- `local all all`
- `host all pgautofailover_monitor trust`
- `host replication pgautofailover_replicator scram-sha-256` (for each other worker)
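For illustration, here is a minimal sketch of `pg_hba.conf` entries that satisfy the rules above. The node addresses (`10.0.0.*`) and the `trust` method on the `local` lines are placeholder assumptions, not values prescribed by this image:

```
# Sketch of pg_hba.conf entries for the MONITOR node
local  all               all                                       trust
host   pg_auto_failover  autoctl_node      10.0.0.11/32            scram-sha-256   # worker 1
host   pg_auto_failover  autoctl_node      10.0.0.12/32            scram-sha-256   # worker 2

# Sketch of pg_hba.conf entries for a WORKER node (here worker 1)
local  all               all                                       trust
host   all               pgautofailover_monitor     10.0.0.10/32   trust
host   replication       pgautofailover_replicator  10.0.0.12/32   scram-sha-256   # other worker
```

Adjust the addresses to your Docker network and add one `host` line per peer node.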
## pg_auto_failover

- `XDG_CONFIG_HOME` Persistent storage for the pg_auto_failover configuration.
- `XDG_DATA_HOME` Persistent state of pg_auto_failover.

See others in the pg_auto_failover documentation.

## Postgres

- `PGDATA` Path to the Postgres persistent storage.
- `PGPORT` Port for Postgres to listen on. Default `5432`. Optional.
- `POSTGRES_USERNAME` (see above)
- `POSTGRES_PASSWORD` or `POSTGRES_PASSWORD_FILE` (see above)

See others in the Postgres Docker image documentation.

# Runtime status

You can check the state of the cluster by executing `pg_autoctl show state --formation <formation-name>` in the container.
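For illustration, the following is a minimal sketch of a monitor-plus-two-workers deployment ending with such a status check. The image name `my-registry/docker-pg-ha`, the container, network, volume, and peer names, the passwords, and the data path are hypothetical placeholders, not values defined by this image:

```sh
# Hypothetical sketch: one monitor and two workers on a user-defined Docker
# network; all names, passwords, and paths below are placeholders.
docker network create pgnet

# Monitor node (exactly one container runs with AUTOCONFIG_MODE=monitor).
docker run -d --name pg-monitor --network pgnet \
  -e AUTOCONFIG_MODE=monitor \
  -e AUTOCONFIG_MONITOR_PASSWORD=monitor-secret \
  -e POSTGRES_PASSWORD=admin-secret \
  -v pg-monitor-data:/var/lib/postgresql/data \
  my-registry/docker-pg-ha

# Worker nodes (at least two containers run with AUTOCONFIG_MODE=postgres).
# The data path assumes the Postgres image default; override via PGDATA if needed.
for node in pg-node-1 pg-node-2; do
  docker run -d --name "$node" --network pgnet \
    -e AUTOCONFIG_MODE=postgres \
    -e AUTOCONFIG_MONITOR_HOST=pg-monitor \
    -e AUTOCONFIG_MONITOR_PASSWORD=monitor-secret \
    -e AUTOCONFIG_REPLICATION_PASSWORD=replication-secret \
    -e POSTGRES_PASSWORD=admin-secret \
    -e AUTOCONFIG_REMOTE_PEERS=pgpool-1,pgpool-2 \
    -v "${node}-data:/var/lib/postgresql/data" \
    my-registry/docker-pg-ha
done

# Once the nodes have registered with the monitor, inspect the cluster state:
docker exec -it pg-node-1 pg_autoctl show state --formation default
```

In a real deployment you would typically pass the passwords through the `*_FILE` variants (for example with Docker secrets) instead of plain environment variables, and also persist `XDG_CONFIG_HOME` and `XDG_DATA_HOME` so the pg_auto_failover state survives container recreation.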