## Disclaimer
Do **NOT** use the following backup methods on the Stolon Cluster:
1. copying the data directory
2. `pg_dump`
3. `pg_dumpall`
The first one will produce corrupted or inconsistent files.
The second and third put far too much pressure on the cluster.
In practice, you will take the cluster down in the following ways:
- Load will increase and requests will time out
- Memory usage will increase and the daemon will be OOM (Out Of Memory) killed by Linux
- The WAL may grow considerably
## A binary backup with `pg_basebackup`
The only acceptable solution is `pg_basebackup` with **some throttling configured**.
Later, if you want a SQL dump, you can restore this binary backup into an ephemeral database spawned solely for this purpose on a non-production machine.
First, fetch the credentials of the replication account from Consul.
Do not use the root account configured in Stolon; it will not work.
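For reference, a minimal sketch of reading them with the `consul` CLI; the key prefix below is an assumption, look up the path actually used by the cluster:
```bash
# Hypothetical key prefix: adjust to wherever the cluster stores the
# replication account's username and password.
consul kv get -recurse secrets/postgres/
```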
Then set up an SSH tunnel on your machine that binds to PostgreSQL, e.g.:
```bash
ssh -L 5432:psql-proxy.service.2.cluster.deuxfleurs.fr:5432 ...
```
Next, export the replication password in `PGPASSWORD` and launch the backup:
```bash
export PGPASSWORD=xxx
pg_basebackup \
  --host=127.0.0.1 \
  --username=replicator \
  --pgdata=/tmp/sql \
  --format=tar \
  --wal-method=none \
  --gzip \
  --compress=6 \
  --progress \
  --max-rate=2M
```
*Grab a cup of coffee, this will take some time...*
## Importing the backup
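A minimal sketch of restoring the tarball into a throwaway PostgreSQL instance on a non-production machine; every path and the port below are assumptions. Note that the backup above was taken with `--wal-method=none`, so the WAL written while the backup ran must be supplied (for instance from a WAL archive) before the instance can reach a consistent state.
```bash
# Assumed paths and port: adapt them to your environment.
mkdir -p /tmp/restore/data
tar -xzf /tmp/sql/base.tar.gz -C /tmp/restore/data
chmod 700 /tmp/restore/data

# The backup was taken with --wal-method=none: copy the WAL segments
# covering the backup window from your WAL archive (hypothetical path).
cp /path/to/wal-archive/* /tmp/restore/data/pg_wal/

# The restored postgresql.conf comes from production; you may need to
# adjust settings or paths that only exist on the cluster before starting.
pg_ctl -D /tmp/restore/data -o "-p 5433" -l /tmp/restore/postgres.log start
```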
## Dump SQL
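Once the ephemeral instance is up, the SQL dump can be taken from it instead of from the production cluster. The port, superuser name and database name below are assumptions; the roles and passwords are the ones restored from production.
```bash
export PGPASSWORD=xxx
# Dump every database as plain SQL...
pg_dumpall --host=127.0.0.1 --port=5433 --username=postgres > /tmp/dump.sql
# ...or dump a single database in the custom format understood by pg_restore.
pg_dump --host=127.0.0.1 --port=5433 --username=postgres \
        --format=custom --file=/tmp/mydb.dump mydb
```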