path: root/doc/book/cookbook/recovering.md
author     Alex <alex@adnab.me>    2023-06-14 10:57:32 +0000
committer  Alex <alex@adnab.me>    2023-06-14 10:57:32 +0000
commit     5e291c64b3539c11cd4f107852686c7865b8b036 (patch)
tree       62e8bf6e275ce045e29baf7a660706899e54ed8e /doc/book/cookbook/recovering.md
parent     01346143ca09eab262f0d8f8a0a744c2f6d667cc (diff)
parent     9092c71a01311f8f7174fa03facdb4d95a7b1389 (diff)
Merge pull request 'Documentation updates' (#587) from doc-updates into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/587
Diffstat (limited to 'doc/book/cookbook/recovering.md')
-rw-r--r--  doc/book/cookbook/recovering.md  110
1 file changed, 0 insertions, 110 deletions
diff --git a/doc/book/cookbook/recovering.md b/doc/book/cookbook/recovering.md
deleted file mode 100644
index 2129a7f3..00000000
--- a/doc/book/cookbook/recovering.md
+++ /dev/null
@@ -1,110 +0,0 @@
-+++
-title = "Recovering from failures"
-weight = 50
-+++
-
-Garage is meant to work on old, second-hand hardware.
-In particular, this makes it likely that some of your drives will fail, and some manual intervention will be needed.
-Fear not! Garage is fully equipped to handle drive failures in most common cases.
-
-## A note on availability of Garage
-
-With nodes spread across 3 or more zones, here are the guarantees Garage provides with the 3-way replication strategy (3 copies of all data, which is the recommended replication mode):
-
-- The cluster remains fully functional as long as the machines that fail are in only one zone. This includes a whole zone going down due to power/Internet outage.
-- No data is lost as long as the machines that fail are in at most two zones.
-
-Of course, this only works if your Garage nodes are correctly configured to be aware of the zone in which they are located.
-Use `garage status` to make sure this is the case.
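-
-For instance, the following commands list every node together with its assigned zone (the exact column layout may vary between Garage versions):
-
-```bash
-# One line per connected node: ID, hostname, address, tags, zone, capacity.
-# Verify that the zone column is correct for every node.
-garage status
-
-# Show the currently applied cluster layout, including zones and capacities.
-garage layout show
-```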
-
-In case of temporarily disconnected nodes, Garage should automatically re-synchronize
-when the nodes come back up. This guide deals with recovering from disk failures
-that cause the loss of a node's data.
-
-
-## First option: removing a node
-
-If you don't have spare parts (HDD, SSD) to replace the failed component, and if there are enough remaining nodes in your cluster
-(at least 3), you can simply remove the failed node from Garage's configuration.
-Note that if you **do** intend to replace the failed parts with new ones, using this method followed by adding the node back is **not recommended** (although it should work),
-and you should instead use one of the methods detailed in the next sections.
-
-Removing a node is done with the following command:
-
-```bash
-garage layout remove <node_id>
-garage layout show # review the changes you are making
-garage layout apply # once satisfied, apply the changes
-```
-
-(You can get the `node_id` of the failed node by running `garage status`.)
-
-This will repartition the data and ensure that 3 copies of everything are present on the nodes that remain available.
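-
-As an illustration, with a hypothetical failed node whose ID starts with `8781c5` (an unambiguous prefix of the full node ID is enough):
-
-```bash
-# 8781c5 is a made-up prefix; substitute the ID shown by `garage status`
-garage layout remove 8781c5
-garage layout show    # the node should no longer hold a role in the new layout
-garage layout apply   # commit the new layout to the cluster
-```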
-
-
-
-## Replacement scenario 1: only data is lost, metadata is fine
-
-The recommended deployment for Garage uses an SSD to store metadata, and an HDD to store blocks of data.
-In the case where only a single HDD crashes, the blocks of data are lost but the metadata is still fine.
-
-This is easy to recover from: set up a new HDD to replace the failed one.
-The node does not need to be fully replaced and the configuration doesn't need to change.
-We just need to tell Garage to get back all the data blocks and store them on the new HDD.
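-
-First, set up a new HDD to store Garage's data directory on the failed node. A minimal sketch of this step, assuming a hypothetical device name `/dev/sdX1`, a data directory at `/var/lib/garage/data` (check the `data_dir` entry of your `garage.toml` for the actual path), and a systemd service named `garage`:
-
-```bash
-# All names below are assumptions; adapt them to your system.
-mkfs.ext4 /dev/sdX1                    # format the replacement HDD
-mount /dev/sdX1 /var/lib/garage/data   # mount it where the old data directory lived
-systemctl restart garage               # restart Garage with the unchanged configuration
-```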
-
-Once the new drive is in place and Garage has been restarted with the existing
-configuration, run:
-
-```bash
-garage repair -a --yes blocks
-```
-
-This will re-synchronize the missing blocks of data to the new HDD, reading them from copies located on other nodes.
-
-You can check on the progress of this process by running the following command:
-
-```bash
-garage stats -a
-```
-
-Look out for the following output:
-
-```
-Block manager stats:
- resync queue length: 26541
-```
-
-This indicates that one of the Garage nodes is in the process of retrieving missing data from other nodes.
-This number decreases to zero when the node is fully synchronized.
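-
-If you want to follow the resync without re-running the command by hand, a simple loop does the job (the `grep` pattern matches the output shown above and may need adjusting for your Garage version):
-
-```bash
-# Re-run `garage stats -a` every 60 seconds and extract the resync queue length
-watch -n 60 'garage stats -a | grep "resync queue length"'
-```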
-
-
-## Replacement scenario 2: metadata (and possibly data) is lost
-
-This scenario covers the case where a full node fails, i.e. both the metadata directory and
-the data directory are lost, as well as the case where only the metadata directory is lost.
-
-To replace the lost node, we will start from an empty metadata directory, which means
-Garage will generate a new node ID for the replacement node.
-We will thus need to remove the previous node ID from Garage's configuration and replace it with the ID of the new node.
-
-If your data directory is stored on a separate drive and is still intact, you can keep it, but this is not necessary.
-In all cases, the data will be rebalanced and the replacement node will not store the same pieces of data
-as were originally stored on the one that failed. So if you keep the data files, the rebalancing
-might be faster, but most of the pieces will be deleted from the disk anyway and replaced by other ones.
-
-First, set up a new drive to store the metadata directory for the replacement node (an SSD is recommended),
-and for the data directory if necessary. You can then start Garage on the new node.
-The restarted node should generate a new node ID, and it should be shown with `NO ROLE ASSIGNED` in `garage status`.
-The ID of the lost node should be shown in `garage status` in the section for disconnected/unavailable nodes.
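-
-A hypothetical way to collect the two IDs involved in the replacement:
-
-```bash
-# On the replacement node: print the newly generated node ID
-garage node id
-
-# On any connected node: the new node should appear with NO ROLE ASSIGNED,
-# and the lost node should be listed among the disconnected nodes
-garage status
-```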
-
-Then, replace the broken node with the new one, using:
-
-```bash
-garage layout assign <new_node_id> --replace <old_node_id> \
- -c <capacity> -z <zone> -t <node_tag>
-garage layout show # review the changes you are making
-garage layout apply # once satisfied, apply the changes
-```
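-
-For example, with made-up IDs and parameters (note that the capacity syntax depends on your Garage version: older releases take a relative weight such as `10`, recent ones an absolute size such as `1T`):
-
-```bash
-# All IDs and values below are hypothetical
-garage layout assign 563e1a --replace 8781c5 \
-    -c 10 -z dc1 -t node4
-garage layout show    # review the changes you are making
-garage layout apply   # once satisfied, apply the changes
-```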
-
-Garage will then start synchronizing all required data on the new node.
-This process can be monitored using the `garage stats -a` command.