Diffstat (limited to 'doc/book/cookbook')
-rw-r--r--  doc/book/cookbook/_index.md        |    6
-rw-r--r--  doc/book/cookbook/encryption.md    |  116
-rw-r--r--  doc/book/cookbook/monitoring.md    |    6
-rw-r--r--  doc/book/cookbook/real-world.md    |    8
-rw-r--r--  doc/book/cookbook/recovering.md    |  110
-rw-r--r--  doc/book/cookbook/reverse-proxy.md |   44
-rw-r--r--  doc/book/cookbook/systemd.md       |   15
-rw-r--r--  doc/book/cookbook/upgrading.md     |   85
8 files changed, 183 insertions, 207 deletions
diff --git a/doc/book/cookbook/_index.md b/doc/book/cookbook/_index.md
index 07bf6ebf..ff90ad52 100644
--- a/doc/book/cookbook/_index.md
+++ b/doc/book/cookbook/_index.md
@@ -1,7 +1,7 @@
 +++
 title="Cookbook"
 template = "documentation.html"
-weight = 2
+weight = 20
 sort_by = "weight"
 +++
 
@@ -37,7 +37,3 @@ This chapter could also be referred as "Tutorials" or "Best practices".
 
 - **[Monitoring Garage](@/documentation/cookbook/monitoring.md)** This page explains the Prometheus metrics
   available for monitoring the Garage cluster/nodes.
-
-- **[Recovering from failures](@/documentation/cookbook/recovering.md):** Garage's first selling point is resilience
-  to hardware failures. This section explains how to recover from such a failure in the
-  best possible way.
diff --git a/doc/book/cookbook/encryption.md b/doc/book/cookbook/encryption.md
new file mode 100644
index 00000000..21a5cbc6
--- /dev/null
+++ b/doc/book/cookbook/encryption.md
@@ -0,0 +1,116 @@
++++
+title = "Encryption"
+weight = 50
++++
+
+Encryption is a recurring subject when discussing Garage.
+Garage does not handle data encryption by itself, but many things can
+already be done with Garage's current feature set and the existing ecosystem.
+
+This page takes a high-level approach to security in general and data encryption
+in particular.
+
+
+# Examining your need for encryption
+
+- Why do you want encryption in Garage?
+
+- What is your threat model? What are you afraid of?
+  - A stolen HDD?
+  - A curious administrator?
+  - A malicious administrator?
+  - A remote attacker?
+  - etc.
+
+- What services do you want to protect with encryption?
+  - An existing application? Which one? (e.g. Nextcloud)
+  - An application that you are writing
+
+- Any expertise you may have on the subject
+
+This page explains what Garage provides, and how you can improve the situation yourself
+by adding encryption at different levels.
+
+We would be very curious to know your needs and thoughts about ideas such as
+encryption practices and things like key management, as we want Garage to be a
+serious base platform for the development of secure, encrypted applications.
+Do not hesitate to come talk to us if you have any thoughts or questions on the
+subject.
+
+
+# Capabilities provided by Garage
+
+## Traffic is encrypted between Garage nodes
+
+RPCs between Garage nodes are encrypted. More specifically, unlike many other
+distributed systems, it is impossible in Garage to have clear-text RPC. We
+use the [kuska handshake](https://github.com/Kuska-ssb/handshake) library, which
+implements a protocol that has been thoroughly reviewed, Secure ScuttleButt's
+Secret Handshake protocol. This is why setting an `rpc_secret` is mandatory,
+and that's also why your nodes have such long identifiers.
+
+## HTTP API endpoints provided by Garage are in clear text
+
+Adding TLS support built into Garage is not currently planned.
+
+## Garage stores data in plain text on the filesystem
+
+Garage does not handle data encryption at rest by itself, and instead delegates
+to the user to add encryption, either at the storage layer (LUKS, etc.) or on
+the client side (or both). There are no current plans to add data encryption
+directly in Garage.
+
+Implementing data encryption directly in Garage might make things simpler for
+end users, but also raises many more questions, especially around key
+management: for encryption of data, where could Garage get the encryption keys
+from? If we encrypt data but keep the keys in a plaintext file next to them,
+it's useless. We probably don't want to have to manage secrets in Garage, as it
+would be very hard to do in a secure way. Maybe integrate with an external
+system such as Hashicorp Vault?
+
+
+# Adding data encryption using external tools
+
+## Encrypting traffic between a Garage node and your client
+
+You have multiple options for encrypting traffic between your client and a node:
+
+ - Set up a reverse proxy with TLS / ACME / Let's Encrypt
+ - Set up a Garage gateway locally, and only contact the Garage daemon on `localhost`
+ - Only contact your Garage daemon over a secure, encrypted overlay network such as Wireguard
+
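As a rough sketch of the second option above: when a client runs on the same host as a Garage gateway (or a full node), it can simply target the local S3 endpoint, so no unencrypted traffic leaves the machine. The port, region, key and bucket names below are assumptions based on Garage's documented defaults; adapt them to your setup.

```bash
# Talk to the Garage daemon on this machine; traffic never leaves localhost.
# 3900 is assumed to be the S3 API port and "garage" the s3_region from garage.toml.
export AWS_ACCESS_KEY_ID='GK...'          # key ID created with the garage CLI (placeholder)
export AWS_SECRET_ACCESS_KEY='...'        # matching secret key (placeholder)
export AWS_DEFAULT_REGION='garage'

aws --endpoint-url http://127.0.0.1:3900 s3 ls s3://my-bucket
```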
+## Encrypting data at rest
+
+Protects against the following threats:
+
+- Stolen HDD
+
+Crucially, does not protect against malicious sysadmins or remote attackers that
+might gain access to your servers.
+
+Methods include full-disk encryption with tools such as LUKS.
+
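For instance, a dedicated data drive can be encrypted with LUKS before Garage's data directory is placed on it. This is a minimal sketch only: the device name `/dev/sdb` and the mount point `/var/lib/garage/data` are assumptions, and persistence (crypttab/fstab entries, key handling at boot) is left out.

```bash
# Encrypt the data drive with LUKS (this destroys any existing content on /dev/sdb!)
cryptsetup luksFormat /dev/sdb

# Open it under a mapper name, create a filesystem, and mount it where
# garage.toml's data_dir points (path assumed here)
cryptsetup open /dev/sdb garage-data
mkfs.ext4 /dev/mapper/garage-data
mount /dev/mapper/garage-data /var/lib/garage/data
```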
+## Encrypting data on the client side
+
+Protects against the following threats:
+
+- An honest-but-curious administrator
+- A malicious administrator that tries to corrupt your data
+- A remote attacker that can read your server's data
+
+Implementations are very specific to the various applications. Examples:
+
+- Matrix: uses the OLM protocol for E2EE of user messages. Media files stored
+  in Matrix are probably encrypted using symmetric encryption, with a key that is
+  distributed in the end-to-end encrypted message that contains the link to the object.
+
+- XMPP: clients normally support either OMEMO or OpenPGP for the E2EE of user
+  messages. Media files are encrypted per
+  [XEP-0454](https://xmpp.org/extensions/xep-0454.html).
+
+- Aerogramme: uses the user's password as a key to decrypt data in the user's bucket
+
+- Cyberduck: comes with support for
+  [Cryptomator](https://docs.cyberduck.io/cryptomator/), which allows users to
+  create client-side vaults to encrypt files before they are uploaded to a
+  cloud storage endpoint.
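For ad-hoc use, any generic client-side encryption tool can also be layered on top of the S3 API. The sketch below encrypts a file with GnuPG before uploading only the ciphertext; the endpoint URL and bucket name are placeholders, and AWS CLI credentials/region for your Garage cluster are assumed to be configured already.

```bash
# Encrypt locally with a symmetric passphrase, then upload only the ciphertext
gpg --symmetric --cipher-algo AES256 --output backup.tar.gpg backup.tar
aws --endpoint-url https://s3.garage.example.com s3 cp backup.tar.gpg s3://my-bucket/

# To restore: download the ciphertext and decrypt it locally
aws --endpoint-url https://s3.garage.example.com s3 cp s3://my-bucket/backup.tar.gpg .
gpg --decrypt --output backup.tar backup.tar.gpg
```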
diff --git a/doc/book/cookbook/monitoring.md b/doc/book/cookbook/monitoring.md
index 8313daa9..b204dbbe 100644
--- a/doc/book/cookbook/monitoring.md
+++ b/doc/book/cookbook/monitoring.md
@@ -49,9 +49,5 @@ add the following lines in your Prometheus scrape config:
 To visualize the scraped data in Grafana, you can either import our
 [Grafana dashboard for Garage](https://git.deuxfleurs.fr/Deuxfleurs/garage/raw/branch/main/script/telemetry/grafana-garage-dashboard-prometheus.json)
 or make your own.
 
-We detail below the list of exposed metrics and their meaning.
-
-## List of exported metrics
-
-See our [dedicated page](@/documentation/reference-manual/monitoring.md) in the Reference manual section.
+The list of exported metrics is available on our [dedicated page](@/documentation/reference-manual/monitoring.md) in the Reference manual section.
diff --git a/doc/book/cookbook/real-world.md b/doc/book/cookbook/real-world.md
index 08266b23..7061069f 100644
--- a/doc/book/cookbook/real-world.md
+++ b/doc/book/cookbook/real-world.md
@@ -197,6 +197,12 @@ The `garage` binary has two purposes:
 Ensure an appropriate `garage` binary (the same version as your Docker image) is available in your path.
 If your configuration file is at `/etc/garage.toml`, the `garage` binary should work with no further change.
 
+You can also use an alias as follows to use the Garage binary inside your Docker container:
+
+```bash
+alias garage="docker exec -ti <container name> /garage"
+```
+
 You can test your `garage` CLI utility by running a simple command such as:
 
 ```bash
@@ -339,7 +345,7 @@ garage layout apply
 ```
 
 **WARNING:** if you want to use the layout modification commands in a script,
-make sure to read [this page](@/documentation/reference-manual/layout.md) first.
+make sure to read [this page](@/documentation/operations/layout.md) first.
 
 ## Using your Garage cluster
diff --git a/doc/book/cookbook/recovering.md b/doc/book/cookbook/recovering.md
deleted file mode 100644
index 2129a7f3..00000000
--- a/doc/book/cookbook/recovering.md
+++ /dev/null
@@ -1,110 +0,0 @@
-+++
-title = "Recovering from failures"
-weight = 50
-+++
-
-Garage is meant to work on old, second-hand hardware.
-In particular, this makes it likely that some of your drives will fail, and some manual intervention will be needed.
-Fear not! For Garage is fully equipped to handle drive failures, in most common cases.
-
-## A note on availability of Garage
-
-With nodes dispersed in 3 zones or more, here are the guarantees Garage provides with the 3-way replication strategy (3 copies of all data, which is the recommended replication mode):
-
-- The cluster remains fully functional as long as the machines that fail are in only one zone. This includes a whole zone going down due to power/Internet outage.
-- No data is lost as long as the machines that fail are in at most two zones.
-
-Of course this only works if your Garage nodes are correctly configured to be aware of the zone in which they are located.
-Make sure this is the case using `garage status` to check on the state of your cluster's configuration.
-
-In case of temporarily disconnected nodes, Garage should automatically re-synchronize
-when the nodes come back up. This guide will deal with recovering from disk failures
-that caused the loss of the data of a node.
-
-
-## First option: removing a node
-
-If you don't have spare parts (HDD, SDD) to replace the failed component, and if there are enough remaining nodes in your cluster
-(at least 3), you can simply remove the failed node from Garage's configuration.
-Note that if you **do** intend to replace the failed parts by new ones, using this method followed by adding back the node is **not recommended** (although it should work),
-and you should instead use one of the methods detailed in the next sections.
-
-Removing a node is done with the following command:
-
-```bash
-garage layout remove <node_id>
-garage layout show # review the changes you are making
-garage layout apply # once satisfied, apply the changes
-```
-
-(you can get the `node_id` of the failed node by running `garage status`)
-
-This will repartition the data and ensure that 3 copies of everything are present on the nodes that remain available.
-
-
-
-## Replacement scenario 1: only data is lost, metadata is fine
-
-The recommended deployment for Garage uses an SSD to store metadata, and an HDD to store blocks of data.
-In the case where only a single HDD crashes, the blocks of data are lost but the metadata is still fine.
-
-This is very easy to recover by setting up a new HDD to replace the failed one.
-The node does not need to be fully replaced and the configuration doesn't need to change.
-We just need to tell Garage to get back all the data blocks and store them on the new HDD.
-
-First, set up a new HDD to store Garage's data directory on the failed node, and restart Garage using
-the existing configuration. Then, run:
-
-```bash
-garage repair -a --yes blocks
-```
-
-This will re-synchronize blocks of data that are missing to the new HDD, reading them from copies located on other nodes.
-
-You can check on the advancement of this process by doing the following command:
-
-```bash
-garage stats -a
-```
-
-Look out for the following output:
-
-```
-Block manager stats:
-  resync queue length: 26541
-```
-
-This indicates that one of the Garage node is in the process of retrieving missing data from other nodes.
-This number decreases to zero when the node is fully synchronized.
-
-
-## Replacement scenario 2: metadata (and possibly data) is lost
-
-This scenario covers the case where a full node fails, i.e. both the metadata directory and
-the data directory are lost, as well as the case where only the metadata directory is lost.
-
-To replace the lost node, we will start from an empty metadata directory, which means
-Garage will generate a new node ID for the replacement node.
-We will thus need to remove the previous node ID from Garage's configuration and replace it by the ID of the new node.
-
-If your data directory is stored on a separate drive and is still fine, you can keep it, but it is not necessary to do so.
-In all cases, the data will be rebalanced and the replacement node will not store the same pieces of data
-as were originally stored on the one that failed. So if you keep the data files, the rebalancing
-might be faster but most of the pieces will be deleted anyway from the disk and replaced by other ones.
-
-First, set up a new drive to store the metadata directory for the replacement node (a SSD is recommended),
-and for the data directory if necessary. You can then start Garage on the new node.
-The restarted node should generate a new node ID, and it should be shown with `NO ROLE ASSIGNED` in `garage status`.
-The ID of the lost node should be shown in `garage status` in the section for disconnected/unavailable nodes.
-
-Then, replace the broken node by the new one, using:
-
-```bash
-garage layout assign <new_node_id> --replace <old_node_id> \
-    -c <capacity> -z <zone> -t <node_tag>
-garage layout show # review the changes you are making
-garage layout apply # once satisfied, apply the changes
-```
-
-Garage will then start synchronizing all required data on the new node.
-This process can be monitored using the `garage stats -a` command.
diff --git a/doc/book/cookbook/reverse-proxy.md b/doc/book/cookbook/reverse-proxy.md
index 9c833ad0..b715193e 100644
--- a/doc/book/cookbook/reverse-proxy.md
+++ b/doc/book/cookbook/reverse-proxy.md
@@ -378,6 +378,47 @@ admin.garage.tld {
 But at the same time, the `reverse_proxy` is very flexible.
 For a production deployment, you should [read its documentation](https://caddyserver.com/docs/caddyfile/directives/reverse_proxy)
 as it supports features like DNS discovery of upstreams, load balancing with checks, streaming parameters, etc.
 
+### Caching
+
+Caddy can be compiled with a
+[cache plugin](https://github.com/caddyserver/cache-handler) which can be used
+to provide a hot cache at the webserver level for static websites hosted by
+Garage.
+
+This can be configured as follows:
+
+```caddy
+# Caddy global configuration section
+{
+    # Bare minimum configuration to enable cache.
+    order cache before rewrite
+
+    cache
+
+    #cache {
+    #    allowed_http_verbs GET
+    #    default_cache_control public
+    #    ttl 8h
+    #}
+}
+
+# Site specific section
+https:// {
+    cache
+
+    #cache {
+    #    timeout {
+    #        backend 30s
+    #    }
+    #}
+
+    reverse_proxy ...
+}
+```
+
+Caching is a complicated subject, and the reader is encouraged to study the
+available options provided by the plugin.
+
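To use that plugin, Caddy has to be rebuilt with the module included. A minimal sketch with `xcaddy`, taking the module path from the plugin repository linked above (pin versions as appropriate for your deployment):

```bash
# Build a Caddy binary that includes the cache-handler module
go install github.com/caddyserver/xcaddy/cmd/xcaddy@latest
xcaddy build --with github.com/caddyserver/cache-handler
```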
 ### On-demand TLS
 
 Caddy supports a technique called
@@ -428,3 +469,6 @@ https:// {
 	reverse_proxy localhost:3902 192.168.1.2:3902 example.tld:3902
 }
 ```
+
+More information on how this endpoint is implemented in Garage is available
+in the [Admin API Reference](@/documentation/reference-manual/admin-api.md) page.
diff --git a/doc/book/cookbook/systemd.md b/doc/book/cookbook/systemd.md
index b271010b..c0ed7d1f 100644
--- a/doc/book/cookbook/systemd.md
+++ b/doc/book/cookbook/systemd.md
@@ -33,7 +33,20 @@ NoNewPrivileges=true
 
 WantedBy=multi-user.target
 ```
 
-*A note on hardening: garage will be run as a non privileged user, its user id is dynamically allocated by systemd. It cannot access (read or write) home folders (/home, /root and /run/user), the rest of the filesystem can only be read but not written, only the path seen as /var/lib/garage is writable as seen by the service (mapped to /var/lib/private/garage on your host). Additionnaly, the process can not gain new privileges over time.*
+**A note on hardening:** Garage will be run as a non-privileged user whose user
+id is dynamically allocated by systemd (set with `DynamicUser=true`). It cannot
+access (read or write) home folders (`/home`, `/root` and `/run/user`); the
+rest of the filesystem can only be read but not written; only the path seen as
+`/var/lib/garage` is writable as seen by the service. Additionally, the process
+cannot gain new privileges over time.
+
+For this to work correctly, your `garage.toml` must be set with
+`metadata_dir=/var/lib/garage/meta` and `data_dir=/var/lib/garage/data`. This
+is mandatory to use the `DynamicUser` hardening feature of systemd, which
+automatically creates these directories as a virtual mapping. If the directory
+`/var/lib/garage` already exists before starting the server for the first time,
+the systemd service might not start correctly. Note that in your host
+filesystem, Garage data will be held in `/var/lib/private/garage`.
 
 To start the service then automatically enable it at boot:
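Once the service is running, the effect of the sandboxing described in that note can be checked from the host. This is only a sketch; it assumes the unit is named `garage.service`:

```bash
# The unit should report the hardening options discussed above
systemctl show garage.service -p DynamicUser -p NoNewPrivileges

# The path the service sees as /var/lib/garage lives here on the host
ls -ld /var/lib/private/garage

# Optional: systemd's own summary of the unit's sandboxing
systemd-analyze security garage.service
```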
diff --git a/doc/book/cookbook/upgrading.md b/doc/book/cookbook/upgrading.md
deleted file mode 100644
index 9d60a988..00000000
--- a/doc/book/cookbook/upgrading.md
+++ /dev/null
@@ -1,85 +0,0 @@
-+++
-title = "Upgrading Garage"
-weight = 60
-+++
-
-Garage is a stateful clustered application, where all nodes are communicating together and share data structures.
-It makes upgrade more difficult than stateless applications so you must be more careful when upgrading.
-On a new version release, there is 2 possibilities:
-  - protocols and data structures remained the same ➡️ this is a **minor upgrade**
-  - protocols or data structures changed ➡️ this is a **major upgrade**
-
-You can quickly now what type of update you will have to operate by looking at the version identifier:
-when we require our users to do a major upgrade, we will always bump the first nonzero component of the version identifier
-(e.g. from v0.7.2 to v0.8.0).
-Conversely, for versions that only require a minor upgrade, the first nonzero component will always stay the same (e.g. from v0.8.0 to v0.8.1).
-
-Major upgrades are designed to be run only between contiguous versions.
-Example: migrations from v0.7.1 to v0.8.0 and from v0.7.0 to v0.8.2 are supported but migrations from v0.6.0 to v0.8.0 are not supported.
-
-The `garage_build_info`
-[Prometheus metric](@/documentation/reference-manual/monitoring.md) provides
-an overview for which Garage versions are currently in use within a cluster.
-
-## Minor upgrades
-
-Minor upgrades do not imply cluster downtime.
-Before upgrading, you should still read [the changelog](https://git.deuxfleurs.fr/Deuxfleurs/garage/releases) and ideally test your deployment on a staging cluster before.
-
-When you are ready, start by checking the health of your cluster.
-You can force some checks with `garage repair`, we recommend at least running `garage repair --all-nodes --yes tables` which is very quick to run (less than a minute).
-You will see that the command correctly terminated in the logs of your daemon, or using `garage worker list` (the repair workers should be in the `Done` state).
-
-Finally, you can simply upgrade nodes one by one.
-For each node: stop it, install the new binary, edit the configuration if needed, restart it.
-
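For a systemd-managed node, the per-node loop of the minor upgrade procedure described above might look roughly like this; the unit name, binary path and version are assumptions to adapt to your own setup:

```bash
# Pre-flight check, run once for the whole cluster (see above)
garage repair --all-nodes --yes tables

# Then, on each node, one node at a time:
systemctl stop garage
install -m 755 ./garage-v0.8.1 /usr/local/bin/garage   # hypothetical path to the new binary
# edit /etc/garage.toml here if the release notes ask for it
systemctl start garage
garage status   # check that the node shows up as healthy again before moving on
```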
-## Major upgrades
-
-Major upgrades can be done with minimal downtime with a bit of preparation, but the simplest way is usually to put the cluster offline for the duration of the migration.
-Before upgrading, you must read [the changelog](https://git.deuxfleurs.fr/Deuxfleurs/garage/releases) and you must test your deployment on a staging cluster before.
-
-We write guides for each major upgrade, they are stored under the "Working Documents" section of this documentation.
-
-### Major upgrades with full downtime
-
-From a high level perspective, a major upgrade looks like this:
-
- 1. Disable API access (for instance in your reverse proxy, or by commenting the corresponding section in your Garage configuration file and restarting Garage)
- 2. Check that your cluster is idle
- 3. Make sure the health of your cluster is good (see `garage repair`)
- 4. Stop the whole cluster
- 5. Back up the metadata folder of all your nodes, so that you will be able to restore it if the upgrade fails (data blocks being immutable, they should not be impacted)
- 6. Install the new binary, update the configuration
- 7. Start the whole cluster
- 8. If needed, run the corresponding migration from `garage migrate`
- 9. Make sure the health of your cluster is good
- 10. Enable API access (reverse step 1)
- 11. Monitor your cluster while load comes back, check that all your applications are happy with this new version
-
-### Major upgarades with minimal downtime
-
-There is only one operation that has to be coordinated cluster-wide: the passage of one version of the internal RPC protocol to the next.
-This means that an upgrade with very limited downtime can simply be performed from one major version to the next by restarting all nodes
-simultaneously in the new version.
-The downtime will simply be the time required for all nodes to stop and start again, which should be less than a minute.
-If all nodes fail to stop and restart simultaneously, some nodes might be temporarily shut out from the cluster as nodes using different RPC protocol
-versions are prevented to talk to one another.
-
-The entire procedure would look something like this:
-
-1. Make sure the health of your cluster is good (see `garage repair`)
-
-2. Take each node offline individually to back up its metadata folder, bring them back online once the backup is done.
-   You can do all of the nodes in a single zone at once as that won't impact global cluster availability.
-   Do not try to make a backup of the metadata folder of a running node.
-
-3. Prepare your binaries and configuration files for the new Garage version
-
-4. Restart all nodes simultaneously in the new version
-
-5. If any specific migration procedure is required, it is usually in one of the two cases:
-
-   - It can be run on online nodes after the new version has started, during regular cluster operation.
-   - it has to be run offline
-
-   For this last step, please refer to the specific documentation pertaining to the version upgrade you are doing.