Diffstat (limited to 'doc/book/cookbook')
-rw-r--r--  doc/book/cookbook/_index.md             31
-rw-r--r--  doc/book/cookbook/exposing-websites.md   69
-rw-r--r--  doc/book/cookbook/from-source.md         54
-rw-r--r--  doc/book/cookbook/gateways.md            39
-rw-r--r--  doc/book/cookbook/real-world.md         295
-rw-r--r--  doc/book/cookbook/recovering.md         110
-rw-r--r--  doc/book/cookbook/reverse-proxy.md      168
-rw-r--r--  doc/book/cookbook/systemd.md             53
8 files changed, 819 insertions, 0 deletions
diff --git a/doc/book/cookbook/_index.md b/doc/book/cookbook/_index.md
new file mode 100644
index 00000000..6e279363
--- /dev/null
+++ b/doc/book/cookbook/_index.md
@@ -0,0 +1,31 @@
++++
+title="Cookbook"
+template = "documentation.html"
+weight = 2
+sort_by = "weight"
++++
+
+A cookbook, when you cook, is a collection of recipes.
+Similarly, Garage's cookbook contains a collection of recipes that are known to work well!
+This chapter could also be referred to as "Tutorials" or "Best practices".
+
+- **[Multi-node deployment](@/documentation/cookbook/real-world.md):** This page will walk you through all of the necessary
+ steps to deploy Garage in a real-world setting.
+
+- **[Building from source](@/documentation/cookbook/from-source.md):** This page explains how to build Garage from
+ source in case a binary is not provided for your architecture, or if you want to
+ hack with us!
+
+- **[Integration with Systemd](@/documentation/cookbook/systemd.md):** This page explains how to run Garage
+ as a Systemd service (instead of as a Docker container).
+
+- **[Configuring a gateway node](@/documentation/cookbook/gateways.md):** This page explains how to run a gateway node in a Garage cluster, i.e. a Garage node that doesn't store data but accelerates access to data present on the other nodes.
+
+- **[Hosting a website](@/documentation/cookbook/exposing-websites.md):** This page explains how to use Garage
+ to host a static website.
+
+- **[Configuring a reverse-proxy](@/documentation/cookbook/reverse-proxy.md):** This page explains how to configure a reverse-proxy to add TLS support to your S3 API endpoint.
+
+- **[Recovering from failures](@/documentation/cookbook/recovering.md):** Garage's first selling point is resilience
+ to hardware failures. This section explains how to recover from such a failure in the
+ best possible way.
diff --git a/doc/book/cookbook/exposing-websites.md b/doc/book/cookbook/exposing-websites.md
new file mode 100644
index 00000000..be462dc9
--- /dev/null
+++ b/doc/book/cookbook/exposing-websites.md
@@ -0,0 +1,69 @@
++++
+title = "Exposing buckets as websites"
+weight = 25
++++
+
+## Configuring a bucket for website access
+
+There are two methods to expose buckets as websites:
+
+1. using the PutBucketWebsite S3 API call, which is allowed for access keys that have the owner permission bit set
+
+2. from the Garage CLI, by an administrator of the cluster
+
+The `PutBucketWebsite` API endpoint [is documented](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketWebsite.html) in the official AWS docs.
+This endpoint can also be called [using `aws s3api`](https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-website.html) on the command line.
+The website configuration supported by Garage is only a subset of the possibilities on Amazon S3: redirections are not supported, only the index document and error document can be specified.
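+
+For example, this endpoint could be called with `aws s3api` against a local Garage node
+(adapt the endpoint URL, bucket name and document names to your setup):
+
+```bash
+aws --endpoint-url http://127.0.0.1:3900 s3api put-bucket-website \
+  --bucket my-website \
+  --website-configuration '{"IndexDocument":{"Suffix":"index.html"},"ErrorDocument":{"Key":"error_404.html"}}'
+```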
+
+If you want to expose your bucket as a website from the CLI, use this simple command:
+
+```bash
+garage bucket website --allow my-website
+```
+
+Now it will be **publicly** exposed on the web endpoint (by default listening on port 3902).
+
+## How exposed websites work
+
+Our website serving logic is as follows:
+
+ - Supports only static websites (no support for PHP or other languages)
+ - Does not support directory listing
+ - The index file is defined per-bucket and can be specified in the `PutBucketWebsite` call
+ or on the CLI using the `--index-document` parameter (default: `index.html`)
+ - A custom error document for 404 errors can be specified in the `PutBucketWebsite` call
+   or on the CLI using the `--error-document` parameter (see the example below)
+
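+For example, a bucket could be configured from the CLI with both parameters
+(a sketch, assuming the two flags can be combined with `--allow` in a single invocation):
+
+```bash
+garage bucket website --allow \
+  --index-document index.html \
+  --error-document error_404.html \
+  my-website
+```
+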
+Now we need to infer the URL of your website from your bucket name.
+Let's assume:
+ - we set `root_domain = ".web.example.com"` in `garage.toml` ([ref](@/documentation/reference-manual/configuration.md#root_domain))
+ - our bucket name is `garagehq.deuxfleurs.fr`.
+
+Our bucket will be served if the `Host` field matches one of these two values (the port is ignored):
+
+ - `garagehq.deuxfleurs.fr.web.example.com`: you can dedicate a subdomain to your users (here `web.example.com`).
+
+ - `garagehq.deuxfleurs.fr`: your users can bring their own domain name; they just need to point it to your Garage cluster.
+
+You can try this logic locally, without configuring any DNS, thanks to `curl`:
+
+```bash
+# prepare your test
+echo hello world > /tmp/index.html
+mc cp /tmp/index.html garage/garagehq.deuxfleurs.fr
+
+curl -H 'Host: garagehq.deuxfleurs.fr' http://localhost:3902
+# should print "hello world"
+
+curl -H 'Host: garagehq.deuxfleurs.fr.web.example.com' http://localhost:3902
+# should also print "hello world"
+```
+
+Now that you understand how website logic works on Garage, you can:
+
+ - make the website endpoint listen on port 80 (instead of 3902)
+ - use iptables to redirect port 80 to port 3902:
+   `iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3902`
+ - or configure a [reverse proxy](@/documentation/cookbook/reverse-proxy.md) in front of Garage to add TLS (HTTPS), CORS support, etc.
+
+You can also take a look at [Website Integration](@/documentation/connect/websites.md) to see how you can add Garage to your workflow.
diff --git a/doc/book/cookbook/from-source.md b/doc/book/cookbook/from-source.md
new file mode 100644
index 00000000..84c0d514
--- /dev/null
+++ b/doc/book/cookbook/from-source.md
@@ -0,0 +1,54 @@
++++
+title = "Compiling Garage from source"
+weight = 10
++++
+
+
+Garage is a standard Rust project.
+First, you need `rust` and `cargo`.
+For instance on Debian:
+
+```bash
+sudo apt-get update
+sudo apt-get install -y rustc cargo
+```
+
+You can also use [Rustup](https://rustup.rs/) to set up a Rust toolchain easily.
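+
+For instance, the installation one-liner from the Rustup website currently looks like this
+(check https://rustup.rs/ for the up-to-date instructions):
+
+```bash
+curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+```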
+
+## Using source from `crates.io`
+
+Garage's source code is published on `crates.io`, Rust's official package repository.
+This means you can simply ask `cargo` to download and build this source code for you:
+
+```bash
+cargo install garage
+```
+
+That's all, `garage` should be in `$HOME/.cargo/bin`.
+
+You can add this folder to your `$PATH` or copy the binary somewhere else on your system.
+For instance:
+
+```bash
+sudo cp $HOME/.cargo/bin/garage /usr/local/bin/garage
+```
+
+
+## Using source from the Gitea repository
+
+The primary location for Garage's source code is the
+[Gitea repository](https://git.deuxfleurs.fr/Deuxfleurs/garage).
+
+Clone the repository and build Garage with the following commands:
+
+```bash
+git clone https://git.deuxfleurs.fr/Deuxfleurs/garage.git
+cd garage
+cargo build
+```
+
+Be careful, as this will make a debug build of Garage, which will be extremely slow!
+To make a release build, invoke `cargo build --release` (this takes much longer).
+
+The binaries built this way are found in `target/{debug,release}/garage`.
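+
+For example, to make an optimized binary and install it system-wide
+(reusing the installation path shown above):
+
+```bash
+cargo build --release
+sudo cp target/release/garage /usr/local/bin/garage
+```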
+
diff --git a/doc/book/cookbook/gateways.md b/doc/book/cookbook/gateways.md
new file mode 100644
index 00000000..62ed0fe2
--- /dev/null
+++ b/doc/book/cookbook/gateways.md
@@ -0,0 +1,39 @@
++++
+title = "Configuring a gateway node"
+weight = 20
++++
+
+Gateways allow you to expose Garage endpoints (S3 API and websites) without storing data on the node.
+
+## Benefits
+
+You can configure Garage as a gateway on all nodes that will consume your S3 API; this provides the following benefits:
+
+ - **It removes 1 or 2 network RTTs.** Instead of (querying your reverse proxy and then) querying a random node of the cluster that will forward your request to the nodes actually storing the data, your local gateway directly knows which node to query.
+
+ - **It eases server management.** Instead of tracking the current Garage nodes in your reverse proxy and DNS, your gateway, being part of the cluster, keeps this information for you. In your software, you will always specify `http://localhost:3900`.
+
+ - **It simplifies security.** Instead of having to maintain and renew a TLS certificate, you leverage the Secret Handshake protocol we use for our cluster. The S3 API protocol will be in plain text but limited to your local machine.
+
+
+## Spawn a Gateway
+
+The instructions are similar to those for a regular node; the only difference is that, when configuring the node, you must set the `--gateway` parameter:
+
+```bash
+garage layout assign --gateway --tag gw1 <node_id>
+garage layout show # review the changes you are making
+garage layout apply # once satisfied, apply the changes
+```
+
+Then use `http://localhost:3900` when an S3 endpoint is required:
+
+```bash
+aws --endpoint-url http://127.0.0.1:3900 s3 ls
+```
+
+If a newly added gateway node does not seem to be working, do a full table resync to ensure that bucket and key lists are correctly propagated:
+
+```bash
+garage repair -a --yes tables
+```
diff --git a/doc/book/cookbook/real-world.md b/doc/book/cookbook/real-world.md
new file mode 100644
index 00000000..1178ded5
--- /dev/null
+++ b/doc/book/cookbook/real-world.md
@@ -0,0 +1,295 @@
++++
+title = "Deployment on a cluster"
+weight = 5
++++
+
+To run Garage in cluster mode, we recommend having at least 3 nodes.
+This will allow you to set up Garage for three-way replication of your data,
+the safest and most available mode proposed by Garage.
+
+We recommend first following the [quick start guide](@/documentation/quick-start/_index.md) in order
+to get familiar with Garage's command line and usage patterns.
+
+
+
+## Prerequisites
+
+To run a real-world deployment, make sure the following conditions are met:
+
+- You have at least three machines with sufficient storage space available.
+
+- Each machine has a public IP address which is reachable by other machines.
+ Running behind a NAT is likely to be possible but hasn't been tested for the latest version (TODO).
+
+- Ideally, each machine should have an SSD available in addition to the HDD you are dedicating
+ to Garage. This will allow for faster access to metadata and has the potential
+ to drastically reduce Garage's response times.
+
+- This guide will assume you are using Docker containers to deploy Garage on each node.
+ Garage can also be run independently, for instance as a [Systemd service](@/documentation/cookbook/systemd.md).
+ You can also use an orchestrator such as Nomad or Kubernetes to automatically manage
+ Docker containers on a fleet of nodes.
+
+Before deploying Garage on your infrastructure, you must inventory your machines.
+For our example, we will suppose the following infrastructure with IPv6 connectivity:
+
+| Location | Name | IP Address | Disk Space |
+|----------|---------|------------|------------|
+| Paris    | Mercury | fc00:1::1  | 1 TB       |
+| Paris    | Venus   | fc00:1::2  | 2 TB       |
+| London   | Earth   | fc00:B::1  | 2 TB       |
+| Brussels | Mars    | fc00:F::1  | 1.5 TB     |
+
+
+
+## Get a Docker image
+
+Our Docker image is currently named `dxflrs/amd64_garage` and is stored on the [Docker Hub](https://hub.docker.com/r/dxflrs/amd64_garage/tags?page=1&ordering=last_updated).
+We encourage you to use a fixed tag (e.g. `v0.4.0`) and not the `latest` tag.
+For this example, we will use the latest published version at the time of writing, which is `v0.4.0`, but it's up to you
+to check [the most recent versions on the Docker Hub](https://hub.docker.com/r/dxflrs/amd64_garage/tags?page=1&ordering=last_updated).
+
+For example:
+
+```
+sudo docker pull dxflrs/amd64_garage:v0.4.0
+```
+
+## Deploying and configuring Garage
+
+On each machine, we will have a similar setup;
+in particular, you must consider the following folders/files:
+
+- `/etc/garage.toml`: Garage daemon's configuration (see below)
+
+- `/var/lib/garage/meta/`: Folder containing Garage's metadata,
+  put this folder on an SSD if possible
+
+- `/var/lib/garage/data/`: Folder containing Garage's data,
+  this folder will be your main data storage and must be on a large storage device (e.g. a large HDD)
+
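+For instance, you can create these folders ahead of time on each machine:
+
+```bash
+sudo mkdir -p /var/lib/garage/meta /var/lib/garage/data
+```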
+
+A valid `/etc/garage.toml` for our cluster would look as follows:
+
+```toml
+metadata_dir = "/var/lib/garage/meta"
+data_dir = "/var/lib/garage/data"
+
+replication_mode = "3"
+
+compression_level = 2
+
+rpc_bind_addr = "[::]:3901"
+rpc_public_addr = "<this node's public IP>:3901"
+rpc_secret = "<RPC secret>"
+
+bootstrap_peers = []
+
+[s3_api]
+s3_region = "garage"
+api_bind_addr = "[::]:3900"
+root_domain = ".s3.garage"
+
+[s3_web]
+bind_addr = "[::]:3902"
+root_domain = ".web.garage"
+index = "index.html"
+```
+
+Check the following for your configuration files:
+
+- Make sure `rpc_public_addr` contains the public IP address of the node you are configuring.
+ This parameter is optional but recommended: if your nodes have trouble communicating with
+ one another, consider adding it.
+
+- Make sure `rpc_secret` is the same value on all nodes. It should be a 32-byte hex-encoded secret key.
+  You can generate such a key with `openssl rand -hex 32`, as shown below.
+
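+For example, generate the key once and reuse it in every node's configuration file:
+
+```bash
+openssl rand -hex 32   # copy the output into rpc_secret on all nodes
+```
+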
+## Starting Garage using Docker
+
+On each machine, you can run the daemon with:
+
+```bash
+docker run \
+  -d \
+  --name garaged \
+  --restart always \
+  --network host \
+  -v /etc/garage.toml:/etc/garage.toml \
+  -v /var/lib/garage/meta:/var/lib/garage/meta \
+  -v /var/lib/garage/data:/var/lib/garage/data \
+  dxflrs/amd64_garage:v0.4.0
+```
+
+It should be restarted automatically at each reboot.
+Please note that we use host networking, as otherwise Docker containers
+cannot communicate over IPv6.
+
+Upgrading between Garage versions should be supported transparently,
+but please check the release notes before doing so!
+To upgrade, simply stop and remove this container and
+run the command again with a newer version of Garage.
+
+## Controlling the daemon
+
+The `garage` binary has two purposes:
+ - it acts as a daemon when launched with `garage server`
+ - it acts as a control tool for the daemon when launched with any other command
+
+Ensure an appropriate `garage` binary (the same version as your Docker image) is available in your path.
+If your configuration file is at `/etc/garage.toml`, the `garage` binary should work with no further change.
+
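+If you do not want to copy the binary onto the host, another option is to call the binary that
+ships inside the running container, for instance with an alias
+(this assumes the binary is located at `/garage` inside the image):
+
+```bash
+alias garage="docker exec -ti garaged /garage"
+```
+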
+You can test your `garage` CLI utility by running a simple command such as:
+
+```bash
+garage status
+```
+
+At this point, nodes are not yet talking to one another.
+Your output should therefore look as follows:
+
+```
+Mercury$ garage status
+==== HEALTHY NODES ====
+ID Hostname Address Tag Zone Capacity
+563e1ac825ee3323… Mercury [fc00:1::1]:3901 NO ROLE ASSIGNED
+```
+
+
+## Connecting nodes together
+
+When your Garage nodes first start, they will generate a local node identifier
+(based on a public/private key pair).
+
+To obtain the node identifier of a node, once it is generated,
+run `garage node id`.
+This will print keys as follows:
+
+```bash
+Mercury$ garage node id
+563e1ac825ee3323aa441e72c26d1030d6d4414aeb3dd25287c531e7fc2bc95d@[fc00:1::1]:3901
+
+Venus$ garage node id
+86f0f26ae4afbd59aaf9cfb059eefac844951efd5b8caeec0d53f4ed6c85f332@[fc00:1::2]:3901
+
+etc.
+```
+
+You can then instruct nodes to connect to one another as follows:
+
+```bash
+# Instruct Venus to connect to Mercury (this will establish communication both ways)
+Venus$ garage node connect 563e1ac825ee3323aa441e72c26d1030d6d4414aeb3dd25287c531e7fc2bc95d@[fc00:1::1]:3901
+```
+
+You don't need to instruct all nodes to connect to all other nodes:
+nodes will discover one another transitively.
+
+Now if you run `garage status` on any node, you should have an output that looks as follows:
+
+```
+==== HEALTHY NODES ====
+ID Hostname Address Tag Zone Capacity
+563e1ac825ee3323… Mercury [fc00:1::1]:3901 NO ROLE ASSIGNED
+86f0f26ae4afbd59… Venus [fc00:1::2]:3901 NO ROLE ASSIGNED
+68143d720f20c89d… Earth [fc00:B::1]:3901 NO ROLE ASSIGNED
+212f7572f0c89da9… Mars [fc00:F::1]:3901 NO ROLE ASSIGNED
+```
+
+## Creating a cluster layout
+
+We will now inform Garage of the disk space available on each node of the cluster
+as well as the zone (e.g. datacenter) in which each machine is located.
+This information is called the **cluster layout** and consists
+of a role that is assigned to each active cluster node.
+
+For our example, we will suppose we have the following infrastructure
+(Capacity, Identifier and Zone are Garage-specific values, described below):
+
+| Location | Name | Disk Space | `Capacity` | `Identifier` | `Zone` |
+|----------|---------|------------|------------|--------------|--------------|
+| Paris    | Mercury | 1 TB       | `10`       | `563e`       | `par1`       |
+| Paris    | Venus   | 2 TB       | `20`       | `86f0`       | `par1`       |
+| London   | Earth   | 2 TB       | `20`       | `6814`       | `lon1`       |
+| Brussels | Mars    | 1.5 TB     | `15`       | `212f`       | `bru1`       |
+
+#### Node identifiers
+
+After its first launch, Garage generates a random and unique identifier for each node, such as:
+
+```
+563e1ac825ee3323aa441e72c26d1030d6d4414aeb3dd25287c531e7fc2bc95d
+```
+
+Often a shorter form can be used, containing only the beginning of the identifier, like `563e`,
+which identifies the server "Mercury" located in "Paris" according to our previous table.
+
+The simplest way to match an identifier to a node is to run:
+
+```
+garage status
+```
+
+It will display the IP address associated with each node;
+from the IP address you will be able to recognize the node.
+
+#### Zones
+
+A zone is simply a user-chosen identifier that designates a group of servers that are grouped together logically.
+It is up to the system administrator deploying Garage to decide what "grouped together" means.
+
+In most cases, a zone will correspond to a geographical location (i.e. a datacenter).
+Behind the scenes, Garage will use the zone definitions to try to store the same data in different zones,
+in order to provide high availability despite the failure of a zone.
+
+#### Capacity
+
+Garage reasons about disk storage using an abstract metric named the *capacity* of a node.
+The capacity configured in Garage must be proportional to the disk space dedicated to the node.
+
+Capacity values must be **integers** but can be given any meaning you like.
+Here we chose that 1 unit of capacity = 100 GB.
+
+Note that the amount of data stored by Garage on each server may not be strictly proportional to
+its capacity value, as Garage will prioritize having 3 copies of data in different zones,
+even if this means that capacities will not be strictly respected. For example, in the table above,
+nodes Earth and Mars will always store a copy of everything each, and the third copy will
+have a 66% chance of being stored by Venus and a 33% chance of being stored by Mercury.
+
+#### Injecting the topology
+
+Given the information above, we will configure our cluster as follows:
+
+```bash
+garage layout assign -z par1 -c 10 -t mercury 563e
+garage layout assign -z par1 -c 20 -t venus 86f0
+garage layout assign -z lon1 -c 20 -t earth 6814
+garage layout assign -z bru1 -c 15 -t mars 212f
+```
+
+At this point, the changes in the cluster layout have not yet been applied.
+To show the new layout that will be applied, call:
+
+```bash
+garage layout show
+```
+
+Once you are satisfied with your new layout, apply it with:
+
+```bash
+garage layout apply
+```
+
+**WARNING:** if you want to use the layout modification commands in a script,
+make sure to read [this page](@/documentation/reference-manual/layout.md) first.
+
+
+## Using your Garage cluster
+
+Creating buckets and managing keys is done using the `garage` CLI,
+and is covered in the [quick start guide](@/documentation/quick-start/_index.md).
+Remember also that the CLI is self-documented thanks to the `--help` flag and
+the `help` subcommand (e.g. `garage help`, `garage key --help`).
+
+Configuring S3-compatible applications to interact with Garage
+is covered in the [Integrations](@/documentation/connect/_index.md) section.
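+
+As a minimal sketch, most S3 clients only need an endpoint URL, a region and a key pair.
+For instance, with the AWS CLI (replace the placeholders with a key created using the `garage` CLI):
+
+```bash
+export AWS_ACCESS_KEY_ID=<access key ID>
+export AWS_SECRET_ACCESS_KEY=<secret access key>
+aws --endpoint-url http://127.0.0.1:3900 --region garage s3 ls
+```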
diff --git a/doc/book/cookbook/recovering.md b/doc/book/cookbook/recovering.md
new file mode 100644
index 00000000..2424558c
--- /dev/null
+++ b/doc/book/cookbook/recovering.md
@@ -0,0 +1,110 @@
++++
+title = "Recovering from failures"
+weight = 35
++++
+
+Garage is meant to work on old, second-hand hardware.
+In particular, this makes it likely that some of your drives will fail, and some manual intervention will be needed.
+Fear not! For Garage is fully equipped to handle drive failures, in most common cases.
+
+## A note on availability of Garage
+
+With nodes dispersed in 3 zones or more, here are the guarantees Garage provides with the 3-way replication strategy (3 copies of all data, which is the recommended replication mode):
+
+- The cluster remains fully functional as long as the machines that fail are in only one zone. This includes a whole zone going down due to power/Internet outage.
+- No data is lost as long as the machines that fail are in at most two zones.
+
+Of course this only works if your Garage nodes are correctly configured to be aware of the zone in which they are located.
+Make sure this is the case using `garage status` to check on the state of your cluster's configuration.
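+
+For instance, a quick check could look like this:
+
+```bash
+garage status        # all nodes should be listed as healthy
+garage layout show   # each node should be assigned to the intended zone
+```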
+
+In case of temporarily disconnected nodes, Garage should automatically re-synchronize
+when the nodes come back up. This guide will deal with recovering from disk failures
+that caused the loss of the data of a node.
+
+
+## First option: removing a node
+
+If you don't have spare parts (HDD, SSD) to replace the failed component, and if there are enough remaining nodes in your cluster
+(at least 3), you can simply remove the failed node from Garage's configuration.
+Note that if you **do** intend to replace the failed parts with new ones, using this method followed by adding back the node is **not recommended** (although it should work),
+and you should instead use one of the methods detailed in the next sections.
+
+Removing a node is done with the following command:
+
+```bash
+garage layout remove <node_id>
+garage layout show # review the changes you are making
+garage layout apply # once satisfied, apply the changes
+```
+
+(you can get the `node_id` of the failed node by running `garage status`)
+
+This will repartition the data and ensure that 3 copies of everything are present on the nodes that remain available.
+
+
+
+## Replacement scenario 1: only data is lost, metadata is fine
+
+The recommended deployment for Garage uses an SSD to store metadata, and an HDD to store blocks of data.
+In the case where only a single HDD crashes, the blocks of data are lost but the metadata is still fine.
+
+This is very easy to recover from: simply set up a new HDD to replace the failed one.
+The node does not need to be fully replaced and the configuration doesn't need to change.
+We just need to tell Garage to get back all the data blocks and store them on the new HDD.
+
+First, set up a new HDD to store Garage's data directory on the failed node, and restart Garage using
+the existing configuration. Then, run:
+
+```bash
+garage repair -a --yes blocks
+```
+
+This will re-synchronize blocks of data that are missing to the new HDD, reading them from copies located on other nodes.
+
+You can check on the progress of this process by running the following command:
+
+```bash
+garage stats -a
+```
+
+Look out for the following output:
+
+```
+Block manager stats:
+ resync queue length: 26541
+```
+
+This indicates that one of the Garage nodes is in the process of retrieving missing data from other nodes.
+This number decreases to zero when the node is fully synchronized.
+
+
+## Replacement scenario 2: metadata (and possibly data) is lost
+
+This scenario covers the case where a full node fails, i.e. both the metadata directory and
+the data directory are lost, as well as the case where only the metadata directory is lost.
+
+To replace the lost node, we will start from an empty metadata directory, which means
+Garage will generate a new node ID for the replacement node.
+We will thus need to remove the previous node ID from Garage's configuration and replace it with the ID of the new node.
+
+If your data directory is stored on a separate drive and is still fine, you can keep it, but it is not necessary to do so.
+In all cases, the data will be rebalanced and the replacement node will not store the same pieces of data
+as were originally stored on the one that failed. So if you keep the data files, the rebalancing
+might be faster, but most of the pieces will be deleted from the disk anyway and replaced by other ones.
+
+First, set up a new drive to store the metadata directory for the replacement node (an SSD is recommended),
+and for the data directory if necessary. You can then start Garage on the new node.
+The restarted node should generate a new node ID, and it should be shown with `NO ROLE ASSIGNED` in `garage status`.
+The ID of the lost node should be shown in `garage status` in the section for disconnected/unavailable nodes.
+
+Then, replace the broken node with the new one, using:
+
+```bash
+garage layout assign <new_node_id> --replace <old_node_id> \
+ -c <capacity> -z <zone> -t <node_tag>
+garage layout show # review the changes you are making
+garage layout apply # once satisfied, apply the changes
+```
+
+Garage will then start synchronizing all required data on the new node.
+This process can be monitored using the `garage stats -a` command.
diff --git a/doc/book/cookbook/reverse-proxy.md b/doc/book/cookbook/reverse-proxy.md
new file mode 100644
index 00000000..63ba4bbe
--- /dev/null
+++ b/doc/book/cookbook/reverse-proxy.md
@@ -0,0 +1,168 @@
++++
+title = "Configuring a reverse proxy"
+weight = 30
++++
+
+The main reason to add a reverse proxy in front of Garage is to provide TLS to your users.
+
+In production you will likely need your certificates signed by a certificate authority.
+The most automated way is to use a provider supporting the [ACME protocol](https://datatracker.ietf.org/doc/html/rfc8555)
+such as [Let's Encrypt](https://letsencrypt.org/), [ZeroSSL](https://zerossl.com/) or [Buypass Go SSL](https://www.buypass.com/ssl/products/acme).
+
+If you are only testing Garage, you can generate a self-signed certificate to follow the documentation:
+
+```bash
+openssl req \
+ -new \
+ -x509 \
+ -keyout /tmp/garage.key \
+ -out /tmp/garage.crt \
+ -nodes \
+ -subj "/C=XX/ST=XX/L=XX/O=XX/OU=XX/CN=localhost/emailAddress=X@X.XX" \
+ -addext "subjectAltName = DNS:localhost, IP:127.0.0.1"
+
+cat /tmp/garage.key /tmp/garage.crt > /tmp/garage.pem
+```
+
+Be careful, as you will need to allow self-signed certificates in your client.
+For example, with the MinIO client (`mc`), you must add the `--insecure` flag.
+An example:
+
+```bash
+mc ls --insecure garage/
+```
+
+## socat (only for testing purposes)
+
+If you want to test Garage with a TLS frontend, socat can do it for you in a single command:
+
+```bash
+socat \
+"openssl-listen:443,\
+reuseaddr,\
+fork,\
+verify=0,\
+cert=/tmp/garage.pem" \
+tcp4-connect:localhost:3900
+```
+
+## Nginx
+
+Nginx is a well-known reverse proxy suitable for production.
+We do the configuration in 3 steps: first we define the upstream blocks ("the backends"),
+then we define the server blocks ("the frontends") for the S3 endpoint, and finally for the web endpoint.
+
+The following configuration blocks can be all put in the same `/etc/nginx/sites-available/garage.conf`.
+To make your configuration active, run `ln -s /etc/nginx/sites-available/garage.conf /etc/nginx/sites-enabled/`.
+If you directly put the instructions in the root `nginx.conf`, keep in mind that these configurations must be enclosed inside a `http { }` block.
+
+And do not forget to reload nginx with `systemctl reload nginx` or `nginx -s reload`.
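+
+For instance, on a Debian-style nginx layout, enabling and reloading the configuration could look like this:
+
+```bash
+ln -s /etc/nginx/sites-available/garage.conf /etc/nginx/sites-enabled/
+nginx -t                  # check the configuration for syntax errors
+systemctl reload nginx
+```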
+
+### Defining backends
+
+First, we need to tell nginx how to access our Garage cluster.
+Because we have multiple nodes, we want to leverage all of them by spreading the load.
+
+In nginx, we can do that with the upstream directive.
+Because we have two endpoints, one for the S3 API and one to serve websites,
+we create two backends named respectively `s3_backend` and `web_backend`.
+
+A documented example for the `s3_backend` assuming you chose port 3900:
+
+```nginx
+upstream s3_backend {
+ # if you have a garage instance locally
+ server 127.0.0.1:3900;
+ # you can also put your other instances
+ server 192.168.1.3:3900;
+ # domain names also work
+ server garage1.example.com:3900;
+ # you can assign weights if you have some servers
+ # that are more powerful than others
+ server garage2.example.com:3900 weight=2;
+}
+```
+
+A similar example for the `web_backend` assuming you chose port 3902:
+
+```nginx
+upstream web_backend {
+ server 127.0.0.1:3902;
+ server 192.168.1.3:3902;
+ server garage1.example.com:3902;
+ server garage2.example.com:3902 weight=2;
+}
+```
+
+### Exposing the S3 API
+
+The configuration section for the S3 API is simple, as we only support path-style access for now.
+We simply configure the TLS parameters and forward all the requests to the backend:
+
+```nginx
+server {
+ listen [::]:443 http2 ssl;
+ ssl_certificate /tmp/garage.crt;
+ ssl_certificate_key /tmp/garage.key;
+
+ # should be the endpoint you want
+ # aws uses s3.amazonaws.com for example
+ server_name garage.example.com;
+
+ location / {
+ proxy_pass http://s3_backend;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header Host $host;
+ }
+}
+
+```
+
+### Exposing the web endpoint
+
+The web endpoint is a bit more complicated to configure as it listens on many different `Host` fields.
+To better understand the logic involved, you can refer to the [Exposing buckets as websites](@/documentation/cookbook/exposing-websites.md) section.
+Also, for some applications, you may need to serve CORS headers: Garage cannot serve them directly, but we show how nginx can serve them for you.
+You can use the following example as your starting point:
+
+```nginx
+server {
+ listen [::]:443 http2 ssl;
+ ssl_certificate /tmp/garage.crt;
+ ssl_certificate_key /tmp/garage.key;
+
+ # We list all the Hosts fields that can access our buckets
+ server_name *.web.garage
+ example.com
+ my-site.tld
+ ;
+
+ location / {
+ # Add these headers only if you want to allow CORS requests
+ # For production use, more specific rules would be better for your security
+ add_header Access-Control-Allow-Origin *;
+ add_header Access-Control-Max-Age 3600;
+ add_header Access-Control-Expose-Headers Content-Length;
+ add_header Access-Control-Allow-Headers Range;
+
+ # We do not forward OPTIONS requests to Garage
+ # as it does not support them but they are needed for CORS.
+ if ($request_method = OPTIONS) {
+ return 200;
+ }
+
+ proxy_pass http://web_backend;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header Host $host;
+ }
+}
+```
+
+
+## Apache httpd
+
+@TODO
+
+## Traefik
+
+@TODO
diff --git a/doc/book/cookbook/systemd.md b/doc/book/cookbook/systemd.md
new file mode 100644
index 00000000..b271010b
--- /dev/null
+++ b/doc/book/cookbook/systemd.md
@@ -0,0 +1,53 @@
++++
+title = "Starting Garage with systemd"
+weight = 15
++++
+
+We make some assumptions for this systemd deployment.
+
+ - Your garage binary is located at `/usr/local/bin/garage`.
+
+ - Your configuration file is located at `/etc/garage.toml`.
+
+ - Your `garage.toml` must be set with `metadata_dir=/var/lib/garage/meta` and `data_dir=/var/lib/garage/data`. This is mandatory in order to use the `systemd` [Dynamic User](https://0pointer.net/blog/dynamic-users-with-systemd.html) hardening feature. Note that on your host filesystem, Garage data will be held in `/var/lib/private/garage`.
+
+
+
+Create a file named `/etc/systemd/system/garage.service`:
+
+```ini
+[Unit]
+Description=Garage Data Store
+After=network-online.target
+Wants=network-online.target
+
+[Service]
+Environment='RUST_LOG=garage=info' 'RUST_BACKTRACE=1'
+ExecStart=/usr/local/bin/garage server
+StateDirectory=garage
+DynamicUser=true
+ProtectHome=true
+NoNewPrivileges=true
+
+[Install]
+WantedBy=multi-user.target
+```
+
+*A note on hardening: Garage will run as a non-privileged user whose user ID is dynamically allocated by systemd. It cannot access (read or write) home folders (`/home`, `/root` and `/run/user`). The rest of the filesystem can only be read, not written, and only the path seen as `/var/lib/garage` by the service is writable (it is mapped to `/var/lib/private/garage` on your host). Additionally, the process cannot gain new privileges over time.*
+
+To start the service and enable it automatically at boot:
+
+```bash
+sudo systemctl start garage
+sudo systemctl enable garage
+```
+
+To see if the service is running and to browse its logs:
+
+```bash
+sudo systemctl status garage
+sudo journalctl -u garage
+```
+
+If you want to modify the service file, do not forget to run `systemctl daemon-reload`
+to inform `systemd` of your modifications.