Diffstat (limited to 'doc/book')
-rw-r--r--  doc/book/build/python.md                    59
-rw-r--r--  doc/book/connect/_index.md                   7
-rw-r--r--  doc/book/connect/apps/index.md              76
-rw-r--r--  doc/book/connect/observability.md           57
-rw-r--r--  doc/book/cookbook/real-world.md              8
-rw-r--r--  doc/book/development/devenv.md               2
-rw-r--r--  doc/book/reference-manual/configuration.md   6
7 files changed, 197 insertions, 18 deletions
diff --git a/doc/book/build/python.md b/doc/book/build/python.md
index 19912e85..5b797897 100644
--- a/doc/book/build/python.md
+++ b/doc/book/build/python.md
@@ -5,16 +5,59 @@ weight = 20
## S3
-*Coming soon*
+### Using Minio SDK
+
+First install the SDK:
+
+```bash
+pip3 install minio
+```
+
+Then instantiate a client object using the Garage root domain, API key and secret:
+
+```python
+import minio
+
+client = minio.Minio(
+ "your.domain.tld",
+ "GKyourapikey",
+ "abcd[...]1234",
+ # Force the region; this is specific to Garage
+ region="region",
+)
+```
-Some refs:
- - Minio SDK
- - [Reference](https://docs.min.io/docs/python-client-api-reference.html)
+Then use all the standard S3 endpoints as implemented by the Minio SDK:
+
+```python
+import io
+
+# List buckets
+print(client.list_buckets())
+
+# Put an object containing 'content' to /path in bucket named 'bucket':
+content = b"content"
+client.put_object(
+ "bucket",
+ "path",
+ io.BytesIO(content),
+ len(content),
+)
+
+# Read the object back and check contents
+data = client.get_object("bucket", "path").read()
+assert data == content
+```
+
+For further documentation, see the Minio SDK
+[Reference](https://docs.min.io/docs/python-client-api-reference.html)
+
+### Using Amazon boto3
+
+*Coming soon*
- - Amazon boto3
- - [Installation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html)
- - [Reference](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html)
- - [Example](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html)
+See the official documentation:
+ - [Installation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html)
+ - [Reference](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html)
+ - [Example](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html)
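
Pending dedicated instructions here, the following is a minimal sketch using boto3's standard S3 client; the domain, credentials, and bucket name are placeholders mirroring the Minio example above:

```python
import boto3

# Placeholder endpoint and credentials: replace with your own
client = boto3.client(
    "s3",
    endpoint_url="https://your.domain.tld",
    region_name="region",
    aws_access_key_id="GKyourapikey",
    aws_secret_access_key="abcd[...]1234",
)

# List buckets
print(client.list_buckets())

# Upload an object, then read it back and check its contents
client.put_object(Bucket="bucket", Key="path", Body=b"content")
data = client.get_object(Bucket="bucket", Key="path")["Body"].read()
assert data == b"content"
```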
## K2V
diff --git a/doc/book/connect/_index.md b/doc/book/connect/_index.md
index ca44ac17..93a2b87e 100644
--- a/doc/book/connect/_index.md
+++ b/doc/book/connect/_index.md
@@ -10,11 +10,12 @@ Garage implements the Amazon S3 protocol, which makes it compatible with many ex
In particular, you will find here instructions to connect it with:
- - [Browsing tools](@/documentation/connect/cli.md)
- [Applications](@/documentation/connect/apps/index.md)
- - [Website hosting](@/documentation/connect/websites.md)
- - [Software repositories](@/documentation/connect/repositories.md)
+ - [Browsing tools](@/documentation/connect/cli.md)
- [FUSE](@/documentation/connect/fs.md)
+ - [Observability](@/documentation/connect/observability.md)
+ - [Software repositories](@/documentation/connect/repositories.md)
+ - [Website hosting](@/documentation/connect/websites.md)
### Generic instructions
diff --git a/doc/book/connect/apps/index.md b/doc/book/connect/apps/index.md
index 05e7cad9..4d556ff8 100644
--- a/doc/book/connect/apps/index.md
+++ b/doc/book/connect/apps/index.md
@@ -8,12 +8,12 @@ In this section, we cover the following web applications:
| Name | Status | Note |
|------|--------|------|
| [Nextcloud](#nextcloud) | ✅ | Both Primary Storage and External Storage are supported |
-| [Peertube](#peertube) | ✅ | Must be configured with the website endpoint |
+| [Peertube](#peertube) | ✅ | Supported with the website endpoint; proxying private videos is unsupported |
| [Mastodon](#mastodon) | ✅ | Natively supported |
| [Matrix](#matrix) | ✅ | Tested with `synapse-s3-storage-provider` |
| [Pixelfed](#pixelfed) | ❓ | Not yet tested |
| [Pleroma](#pleroma) | ❓ | Not yet tested |
-| [Lemmy](#lemmy) | ❓ | Not yet tested |
+| [Lemmy](#lemmy) | ✅ | Supported with pict-rs |
| [Funkwhale](#funkwhale) | ❓ | Not yet tested |
| [Misskey](#misskey) | ❓ | Not yet tested |
| [Prismo](#prismo) | ❓ | Not yet tested |
@@ -128,6 +128,10 @@ In other words, Peertube is only responsible of the "control plane" and offload
In return, this system is a bit harder to configure.
We show how it is still possible to configure Garage with Peertube, allowing you to spread the load and the bandwidth usage on the Garage cluster.
+Starting with version 5.0, Peertube can also improve the security of private videos by not exposing them directly,
+instead relying on a single control point in the Peertube instance. This mechanism is based on S3 per-object and prefix ACLs, which are not
+currently supported in Garage, so this feature is unsupported. While this reduces the security of private videos, it is not a blocking issue
+and may be a reasonable trade-off for some instances.
### Create resources in Garage
@@ -195,6 +199,11 @@ object_storage:
max_upload_part: 2GB
+ proxy:
+ # You can enable this feature, but it will not provide any security benefit,
+ # so you should instead use the Garage public endpoint for all videos
+ proxify_private_files: false
+
streaming_playlists:
bucket_name: 'peertube-playlist'
@@ -475,7 +484,68 @@ And add a new line. For example, to run it every 10 minutes:
## Lemmy
-Lemmy uses pict-rs that [supports S3 backends](https://git.asonix.dog/asonix/pict-rs/commit/f9f4fc63d670f357c93f24147c2ee3e1278e2d97)
+Lemmy uses pict-rs, which [supports S3 backends](https://git.asonix.dog/asonix/pict-rs/commit/f9f4fc63d670f357c93f24147c2ee3e1278e2d97).
+This feature requires `pict-rs >= 4.0.0`.
+
+### Creating your bucket
+
+This is the usual Garage setup:
+
+```bash
+garage key new --name pictrs-key
+garage bucket create pictrs-data
+garage bucket allow pictrs-data --read --write --key pictrs-key
+```
+
+Note the Key ID and Secret Key.
+
+### Migrating your data
+
+If your pict-rs instance holds existing data, you first need to migrate it to the S3 bucket.
+
+Stop pict-rs, then run the migration utility from local filesystem to the bucket:
+
+```bash
+pict-rs \
+ filesystem -p /path/to/existing/files \
+ object-store \
+ -e my-garage-instance.mydomain.tld:3900 \
+ -b pictrs-data \
+ -r garage \
+ -a GK... \
+ -s abcdef0123456789...
+```
+
+This migration can be quite slow, so be patient while it runs.
+
+### Running pict-rs with an S3 backend
+
+Pict-rs supports both a configuration file and environment variables.
+
+Either set the following section in your `pict-rs.toml`:
+
+```toml
+[store]
+type = 'object_storage'
+endpoint = 'http://my-garage-instance.mydomain.tld:3900'
+bucket_name = 'pictrs-data'
+region = 'garage'
+access_key = 'GK...'
+secret_key = 'abcdef0123456789...'
+```
+
+... or set these environment variables:
+
+```bash
+PICTRS__STORE__TYPE=object_storage
+PICTRS__STORE__ENDPOINT=http://my-garage-instance.mydomain.tld:3900
+PICTRS__STORE__BUCKET_NAME=pictrs-data
+PICTRS__STORE__REGION=garage
+PICTRS__STORE__ACCESS_KEY=GK...
+PICTRS__STORE__SECRET_KEY=abcdef0123456789...
+```
+
## Funkwhale
diff --git a/doc/book/connect/observability.md b/doc/book/connect/observability.md
new file mode 100644
index 00000000..c5037fa4
--- /dev/null
+++ b/doc/book/connect/observability.md
@@ -0,0 +1,57 @@
++++
+title = "Observability"
+weight = 25
++++
+
+An object store can be used as a storage location for metrics and logs, which
+can then be leveraged for systems observability.
+
+## Metrics
+
+### Prometheus
+
+Prometheus itself has no object storage capabilities; however, two projects
+exist which support storing metrics in an object store:
+
+ - [Cortex](https://cortexmetrics.io/)
+ - [Thanos](https://thanos.io/)
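+
+As a sketch, a Thanos object storage configuration (the `bucket.yml` passed via
+`--objstore.config-file`) pointing at Garage might look as follows; the bucket
+name and credentials are hypothetical:

```yaml
type: S3
config:
  bucket: "thanos-metrics"
  endpoint: "my-garage-instance.mydomain.tld:3900"
  region: "garage"
  access_key: "GK..."
  secret_key: "abcdef0123456789..."
```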
+
+## System logs
+
+### Vector
+
+[Vector](https://vector.dev/) natively supports S3 as a
+[data sink](https://vector.dev/docs/reference/configuration/sinks/aws_s3/)
+(and [source](https://vector.dev/docs/reference/configuration/sources/aws_s3/)).
+
+To set this up with Garage, first create a dedicated key and bucket:
+
+```bash
+garage key new --name vector-system-logs
+garage bucket create system-logs
+garage bucket allow system-logs --read --write --key vector-system-logs
+```
+
+The `vector.toml` can then be configured as follows:
+
+```toml
+[sources.journald]
+type = "journald"
+current_boot_only = true
+
+[sinks.out]
+encoding.codec = "json"
+type = "aws_s3"
+inputs = [ "journald" ]
+bucket = "system-logs"
+key_prefix = "%F/"
+compression = "none"
+region = "garage"
+endpoint = "https://my-garage-instance.mydomain.tld"
+auth.access_key_id = ""
+auth.secret_access_key = ""
+```
+
+This is an example configuration; please refer to the Vector documentation for
+all configuration and transformation possibilities. Also note that Garage
+performs its own compression, so compression should be disabled in Vector.
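
To check that Vector is actually writing logs to the bucket, one way is to list it, for instance with the AWS CLI. This is a sketch: the endpoint and credentials are placeholders for your own values.

```bash
export AWS_ACCESS_KEY_ID=GK...
export AWS_SECRET_ACCESS_KEY=abcdef0123456789...

# List objects written by Vector (they are grouped by date,
# following the key_prefix = "%F/" setting)
aws --region garage \
    --endpoint-url https://my-garage-instance.mydomain.tld \
    s3 ls --recursive s3://system-logs/
```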
diff --git a/doc/book/cookbook/real-world.md b/doc/book/cookbook/real-world.md
index 5423bbab..9be1ba44 100644
--- a/doc/book/cookbook/real-world.md
+++ b/doc/book/cookbook/real-world.md
@@ -19,8 +19,12 @@ To run a real-world deployment, make sure the following conditions are met:
- You have at least three machines with sufficient storage space available.
-- Each machine has a public IP address which is reachable by other machines.
- Running behind a NAT is likely to be possible but hasn't been tested for the latest version (TODO).
+- Each machine has a public IP address which is reachable by other machines. It
+ is highly recommended that you use IPv6 for this end-to-end connectivity. If
+ IPv6 is not available, a mesh VPN such as
+ [Nebula](https://github.com/slackhq/nebula) or
+ [Yggdrasil](https://yggdrasil-network.github.io/) is an approach to consider,
+ in addition to building out your own VPN tunneling.
- This guide will assume you are using Docker containers to deploy Garage on each node.
Garage can also be run independently, for instance as a [Systemd service](@/documentation/cookbook/systemd.md).
diff --git a/doc/book/development/devenv.md b/doc/book/development/devenv.md
index c2ef4e7d..8d7d2e95 100644
--- a/doc/book/development/devenv.md
+++ b/doc/book/development/devenv.md
@@ -39,7 +39,7 @@ Now you can enter our nix-shell, all the required packages will be downloaded bu
nix-shell
```
-You can use the traditionnal Rust development workflow:
+You can use the traditional Rust development workflow:
```bash
cargo build # compile the project
diff --git a/doc/book/reference-manual/configuration.md b/doc/book/reference-manual/configuration.md
index 2d9c3f0c..353963ef 100644
--- a/doc/book/reference-manual/configuration.md
+++ b/doc/book/reference-manual/configuration.md
@@ -96,7 +96,7 @@ Performance characteristics of the different DB engines are as follows:
- Sled: the default database engine, which tends to produce
large data files and also has performance issues, especially when the metadata folder
- is on a traditionnal HDD and not on SSD.
+ is on a traditional HDD and not on SSD.
- LMDB: the recommended alternative on 64-bit systems,
much more space-efficient and slightly faster. Note that the data format of LMDB is not portable
between architectures, so for instance the Garage database of an x86-64
@@ -267,6 +267,10 @@ This key should be specified here in the form of a 32-byte hex-encoded
random string. Such a string can be generated with a command
such as `openssl rand -hex 32`.
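
If `openssl` is not available, the same kind of secret can be generated from Python's standard library; this is a sketch, Garage only requires a 32-byte hex-encoded random string:

```python
import secrets

# 32 random bytes, hex-encoded: 64 hex characters, as expected by rpc_secret
secret = secrets.token_hex(32)
print(secret)
assert len(secret) == 64
```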
+### `rpc_secret_file`
+
+Like `rpc_secret` above, except that this is the path to a file from which Garage will read the secret.
+
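
For example, assuming the secret was written to a hypothetical file `/etc/garage/rpc_secret`, the configuration would read:

```toml
# In garage.toml, instead of rpc_secret = "..." :
rpc_secret_file = "/etc/garage/rpc_secret"
```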
### `rpc_bind_addr`
The address and port on which to bind for inter-cluster communications