Diffstat (limited to 'content/documentation/connect')
l---------  content/documentation/connect                       |    1
-rw-r--r--  content/documentation/connect/_index.md             |   48
-rw-r--r--  content/documentation/connect/apps.md               |  464
-rw-r--r--  content/documentation/connect/backup.md             |   37
-rw-r--r--  content/documentation/connect/cli-nextcloud-gui.png |  bin 201685 -> 0 bytes
-rw-r--r--  content/documentation/connect/cli.md                |  170
-rw-r--r--  content/documentation/connect/code.md               |   83
-rw-r--r--  content/documentation/connect/fs.md                 |   72
-rw-r--r--  content/documentation/connect/repositories.md       |  173
-rw-r--r--  content/documentation/connect/websites.md           |   82
10 files changed, 1 insertion(+), 1129 deletions(-)
diff --git a/content/documentation/connect b/content/documentation/connect
new file mode 120000
index 0000000..6610578
--- /dev/null
+++ b/content/documentation/connect
@@ -0,0 +1 @@
+../../garage/doc/book/connect \ No newline at end of file
diff --git a/content/documentation/connect/_index.md b/content/documentation/connect/_index.md
deleted file mode 100644
index 8ae3e60..0000000
--- a/content/documentation/connect/_index.md
+++ /dev/null
@@ -1,48 +0,0 @@
-+++
-title = "Integrations"
-weight = 3
-sort_by = "weight"
-template = "documentation.html"
-+++
-
-
-
-Garage implements the Amazon S3 protocol, which makes it compatible with many existing software programs.
-
-In particular, this section provides instructions to connect it with:
-
- - [web applications](./apps.md)
- - [website hosting](./websites.md)
- - [software repositories](./repositories.md)
- - [CLI tools](./cli.md)
- - [your own code](./code.md)
-
-### Generic instructions
-
-To configure S3-compatible software to interact with Garage,
-you will need the following parameters:
-
-- An **API endpoint**: this corresponds to the HTTP or HTTPS address
- used to contact the Garage server. When running Garage locally this will usually
- be `http://127.0.0.1:3900`. In a real-world setting, you would usually have a reverse-proxy
- that adds TLS support and makes your Garage server available under a public hostname
- such as `https://garage.example.com`.
-
-- An **API access key** and its associated **secret key**. These usually look something
- like this: `GK3515373e4c851ebaad366558` (access key),
- `7d37d093435a41f2aab8f13c19ba067d9776c90215f56614adad6ece597dbb34` (secret key).
- These keys are created and managed using the `garage` CLI, as explained in the
- [quick start](../quick_start/index.md) guide.
-
-Most S3 clients can be configured easily with these parameters,
-provided that you follow these guidelines:
-
-- **Force path style:** Garage does not support DNS-style buckets, which are now the default
- on Amazon S3. Instead, Garage uses legacy path-style bucket addressing.
- Remember to configure your client accordingly.
-
-- **Configuring the S3 region:** Garage requires your client to talk to the correct "S3 region",
- which is set in the configuration file. This is often set just to `garage`.
- If this is not configured explicitly, clients usually try to talk to region `us-east-1`.
- Garage should normally redirect your client to the correct region,
- but in case your client does not support this you might have to configure it manually.
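-
-As a quick sanity check, here is what a first test could look like with the `aws` CLI, assuming a local Garage instance and placeholder keys (see the [CLI tools](./cli.md) page for a full setup):
-
-```bash
-export AWS_ACCESS_KEY_ID=GK3515373e4c851ebaad366558
-export AWS_SECRET_ACCESS_KEY=7d37d093435a41f2aab8f13c19ba067d9776c90215f56614adad6ece597dbb34
-
-# --endpoint-url points the client at Garage instead of Amazon,
-# and the region must match the one set in your Garage configuration file
-aws --endpoint-url http://127.0.0.1:3900 --region garage s3 ls
-```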
diff --git a/content/documentation/connect/apps.md b/content/documentation/connect/apps.md
deleted file mode 100644
index 4348286..0000000
--- a/content/documentation/connect/apps.md
+++ /dev/null
@@ -1,464 +0,0 @@
-+++
-title = "Apps (Nextcloud, Peertube...)"
-weight = 5
-+++
-
-In this section, we cover the following software: [Nextcloud](#nextcloud), [Peertube](#peertube), [Mastodon](#mastodon), [Matrix](#matrix)
-
-## Nextcloud
-
-Nextcloud is a popular file synchronisation and backup service.
-By default, Nextcloud stores its data on the local filesystem.
-If you want to expand your storage to aggregate multiple servers, Garage is the way to go.
-
-An S3 backend can be configured in two ways on Nextcloud: either as Primary Storage or as External Storage.
-Primary Storage will store all your data on S3, in an opaque manner, and will provide the best performance.
-External Storage enables you to select which data will be stored on S3; your file hierarchy will be preserved in S3, but it might be slower.
-
-In the following, we cover both methods, but before reading our guide, we assume you have completed some preliminary steps.
-First, we expect you to have an already installed and configured Nextcloud instance.
-Second, we assume you have created a key and a bucket.
-
-As a reminder, you can create a key for your Nextcloud instance as follows:
-
-```bash
-garage key new --name nextcloud-key
-```
-
-Keep the Key ID and the Secret key somewhere safe, they will be needed later.
-Then you can create a bucket and give read/write rights to your key on this bucket with:
-
-```bash
-garage bucket create nextcloud
-garage bucket allow nextcloud --read --write --key nextcloud-key
-```
-
-
-### Primary Storage
-
-Now edit your Nextcloud configuration file to enable object storage.
-On my installation, the configuration file is located at the following path: `/var/www/nextcloud/config/config.php`.
-We will add a new root key to the `$CONFIG` dictionary named `objectstore`:
-
-```php
-<?php
-$CONFIG = array(
-/* your existing configuration */
-'objectstore' => [
- 'class' => '\\OC\\Files\\ObjectStore\\S3',
- 'arguments' => [
- 'bucket' => 'nextcloud', // Your bucket name, must be created before
- 'autocreate' => false, // Garage does not support autocreate
- 'key' => 'xxxxxxxxx', // The Key ID generated previously
- 'secret' => 'xxxxxxxxx', // The Secret key generated previously
- 'hostname' => '127.0.0.1', // Can also be a domain name, eg. garage.example.com
- 'port' => 3900, // Put your reverse proxy port or your S3 API port
- 'use_ssl' => false, // Set it to true if you have a TLS enabled reverse proxy
- 'region' => 'garage', // Garage has only one region named "garage"
- 'use_path_style' => true // Garage supports only path style, must be set to true
- ],
-],
-);
-```
-
-That's all: your Nextcloud will now store all your data on S3.
-To test your new configuration, just reload your Nextcloud webpage and start sending data.
-
-*External link:* [Nextcloud Documentation > Primary Storage](https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/primary_storage.html)
-
-### External Storage
-
-**From the GUI.** Activate the "External storage support" app from the "Applications" page (click on your account icon on the top right corner of your screen to display the menu). Go to your parameters page (also located below your account icon). Click on external storage (or the corresponding translation in your language).
-
-[![Screenshot of the External Storage form](../cli-nextcloud-gui.png)](../cli-nextcloud-gui.png)
-*Click on the picture to zoom*
-
-Add a new external storage. Put what you want in "folder name" (eg. "shared"). Select "Amazon S3". Keep "Access Key" for the Authentication field.
-In Configuration, put your bucket name (eg. nextcloud), the host (eg. 127.0.0.1), the port (eg. 3900 or 443), the region (garage). Tick the SSL box if you have put an HTTPS proxy in front of Garage. You must tick the "Path access" box and leave the "Legacy authentication (v2)" box unticked. Put your Key ID (eg. GK...) and your Secret Key in the last two input boxes. Finally, click on the tick symbol on the right of your screen.
-
-Now go to your "Files" app and a new "linked folder" has appeared with the name you chose earlier (eg. "shared").
-
-*External link:* [Nextcloud Documentation > External Storage Configuration GUI](https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/external_storage_configuration_gui.html)
-
-**From the CLI.** First install the external storage application:
-
-```bash
-php occ app:install files_external
-```
-
-Then add a new mount point with:
-
-```bash
- php occ files_external:create \
- -c bucket=nextcloud \
- -c hostname=127.0.0.1 \
- -c port=3900 \
- -c region=garage \
- -c use_ssl=false \
- -c use_path_style=true \
- -c legacy_auth=false \
- -c key=GKxxxx \
- -c secret=xxxx \
- shared amazons3 amazons3::accesskey
-```
-
-Adapt the `hostname`, `port`, `use_ssl`, `key`, and `secret` entries to your configuration.
-Do not change the `use_path_style` and `legacy_auth` entries, other configurations are not supported.
-
-*External link:* [Nextcloud Documentation > occ command > files external](https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/occ_command.html#files-external-label)
-
-
-## Peertube
-
-Peertube proposes a clever integration of S3 by directly exposing its endpoint instead of proxying requests through the application.
-In other words, Peertube is only responsible for the "control plane" and offloads the "data plane" to Garage.
-In return, this system is a bit harder to configure, especially with Garage, which supports fewer features than older S3 implementations.
-We show that it is still possible to configure Garage with Peertube, allowing you to spread the load and the bandwidth usage over the Garage cluster.
-
-### Enable path-style access by patching Peertube
-
-First, you will need to apply a small patch on Peertube ([#4510](https://github.com/Chocobozzz/PeerTube/pull/4510)):
-
-```diff
-From e3b4c641bdf67e07d406a1d49d6aa6b1fbce2ab4 Mon Sep 17 00:00:00 2001
-From: Martin Honermeyer <maze@strahlungsfrei.de>
-Date: Sun, 31 Oct 2021 12:34:04 +0100
-Subject: [PATCH] Allow setting path-style access for object storage
-
----
- config/default.yaml | 4 ++++
- config/production.yaml.example | 4 ++++
- server/initializers/config.ts | 1 +
- server/lib/object-storage/shared/client.ts | 3 ++-
- .../production/config/custom-environment-variables.yaml | 2 ++
- 5 files changed, 13 insertions(+), 1 deletion(-)
-
-diff --git a/config/default.yaml b/config/default.yaml
-index cf9d69a6211..4efd56fb804 100644
---- a/config/default.yaml
-+++ b/config/default.yaml
-@@ -123,6 +123,10 @@ object_storage:
- # You can also use AWS_SECRET_ACCESS_KEY env variable
- secret_access_key: ''
-
-+ # Reference buckets via path rather than subdomain
-+ # (i.e. "my-endpoint.com/bucket" instead of "bucket.my-endpoint.com")
-+ force_path_style: false
-+
- # Maximum amount to upload in one request to object storage
- max_upload_part: 2GB
-
-diff --git a/config/production.yaml.example b/config/production.yaml.example
-index 70993bf57a3..9ca2de5f4c9 100644
---- a/config/production.yaml.example
-+++ b/config/production.yaml.example
-@@ -121,6 +121,10 @@ object_storage:
- # You can also use AWS_SECRET_ACCESS_KEY env variable
- secret_access_key: ''
-
-+ # Reference buckets via path rather than subdomain
-+ # (i.e. "my-endpoint.com/bucket" instead of "bucket.my-endpoint.com")
-+ force_path_style: false
-+
- # Maximum amount to upload in one request to object storage
- max_upload_part: 2GB
-
-diff --git a/server/initializers/config.ts b/server/initializers/config.ts
-index 8375bf4304c..d726c59a4b6 100644
---- a/server/initializers/config.ts
-+++ b/server/initializers/config.ts
-@@ -91,6 +91,7 @@ const CONFIG = {
- ACCESS_KEY_ID: config.get<string>('object_storage.credentials.access_key_id'),
- SECRET_ACCESS_KEY: config.get<string>('object_storage.credentials.secret_access_key')
- },
-+ FORCE_PATH_STYLE: config.get<boolean>('object_storage.force_path_style'),
- VIDEOS: {
- BUCKET_NAME: config.get<string>('object_storage.videos.bucket_name'),
- PREFIX: config.get<string>('object_storage.videos.prefix'),
-diff --git a/server/lib/object-storage/shared/client.ts b/server/lib/object-storage/shared/client.ts
-index c9a61459336..eadad02f93f 100644
---- a/server/lib/object-storage/shared/client.ts
-+++ b/server/lib/object-storage/shared/client.ts
-@@ -26,7 +26,8 @@ function getClient () {
- accessKeyId: OBJECT_STORAGE.CREDENTIALS.ACCESS_KEY_ID,
- secretAccessKey: OBJECT_STORAGE.CREDENTIALS.SECRET_ACCESS_KEY
- }
-- : undefined
-+ : undefined,
-+ forcePathStyle: CONFIG.OBJECT_STORAGE.FORCE_PATH_STYLE
- })
-
- logger.info('Initialized S3 client %s with region %s.', getEndpoint(), OBJECT_STORAGE.REGION, lTags())
-diff --git a/support/docker/production/config/custom-environment-variables.yaml b/support/docker/production/config/custom-environment-variables.yaml
-index c7cd28e6521..a960bab0bc9 100644
---- a/support/docker/production/config/custom-environment-variables.yaml
-+++ b/support/docker/production/config/custom-environment-variables.yaml
-@@ -54,6 +54,8 @@ object_storage:
-
- region: "PEERTUBE_OBJECT_STORAGE_REGION"
-
-+ force_path_style: "PEERTUBE_OBJECT_STORAGE_FORCE_PATH_STYLE"
-+
- max_upload_part:
- __name: "PEERTUBE_OBJECT_STORAGE_MAX_UPLOAD_PART"
- __format: "json"
-```
-
-You can then recompile it with:
-
-```bash
-npm run build
-```
-
-And it can be started with:
-
-```bash
-NODE_ENV=production NODE_CONFIG_DIR=/srv/peertube/config node dist/server.js
-```
-
-
-### Create resources in Garage
-
-Create a key for Peertube:
-
-```bash
-garage key new --name peertube-key
-```
-
-Keep the Key ID and the Secret key somewhere safe, they will be needed later.
-
-We need two buckets, one for webtorrent videos (named peertube-video) and one for streaming playlists (named peertube-playlist).
-```bash
-garage bucket create peertube-video
-garage bucket create peertube-playlist
-```
-
-Now we allow our key to read and write on these buckets:
-
-```bash
-garage bucket allow peertube-playlist --read --write --key peertube-key
-garage bucket allow peertube-video --read --write --key peertube-key
-```
-
-Finally, we need to expose these buckets publicly to serve their content to users:
-
-```bash
-garage bucket website --allow peertube-playlist
-garage bucket website --allow peertube-video
-```
-
-These buckets are now accessible on the web port (by default 3902) with the following URL: `http://<bucket><root_domain>:<web_port>` where the root domain is defined in your configuration file (by default `.web.garage`). So we currently have the following URLs:
- * http://peertube-playlist.web.garage:3902
- * http://peertube-video.web.garage:3902
-
-Make sure you (will) have a corresponding DNS entry for them.
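-
-To verify that the buckets are reachable before going further, you can, for instance, query the web endpoint directly (a hypothetical check, assuming the DNS names above resolve to your Garage node):
-
-```bash
-curl -I http://peertube-playlist.web.garage:3902
-curl -I http://peertube-video.web.garage:3902
-```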
-
-### Configure a Reverse Proxy to serve CORS
-
-Now we will configure a reverse proxy in front of Garage.
-This is required as we have no other way to serve CORS headers yet.
-Check the [Configuring a reverse proxy](/documentation/cookbook/reverse-proxy/) section to know how.
-
-Now make sure that your two DNS entries are pointing to your reverse proxy.
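-
-Once the reverse proxy is in place, you can check that the CORS headers are actually served (a sketch, assuming your Peertube instance is hosted at the hypothetical domain `peertube.example.com`):
-
-```bash
-# the Access-Control-Allow-* headers must be added by your reverse proxy
-curl -sI -H 'Origin: http://peertube.example.com' \
-  http://peertube-playlist.web.garage | grep -i '^access-control'
-```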
-
-### Configure Peertube
-
-You must edit the file named `config/production.yaml`; we are only modifying the root key named `object_storage`:
-
-```yaml
-object_storage:
- enabled: true
-
- # Put localhost only if you have a garage instance running on that node
- endpoint: 'http://localhost:3900' # or "garage.example.com" if you have TLS on port 443
-
- # This entry has been added by our patch, must be set to true
- force_path_style: true
-
- # Garage supports only one region for now, named garage
- region: 'garage'
-
- credentials:
- access_key_id: 'GKxxxx'
- secret_access_key: 'xxxx'
-
- max_upload_part: 2GB
-
- streaming_playlists:
- bucket_name: 'peertube-playlist'
-
- # Keep it empty for our example
- prefix: ''
-
- # You must fill this field to make Peertube use our reverse proxy/website logic
- base_url: 'http://peertube-playlist.web.garage' # Example: 'https://mirror.example.com'
-
- # Same settings but for webtorrent videos
- videos:
- bucket_name: 'peertube-video'
- prefix: ''
- # You must fill this field to make Peertube use our reverse proxy/website logic
- base_url: 'http://peertube-video.web.garage'
-```
-
-### That's all
-
-Everything should now be configured: simply restart Peertube and try to upload a video.
-In your browser console, you should see that data is fetched directly from your bucket (through the reverse proxy).
-
-### Miscellaneous
-
-*Known bug:* The playback does not start and some 400 Bad Request errors appear in your browser console and on Garage.
-If the description of the error contains HTTP Invalid Range: InvalidRange, the error is due to a buggy ffmpeg version.
-You must avoid version 4.4.0 and use either a newer or an older version.
-
-*Associated issues:* [#137](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/137), [#138](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/138), [#140](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/140). These issues are non-blocking.
-
-*External link:* [Peertube Documentation > Remote Storage](https://docs.joinpeertube.org/admin-remote-storage)
-
-## Mastodon
-
-https://docs.joinmastodon.org/admin/config/#cdn
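-
-Mastodon reads its S3 settings from environment variables, usually set in `.env.production`. A minimal sketch, assuming a hypothetical `mastodon-data` bucket and a key created as in the other sections (check the page linked above for the full list of variables):
-
-```bash
-S3_ENABLED=true
-S3_BUCKET=mastodon-data
-AWS_ACCESS_KEY_ID=GKxxxx
-AWS_SECRET_ACCESS_KEY=xxxx
-S3_REGION=garage
-S3_PROTOCOL=http
-S3_HOSTNAME=localhost:3900
-S3_ENDPOINT=http://localhost:3900
-```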
-
-## Matrix
-
-Matrix is a chat communication protocol. Its main stable server implementation, [Synapse](https://matrix-org.github.io/synapse/latest/), provides a module to store media on an S3 backend. Additionally, a server-independent media store supporting S3 has been developed by the community; it was made possible by the design of the Matrix API and will work with implementations like Conduit, Dendrite, etc.
-
-### synapse-s3-storage-provider (synapse only)
-
-Supposing you have a working Synapse installation, you can add the module with pip:
-
-```bash
- pip3 install --user git+https://github.com/matrix-org/synapse-s3-storage-provider.git
-```
-
-Now create a bucket and a key for your Matrix instance (note your Key ID and Secret Key somewhere, they will be needed later):
-
-```bash
-garage key new --name matrix-key
-garage bucket create matrix
-garage bucket allow matrix --read --write --key matrix-key
-```
-
-Then you must edit your server configuration (eg. `/etc/matrix-synapse/homeserver.yaml`) and add the `media_storage_providers` root key:
-
-```yaml
-media_storage_providers:
-- module: s3_storage_provider.S3StorageProviderBackend
- store_local: True # do we want to store on S3 media created by our users?
- store_remote: True # do we want to store on S3 media created
- # by users of other servers federated to ours?
- store_synchronous: True # do we want to wait until the file has been written before returning?
- config:
- bucket: matrix # the name of our bucket, we chose matrix earlier
- region_name: garage # only "garage" is supported for the region field
- endpoint_url: http://localhost:3900 # the path to the S3 endpoint
- access_key_id: "GKxxx" # your Key ID
- secret_access_key: "xxxx" # your Secret Key
-```
-
-Note that uploaded media will also be stored locally and this behavior cannot be deactivated; it is even required for
-some operations like resizing images.
-In fact, your local filesystem is considered a cache, but without any automated way to garbage-collect it.
-
-We can build our own garbage collector with `s3_media_upload`, a tool provided with the module.
-If you installed the module with the command provided before, you should be able to bring it into your path:
-
-```bash
-PATH=$HOME/.local/bin/:$PATH
-command -v s3_media_upload
-```
-
-Now we can write a simple script (eg `~/.local/bin/matrix-cache-gc`):
-
-```bash
-#!/bin/bash
-
-## CONFIGURATION ##
-AWS_ACCESS_KEY_ID=GKxxx
-AWS_SECRET_ACCESS_KEY=xxxx
-S3_ENDPOINT=http://localhost:3900
-S3_BUCKET=matrix
-MEDIA_STORE=/var/lib/matrix-synapse/media
-PG_USER=matrix
-PG_PASS=xxxx
-PG_DB=synapse
-PG_HOST=localhost
-PG_PORT=5432
-
-## CODE ##
-PATH=$HOME/.local/bin/:$PATH
-cat > database.yaml <<EOF
-user: $PG_USER
-password: $PG_PASS
-database: $PG_DB
-host: $PG_HOST
-port: $PG_PORT
-EOF
-
-s3_media_upload update-db 1d
-s3_media_upload --no-progress check-deleted $MEDIA_STORE
-s3_media_upload --no-progress upload $MEDIA_STORE $S3_BUCKET --delete --endpoint-url $S3_ENDPOINT
-```
-
-This script will list all the media that were not accessed in the last 24 hours according to your database.
-It will check whether, in this list, the files still exist in the local media store.
-For files that are still in the cache, it will upload them to S3 if they are not already present (in case of a crash or an initial synchronisation).
-Finally, the script will delete these files from the cache.
-
-Make this script executable and check that it works:
-
-```bash
-chmod +x $HOME/.local/bin/matrix-cache-gc
-matrix-cache-gc
-```
-
-Add it to your crontab. Open the editor with:
-
-```bash
-crontab -e
-```
-
-And add a new line. For example, to run it every 10 minutes:
-
-```cron
-*/10 * * * * $HOME/.local/bin/matrix-cache-gc
-```
-
-*External link:* [Github > matrix-org/synapse-s3-storage-provider](https://github.com/matrix-org/synapse-s3-storage-provider)
-
-### matrix-media-repo (server independent)
-
-*External link:* [matrix-media-repo Documentation > S3](https://docs.t2bot.io/matrix-media-repo/configuration/s3-datastore.html)
-
-## Pixelfed
-
-https://docs.pixelfed.org/technical-documentation/env.html#filesystem
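-
-Pixelfed is a Laravel application, so its S3 support is configured through `.env` variables. A hedged sketch (variable names may differ between Pixelfed versions, so double-check the env documentation linked above):
-
-```bash
-FILESYSTEM_CLOUD=s3
-AWS_ACCESS_KEY_ID=GKxxxx
-AWS_SECRET_ACCESS_KEY=xxxx
-AWS_DEFAULT_REGION=garage
-AWS_BUCKET=pixelfed
-AWS_ENDPOINT=http://localhost:3900
-AWS_USE_PATH_STYLE_ENDPOINT=true
-```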
-
-## Pleroma
-
-https://docs-develop.pleroma.social/backend/configuration/cheatsheet/#pleromauploaderss3
-
-## Lemmy
-
-via pict-rs
-https://git.asonix.dog/asonix/pict-rs/commit/f9f4fc63d670f357c93f24147c2ee3e1278e2d97
-
-## Funkwhale
-
-https://docs.funkwhale.audio/admin/configuration.html#s3-storage
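-
-Funkwhale (a Django application) is also configured through environment variables. A minimal sketch, assuming a hypothetical `funkwhale` bucket; see the documentation linked above for the full list of settings:
-
-```bash
-AWS_ACCESS_KEY_ID=GKxxxx
-AWS_SECRET_ACCESS_KEY=xxxx
-AWS_STORAGE_BUCKET_NAME=funkwhale
-AWS_S3_ENDPOINT_URL=http://localhost:3900
-AWS_S3_REGION_NAME=garage
-```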
-
-## Misskey
-
-https://github.com/misskey-dev/misskey/commit/9d944243a3a59e8880a360cbfe30fd5a3ec8d52d
-
-## Prismo
-
-https://gitlab.com/prismosuite/prismo/-/blob/dev/.env.production.sample#L26-33
-
-## Owncloud Infinite Scale (ocis)
-
-## Unsupported
-
- - Mobilizon: No S3 integration
- - WriteFreely: No S3 integration
- - Plume: No S3 integration
diff --git a/content/documentation/connect/backup.md b/content/documentation/connect/backup.md
deleted file mode 100644
index 878660f..0000000
--- a/content/documentation/connect/backup.md
+++ /dev/null
@@ -1,37 +0,0 @@
-+++
-title = "Backups (restic, duplicity...)"
-weight = 25
-+++
-
-
-Backups are essential for disaster recovery but they are not trivial to manage.
-Using Garage as your backup target will enable you to scale your storage as needed while ensuring high availability.
-
-## Borg Backup
-
-Borg Backup is very popular among backup tools, but it is not yet compatible with the S3 API.
-We recommend using one of the other tools listed in this guide, as they are all compatible with the S3 API.
-If you still want to use Borg, you can use it with `rclone mount`, as sketched below.
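-
-A rough sketch of this workaround, assuming you already have a `garage` remote configured for rclone (see the [CLI tools](/documentation/connect/cli/) page) and a hypothetical `backups` bucket:
-
-```bash
-# expose the bucket as a (non-POSIX!) filesystem
-mkdir -p /mnt/garage-backups
-rclone mount --daemon garage:backups /mnt/garage-backups
-
-# initialize a Borg repository inside the mount
-borg init --encryption=repokey /mnt/garage-backups/borg-repo
-```
-
-Keep in mind the warnings from the [FUSE](/documentation/connect/fs/) page: this setup provides no locking guarantees, so use it at your own risk.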
-
-
-
-## Restic
-
-*External links:* [Restic Documentation > Amazon S3](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html#amazon-s3)
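-
-For reference, a minimal hypothetical invocation: restic reads the key pair from the standard AWS environment variables and takes the endpoint as part of its repository URL.
-
-```bash
-export AWS_ACCESS_KEY_ID=GKxxxx
-export AWS_SECRET_ACCESS_KEY=xxxx
-
-# initialize the repository once, then back up a folder
-restic -r s3:http://localhost:3900/my-backups init
-restic -r s3:http://localhost:3900/my-backups backup /path/to/data
-```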
-
-## Duplicity
-
-*External links:* [Duplicity > man](https://duplicity.gitlab.io/duplicity-web/vers8/duplicity.1.html) (scroll to "URL Format" and "A note on Amazon S3")
-
-## Duplicati
-
-*External links:* [Duplicati Documentation > Storage Providers](https://github.com/kees-z/DuplicatiDocs/blob/master/docs/05-storage-providers.md#s3-compatible)
-
-## knoxite
-
-*External links:* [Knoxite Documentation > Storage Backends](https://knoxite.com/docs/storage-backends/#amazon-s3)
-
-## kopia
-
-*External links:* [Kopia Documentation > Repositories](https://kopia.io/docs/repositories/#amazon-s3)
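-
-For reference, a hypothetical repository creation with kopia (`--disable-tls` is only needed for plain-HTTP endpoints):
-
-```bash
-kopia repository create s3 \
-  --bucket=my-backups \
-  --endpoint=localhost:3900 \
-  --access-key=GKxxxx \
-  --secret-access-key=xxxx \
-  --disable-tls
-```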
-
diff --git a/content/documentation/connect/cli-nextcloud-gui.png b/content/documentation/connect/cli-nextcloud-gui.png
deleted file mode 100644
index 7a58a3a..0000000
--- a/content/documentation/connect/cli-nextcloud-gui.png
+++ /dev/null
Binary files differ
diff --git a/content/documentation/connect/cli.md b/content/documentation/connect/cli.md
deleted file mode 100644
index 77d0647..0000000
--- a/content/documentation/connect/cli.md
+++ /dev/null
@@ -1,170 +0,0 @@
-+++
-title = "CLI tools"
-weight = 20
-+++
-
-
-CLI tools allow you to query the S3 API without too many abstractions.
-These tools are particularly suitable for debugging, backups, website deployments, or any scripted task that needs to handle data.
-
-## Minio client (recommended)
-
-Use the following command to set an "alias", i.e. define a new S3 server to be
-used by the Minio client:
-
-```bash
-mc alias set \
- garage \
- <endpoint> \
- <access key> \
- <secret key> \
- --api S3v4
-```
-
-Remember that `mc` is sometimes called `mcli` (such as on Arch Linux) to avoid conflicts
-with Midnight Commander.
-
-Some commands:
-
-```bash
-# list buckets
-mc ls garage/
-
-# list objects in a bucket
-mc ls garage/my_files
-
-# copy from your filesystem to garage
-mc cp /proc/cpuinfo garage/my_files/cpuinfo.txt
-
-# copy from garage to your filesystem
-mc cp garage/my_files/cpuinfo.txt /tmp/cpuinfo.txt
-
-# mirror a folder from your filesystem to garage
-mc mirror --overwrite ./book garage/garagehq.deuxfleurs.fr
-```
-
-
-## AWS CLI
-
-Create a file named `~/.aws/credentials` and put:
-
-```toml
-[default]
-aws_access_key_id=xxxx
-aws_secret_access_key=xxxx
-```
-
-Then a file named `~/.aws/config` and put:
-
-```toml
-[default]
-region=garage
-```
-
-Now, supposing Garage is listening on `http://127.0.0.1:3900`, you can list your buckets with:
-
-```bash
-aws --endpoint-url http://127.0.0.1:3900 s3 ls
-```
-
-Passing the `--endpoint-url` parameter to each command is annoying, but the AWS developers do not provide a corresponding configuration entry.
-As a workaround, you can redefine the `aws` command by editing the file `~/.bashrc`:
-
-```bash
-function aws { command aws --endpoint-url http://127.0.0.1:3900 "$@" ; }
-```
-
-*Do not forget to run `source ~/.bashrc` or to start a new terminal before running the next commands.*
-
-Now you can simply run:
-
-```bash
-# list buckets
-aws s3 ls
-
-# list objects of a bucket
-aws s3 ls s3://my_files
-
-# copy from your filesystem to garage
-aws s3 cp /proc/cpuinfo s3://my_files/cpuinfo.txt
-
-# copy from garage to your filesystem
-aws s3 cp s3://my_files/cpuinfo.txt /tmp/cpuinfo.txt
-```
-
-## `rclone`
-
-`rclone` can be configured using the interactive assistant invoked using `rclone config`.
-
-You can also configure `rclone` by directly writing its configuration file.
-Here is a template `rclone.ini` configuration file (mine is located at `~/.config/rclone/rclone.conf`):
-
-```ini
-[garage]
-type = s3
-provider = Other
-env_auth = false
-access_key_id = <access key>
-secret_access_key = <secret key>
-region = <region>
-endpoint = <endpoint>
-force_path_style = true
-acl = private
-bucket_acl = private
-```
-
-Now you can run:
-
-```bash
-# list buckets
-rclone lsd garage:
-
-# list objects of a bucket aggregated in directories
-rclone lsd garage:my-bucket
-
-# copy from your filesystem to garage
-echo hello world > /tmp/hello.txt
-rclone copy /tmp/hello.txt garage:my-bucket/
-
-# copy from garage to your filesystem
-rclone copy garage:my-bucket/hello.txt .
-
-# see all available subcommands
-rclone help
-```
-
-## `s3cmd`
-
-Here is a template for the `s3cmd.cfg` file to talk with Garage:
-
-```ini
-[default]
-access_key = <access key>
-secret_key = <secret key>
-host_base = <endpoint without http(s)://>
-host_bucket = <same as host_base>
-use_https = <False or True>
-```
-
-And use it as follows:
-
-```bash
-# List buckets
-s3cmd ls
-
-# list objects inside a bucket
-s3cmd ls s3://my-bucket
-
-# copy from your filesystem to garage
-echo hello world > /tmp/hello.txt
-s3cmd put /tmp/hello.txt s3://my-bucket/
-
-# copy from garage to your filesystem
-s3cmd get s3://my-bucket/hello.txt hello.txt
-```
-
-## Cyberduck & duck
-
-TODO
-
-
diff --git a/content/documentation/connect/code.md b/content/documentation/connect/code.md
deleted file mode 100644
index 2351ab0..0000000
--- a/content/documentation/connect/code.md
+++ /dev/null
@@ -1,83 +0,0 @@
-+++
-title = "Your code (PHP, JS, Go...)"
-weight = 30
-+++
-
-
-If you are developing a new application, you may want to use Garage to store your users' media.
-
-The S3 API that Garage uses is a standard REST API, so as long as you can make HTTP requests,
-you can query it. You can check the [S3 REST API Reference](https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operations_Amazon_Simple_Storage_Service.html) from Amazon to learn more.
-
-Developing your own wrapper around the REST API is time-consuming and complicated.
-Instead, there are some libraries already available.
-
-Some of them are maintained by Amazon, some by Minio, others by the community.
-
-## PHP
-
- - Amazon aws-sdk-php
- - [Installation](https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/getting-started_installation.html)
- - [Reference](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html)
- - [Example](https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/s3-examples-creating-buckets.html)
-
-## Javascript
-
- - Minio SDK
- - [Reference](https://docs.min.io/docs/javascript-client-api-reference.html)
-
- - Amazon aws-sdk-js
- - [Installation](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/getting-started.html)
- - [Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html)
- - [Example](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/s3-example-creating-buckets.html)
-
-## Golang
-
- - Minio minio-go-sdk
- - [Reference](https://docs.min.io/docs/golang-client-api-reference.html)
-
- - Amazon aws-sdk-go-v2
- - [Installation](https://aws.github.io/aws-sdk-go-v2/docs/getting-started/)
- - [Reference](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3)
- - [Example](https://aws.github.io/aws-sdk-go-v2/docs/code-examples/s3/putobject/)
-
-## Python
-
- - Minio SDK
- - [Reference](https://docs.min.io/docs/python-client-api-reference.html)
-
- - Amazon boto3
- - [Installation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html)
- - [Reference](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html)
- - [Example](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html)
-
-## Java
-
- - Minio SDK
- - [Reference](https://docs.min.io/docs/java-client-api-reference.html)
-
- - Amazon aws-sdk-java
- - [Installation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html)
- - [Reference](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Client.html)
- - [Example](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/examples-s3-objects.html)
-
-## Rust
-
- - Amazon aws-rust-sdk
- - [Github](https://github.com/awslabs/aws-sdk-rust)
-
-## .NET
-
- - Minio SDK
- - [Reference](https://docs.min.io/docs/dotnet-client-api-reference.html)
-
- - Amazon aws-dotnet-sdk
-
-## C++
-
- - Amazon aws-cpp-sdk
-
-## Haskell
-
- - Minio SDK
- - [Reference](https://docs.min.io/docs/haskell-client-api-reference.html)
diff --git a/content/documentation/connect/fs.md b/content/documentation/connect/fs.md
deleted file mode 100644
index 792d5c5..0000000
--- a/content/documentation/connect/fs.md
+++ /dev/null
@@ -1,72 +0,0 @@
-+++
-title = "FUSE (s3fs, goofys, s3backer...)"
-weight = 25
-+++
-
-
-**WARNING! Garage is not POSIX compatible.
-Mounting S3 buckets as filesystems will not provide POSIX compatibility.
-If you are not careful, you will lose or corrupt your data.**
-
-Do not use these FUSE filesystems to store any database files (e.g. MySQL, PostgreSQL, MongoDB, or SQLite),
-any daemon cache (dovecot, openldap, gitea, etc.),
-or, more generally, any software that uses locking, advanced filesystem features, or makes any synchronisation assumptions.
-Ideally, avoid these solutions altogether for any serious or production use.
-
-## rclone mount
-
-rclone uses the same configuration in [CLI](/documentation/connect/cli/) and mount modes.
-We assume you have the following entry in your `rclone.ini` (mine is located at `~/.config/rclone/rclone.conf`):
-
-```ini
-[garage]
-type = s3
-provider = Other
-env_auth = false
-access_key_id = <access key>
-secret_access_key = <secret key>
-region = <region>
-endpoint = <endpoint>
-force_path_style = true
-acl = private
-bucket_acl = private
-```
-
-Then you can mount and access any bucket as follows:
-
-```bash
-# mount the bucket
-mkdir /tmp/my-bucket
-rclone mount --daemon garage:my-bucket /tmp/my-bucket
-
-# set your working directory to the bucket
-cd /tmp/my-bucket
-
-# create a file
-echo hello world > hello.txt
-
-# access the file
-cat hello.txt
-
-# unmount the bucket
-cd
-fusermount -u /tmp/my-bucket
-```
-
-*External link:* [rclone documentation > rclone mount](https://rclone.org/commands/rclone_mount/)
-
-## s3fs
-
-*External link:* [s3fs github > README.md](https://github.com/s3fs-fuse/s3fs-fuse#examples)
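-
-For reference, a hypothetical mount with s3fs (the `use_path_request_style` option is s3fs's way of forcing path-style access):
-
-```bash
-# store the key pair in the format s3fs expects
-echo 'GKxxxx:xxxx' > ~/.passwd-s3fs
-chmod 600 ~/.passwd-s3fs
-
-mkdir -p /tmp/my-bucket
-s3fs my-bucket /tmp/my-bucket \
-  -o url=http://localhost:3900 \
-  -o use_path_request_style \
-  -o passwd_file=$HOME/.passwd-s3fs
-```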
-
-## goofys
-
-*External link:* [goofys github > README.md](https://github.com/kahing/goofys#usage)
-
-## s3backer
-
-*External link:* [s3backer github > manpage](https://github.com/archiecobbs/s3backer/wiki/ManPage)
-
-## csi-s3
-
-*External link:* [csi-s3 Github > README.md](https://github.com/ctrox/csi-s3)
diff --git a/content/documentation/connect/repositories.md b/content/documentation/connect/repositories.md
deleted file mode 100644
index 90adc46..0000000
--- a/content/documentation/connect/repositories.md
+++ /dev/null
@@ -1,173 +0,0 @@
-+++
-title = "Repositories (Docker, Nix, Git...)"
-weight = 15
-+++
-
-
-Whether you need to store and serve binary packages or source code, you may want to deploy a tool referred to as a repository or registry.
-Garage can also help you serve this content.
-
-## Gitea
-
-You can use Garage with Gitea to store your [git LFS](https://git-lfs.github.com/) data, your users' avatars, and their attachments.
-You can configure a different target for each data type (check the `[lfs]` and `[attachment]` sections of the Gitea documentation) and you can provide a default one through the `[storage]` section.
-
-Let's start by creating a key and a bucket (your key id and secret will be needed later, keep them somewhere):
-
-```bash
-garage key new --name gitea-key
-garage bucket create gitea
-garage bucket allow gitea --read --write --key gitea-key
-```
-
-Then you can edit your configuration (by default `/etc/gitea/conf/app.ini`):
-
-```ini
-[storage]
-STORAGE_TYPE=minio
-MINIO_ENDPOINT=localhost:3900
-MINIO_ACCESS_KEY_ID=GKxxx
-MINIO_SECRET_ACCESS_KEY=xxxx
-MINIO_BUCKET=gitea
-MINIO_LOCATION=garage
-MINIO_USE_SSL=false
-```
-
-You can also pass this configuration through environment variables:
-
-```bash
-GITEA__storage__STORAGE_TYPE=minio
-GITEA__storage__MINIO_ENDPOINT=localhost:3900
-GITEA__storage__MINIO_ACCESS_KEY_ID=GKxxx
-GITEA__storage__MINIO_SECRET_ACCESS_KEY=xxxx
-GITEA__storage__MINIO_BUCKET=gitea
-GITEA__storage__MINIO_LOCATION=garage
-GITEA__storage__MINIO_USE_SSL=false
-```
-
-Then restart your Gitea instance and try to upload a custom avatar.
-If it worked, you should see some content in your gitea bucket (you must configure your `aws` command first):
-
-```
-$ aws s3 ls s3://gitea/avatars/
-2021-11-10 12:35:47 190034 616ba79ae2b84f565c33d72c2ec50861
-```
-
-
-*External link:* [Gitea Documentation > Configuration Cheat Sheet](https://docs.gitea.io/en-us/config-cheat-sheet/)
-
-## Gitlab
-
-*External link:* [Gitlab Documentation > Object storage](https://docs.gitlab.com/ee/administration/object_storage.html)
-
-
-## Private NPM Registry (Verdaccio)
-
-*External link:* [Verdaccio Github Repository > aws-storage plugin](https://github.com/verdaccio/verdaccio/tree/master/packages/plugins/aws-storage)
-
-## Docker
-
-Not yet compatible, follow [#103](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/103).
-
-*External link:* [Docker Documentation > Registry storage drivers > S3 storage driver](https://docs.docker.com/registry/storage-drivers/s3/)
-
-## Nix
-
-Nix has no repository in its terminology: instead, it breaks this concept down into two parts: binary cache and channel.
-
-**A channel** is a set of `.nix` definitions describing all the software you want to serve.
-
-Because we do not want all our clients to compile all these derivations by themselves,
-we can compile them once and then serve them as part of our **binary cache**.
-
-It is possible to use a **binary cache** without a channel; you only need to serve your Nix definitions
-through another medium, such as a git repository.
-
-As a first step, we will need to create a bucket on Garage and enable website access on it:
-
-```bash
-garage key new --name nix-key
-garage bucket create nix.example.com
-garage bucket allow nix.example.com --read --write --key nix-key
-garage bucket website nix.example.com --allow
-```
-
-If you need more information about exposing buckets as websites on Garage,
-check [Exposing buckets as websites](/documentation/cookbook/exposing-websites/)
- and [Configuring a reverse proxy](/documentation/cookbook/reverse-proxy/).
-
-Next, we want to check that our bucket works:
-
-```bash
-echo nix repo > /tmp/index.html
-mc cp /tmp/index.html garage/nix.example.com/
-rm /tmp/index.html
-
-curl https://nix.example.com
-# output: nix repo
-```
-
-### Binary cache
-
-To serve binaries as part of your cache, you need to sign them with a key specific to Nix.
-You can generate the keypair as follows:
-
-```bash
-nix-store --generate-binary-cache-key <name> cache-priv-key.pem cache-pub-key.pem
-```
-
-You can then manually sign the packages of your store with the following command:
-
-```bash
-nix sign-paths --all -k cache-priv-key.pem
-```
-
-Setting a key in `nix.conf` will do the signature at build time automatically without additional commands.
-Edit the `nix.conf` of your builder:
-
-```toml
-secret-key-files = /etc/nix/cache-priv-key.pem
-```
-
-Now that your content is signed, you can copy a derivation to your cache.
-For example, if you want to copy a specific derivation of your store:
-
-```bash
-nix copy /nix/store/wadmyilr414n7bimxysbny876i2vlm5r-bash-5.1-p8 --to 's3://nix.example.com?endpoint=garage.example.com&region=garage'
-```
-
-*Note that if you have not signed your packages, you can append `&secret-key=/etc/nix/cache-priv-key.pem` to the end of your S3 URL.*
-
-Sometimes you don't want to hardcode this store path in your script.
-Let's suppose that you are working on a codebase that you build with `nix-build`; you can then run:
-
-```bash
-nix copy $(nix-build) --to 's3://nix.example.com?endpoint=garage.example.com&region=garage'
-```
-
-*This command works because the only thing that `nix-build` outputs on stdout is the paths of the built derivations in your nix store.*
-
-You can include your derivation dependencies:
-
-```bash
-nix copy $(nix-store -qR $(nix-build)) --to 's3://nix.example.com?endpoint=garage.example.com&region=garage'
-```
-
-Now, your binary cache stores your derivation and all its dependencies.
-Just inform your users that they must update their `nix.conf` file with the following lines:
-
-```toml
-substituters = https://cache.nixos.org https://nix.example.com
-trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= nix.example.com:eTGL6kvaQn6cDR/F9lDYUIP9nCVR/kkshYfLDJf1yKs=
-```
-
-*You must re-add cache.nixos.org because redeclaring these keys overrides the previous configuration instead of extending it.*
-
-Now, when your clients run `nix-build` or any command that generates a derivation whose hash is already present
-on the binary cache, they will download the result from the cache instead of compiling it, saving a lot of time and CPU!
-
-
-### Channels
-
-Channels additionally serve Nix definitions, i.e. a `.nix` file referencing
-all the derivations you want to serve.
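-
-A rough sketch of what serving a channel from the bucket could look like; the exact tarball layout expected by `nix-channel` is described in the Nix manual, so treat this as an illustration only:
-
-```bash
-# pack your definitions (a directory containing a default.nix) and upload them
-tar cJf nixexprs.tar.xz my-channel/
-mc cp nixexprs.tar.xz garage/nix.example.com/channel/nixexprs.tar.xz
-
-# clients can then subscribe to the channel
-nix-channel --add https://nix.example.com/channel my-channel
-nix-channel --update
-```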
diff --git a/content/documentation/connect/websites.md b/content/documentation/connect/websites.md
deleted file mode 100644
index 2692196..0000000
--- a/content/documentation/connect/websites.md
+++ /dev/null
@@ -1,82 +0,0 @@
-+++
-title = "Websites (Hugo, Jekyll, Publii...)"
-weight = 10
-+++
-
-
-Garage is also suitable for hosting static websites.
-While they can be deployed with traditional CLI tools, some static website generators have integrated options to ease your workflow.
-
-## Hugo
-
-Add to your `config.toml` the following section:
-
-```toml
-[[deployment.targets]]
- URL = "s3://<bucket>?endpoint=<endpoint>&disableSSL=<bool>&s3ForcePathStyle=true&region=garage"
-```
-
-For example:
-
-```toml
-[[deployment.targets]]
- URL = "s3://my-blog?endpoint=localhost:3900&disableSSL=true&s3ForcePathStyle=true&region=garage"
-```
-
-Then inform Hugo of your credentials:
-
-```bash
-export AWS_ACCESS_KEY_ID=GKxxx
-export AWS_SECRET_ACCESS_KEY=xxx
-```
-
-And finally build and deploy your website:
-
-```bash
-hugo
-hugo deploy
-```
-
-*External links:*
- - [gocloud.dev > aws > Supported URL parameters](https://pkg.go.dev/gocloud.dev/aws?utm_source=godoc#ConfigFromURLParams)
- - [Hugo Documentation > hugo deploy](https://gohugo.io/hosting-and-deployment/hugo-deploy/)
-
-## Publii
-
-It would require a patch either on Garage or on Publii to make both systems work together.
-
-Currently, the proposed workaround is to deploy your website manually:
- - On the left menu, click on Server, choose Manual Deployment (the logo looks like a compressed file)
- - Set your website URL, keep Output type as "Non-compressed catalog"
- - Click on Save changes
- - Click on Sync your website (bottom left of the app)
- - On the new page, click again on Sync your website
- - Click on Get website files
- - You need to synchronize the output folder you see in your file explorer; we will use the Minio client, as shown below.
-
-Be sure that you have [configured the Minio client](/documentation/connect/cli/#minio-client-recommended).
-
-Then copy this output folder:
-
-```bash
-mc mirror --overwrite output garage/my-site
-```
-
-## Generic (eg. Jekyll)
-
-Some tools do not support sending to an S3 backend but output a compiled folder on your system.
-We can then use any CLI tool to upload this content to our S3 target.
-
-First, start by [configuring minio client](/documentation/connect/cli/#minio-client-recommended).
-
-Then build your website:
-
-```bash
-jekyll build
-```
-
-And copy Jekyll's output folder to S3:
-
-```bash
-mc mirror --overwrite _site garage/my-site
-```