From 961a4cf7b7b7b429d3edfc96dc6a58d58d2d7df5 Mon Sep 17 00:00:00 2001 From: Alex Auvolat Date: Wed, 2 Feb 2022 11:43:28 +0100 Subject: Change _ to - and fix internal links --- doc/book/connect/_index.md | 12 +- doc/book/connect/apps.md | 464 --------------------- doc/book/connect/apps/cli-nextcloud-gui.png | Bin 0 -> 201685 bytes doc/book/connect/apps/index.md | 464 +++++++++++++++++++++ doc/book/connect/cli-nextcloud-gui.png | Bin 201685 -> 0 bytes doc/book/connect/fs.md | 2 +- doc/book/connect/websites.md | 4 +- doc/book/cookbook/_index.md | 14 +- doc/book/cookbook/exposing-websites.md | 51 +++ doc/book/cookbook/exposing_websites.md | 51 --- doc/book/cookbook/from-source.md | 54 +++ doc/book/cookbook/from_source.md | 54 --- doc/book/cookbook/real-world.md | 295 +++++++++++++ doc/book/cookbook/real_world.md | 295 ------------- doc/book/cookbook/reverse-proxy.md | 168 ++++++++ doc/book/cookbook/reverse_proxy.md | 168 -------- doc/book/design/benchmarks.md | 84 ---- doc/book/design/benchmarks/endpoint-latency-dc.png | Bin 0 -> 131776 bytes doc/book/design/benchmarks/endpoint-latency.png | Bin 0 -> 127369 bytes doc/book/design/benchmarks/index.md | 84 ++++ doc/book/design/img/endpoint-latency-dc.png | Bin 131776 -> 0 bytes doc/book/design/img/endpoint-latency.png | Bin 127369 -> 0 bytes doc/book/design/internals.md | 2 +- doc/book/design/related-work.md | 80 ++++ doc/book/design/related_work.md | 80 ---- doc/book/development/miscellaneous-notes.md | 101 +++++ doc/book/development/miscellaneous_notes.md | 101 ----- doc/book/development/release-process.md | 198 +++++++++ doc/book/development/release_process.md | 198 --------- doc/book/development/scripts.md | 6 +- doc/book/quick-start/_index.md | 30 +- doc/book/reference-manual/configuration.md | 4 +- doc/book/reference-manual/layout.md | 2 +- doc/book/reference-manual/s3-compatibility.md | 67 +++ doc/book/reference-manual/s3_compatibility.md | 67 --- doc/book/working-documents/compatibility-target.md | 108 +++++ doc/book/working-documents/compatibility_target.md | 108 ----- doc/book/working-documents/design-draft.md | 165 ++++++++ doc/book/working-documents/design_draft.md | 165 -------- doc/book/working-documents/load-balancing.md | 202 +++++++++ doc/book/working-documents/load_balancing.md | 202 --------- doc/book/working-documents/migration-04.md | 108 +++++ doc/book/working-documents/migration-06.md | 53 +++ doc/book/working-documents/migration_04.md | 108 ----- doc/book/working-documents/migration_06.md | 53 --- 45 files changed, 2237 insertions(+), 2235 deletions(-) delete mode 100644 doc/book/connect/apps.md create mode 100644 doc/book/connect/apps/cli-nextcloud-gui.png create mode 100644 doc/book/connect/apps/index.md delete mode 100644 doc/book/connect/cli-nextcloud-gui.png create mode 100644 doc/book/cookbook/exposing-websites.md delete mode 100644 doc/book/cookbook/exposing_websites.md create mode 100644 doc/book/cookbook/from-source.md delete mode 100644 doc/book/cookbook/from_source.md create mode 100644 doc/book/cookbook/real-world.md delete mode 100644 doc/book/cookbook/real_world.md create mode 100644 doc/book/cookbook/reverse-proxy.md delete mode 100644 doc/book/cookbook/reverse_proxy.md delete mode 100644 doc/book/design/benchmarks.md create mode 100644 doc/book/design/benchmarks/endpoint-latency-dc.png create mode 100644 doc/book/design/benchmarks/endpoint-latency.png create mode 100644 doc/book/design/benchmarks/index.md delete mode 100644 doc/book/design/img/endpoint-latency-dc.png delete mode 100644 
doc/book/design/img/endpoint-latency.png create mode 100644 doc/book/design/related-work.md delete mode 100644 doc/book/design/related_work.md create mode 100644 doc/book/development/miscellaneous-notes.md delete mode 100644 doc/book/development/miscellaneous_notes.md create mode 100644 doc/book/development/release-process.md delete mode 100644 doc/book/development/release_process.md create mode 100644 doc/book/reference-manual/s3-compatibility.md delete mode 100644 doc/book/reference-manual/s3_compatibility.md create mode 100644 doc/book/working-documents/compatibility-target.md delete mode 100644 doc/book/working-documents/compatibility_target.md create mode 100644 doc/book/working-documents/design-draft.md delete mode 100644 doc/book/working-documents/design_draft.md create mode 100644 doc/book/working-documents/load-balancing.md delete mode 100644 doc/book/working-documents/load_balancing.md create mode 100644 doc/book/working-documents/migration-04.md create mode 100644 doc/book/working-documents/migration-06.md delete mode 100644 doc/book/working-documents/migration_04.md delete mode 100644 doc/book/working-documents/migration_06.md (limited to 'doc/book') diff --git a/doc/book/connect/_index.md b/doc/book/connect/_index.md index c6a46aea..b4868b9f 100644 --- a/doc/book/connect/_index.md +++ b/doc/book/connect/_index.md @@ -10,11 +10,11 @@ Garage implements the Amazon S3 protocol, which makes it compatible with many ex In particular, you will find here instructions to connect it with: - - [web applications](./apps.md) - - [website hosting](./websites.md) - - [software repositories](./repositories.md) - - [CLI tools](./cli.md) - - [your own code](./code.md) + - [web applications](@/documentation/connect/apps/index.md) + - [website hosting](@/documentation/connect/websites.md) + - [software repositories](@/documentation/connect/repositories.md) + - [CLI tools](@/documentation/connect/cli.md) + - [your own code](@/documentation/connect/code.md) ### Generic instructions @@ -31,7 +31,7 @@ you will need the following parameters: like this: `GK3515373e4c851ebaad366558` (access key), `7d37d093435a41f2aab8f13c19ba067d9776c90215f56614adad6ece597dbb34` (secret key). These keys are created and managed using the `garage` CLI, as explained in the - [quick start](../quick_start/index.md) guide. + [quick start](@/documentation/quick-start/_index.md) guide. Most S3 clients can be configured easily with these parameters, provided that you follow the following guidelines: diff --git a/doc/book/connect/apps.md b/doc/book/connect/apps.md deleted file mode 100644 index 65b97dfe..00000000 --- a/doc/book/connect/apps.md +++ /dev/null @@ -1,464 +0,0 @@ -+++ -title = "Apps (Nextcloud, Peertube...)" -weight = 5 -+++ - -In this section, we cover the following software: [Nextcloud](#nextcloud), [Peertube](#peertube), [Mastodon](#mastodon), [Matrix](#matrix) - -## Nextcloud - -Nextcloud is a popular file synchronisation and backup service. -By default, Nextcloud stores its data on the local filesystem. -If you want to expand your storage to aggregate multiple servers, Garage is the way to go. - -A S3 backend can be configured in two ways on Nextcloud, either as Primary Storage or as an External Storage. -Primary storage will store all your data on S3, in an opaque manner, and will provide the best performances. -External storage enable you to select which data will be stored on S3, your file hierarchy will be preserved in S3, but it might be slower. 
- -In the following, we cover both methods but before reading our guide, we suppose you have done some preliminary steps. -First, we expect you have an already installed and configured Nextcloud instance. -Second, we suppose you have created a key and a bucket. - -As a reminder, you can create a key for your nextcloud instance as follow: - -```bash -garage key new --name nextcloud-key -``` - -Keep the Key ID and the Secret key in a pad, they will be needed later. -Then you can create a bucket and give read/write rights to your key on this bucket with: - -```bash -garage bucket create nextcloud -garage bucket allow nextcloud --read --write --key nextcloud-key -``` - - -### Primary Storage - -Now edit your Nextcloud configuration file to enable object storage. -On my installation, the config. file is located at the following path: `/var/www/nextcloud/config/config.php`. -We will add a new root key to the `$CONFIG` dictionnary named `objectstore`: - -```php - [ - 'class' => '\\OC\\Files\\ObjectStore\\S3', - 'arguments' => [ - 'bucket' => 'nextcloud', // Your bucket name, must be created before - 'autocreate' => false, // Garage does not support autocreate - 'key' => 'xxxxxxxxx', // The Key ID generated previously - 'secret' => 'xxxxxxxxx', // The Secret key generated previously - 'hostname' => '127.0.0.1', // Can also be a domain name, eg. garage.example.com - 'port' => 3900, // Put your reverse proxy port or your S3 API port - 'use_ssl' => false, // Set it to true if you have a TLS enabled reverse proxy - 'region' => 'garage', // Garage has only one region named "garage" - 'use_path_style' => true // Garage supports only path style, must be set to true - ], -], -``` - -That's all, your Nextcloud will store all your data to S3. -To test your new configuration, just reload your Nextcloud webpage and start sending data. - -*External link:* [Nextcloud Documentation > Primary Storage](https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/primary_storage.html) - -### External Storage - -**From the GUI.** Activate the "External storage support" app from the "Applications" page (click on your account icon on the top right corner of your screen to display the menu). Go to your parameters page (also located below your account icon). Click on external storage (or the corresponding translation in your language). - -[![Screenshot of the External Storage form](./cli-nextcloud-gui.png)](./cli-nextcloud-gui.png) -*Click on the picture to zoom* - -Add a new external storage. Put what you want in "folder name" (eg. "shared"). Select "Amazon S3". Keep "Access Key" for the Authentication field. -In Configuration, put your bucket name (eg. nextcloud), the host (eg. 127.0.0.1), the port (eg. 3900 or 443), the region (garage). Tick the SSL box if you have put an HTTPS proxy in front of garage. You must tick the "Path access" box and you must leave the "Legacy authentication (v2)" box empty. Put your Key ID (eg. GK...) and your Secret Key in the last two input boxes. Finally click on the tick symbol on the right of your screen. - -Now go to your "Files" app and a new "linked folder" has appeared with the name you chose earlier (eg. "shared"). 
- -*External link:* [Nextcloud Documentation > External Storage Configuration GUI](https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/external_storage_configuration_gui.html) - -**From the CLI.** First install the external storage application: - -```bash -php occ app:install files_external -``` - -Then add a new mount point with: - -```bash - php occ files_external:create \ - -c bucket=nextcloud \ - -c hostname=127.0.0.1 \ - -c port=3900 \ - -c region=garage \ - -c use_ssl=false \ - -c use_path_style=true \ - -c legacy_auth=false \ - -c key=GKxxxx \ - -c secret=xxxx \ - shared amazons3 amazons3::accesskey -``` - -Adapt the `hostname`, `port`, `use_ssl`, `key`, and `secret` entries to your configuration. -Do not change the `use_path_style` and `legacy_auth` entries, other configurations are not supported. - -*External link:* [Nextcloud Documentation > occ command > files external](https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/occ_command.html#files-external-label) - - -## Peertube - -Peertube proposes a clever integration of S3 by directly exposing its endpoint instead of proxifying requests through the application. -In other words, Peertube is only responsible of the "control plane" and offload the "data plane" to Garage. -In return, this system is a bit harder to configure, especially with Garage that supports less feature than other older S3 backends. -We show that it is still possible to configure Garage with Peertube, allowing you to spread the load and the bandwidth usage on the Garage cluster. - -### Enable path-style access by patching Peertube - -First, you will need to apply a small patch on Peertube ([#4510](https://github.com/Chocobozzz/PeerTube/pull/4510)): - -```diff -From e3b4c641bdf67e07d406a1d49d6aa6b1fbce2ab4 Mon Sep 17 00:00:00 2001 -From: Martin Honermeyer -Date: Sun, 31 Oct 2021 12:34:04 +0100 -Subject: [PATCH] Allow setting path-style access for object storage - ---- - config/default.yaml | 4 ++++ - config/production.yaml.example | 4 ++++ - server/initializers/config.ts | 1 + - server/lib/object-storage/shared/client.ts | 3 ++- - .../production/config/custom-environment-variables.yaml | 2 ++ - 5 files changed, 13 insertions(+), 1 deletion(-) - -diff --git a/config/default.yaml b/config/default.yaml -index cf9d69a6211..4efd56fb804 100644 ---- a/config/default.yaml -+++ b/config/default.yaml -@@ -123,6 +123,10 @@ object_storage: - # You can also use AWS_SECRET_ACCESS_KEY env variable - secret_access_key: '' - -+ # Reference buckets via path rather than subdomain -+ # (i.e. "my-endpoint.com/bucket" instead of "bucket.my-endpoint.com") -+ force_path_style: false -+ - # Maximum amount to upload in one request to object storage - max_upload_part: 2GB - -diff --git a/config/production.yaml.example b/config/production.yaml.example -index 70993bf57a3..9ca2de5f4c9 100644 ---- a/config/production.yaml.example -+++ b/config/production.yaml.example -@@ -121,6 +121,10 @@ object_storage: - # You can also use AWS_SECRET_ACCESS_KEY env variable - secret_access_key: '' - -+ # Reference buckets via path rather than subdomain -+ # (i.e. 
"my-endpoint.com/bucket" instead of "bucket.my-endpoint.com") -+ force_path_style: false -+ - # Maximum amount to upload in one request to object storage - max_upload_part: 2GB - -diff --git a/server/initializers/config.ts b/server/initializers/config.ts -index 8375bf4304c..d726c59a4b6 100644 ---- a/server/initializers/config.ts -+++ b/server/initializers/config.ts -@@ -91,6 +91,7 @@ const CONFIG = { - ACCESS_KEY_ID: config.get('object_storage.credentials.access_key_id'), - SECRET_ACCESS_KEY: config.get('object_storage.credentials.secret_access_key') - }, -+ FORCE_PATH_STYLE: config.get('object_storage.force_path_style'), - VIDEOS: { - BUCKET_NAME: config.get('object_storage.videos.bucket_name'), - PREFIX: config.get('object_storage.videos.prefix'), -diff --git a/server/lib/object-storage/shared/client.ts b/server/lib/object-storage/shared/client.ts -index c9a61459336..eadad02f93f 100644 ---- a/server/lib/object-storage/shared/client.ts -+++ b/server/lib/object-storage/shared/client.ts -@@ -26,7 +26,8 @@ function getClient () { - accessKeyId: OBJECT_STORAGE.CREDENTIALS.ACCESS_KEY_ID, - secretAccessKey: OBJECT_STORAGE.CREDENTIALS.SECRET_ACCESS_KEY - } -- : undefined -+ : undefined, -+ forcePathStyle: CONFIG.OBJECT_STORAGE.FORCE_PATH_STYLE - }) - - logger.info('Initialized S3 client %s with region %s.', getEndpoint(), OBJECT_STORAGE.REGION, lTags()) -diff --git a/support/docker/production/config/custom-environment-variables.yaml b/support/docker/production/config/custom-environment-variables.yaml -index c7cd28e6521..a960bab0bc9 100644 ---- a/support/docker/production/config/custom-environment-variables.yaml -+++ b/support/docker/production/config/custom-environment-variables.yaml -@@ -54,6 +54,8 @@ object_storage: - - region: "PEERTUBE_OBJECT_STORAGE_REGION" - -+ force_path_style: "PEERTUBE_OBJECT_STORAGE_FORCE_PATH_STYLE" -+ - max_upload_part: - __name: "PEERTUBE_OBJECT_STORAGE_MAX_UPLOAD_PART" - __format: "json" -``` - -You can then recompile it with: - -``` -npm run build -``` - -And it can be started with: - -``` -NODE_ENV=production NODE_CONFIG_DIR=/srv/peertube/config node dist/server.js -``` - - -### Create resources in Garage - -Create a key for Peertube: - -```bash -garage key new --name peertube-key -``` - -Keep the Key ID and the Secret key in a pad, they will be needed later. - -We need two buckets, one for normal videos (named peertube-video) and one for webtorrent videos (named peertube-playlist). -```bash -garage bucket create peertube-video -garage bucket create peertube-playlist -``` - -Now we allow our key to read and write on these buckets: - -``` -garage bucket allow peertube-playlist --read --write --key peertube-key -garage bucket allow peertube-video --read --write --key peertube-key -``` - -Finally, we need to expose these buckets publicly to serve their content to users: - -```bash -garage bucket website --allow peertube-playlist -garage bucket website --allow peertube-video -``` - -These buckets are now accessible on the web port (by default 3902) with the following URL: `http://:` where the root domain is defined in your configuration file (by default `.web.garage`). So we have currently the following URLs: - * http://peertube-playlist.web.garage:3902 - * http://peertube-video.web.garage:3902 - -Make sure you (will) have a corresponding DNS entry for them. - -### Configure a Reverse Proxy to serve CORS - -Now we will configure a reverse proxy in front of Garage. -This is required as we have no other way to serve CORS headers yet. 
-Check the [Configuring a reverse proxy](/cookbook/reverse_proxy.html) section to know how. - -Now make sure that your 2 dns entries are pointing to your reverse proxy. - -### Configure Peertube - -You must edit the file named `config/production.yaml`, we are only modifying the root key named `object_storage`: - -```yaml -object_storage: - enabled: true - - # Put localhost only if you have a garage instance running on that node - endpoint: 'http://localhost:3900' # or "garage.example.com" if you have TLS on port 443 - - # This entry has been added by our patch, must be set to true - force_path_style: true - - # Garage supports only one region for now, named garage - region: 'garage' - - credentials: - access_key_id: 'GKxxxx' - secret_access_key: 'xxxx' - - max_upload_part: 2GB - - streaming_playlists: - bucket_name: 'peertube-playlist' - - # Keep it empty for our example - prefix: '' - - # You must fill this field to make Peertube use our reverse proxy/website logic - base_url: 'http://peertube-playlist.web.garage' # Example: 'https://mirror.example.com' - - # Same settings but for webtorrent videos - videos: - bucket_name: 'peertube-video' - prefix: '' - # You must fill this field to make Peertube use our reverse proxy/website logic - base_url: 'http://peertube-video.web.garage' -``` - -### That's all - -Everything must be configured now, simply restart Peertube and try to upload a video. -You must see in your browser console that data are fetched directly from our bucket (through the reverse proxy). - -### Miscellaneous - -*Known bug:* The playback does not start and some 400 Bad Request Errors appear in your browser console and on Garage. -If the description of the error contains HTTP Invalid Range: InvalidRange, the error is due to a buggy ffmpeg version. -You must avoid the 4.4.0 and use either a newer or older version. - -*Associated issues:* [#137](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/137), [#138](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/138), [#140](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/140). These issues are non blocking. - -*External link:* [Peertube Documentation > Remote Storage](https://docs.joinpeertube.org/admin-remote-storage) - -## Mastodon - -https://docs.joinmastodon.org/admin/config/#cdn - -## Matrix - -Matrix is a chat communication protocol. Its main stable server implementation, [Synapse](https://matrix-org.github.io/synapse/latest/), provides a module to store media on a S3 backend. Additionally, a server independent media store supporting S3 has been developped by the community, it has been made possible thanks to how the matrix API has been designed and will work with implementations like Conduit, Dendrite, etc. - -### synapse-s3-storage-provider (synapse only) - -Supposing you have a working synapse installation, you can add the module with pip: - -```bash - pip3 install --user git+https://github.com/matrix-org/synapse-s3-storage-provider.git -``` - -Now create a bucket and a key for your matrix instance (note your Key ID and Secret Key somewhere, they will be needed later): - -```bash -garage key new --name matrix-key -garage bucket create matrix -garage bucket allow matrix --read --write --key matrix-key -``` - -Then you must edit your server configuration (eg. `/etc/matrix-synapse/homeserver.yaml`) and add the `media_storage_providers` root key: - -```yaml -media_storage_providers: -- module: s3_storage_provider.S3StorageProviderBackend - store_local: True # do we want to store on S3 media created by our users? 
- store_remote: True # do we want to store on S3 media created - # by users of others servers federated to ours? - store_synchronous: True # do we want to wait that the file has been written before returning? - config: - bucket: matrix # the name of our bucket, we chose matrix earlier - region_name: garage # only "garage" is supported for the region field - endpoint_url: http://localhost:3900 # the path to the S3 endpoint - access_key_id: "GKxxx" # your Key ID - secret_access_key: "xxxx" # your Secret Key -``` - -Note that uploaded media will also be stored locally and this behavior can not be deactivated, it is even required for -some operations like resizing images. -In fact, your local filesysem is considered as a cache but without any automated way to garbage collect it. - -We can build our garbage collector with `s3_media_upload`, a tool provided with the module. -If you installed the module with the command provided before, you should be able to bring it in your path: - -``` -PATH=$HOME/.local/bin/:$PATH -command -v s3_media_upload -``` - -Now we can write a simple script (eg `~/.local/bin/matrix-cache-gc`): - -```bash -#!/bin/bash - -## CONFIGURATION ## -AWS_ACCESS_KEY_ID=GKxxx -AWS_SECRET_ACCESS_KEY=xxxx -S3_ENDPOINT=http://localhost:3900 -S3_BUCKET=matrix -MEDIA_STORE=/var/lib/matrix-synapse/media -PG_USER=matrix -PG_PASS=xxxx -PG_DB=synapse -PG_HOST=localhost -PG_PORT=5432 - -## CODE ## -PATH=$HOME/.local/bin/:$PATH -cat > database.yaml < matrix-org/synapse-s3-storage-provider](https://github.com/matrix-org/synapse-s3-storage-provider) - -### matrix-media-repo (server independent) - -*External link:* [matrix-media-repo Documentation > S3](https://docs.t2bot.io/matrix-media-repo/configuration/s3-datastore.html) - -## Pixelfed - -https://docs.pixelfed.org/technical-documentation/env.html#filesystem - -## Pleroma - -https://docs-develop.pleroma.social/backend/configuration/cheatsheet/#pleromauploaderss3 - -## Lemmy - -via pict-rs -https://git.asonix.dog/asonix/pict-rs/commit/f9f4fc63d670f357c93f24147c2ee3e1278e2d97 - -## Funkwhale - -https://docs.funkwhale.audio/admin/configuration.html#s3-storage - -## Misskey - -https://github.com/misskey-dev/misskey/commit/9d944243a3a59e8880a360cbfe30fd5a3ec8d52d - -## Prismo - -https://gitlab.com/prismosuite/prismo/-/blob/dev/.env.production.sample#L26-33 - -## Owncloud Infinite Scale (ocis) - -## Unsupported - - - Mobilizon: No S3 integration - - WriteFreely: No S3 integration - - Plume: No S3 integration diff --git a/doc/book/connect/apps/cli-nextcloud-gui.png b/doc/book/connect/apps/cli-nextcloud-gui.png new file mode 100644 index 00000000..7a58a3ab Binary files /dev/null and b/doc/book/connect/apps/cli-nextcloud-gui.png differ diff --git a/doc/book/connect/apps/index.md b/doc/book/connect/apps/index.md new file mode 100644 index 00000000..84f46891 --- /dev/null +++ b/doc/book/connect/apps/index.md @@ -0,0 +1,464 @@ ++++ +title = "Apps (Nextcloud, Peertube...)" +weight = 5 ++++ + +In this section, we cover the following software: [Nextcloud](#nextcloud), [Peertube](#peertube), [Mastodon](#mastodon), [Matrix](#matrix) + +## Nextcloud + +Nextcloud is a popular file synchronisation and backup service. +By default, Nextcloud stores its data on the local filesystem. +If you want to expand your storage to aggregate multiple servers, Garage is the way to go. + +A S3 backend can be configured in two ways on Nextcloud, either as Primary Storage or as an External Storage. 
+Primary storage will store all your data on S3, in an opaque manner, and will provide the best performances. +External storage enable you to select which data will be stored on S3, your file hierarchy will be preserved in S3, but it might be slower. + +In the following, we cover both methods but before reading our guide, we suppose you have done some preliminary steps. +First, we expect you have an already installed and configured Nextcloud instance. +Second, we suppose you have created a key and a bucket. + +As a reminder, you can create a key for your nextcloud instance as follow: + +```bash +garage key new --name nextcloud-key +``` + +Keep the Key ID and the Secret key in a pad, they will be needed later. +Then you can create a bucket and give read/write rights to your key on this bucket with: + +```bash +garage bucket create nextcloud +garage bucket allow nextcloud --read --write --key nextcloud-key +``` + + +### Primary Storage + +Now edit your Nextcloud configuration file to enable object storage. +On my installation, the config. file is located at the following path: `/var/www/nextcloud/config/config.php`. +We will add a new root key to the `$CONFIG` dictionnary named `objectstore`: + +```php + [ + 'class' => '\\OC\\Files\\ObjectStore\\S3', + 'arguments' => [ + 'bucket' => 'nextcloud', // Your bucket name, must be created before + 'autocreate' => false, // Garage does not support autocreate + 'key' => 'xxxxxxxxx', // The Key ID generated previously + 'secret' => 'xxxxxxxxx', // The Secret key generated previously + 'hostname' => '127.0.0.1', // Can also be a domain name, eg. garage.example.com + 'port' => 3900, // Put your reverse proxy port or your S3 API port + 'use_ssl' => false, // Set it to true if you have a TLS enabled reverse proxy + 'region' => 'garage', // Garage has only one region named "garage" + 'use_path_style' => true // Garage supports only path style, must be set to true + ], +], +``` + +That's all, your Nextcloud will store all your data to S3. +To test your new configuration, just reload your Nextcloud webpage and start sending data. + +*External link:* [Nextcloud Documentation > Primary Storage](https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/primary_storage.html) + +### External Storage + +**From the GUI.** Activate the "External storage support" app from the "Applications" page (click on your account icon on the top right corner of your screen to display the menu). Go to your parameters page (also located below your account icon). Click on external storage (or the corresponding translation in your language). + +[![Screenshot of the External Storage form](cli-nextcloud-gui.png)](cli-nextcloud-gui.png) +*Click on the picture to zoom* + +Add a new external storage. Put what you want in "folder name" (eg. "shared"). Select "Amazon S3". Keep "Access Key" for the Authentication field. +In Configuration, put your bucket name (eg. nextcloud), the host (eg. 127.0.0.1), the port (eg. 3900 or 443), the region (garage). Tick the SSL box if you have put an HTTPS proxy in front of garage. You must tick the "Path access" box and you must leave the "Legacy authentication (v2)" box empty. Put your Key ID (eg. GK...) and your Secret Key in the last two input boxes. Finally click on the tick symbol on the right of your screen. + +Now go to your "Files" app and a new "linked folder" has appeared with the name you chose earlier (eg. "shared"). 
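
> *Editor's note (not part of the patch):* to confirm that files dropped into this linked folder really land in Garage, you can list the bucket with any S3 client you have already configured. A minimal sketch, assuming an `mc` alias named `garage` pointing at your cluster (the alias name is the one used elsewhere in these docs; adapt it to your own setup):

```bash
# List the objects Nextcloud has written to the "nextcloud" bucket.
# With External Storage, the file hierarchy is preserved, so uploaded
# files should appear here under the paths you see in Nextcloud.
mc ls garage/nextcloud
```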
+ +*External link:* [Nextcloud Documentation > External Storage Configuration GUI](https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/external_storage_configuration_gui.html) + +**From the CLI.** First install the external storage application: + +```bash +php occ app:install files_external +``` + +Then add a new mount point with: + +```bash + php occ files_external:create \ + -c bucket=nextcloud \ + -c hostname=127.0.0.1 \ + -c port=3900 \ + -c region=garage \ + -c use_ssl=false \ + -c use_path_style=true \ + -c legacy_auth=false \ + -c key=GKxxxx \ + -c secret=xxxx \ + shared amazons3 amazons3::accesskey +``` + +Adapt the `hostname`, `port`, `use_ssl`, `key`, and `secret` entries to your configuration. +Do not change the `use_path_style` and `legacy_auth` entries, other configurations are not supported. + +*External link:* [Nextcloud Documentation > occ command > files external](https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/occ_command.html#files-external-label) + + +## Peertube + +Peertube proposes a clever integration of S3 by directly exposing its endpoint instead of proxifying requests through the application. +In other words, Peertube is only responsible of the "control plane" and offload the "data plane" to Garage. +In return, this system is a bit harder to configure, especially with Garage that supports less feature than other older S3 backends. +We show that it is still possible to configure Garage with Peertube, allowing you to spread the load and the bandwidth usage on the Garage cluster. + +### Enable path-style access by patching Peertube + +First, you will need to apply a small patch on Peertube ([#4510](https://github.com/Chocobozzz/PeerTube/pull/4510)): + +```diff +From e3b4c641bdf67e07d406a1d49d6aa6b1fbce2ab4 Mon Sep 17 00:00:00 2001 +From: Martin Honermeyer +Date: Sun, 31 Oct 2021 12:34:04 +0100 +Subject: [PATCH] Allow setting path-style access for object storage + +--- + config/default.yaml | 4 ++++ + config/production.yaml.example | 4 ++++ + server/initializers/config.ts | 1 + + server/lib/object-storage/shared/client.ts | 3 ++- + .../production/config/custom-environment-variables.yaml | 2 ++ + 5 files changed, 13 insertions(+), 1 deletion(-) + +diff --git a/config/default.yaml b/config/default.yaml +index cf9d69a6211..4efd56fb804 100644 +--- a/config/default.yaml ++++ b/config/default.yaml +@@ -123,6 +123,10 @@ object_storage: + # You can also use AWS_SECRET_ACCESS_KEY env variable + secret_access_key: '' + ++ # Reference buckets via path rather than subdomain ++ # (i.e. "my-endpoint.com/bucket" instead of "bucket.my-endpoint.com") ++ force_path_style: false ++ + # Maximum amount to upload in one request to object storage + max_upload_part: 2GB + +diff --git a/config/production.yaml.example b/config/production.yaml.example +index 70993bf57a3..9ca2de5f4c9 100644 +--- a/config/production.yaml.example ++++ b/config/production.yaml.example +@@ -121,6 +121,10 @@ object_storage: + # You can also use AWS_SECRET_ACCESS_KEY env variable + secret_access_key: '' + ++ # Reference buckets via path rather than subdomain ++ # (i.e. 
"my-endpoint.com/bucket" instead of "bucket.my-endpoint.com") ++ force_path_style: false ++ + # Maximum amount to upload in one request to object storage + max_upload_part: 2GB + +diff --git a/server/initializers/config.ts b/server/initializers/config.ts +index 8375bf4304c..d726c59a4b6 100644 +--- a/server/initializers/config.ts ++++ b/server/initializers/config.ts +@@ -91,6 +91,7 @@ const CONFIG = { + ACCESS_KEY_ID: config.get('object_storage.credentials.access_key_id'), + SECRET_ACCESS_KEY: config.get('object_storage.credentials.secret_access_key') + }, ++ FORCE_PATH_STYLE: config.get('object_storage.force_path_style'), + VIDEOS: { + BUCKET_NAME: config.get('object_storage.videos.bucket_name'), + PREFIX: config.get('object_storage.videos.prefix'), +diff --git a/server/lib/object-storage/shared/client.ts b/server/lib/object-storage/shared/client.ts +index c9a61459336..eadad02f93f 100644 +--- a/server/lib/object-storage/shared/client.ts ++++ b/server/lib/object-storage/shared/client.ts +@@ -26,7 +26,8 @@ function getClient () { + accessKeyId: OBJECT_STORAGE.CREDENTIALS.ACCESS_KEY_ID, + secretAccessKey: OBJECT_STORAGE.CREDENTIALS.SECRET_ACCESS_KEY + } +- : undefined ++ : undefined, ++ forcePathStyle: CONFIG.OBJECT_STORAGE.FORCE_PATH_STYLE + }) + + logger.info('Initialized S3 client %s with region %s.', getEndpoint(), OBJECT_STORAGE.REGION, lTags()) +diff --git a/support/docker/production/config/custom-environment-variables.yaml b/support/docker/production/config/custom-environment-variables.yaml +index c7cd28e6521..a960bab0bc9 100644 +--- a/support/docker/production/config/custom-environment-variables.yaml ++++ b/support/docker/production/config/custom-environment-variables.yaml +@@ -54,6 +54,8 @@ object_storage: + + region: "PEERTUBE_OBJECT_STORAGE_REGION" + ++ force_path_style: "PEERTUBE_OBJECT_STORAGE_FORCE_PATH_STYLE" ++ + max_upload_part: + __name: "PEERTUBE_OBJECT_STORAGE_MAX_UPLOAD_PART" + __format: "json" +``` + +You can then recompile it with: + +``` +npm run build +``` + +And it can be started with: + +``` +NODE_ENV=production NODE_CONFIG_DIR=/srv/peertube/config node dist/server.js +``` + + +### Create resources in Garage + +Create a key for Peertube: + +```bash +garage key new --name peertube-key +``` + +Keep the Key ID and the Secret key in a pad, they will be needed later. + +We need two buckets, one for normal videos (named peertube-video) and one for webtorrent videos (named peertube-playlist). +```bash +garage bucket create peertube-video +garage bucket create peertube-playlist +``` + +Now we allow our key to read and write on these buckets: + +``` +garage bucket allow peertube-playlist --read --write --key peertube-key +garage bucket allow peertube-video --read --write --key peertube-key +``` + +Finally, we need to expose these buckets publicly to serve their content to users: + +```bash +garage bucket website --allow peertube-playlist +garage bucket website --allow peertube-video +``` + +These buckets are now accessible on the web port (by default 3902) with the following URL: `http://:` where the root domain is defined in your configuration file (by default `.web.garage`). So we have currently the following URLs: + * http://peertube-playlist.web.garage:3902 + * http://peertube-video.web.garage:3902 + +Make sure you (will) have a corresponding DNS entry for them. + +### Configure a Reverse Proxy to serve CORS + +Now we will configure a reverse proxy in front of Garage. +This is required as we have no other way to serve CORS headers yet. 
+Check the [Configuring a reverse proxy](@/documentation/cookbook/reverse-proxy.md) section to know how. + +Now make sure that your 2 dns entries are pointing to your reverse proxy. + +### Configure Peertube + +You must edit the file named `config/production.yaml`, we are only modifying the root key named `object_storage`: + +```yaml +object_storage: + enabled: true + + # Put localhost only if you have a garage instance running on that node + endpoint: 'http://localhost:3900' # or "garage.example.com" if you have TLS on port 443 + + # This entry has been added by our patch, must be set to true + force_path_style: true + + # Garage supports only one region for now, named garage + region: 'garage' + + credentials: + access_key_id: 'GKxxxx' + secret_access_key: 'xxxx' + + max_upload_part: 2GB + + streaming_playlists: + bucket_name: 'peertube-playlist' + + # Keep it empty for our example + prefix: '' + + # You must fill this field to make Peertube use our reverse proxy/website logic + base_url: 'http://peertube-playlist.web.garage' # Example: 'https://mirror.example.com' + + # Same settings but for webtorrent videos + videos: + bucket_name: 'peertube-video' + prefix: '' + # You must fill this field to make Peertube use our reverse proxy/website logic + base_url: 'http://peertube-video.web.garage' +``` + +### That's all + +Everything must be configured now, simply restart Peertube and try to upload a video. +You must see in your browser console that data are fetched directly from our bucket (through the reverse proxy). + +### Miscellaneous + +*Known bug:* The playback does not start and some 400 Bad Request Errors appear in your browser console and on Garage. +If the description of the error contains HTTP Invalid Range: InvalidRange, the error is due to a buggy ffmpeg version. +You must avoid the 4.4.0 and use either a newer or older version. + +*Associated issues:* [#137](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/137), [#138](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/138), [#140](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/140). These issues are non blocking. + +*External link:* [Peertube Documentation > Remote Storage](https://docs.joinpeertube.org/admin-remote-storage) + +## Mastodon + +https://docs.joinmastodon.org/admin/config/#cdn + +## Matrix + +Matrix is a chat communication protocol. Its main stable server implementation, [Synapse](https://matrix-org.github.io/synapse/latest/), provides a module to store media on a S3 backend. Additionally, a server independent media store supporting S3 has been developped by the community, it has been made possible thanks to how the matrix API has been designed and will work with implementations like Conduit, Dendrite, etc. + +### synapse-s3-storage-provider (synapse only) + +Supposing you have a working synapse installation, you can add the module with pip: + +```bash + pip3 install --user git+https://github.com/matrix-org/synapse-s3-storage-provider.git +``` + +Now create a bucket and a key for your matrix instance (note your Key ID and Secret Key somewhere, they will be needed later): + +```bash +garage key new --name matrix-key +garage bucket create matrix +garage bucket allow matrix --read --write --key matrix-key +``` + +Then you must edit your server configuration (eg. `/etc/matrix-synapse/homeserver.yaml`) and add the `media_storage_providers` root key: + +```yaml +media_storage_providers: +- module: s3_storage_provider.S3StorageProviderBackend + store_local: True # do we want to store on S3 media created by our users? 
+ store_remote: True # do we want to store on S3 media created + # by users of others servers federated to ours? + store_synchronous: True # do we want to wait that the file has been written before returning? + config: + bucket: matrix # the name of our bucket, we chose matrix earlier + region_name: garage # only "garage" is supported for the region field + endpoint_url: http://localhost:3900 # the path to the S3 endpoint + access_key_id: "GKxxx" # your Key ID + secret_access_key: "xxxx" # your Secret Key +``` + +Note that uploaded media will also be stored locally and this behavior can not be deactivated, it is even required for +some operations like resizing images. +In fact, your local filesysem is considered as a cache but without any automated way to garbage collect it. + +We can build our garbage collector with `s3_media_upload`, a tool provided with the module. +If you installed the module with the command provided before, you should be able to bring it in your path: + +``` +PATH=$HOME/.local/bin/:$PATH +command -v s3_media_upload +``` + +Now we can write a simple script (eg `~/.local/bin/matrix-cache-gc`): + +```bash +#!/bin/bash + +## CONFIGURATION ## +AWS_ACCESS_KEY_ID=GKxxx +AWS_SECRET_ACCESS_KEY=xxxx +S3_ENDPOINT=http://localhost:3900 +S3_BUCKET=matrix +MEDIA_STORE=/var/lib/matrix-synapse/media +PG_USER=matrix +PG_PASS=xxxx +PG_DB=synapse +PG_HOST=localhost +PG_PORT=5432 + +## CODE ## +PATH=$HOME/.local/bin/:$PATH +cat > database.yaml < matrix-org/synapse-s3-storage-provider](https://github.com/matrix-org/synapse-s3-storage-provider) + +### matrix-media-repo (server independent) + +*External link:* [matrix-media-repo Documentation > S3](https://docs.t2bot.io/matrix-media-repo/configuration/s3-datastore.html) + +## Pixelfed + +https://docs.pixelfed.org/technical-documentation/env.html#filesystem + +## Pleroma + +https://docs-develop.pleroma.social/backend/configuration/cheatsheet/#pleromauploaderss3 + +## Lemmy + +via pict-rs +https://git.asonix.dog/asonix/pict-rs/commit/f9f4fc63d670f357c93f24147c2ee3e1278e2d97 + +## Funkwhale + +https://docs.funkwhale.audio/admin/configuration.html#s3-storage + +## Misskey + +https://github.com/misskey-dev/misskey/commit/9d944243a3a59e8880a360cbfe30fd5a3ec8d52d + +## Prismo + +https://gitlab.com/prismosuite/prismo/-/blob/dev/.env.production.sample#L26-33 + +## Owncloud Infinite Scale (ocis) + +## Unsupported + + - Mobilizon: No S3 integration + - WriteFreely: No S3 integration + - Plume: No S3 integration diff --git a/doc/book/connect/cli-nextcloud-gui.png b/doc/book/connect/cli-nextcloud-gui.png deleted file mode 100644 index 7a58a3ab..00000000 Binary files a/doc/book/connect/cli-nextcloud-gui.png and /dev/null differ diff --git a/doc/book/connect/fs.md b/doc/book/connect/fs.md index 60a60c7f..46df593b 100644 --- a/doc/book/connect/fs.md +++ b/doc/book/connect/fs.md @@ -14,7 +14,7 @@ Ideally, avoid these solutions at all for any serious or production use. ## rclone mount -rclone uses the same configuration when used [in CLI](/connect/cli.html) and mount mode. +rclone uses the same configuration when used [in CLI](@/documentation/connect/cli.md) and mount mode. 
We suppose you have the following entry in your `rclone.ini` (mine is located in `~/.config/rclone/rclone.conf`): ```toml diff --git a/doc/book/connect/websites.md b/doc/book/connect/websites.md index 1ef50463..3f62c9a6 100644 --- a/doc/book/connect/websites.md +++ b/doc/book/connect/websites.md @@ -53,7 +53,7 @@ Currently, the proposed workaround is to deploy your website manually: - Click on Get website files - You need to synchronize the output folder you see in your file explorer, we will use minio client. -Be sure that you [configured minio client](cli.html#minio-client-recommended). +Be sure that you [configured minio client](@/documentation/connect/cli.md#minio-client-recommended). Then copy this output folder @@ -66,7 +66,7 @@ mc mirror --overwrite output garage/my-site Some tools do not support sending to a S3 backend but output a compiled folder on your system. We can then use any CLI tool to upload this content to our S3 target. -First, start by [configuring minio client](cli.html#minio-client-recommended). +First, start by [configuring minio client](@/documentation/connect/cli.md#minio-client-recommended). Then build your website: diff --git a/doc/book/cookbook/_index.md b/doc/book/cookbook/_index.md index 72c32687..6e279363 100644 --- a/doc/book/cookbook/_index.md +++ b/doc/book/cookbook/_index.md @@ -9,23 +9,23 @@ A cookbook, when you cook, is a collection of recipes. Similarly, Garage's cookbook contains a collection of recipes that are known to works well! This chapter could also be referred as "Tutorials" or "Best practices". -- **[Multi-node deployment](real_world.md):** This page will walk you through all of the necessary +- **[Multi-node deployment](@/documentation/cookbook/real-world.md):** This page will walk you through all of the necessary steps to deploy Garage in a real-world setting. -- **[Building from source](from_source.md):** This page explains how to build Garage from +- **[Building from source](@/documentation/cookbook/from-source.md):** This page explains how to build Garage from source in case a binary is not provided for your architecture, or if you want to hack with us! -- **[Integration with Systemd](systemd.md):** This page explains how to run Garage +- **[Integration with Systemd](@/documentation/cookbook/systemd.md):** This page explains how to run Garage as a Systemd service (instead of as a Docker container). -- **[Configuring a gateway node](gateways.md):** This page explains how to run a gateway node in a Garage cluster, i.e. a Garage node that doesn't store data but accelerates access to data present on the other nodes. +- **[Configuring a gateway node](@/documentation/cookbook/gateways.md):** This page explains how to run a gateway node in a Garage cluster, i.e. a Garage node that doesn't store data but accelerates access to data present on the other nodes. -- **[Hosting a website](exposing_websites.md):** This page explains how to use Garage +- **[Hosting a website](@/documentation/cookbook/exposing-websites.md):** This page explains how to use Garage to host a static website. -- **[Configuring a reverse-proxy](reverse_proxy.md):** This page explains how to configure a reverse-proxy to add TLS support to your S3 api endpoint. +- **[Configuring a reverse-proxy](@/documentation/cookbook/reverse-proxy.md):** This page explains how to configure a reverse-proxy to add TLS support to your S3 api endpoint. 
-- **[Recovering from failures](recovering.md):** Garage's first selling point is resilience +- **[Recovering from failures](@/documentation/cookbook/recovering.md):** Garage's first selling point is resilience to hardware failures. This section explains how to recover from such a failure in the best possible way. diff --git a/doc/book/cookbook/exposing-websites.md b/doc/book/cookbook/exposing-websites.md new file mode 100644 index 00000000..cc4ddfa3 --- /dev/null +++ b/doc/book/cookbook/exposing-websites.md @@ -0,0 +1,51 @@ ++++ +title = "Exposing buckets as websites" +weight = 25 ++++ + +You can expose your bucket as a website with this simple command: + +```bash +garage bucket website --allow my-website +``` + +Now it will be **publicly** exposed on the web endpoint (by default listening on port 3902). + +Our website serving logic is as follow: + - Supports only static websites (no support for PHP or other languages) + - Does not support directory listing + - The index is defined in your `garage.toml`. ([ref](@/documentation/reference-manual/configuration.md#index)) + +Now we need to infer the URL of your website through your bucket name. +Let assume: + - we set `root_domain = ".web.example.com"` in `garage.toml` ([ref](@/documentation/reference-manual/configuration.md#root_domain)) + - our bucket name is `garagehq.deuxfleurs.fr`. + +Our bucket will be served if the Host field matches one of these 2 values (the port is ignored): + + - `garagehq.deuxfleurs.fr.web.example.com`: you can dedicate a subdomain to your users (here `web.example.com`). + + - `garagehq.deuxfleurs.fr`: your users can bring their own domain name, they just need to point them to your Garage cluster. + +You can try this logic locally, without configuring any DNS, thanks to `curl`: + +```bash +# prepare your test +echo hello world > /tmp/index.html +mc cp /tmp/index.html garage/garagehq.deuxfleurs.fr + +curl -H 'Host: garagehq.deuxfleurs.fr' http://localhost:3902 +# should print "hello world" + +curl -H 'Host: garagehq.deuxfleurs.fr.web.example.com' http://localhost:3902 +# should also print "hello world" +``` + +Now that you understand how website logic works on Garage, you can: + + - make the website endpoint listens on port 80 (instead of 3902) + - use iptables to redirect the port 80 to the port 3902: + `iptables -t nat -A PREROUTING -p tcp -dport 80 -j REDIRECT -to-port 3902` + - or configure a [reverse proxy](@/documentation/cookbook/reverse-proxy.md) in front of Garage to add TLS (HTTPS), CORS support, etc. + +You can also take a look at [Website Integration](@/documentation/connect/websites.md) to see how you can add Garage to your workflow. diff --git a/doc/book/cookbook/exposing_websites.md b/doc/book/cookbook/exposing_websites.md deleted file mode 100644 index dcb56d36..00000000 --- a/doc/book/cookbook/exposing_websites.md +++ /dev/null @@ -1,51 +0,0 @@ -+++ -title = "Exposing buckets as websites" -weight = 25 -+++ - -You can expose your bucket as a website with this simple command: - -```bash -garage bucket website --allow my-website -``` - -Now it will be **publicly** exposed on the web endpoint (by default listening on port 3902). - -Our website serving logic is as follow: - - Supports only static websites (no support for PHP or other languages) - - Does not support directory listing - - The index is defined in your `garage.toml`. ([ref](/reference_manual/configuration.html#index)) - -Now we need to infer the URL of your website through your bucket name. 
-Let assume: - - we set `root_domain = ".web.example.com"` in `garage.toml` ([ref](/reference_manual/configuration.html#root_domain)) - - our bucket name is `garagehq.deuxfleurs.fr`. - -Our bucket will be served if the Host field matches one of these 2 values (the port is ignored): - - - `garagehq.deuxfleurs.fr.web.example.com`: you can dedicate a subdomain to your users (here `web.example.com`). - - - `garagehq.deuxfleurs.fr`: your users can bring their own domain name, they just need to point them to your Garage cluster. - -You can try this logic locally, without configuring any DNS, thanks to `curl`: - -```bash -# prepare your test -echo hello world > /tmp/index.html -mc cp /tmp/index.html garage/garagehq.deuxfleurs.fr - -curl -H 'Host: garagehq.deuxfleurs.fr' http://localhost:3902 -# should print "hello world" - -curl -H 'Host: garagehq.deuxfleurs.fr.web.example.com' http://localhost:3902 -# should also print "hello world" -``` - -Now that you understand how website logic works on Garage, you can: - - - make the website endpoint listens on port 80 (instead of 3902) - - use iptables to redirect the port 80 to the port 3902: - `iptables -t nat -A PREROUTING -p tcp -dport 80 -j REDIRECT -to-port 3902` - - or configure a [reverse proxy](reverse_proxy.html) in front of Garage to add TLS (HTTPS), CORS support, etc. - -You can also take a look at [Website Integration](/connect/websites.html) to see how you can add Garage to your workflow. diff --git a/doc/book/cookbook/from-source.md b/doc/book/cookbook/from-source.md new file mode 100644 index 00000000..84c0d514 --- /dev/null +++ b/doc/book/cookbook/from-source.md @@ -0,0 +1,54 @@ ++++ +title = "Compiling Garage from source" +weight = 10 ++++ + + +Garage is a standard Rust project. +First, you need `rust` and `cargo`. +For instance on Debian: + +```bash +sudo apt-get update +sudo apt-get install -y rustc cargo +``` + +You can also use [Rustup](https://rustup.rs/) to setup a Rust toolchain easily. + +## Using source from `crates.io` + +Garage's source code is published on `crates.io`, Rust's official package repository. +This means you can simply ask `cargo` to download and build this source code for you: + +```bash +cargo install garage +``` + +That's all, `garage` should be in `$HOME/.cargo/bin`. + +You can add this folder to your `$PATH` or copy the binary somewhere else on your system. +For instance: + +```bash +sudo cp $HOME/.cargo/bin/garage /usr/local/bin/garage +``` + + +## Using source from the Gitea repository + +The primary location for Garage's source code is the +[Gitea repository](https://git.deuxfleurs.fr/Deuxfleurs/garage). + +Clone the repository and build Garage with the following commands: + +```bash +git clone https://git.deuxfleurs.fr/Deuxfleurs/garage.git +cd garage +cargo build +``` + +Be careful, as this will make a debug build of Garage, which will be extremely slow! +To make a release build, invoke `cargo build --release` (this takes much longer). + +The binaries built this way are found in `target/{debug,release}/garage`. + diff --git a/doc/book/cookbook/from_source.md b/doc/book/cookbook/from_source.md deleted file mode 100644 index 84c0d514..00000000 --- a/doc/book/cookbook/from_source.md +++ /dev/null @@ -1,54 +0,0 @@ -+++ -title = "Compiling Garage from source" -weight = 10 -+++ - - -Garage is a standard Rust project. -First, you need `rust` and `cargo`. 
-For instance on Debian: - -```bash -sudo apt-get update -sudo apt-get install -y rustc cargo -``` - -You can also use [Rustup](https://rustup.rs/) to setup a Rust toolchain easily. - -## Using source from `crates.io` - -Garage's source code is published on `crates.io`, Rust's official package repository. -This means you can simply ask `cargo` to download and build this source code for you: - -```bash -cargo install garage -``` - -That's all, `garage` should be in `$HOME/.cargo/bin`. - -You can add this folder to your `$PATH` or copy the binary somewhere else on your system. -For instance: - -```bash -sudo cp $HOME/.cargo/bin/garage /usr/local/bin/garage -``` - - -## Using source from the Gitea repository - -The primary location for Garage's source code is the -[Gitea repository](https://git.deuxfleurs.fr/Deuxfleurs/garage). - -Clone the repository and build Garage with the following commands: - -```bash -git clone https://git.deuxfleurs.fr/Deuxfleurs/garage.git -cd garage -cargo build -``` - -Be careful, as this will make a debug build of Garage, which will be extremely slow! -To make a release build, invoke `cargo build --release` (this takes much longer). - -The binaries built this way are found in `target/{debug,release}/garage`. - diff --git a/doc/book/cookbook/real-world.md b/doc/book/cookbook/real-world.md new file mode 100644 index 00000000..1178ded5 --- /dev/null +++ b/doc/book/cookbook/real-world.md @@ -0,0 +1,295 @@ ++++ +title = "Deployment on a cluster" +weight = 5 ++++ + +To run Garage in cluster mode, we recommend having at least 3 nodes. +This will allow you to setup Garage for three-way replication of your data, +the safest and most available mode proposed by Garage. + +We recommend first following the [quick start guide](@/documentation/quick-start/_index.md) in order +to get familiar with Garage's command line and usage patterns. + + + +## Prerequisites + +To run a real-world deployment, make sure the following conditions are met: + +- You have at least three machines with sufficient storage space available. + +- Each machine has a public IP address which is reachable by other machines. + Running behind a NAT is likely to be possible but hasn't been tested for the latest version (TODO). + +- Ideally, each machine should have a SSD available in addition to the HDD you are dedicating + to Garage. This will allow for faster access to metadata and has the potential + to drastically reduce Garage's response times. + +- This guide will assume you are using Docker containers to deploy Garage on each node. + Garage can also be run independently, for instance as a [Systemd service](@/documentation/cookbook/systemd.md). + You can also use an orchestrator such as Nomad or Kubernetes to automatically manage + Docker containers on a fleet of nodes. + +Before deploying Garage on your infrastructure, you must inventory your machines. +For our example, we will suppose the following infrastructure with IPv6 connectivity: + +| Location | Name | IP Address | Disk Space | +|----------|---------|------------|------------| +| Paris | Mercury | fc00:1::1 | 1 To | +| Paris | Venus | fc00:1::2 | 2 To | +| London | Earth | fc00:B::1 | 2 To | +| Brussels | Mars | fc00:F::1 | 1.5 To | + + + +## Get a Docker image + +Our docker image is currently named `dxflrs/amd64_garage` and is stored on the [Docker Hub](https://hub.docker.com/r/dxflrs/amd64_garage/tags?page=1&ordering=last_updated). +We encourage you to use a fixed tag (eg. `v0.4.0`) and not the `latest` tag. 
+For this example, we will use the latest published version at the time of the writing which is `v0.4.0` but it's up to you +to check [the most recent versions on the Docker Hub](https://hub.docker.com/r/dxflrs/amd64_garage/tags?page=1&ordering=last_updated). + +For example: + +``` +sudo docker pull dxflrs/amd64_garage:v0.4.0 +``` + +## Deploying and configuring Garage + +On each machine, we will have a similar setup, +especially you must consider the following folders/files: + +- `/etc/garage.toml`: Garage daemon's configuration (see below) + +- `/var/lib/garage/meta/`: Folder containing Garage's metadata, + put this folder on a SSD if possible + +- `/var/lib/garage/data/`: Folder containing Garage's data, + this folder will be your main data storage and must be on a large storage (e.g. large HDD) + + +A valid `/etc/garage/garage.toml` for our cluster would look as follows: + +```toml +metadata_dir = "/var/lib/garage/meta" +data_dir = "/var/lib/garage/data" + +replication_mode = "3" + +compression_level = 2 + +rpc_bind_addr = "[::]:3901" +rpc_public_addr = ":3901" +rpc_secret = "" + +bootstrap_peers = [] + +[s3_api] +s3_region = "garage" +api_bind_addr = "[::]:3900" +root_domain = ".s3.garage" + +[s3_web] +bind_addr = "[::]:3902" +root_domain = ".web.garage" +index = "index.html" +``` + +Check the following for your configuration files: + +- Make sure `rpc_public_addr` contains the public IP address of the node you are configuring. + This parameter is optional but recommended: if your nodes have trouble communicating with + one another, consider adding it. + +- Make sure `rpc_secret` is the same value on all nodes. It should be a 32-bytes hex-encoded secret key. + You can generate such a key with `openssl rand -hex 32`. + +## Starting Garage using Docker + +On each machine, you can run the daemon with: + +```bash +docker run \ + -d \ + --name garaged \ + --restart always \ + --network host \ + -v /etc/garage.toml:/etc/garage.toml \ + -v /var/lib/garage/meta:/var/lib/garage/meta \ + -v /var/lib/garage/data:/var/lib/garage/data \ + lxpz/garage_amd64:v0.4.0 +``` + +It should be restarted automatically at each reboot. +Please note that we use host networking as otherwise Docker containers +can not communicate with IPv6. + +Upgrading between Garage versions should be supported transparently, +but please check the relase notes before doing so! +To upgrade, simply stop and remove this container and +start again the command with a new version of Garage. + +## Controling the daemon + +The `garage` binary has two purposes: + - it acts as a daemon when launched with `garage server` + - it acts as a control tool for the daemon when launched with any other command + +Ensure an appropriate `garage` binary (the same version as your Docker image) is available in your path. +If your configuration file is at `/etc/garage.toml`, the `garage` binary should work with no further change. + +You can test your `garage` CLI utility by running a simple command such as: + +```bash +garage status +``` + +At this point, nodes are not yet talking to one another. +Your output should therefore look like follows: + +``` +Mercury$ garage status +==== HEALTHY NODES ==== +ID Hostname Address Tag Zone Capacity +563e1ac825ee3323… Mercury [fc00:1::1]:3901 NO ROLE ASSIGNED +``` + + +## Connecting nodes together + +When your Garage nodes first start, they will generate a local node identifier +(based on a public/private key pair). + +To obtain the node identifier of a node, once it is generated, +run `garage node id`. 
+This will print keys as follows:
+
+```bash
+Mercury$ garage node id
+563e1ac825ee3323aa441e72c26d1030d6d4414aeb3dd25287c531e7fc2bc95d@[fc00:1::1]:3901
+
+Venus$ garage node id
+86f0f26ae4afbd59aaf9cfb059eefac844951efd5b8caeec0d53f4ed6c85f332@[fc00:1::2]:3901
+
+etc.
+```
+
+You can then instruct nodes to connect to one another as follows:
+
+```bash
+# Instruct Venus to connect to Mercury (this will establish communication both ways)
+Venus$ garage node connect 563e1ac825ee3323aa441e72c26d1030d6d4414aeb3dd25287c531e7fc2bc95d@[fc00:1::1]:3901
+```
+
+You don't need to instruct all nodes to connect to all other nodes:
+nodes will discover one another transitively.
+
+Now if you run `garage status` on any node, you should have an output that looks as follows:
+
+```
+==== HEALTHY NODES ====
+ID                Hostname  Address           Tag   Zone  Capacity
+563e1ac825ee3323… Mercury   [fc00:1::1]:3901  NO ROLE ASSIGNED
+86f0f26ae4afbd59… Venus     [fc00:1::2]:3901  NO ROLE ASSIGNED
+68143d720f20c89d… Earth     [fc00:B::1]:3901  NO ROLE ASSIGNED
+212f7572f0c89da9… Mars      [fc00:F::1]:3901  NO ROLE ASSIGNED
+```
+
+## Creating a cluster layout
+
+We will now inform Garage of the disk space available on each node of the cluster
+as well as the zone (e.g. datacenter) in which each machine is located.
+This information is called the **cluster layout** and consists
+of a role that is assigned to each active cluster node.
+
+For our example, we will suppose we have the following infrastructure
+(Capacity, Identifier and Zone are Garage-specific values, described below):
+
+| Location | Name    | Disk Space | `Capacity` | `Identifier` | `Zone` |
+|----------|---------|------------|------------|--------------|--------------|
+| Paris    | Mercury | 1 To       | `10`       | `563e`       | `par1` |
+| Paris    | Venus   | 2 To       | `20`       | `86f0`       | `par1` |
+| London   | Earth   | 2 To       | `20`       | `6814`       | `lon1` |
+| Brussels | Mars    | 1.5 To     | `15`       | `212f`       | `bru1` |
+
+#### Node identifiers
+
+After its first launch, Garage generates a random and unique identifier for each node, such as:
+
+```
+563e1ac825ee3323aa441e72c26d1030d6d4414aeb3dd25287c531e7fc2bc95d
+```
+
+Often a shorter form can be used, containing only the beginning of the identifier, like `563e`,
+which identifies the server "Mercury" located in "Paris" according to our previous table.
+
+The simplest way to match an identifier to a node is to run:
+
+```
+garage status
+```
+
+It will display the IP address associated with each node;
+from the IP address you will be able to recognize the node.
+
+#### Zones
+
+Zones are user-chosen identifiers that designate groups of servers that belong together logically.
+It is up to the system administrator deploying Garage to decide what "grouped together" means.
+
+In most cases, a zone will correspond to a geographical location (i.e. a datacenter).
+Behind the scenes, Garage will use the zone definitions to try to store copies of the same data in different zones,
+in order to provide high availability despite the failure of a zone.
+
+#### Capacity
+
+Garage reasons about disk storage using an abstract metric called the *capacity* of a node.
+The capacity configured in Garage must be proportional to the disk space dedicated to the node.
+
+Capacity values must be **integers**, but you can choose whatever scale suits you.
+Here we chose that 1 unit of capacity = 100 GB.
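+
+As a quick sanity check, the capacity values in the table above follow directly from this
+convention. A small sketch using plain shell arithmetic (these are not Garage commands, just
+the "capacity = disk space / 100 GB" computation spelled out):
+
+```bash
+# 1 unit of capacity = 100 GB
+echo $(( 1000 / 100 ))   # Mercury, 1 To (1000 GB)       -> 10
+echo $(( 2000 / 100 ))   # Venus and Earth, 2 To each    -> 20
+echo $(( 1500 / 100 ))   # Mars, 1.5 To (1500 GB)        -> 15
+```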
+ +Note that the amount of data stored by Garage on each server may not be strictly proportional to +its capacity value, as Garage will priorize having 3 copies of data in different zones, +even if this means that capacities will not be strictly respected. For example in our above examples, +nodes Earth and Mars will always store a copy of everything each, and the third copy will +have 66% chance of being stored by Venus and 33% chance of being stored by Mercury. + +#### Injecting the topology + +Given the information above, we will configure our cluster as follow: + +```bash +garage layout assign -z par1 -c 10 -t mercury 563e +garage layout assign -z par1 -c 20 -t venus 86f0 +garage layout assign -z lon1 -c 20 -t earth 6814 +garage layout assign -z bru1 -c 15 -t mars 212f +``` + +At this point, the changes in the cluster layout have not yet been applied. +To show the new layout that will be applied, call: + +```bash +garage layout show +``` + +Once you are satisfied with your new layout, apply it with: + +```bash +garage layout apply +``` + +**WARNING:** if you want to use the layout modification commands in a script, +make sure to read [this page](@/documentation/reference-manual/layout.md) first. + + +## Using your Garage cluster + +Creating buckets and managing keys is done using the `garage` CLI, +and is covered in the [quick start guide](@/documentation/quick-start/_index.md). +Remember also that the CLI is self-documented thanks to the `--help` flag and +the `help` subcommand (e.g. `garage help`, `garage key --help`). + +Configuring S3-compatible applicatiosn to interact with Garage +is covered in the [Integrations](@/documentation/connect/_index.md) section. diff --git a/doc/book/cookbook/real_world.md b/doc/book/cookbook/real_world.md deleted file mode 100644 index 788c80a9..00000000 --- a/doc/book/cookbook/real_world.md +++ /dev/null @@ -1,295 +0,0 @@ -+++ -title = "Deployment on a cluster" -weight = 5 -+++ - -To run Garage in cluster mode, we recommend having at least 3 nodes. -This will allow you to setup Garage for three-way replication of your data, -the safest and most available mode proposed by Garage. - -We recommend first following the [quick start guide](../quick_start/index.md) in order -to get familiar with Garage's command line and usage patterns. - - - -## Prerequisites - -To run a real-world deployment, make sure the following conditions are met: - -- You have at least three machines with sufficient storage space available. - -- Each machine has a public IP address which is reachable by other machines. - Running behind a NAT is likely to be possible but hasn't been tested for the latest version (TODO). - -- Ideally, each machine should have a SSD available in addition to the HDD you are dedicating - to Garage. This will allow for faster access to metadata and has the potential - to drastically reduce Garage's response times. - -- This guide will assume you are using Docker containers to deploy Garage on each node. - Garage can also be run independently, for instance as a [Systemd service](systemd.md). - You can also use an orchestrator such as Nomad or Kubernetes to automatically manage - Docker containers on a fleet of nodes. - -Before deploying Garage on your infrastructure, you must inventory your machines. 
-For our example, we will suppose the following infrastructure with IPv6 connectivity: - -| Location | Name | IP Address | Disk Space | -|----------|---------|------------|------------| -| Paris | Mercury | fc00:1::1 | 1 To | -| Paris | Venus | fc00:1::2 | 2 To | -| London | Earth | fc00:B::1 | 2 To | -| Brussels | Mars | fc00:F::1 | 1.5 To | - - - -## Get a Docker image - -Our docker image is currently named `dxflrs/amd64_garage` and is stored on the [Docker Hub](https://hub.docker.com/r/dxflrs/amd64_garage/tags?page=1&ordering=last_updated). -We encourage you to use a fixed tag (eg. `v0.4.0`) and not the `latest` tag. -For this example, we will use the latest published version at the time of the writing which is `v0.4.0` but it's up to you -to check [the most recent versions on the Docker Hub](https://hub.docker.com/r/dxflrs/amd64_garage/tags?page=1&ordering=last_updated). - -For example: - -``` -sudo docker pull dxflrs/amd64_garage:v0.4.0 -``` - -## Deploying and configuring Garage - -On each machine, we will have a similar setup, -especially you must consider the following folders/files: - -- `/etc/garage.toml`: Garage daemon's configuration (see below) - -- `/var/lib/garage/meta/`: Folder containing Garage's metadata, - put this folder on a SSD if possible - -- `/var/lib/garage/data/`: Folder containing Garage's data, - this folder will be your main data storage and must be on a large storage (e.g. large HDD) - - -A valid `/etc/garage/garage.toml` for our cluster would look as follows: - -```toml -metadata_dir = "/var/lib/garage/meta" -data_dir = "/var/lib/garage/data" - -replication_mode = "3" - -compression_level = 2 - -rpc_bind_addr = "[::]:3901" -rpc_public_addr = ":3901" -rpc_secret = "" - -bootstrap_peers = [] - -[s3_api] -s3_region = "garage" -api_bind_addr = "[::]:3900" -root_domain = ".s3.garage" - -[s3_web] -bind_addr = "[::]:3902" -root_domain = ".web.garage" -index = "index.html" -``` - -Check the following for your configuration files: - -- Make sure `rpc_public_addr` contains the public IP address of the node you are configuring. - This parameter is optional but recommended: if your nodes have trouble communicating with - one another, consider adding it. - -- Make sure `rpc_secret` is the same value on all nodes. It should be a 32-bytes hex-encoded secret key. - You can generate such a key with `openssl rand -hex 32`. - -## Starting Garage using Docker - -On each machine, you can run the daemon with: - -```bash -docker run \ - -d \ - --name garaged \ - --restart always \ - --network host \ - -v /etc/garage.toml:/etc/garage.toml \ - -v /var/lib/garage/meta:/var/lib/garage/meta \ - -v /var/lib/garage/data:/var/lib/garage/data \ - lxpz/garage_amd64:v0.4.0 -``` - -It should be restarted automatically at each reboot. -Please note that we use host networking as otherwise Docker containers -can not communicate with IPv6. - -Upgrading between Garage versions should be supported transparently, -but please check the relase notes before doing so! -To upgrade, simply stop and remove this container and -start again the command with a new version of Garage. - -## Controling the daemon - -The `garage` binary has two purposes: - - it acts as a daemon when launched with `garage server` - - it acts as a control tool for the daemon when launched with any other command - -Ensure an appropriate `garage` binary (the same version as your Docker image) is available in your path. -If your configuration file is at `/etc/garage.toml`, the `garage` binary should work with no further change. 
- -You can test your `garage` CLI utility by running a simple command such as: - -```bash -garage status -``` - -At this point, nodes are not yet talking to one another. -Your output should therefore look like follows: - -``` -Mercury$ garage status -==== HEALTHY NODES ==== -ID Hostname Address Tag Zone Capacity -563e1ac825ee3323… Mercury [fc00:1::1]:3901 NO ROLE ASSIGNED -``` - - -## Connecting nodes together - -When your Garage nodes first start, they will generate a local node identifier -(based on a public/private key pair). - -To obtain the node identifier of a node, once it is generated, -run `garage node id`. -This will print keys as follows: - -```bash -Mercury$ garage node id -563e1ac825ee3323aa441e72c26d1030d6d4414aeb3dd25287c531e7fc2bc95d@[fc00:1::1]:3901 - -Venus$ garage node id -86f0f26ae4afbd59aaf9cfb059eefac844951efd5b8caeec0d53f4ed6c85f332@[fc00:1::2]:3901 - -etc. -``` - -You can then instruct nodes to connect to one another as follows: - -```bash -# Instruct Venus to connect to Mercury (this will establish communication both ways) -Venus$ garage node connect 563e1ac825ee3323aa441e72c26d1030d6d4414aeb3dd25287c531e7fc2bc95d@[fc00:1::1]:3901 -``` - -You don't nead to instruct all node to connect to all other nodes: -nodes will discover one another transitively. - -Now if your run `garage status` on any node, you should have an output that looks as follows: - -``` -==== HEALTHY NODES ==== -ID Hostname Address Tag Zone Capacity -563e1ac825ee3323… Mercury [fc00:1::1]:3901 NO ROLE ASSIGNED -86f0f26ae4afbd59… Venus [fc00:1::2]:3901 NO ROLE ASSIGNED -68143d720f20c89d… Earth [fc00:B::1]:3901 NO ROLE ASSIGNED -212f7572f0c89da9… Mars [fc00:F::1]:3901 NO ROLE ASSIGNED -``` - -## Creating a cluster layout - -We will now inform Garage of the disk space available on each node of the cluster -as well as the zone (e.g. datacenter) in which each machine is located. -This information is called the **cluster layout** and consists -of a role that is assigned to each active cluster node. - -For our example, we will suppose we have the following infrastructure -(Capacity, Identifier and Zone are specific values to Garage described in the following): - -| Location | Name | Disk Space | `Capacity` | `Identifier` | `Zone` | -|----------|---------|------------|------------|--------------|--------------| -| Paris | Mercury | 1 To | `10` | `563e` | `par1` | -| Paris | Venus | 2 To | `20` | `86f0` | `par1` | -| London | Earth | 2 To | `20` | `6814` | `lon1` | -| Brussels | Mars | 1.5 To | `15` | `212f` | `bru1` | - -#### Node identifiers - -After its first launch, Garage generates a random and unique identifier for each nodes, such as: - -``` -563e1ac825ee3323aa441e72c26d1030d6d4414aeb3dd25287c531e7fc2bc95d -``` - -Often a shorter form can be used, containing only the beginning of the identifier, like `563e`, -which identifies the server "Mercury" located in "Paris" according to our previous table. - -The most simple way to match an identifier to a node is to run: - -``` -garage status -``` - -It will display the IP address associated with each node; -from the IP address you will be able to recognize the node. - -#### Zones - -Zones are simply a user-chosen identifier that identify a group of server that are grouped together logically. -It is up to the system administrator deploying Garage to identify what does "grouped together" means. - -In most cases, a zone will correspond to a geographical location (i.e. a datacenter). 
-Behind the scene, Garage will use zone definition to try to store the same data on different zones, -in order to provide high availability despite failure of a zone. - -#### Capacity - -Garage reasons on an abstract metric about disk storage that is named the *capacity* of a node. -The capacity configured in Garage must be proportional to the disk space dedicated to the node. - -Capacity values must be **integers** but can be given any signification. -Here we chose that 1 unit of capacity = 100 GB. - -Note that the amount of data stored by Garage on each server may not be strictly proportional to -its capacity value, as Garage will priorize having 3 copies of data in different zones, -even if this means that capacities will not be strictly respected. For example in our above examples, -nodes Earth and Mars will always store a copy of everything each, and the third copy will -have 66% chance of being stored by Venus and 33% chance of being stored by Mercury. - -#### Injecting the topology - -Given the information above, we will configure our cluster as follow: - -```bash -garage layout assign -z par1 -c 10 -t mercury 563e -garage layout assign -z par1 -c 20 -t venus 86f0 -garage layout assign -z lon1 -c 20 -t earth 6814 -garage layout assign -z bru1 -c 15 -t mars 212f -``` - -At this point, the changes in the cluster layout have not yet been applied. -To show the new layout that will be applied, call: - -```bash -garage layout show -``` - -Once you are satisfied with your new layout, apply it with: - -```bash -garage layout apply -``` - -**WARNING:** if you want to use the layout modification commands in a script, -make sure to read [this page](/reference_manual/layout.html) first. - - -## Using your Garage cluster - -Creating buckets and managing keys is done using the `garage` CLI, -and is covered in the [quick start guide](../quick_start/index.md). -Remember also that the CLI is self-documented thanks to the `--help` flag and -the `help` subcommand (e.g. `garage help`, `garage key --help`). - -Configuring S3-compatible applicatiosn to interact with Garage -is covered in the [Integrations](/connect/index.html) section. diff --git a/doc/book/cookbook/reverse-proxy.md b/doc/book/cookbook/reverse-proxy.md new file mode 100644 index 00000000..63ba4bbe --- /dev/null +++ b/doc/book/cookbook/reverse-proxy.md @@ -0,0 +1,168 @@ ++++ +title = "Configuring a reverse proxy" +weight = 30 ++++ + +The main reason to add a reverse proxy in front of Garage is to provide TLS to your users. + +In production you will likely need your certificates signed by a certificate authority. +The most automated way is to use a provider supporting the [ACME protocol](https://datatracker.ietf.org/doc/html/rfc8555) +such as [Let's Encrypt](https://letsencrypt.org/), [ZeroSSL](https://zerossl.com/) or [Buypass Go SSL](https://www.buypass.com/ssl/products/acme). + +If you are only testing Garage, you can generate a self-signed certificate to follow the documentation: + +```bash +openssl req \ + -new \ + -x509 \ + -keyout /tmp/garage.key \ + -out /tmp/garage.crt \ + -nodes \ + -subj "/C=XX/ST=XX/L=XX/O=XX/OU=XX/CN=localhost/emailAddress=X@X.XX" \ + -addext "subjectAltName = DNS:localhost, IP:127.0.0.1" + +cat /tmp/garage.key /tmp/garage.crt > /tmp/garage.pem +``` + +Be careful as you will need to allow self signed certificates in your client. +For example, with minio, you must add the `--insecure` flag. 
+An example: + +```bash +mc ls --insecure garage/ +``` + +## socat (only for testing purposes) + +If you want to test Garage with a TLS frontend, socat can do it for you in a single command: + +```bash +socat \ +"openssl-listen:443,\ +reuseaddr,\ +fork,\ +verify=0,\ +cert=/tmp/garage.pem" \ +tcp4-connect:localhost:3900 +``` + +## Nginx + +Nginx is a well-known reverse proxy suitable for production. +We do the configuration in 3 steps: first we define the upstream blocks ("the backends") +then we define the server blocks ("the frontends") for the S3 endpoint and finally for the web endpoint. + +The following configuration blocks can be all put in the same `/etc/nginx/sites-available/garage.conf`. +To make your configuration active, run `ln -s /etc/nginx/sites-available/garage.conf /etc/nginx/sites-enabled/`. +If you directly put the instructions in the root `nginx.conf`, keep in mind that these configurations must be enclosed inside a `http { }` block. + +And do not forget to reload nginx with `systemctl reload nginx` or `nginx -s reload`. + +### Defining backends + +First, we need to tell to nginx how to access our Garage cluster. +Because we have multiple nodes, we want to leverage all of them by spreading the load. + +In nginx, we can do that with the upstream directive. +Because we have 2 endpoints: one for the S3 API and one to serve websites, +we create 2 backends named respectively `s3_backend` and `web_backend`. + +A documented example for the `s3_backend` assuming you chose port 3900: + +```nginx +upstream s3_backend { + # if you have a garage instance locally + server 127.0.0.1:3900; + # you can also put your other instances + server 192.168.1.3:3900; + # domain names also work + server garage1.example.com:3900; + # you can assign weights if you have some servers + # that are more powerful than others + server garage2.example.com:3900 weight=2; +} +``` + +A similar example for the `web_backend` assuming you chose port 3902: + +```nginx +upstream web_backend { + server 127.0.0.1:3902; + server 192.168.1.3:3902; + server garage1.example.com:3902; + server garage2.example.com:3902 weight=2; +} +``` + +### Exposing the S3 API + +The configuration section for the S3 API is simple as we only support path-access style yet. +We simply configure the TLS parameters and forward all the requests to the backend: + +```nginx +server { + listen [::]:443 http2 ssl; + ssl_certificate /tmp/garage.crt; + ssl_certificate_key /tmp/garage.key; + + # should be the endpoint you want + # aws uses s3.amazonaws.com for example + server_name garage.example.com; + + location / { + proxy_pass http://s3_backend; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_set_header Host $host; + } +} + +``` + +### Exposing the web endpoint + +The web endpoint is a bit more complicated to configure as it listens on many different `Host` fields. +To better understand the logic involved, you can refer to the [Exposing buckets as websites](@/documentation/cookbook/exposing-websites.md) section. +Also, for some applications, you may need to serve CORS headers: Garage can not serve them directly but we show how we can use nginx to serve them. 
+You can use the following example as your starting point: + +```nginx +server { + listen [::]:443 http2 ssl; + ssl_certificate /tmp/garage.crt; + ssl_certificate_key /tmp/garage.key; + + # We list all the Hosts fields that can access our buckets + server_name *.web.garage + example.com + my-site.tld + ; + + location / { + # Add these headers only if you want to allow CORS requests + # For production use, more specific rules would be better for your security + add_header Access-Control-Allow-Origin *; + add_header Access-Control-Max-Age 3600; + add_header Access-Control-Expose-Headers Content-Length; + add_header Access-Control-Allow-Headers Range; + + # We do not forward OPTIONS requests to Garage + # as it does not support them but they are needed for CORS. + if ($request_method = OPTIONS) { + return 200; + } + + proxy_pass http://web_backend; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_set_header Host $host; + } +} +``` + + +## Apache httpd + +@TODO + +## Traefik + +@TODO diff --git a/doc/book/cookbook/reverse_proxy.md b/doc/book/cookbook/reverse_proxy.md deleted file mode 100644 index 55a2e9b1..00000000 --- a/doc/book/cookbook/reverse_proxy.md +++ /dev/null @@ -1,168 +0,0 @@ -+++ -title = "Configuring a reverse proxy" -weight = 30 -+++ - -The main reason to add a reverse proxy in front of Garage is to provide TLS to your users. - -In production you will likely need your certificates signed by a certificate authority. -The most automated way is to use a provider supporting the [ACME protocol](https://datatracker.ietf.org/doc/html/rfc8555) -such as [Let's Encrypt](https://letsencrypt.org/), [ZeroSSL](https://zerossl.com/) or [Buypass Go SSL](https://www.buypass.com/ssl/products/acme). - -If you are only testing Garage, you can generate a self-signed certificate to follow the documentation: - -```bash -openssl req \ - -new \ - -x509 \ - -keyout /tmp/garage.key \ - -out /tmp/garage.crt \ - -nodes \ - -subj "/C=XX/ST=XX/L=XX/O=XX/OU=XX/CN=localhost/emailAddress=X@X.XX" \ - -addext "subjectAltName = DNS:localhost, IP:127.0.0.1" - -cat /tmp/garage.key /tmp/garage.crt > /tmp/garage.pem -``` - -Be careful as you will need to allow self signed certificates in your client. -For example, with minio, you must add the `--insecure` flag. -An example: - -```bash -mc ls --insecure garage/ -``` - -## socat (only for testing purposes) - -If you want to test Garage with a TLS frontend, socat can do it for you in a single command: - -```bash -socat \ -"openssl-listen:443,\ -reuseaddr,\ -fork,\ -verify=0,\ -cert=/tmp/garage.pem" \ -tcp4-connect:localhost:3900 -``` - -## Nginx - -Nginx is a well-known reverse proxy suitable for production. -We do the configuration in 3 steps: first we define the upstream blocks ("the backends") -then we define the server blocks ("the frontends") for the S3 endpoint and finally for the web endpoint. - -The following configuration blocks can be all put in the same `/etc/nginx/sites-available/garage.conf`. -To make your configuration active, run `ln -s /etc/nginx/sites-available/garage.conf /etc/nginx/sites-enabled/`. -If you directly put the instructions in the root `nginx.conf`, keep in mind that these configurations must be enclosed inside a `http { }` block. - -And do not forget to reload nginx with `systemctl reload nginx` or `nginx -s reload`. - -### Defining backends - -First, we need to tell to nginx how to access our Garage cluster. -Because we have multiple nodes, we want to leverage all of them by spreading the load. 
- -In nginx, we can do that with the upstream directive. -Because we have 2 endpoints: one for the S3 API and one to serve websites, -we create 2 backends named respectively `s3_backend` and `web_backend`. - -A documented example for the `s3_backend` assuming you chose port 3900: - -```nginx -upstream s3_backend { - # if you have a garage instance locally - server 127.0.0.1:3900; - # you can also put your other instances - server 192.168.1.3:3900; - # domain names also work - server garage1.example.com:3900; - # you can assign weights if you have some servers - # that are more powerful than others - server garage2.example.com:3900 weight=2; -} -``` - -A similar example for the `web_backend` assuming you chose port 3902: - -```nginx -upstream web_backend { - server 127.0.0.1:3902; - server 192.168.1.3:3902; - server garage1.example.com:3902; - server garage2.example.com:3902 weight=2; -} -``` - -### Exposing the S3 API - -The configuration section for the S3 API is simple as we only support path-access style yet. -We simply configure the TLS parameters and forward all the requests to the backend: - -```nginx -server { - listen [::]:443 http2 ssl; - ssl_certificate /tmp/garage.crt; - ssl_certificate_key /tmp/garage.key; - - # should be the endpoint you want - # aws uses s3.amazonaws.com for example - server_name garage.example.com; - - location / { - proxy_pass http://s3_backend; - proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; - proxy_set_header Host $host; - } -} - -``` - -### Exposing the web endpoint - -The web endpoint is a bit more complicated to configure as it listens on many different `Host` fields. -To better understand the logic involved, you can refer to the [Exposing buckets as websites](/cookbook/exposing_websites.html) section. -Also, for some applications, you may need to serve CORS headers: Garage can not serve them directly but we show how we can use nginx to serve them. -You can use the following example as your starting point: - -```nginx -server { - listen [::]:443 http2 ssl; - ssl_certificate /tmp/garage.crt; - ssl_certificate_key /tmp/garage.key; - - # We list all the Hosts fields that can access our buckets - server_name *.web.garage - example.com - my-site.tld - ; - - location / { - # Add these headers only if you want to allow CORS requests - # For production use, more specific rules would be better for your security - add_header Access-Control-Allow-Origin *; - add_header Access-Control-Max-Age 3600; - add_header Access-Control-Expose-Headers Content-Length; - add_header Access-Control-Allow-Headers Range; - - # We do not forward OPTIONS requests to Garage - # as it does not support them but they are needed for CORS. - if ($request_method = OPTIONS) { - return 200; - } - - proxy_pass http://web_backend; - proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; - proxy_set_header Host $host; - } -} -``` - - -## Apache httpd - -@TODO - -## Traefik - -@TODO diff --git a/doc/book/design/benchmarks.md b/doc/book/design/benchmarks.md deleted file mode 100644 index c47995b4..00000000 --- a/doc/book/design/benchmarks.md +++ /dev/null @@ -1,84 +0,0 @@ -+++ -title = "Benchmarks" -weight = 10 -+++ - -With Garage, we wanted to build a software defined storage service that follow the [KISS principle](https://en.wikipedia.org/wiki/KISS_principle), - that is suitable for geo-distributed deployments and more generally that would work well for community hosting (like a Mastodon instance). 
- -In our benchmarks, we aim to quantify how Garage performs on these goals compared to the other available solutions. - -## Geo-distribution - -The main challenge in a geo-distributed setup is latency between nodes of the cluster. -The more a user request will require intra-cluster requests to complete, the more its latency will increase. -This is especially true for sequential requests: requests that must wait the result of another request to be sent. -We designed Garage without consensus algorithms (eg. Paxos or Raft) to minimize the number of sequential and parallel requests. - -This serie of benchmarks quantifies the impact of this design choice. - -### On a simple simulated network - -We start with a controlled environment, all the instances are running on the same (powerful enough) machine. - -To control the network latency, we simulate the network with [mknet](https://git.deuxfleurs.fr/trinity-1686a/mknet) (a tool we developped, based on `tc` and the linux network stack). -To mesure S3 endpoints latency, we use our own tool [s3lat](https://git.deuxfleurs.fr/quentin/s3lat/) to observe only the intra-cluster latency and not some contention on the nodes (CPU, RAM, disk I/O, network bandwidth, etc.). -Compared to other benchmark tools, S3Lat sends only one (small) request at the same time and measures its latency. -We selected 5 standard endpoints that are often in the critical path: ListBuckets, ListObjects, GetObject, PutObject and RemoveObject. - -In this first benchmark, we consider 5 instances that are located in a different place each. To simulate the distance, we configure mknet with a RTT between each node of 100 ms +/- 20 ms of jitter. We get the following graph, where the colored bars represent the mean latency while the error bars the minimum and maximum one: - -![Comparison of endpoints latency for minio and garage](./img/endpoint-latency.png) - -Compared to garage, minio latency drastically increases on 3 endpoints: GetObject, PutObject, RemoveObject. - -We suppose that these requests on minio make transactions over Raft, involving 4 sequential requests: 1) sending the message to the leader, 2) having the leader dispatch it to the other nodes, 3) waiting for the confirmation of followers and finally 4) commiting it. With our current configuration, one Raft transaction will take around 400 ms. GetObject seems to correlate to 1 transaction while PutObject and RemoveObject seems to correlate to 2 or 3. Reviewing minio code would be required to confirm this hypothesis. - -Conversely, garage uses an architecture similar to DynamoDB and never require global cluster coordination to answer a request. -Instead, garage can always contact the right node in charge of the requested data, and can answer in as low as one request in the case of GetObject and PutObject. We also observed that Garage latency, while often lower to minio, is more dispersed: garage is still in beta and has not received any performance optimization yet. - -As a conclusion, Garage performs well in such setup while minio will be hard to use, especially for interactive use cases. - -### On a complex simulated network - -This time we consider a more heterogeneous network with 6 servers spread in 3 datacenter, giving us 2 servers per datacenters. -We consider that intra-DC communications are now very cheap with a latency of 0.5ms and without any jitter. -The inter-DC remains costly with the same value as before (100ms +/- 20ms of jitter). 
-We plot a similar graph as before: - -![Comparison of endpoints latency for minio and garage with 6 nodes in 3 DC](./img/endpoint-latency-dc.png) - -This new graph is very similar to the one before, neither minio or garage seems to benefit from this new topology, but they also do not suffer from it. - -Considering garage, this is expected: nodes in the same DC are put in the same zone, and then data are spread on different zones for data resiliency and availaibility. -Then, in the default mode, requesting data requires to query at least 2 zones to be sure that we have the most up to date information. -These requests will involve at least one inter-DC communication. -In other words, we prioritize data availability and synchronization over raw performances. - -Minio's case is a bit different as by default a minio cluster is not location aware, so we can't explain its performances through location awareness. -*We know that minio has a multi site mode but it is definitely not a first class citizen: data are asynchronously replicated from one minio cluster to another.* -We suppose that, due to the consensus, for many of its requests minio will wait for a response of the majority of the server, also involving inter-DC communications. - -As a conclusion, our new topology did not influence garage or minio performances, confirming that in presence of latency, garage is the best fit. - -### On a real world deployment - -*TODO* - - -## Performance stability - -A storage cluster will encounter different scenario over its life, many of them will not be predictable. -In this context, we argue that, more than peak performances, we should seek predictable and stable performances to ensure data availability. - -### Reference - -*TODO* - -### On a degraded cluster - -*TODO* - -### At scale - -*TODO* diff --git a/doc/book/design/benchmarks/endpoint-latency-dc.png b/doc/book/design/benchmarks/endpoint-latency-dc.png new file mode 100644 index 00000000..7c7411cd Binary files /dev/null and b/doc/book/design/benchmarks/endpoint-latency-dc.png differ diff --git a/doc/book/design/benchmarks/endpoint-latency.png b/doc/book/design/benchmarks/endpoint-latency.png new file mode 100644 index 00000000..741539a7 Binary files /dev/null and b/doc/book/design/benchmarks/endpoint-latency.png differ diff --git a/doc/book/design/benchmarks/index.md b/doc/book/design/benchmarks/index.md new file mode 100644 index 00000000..c2215a4a --- /dev/null +++ b/doc/book/design/benchmarks/index.md @@ -0,0 +1,84 @@ ++++ +title = "Benchmarks" +weight = 10 ++++ + +With Garage, we wanted to build a software defined storage service that follow the [KISS principle](https://en.wikipedia.org/wiki/KISS_principle), + that is suitable for geo-distributed deployments and more generally that would work well for community hosting (like a Mastodon instance). + +In our benchmarks, we aim to quantify how Garage performs on these goals compared to the other available solutions. + +## Geo-distribution + +The main challenge in a geo-distributed setup is latency between nodes of the cluster. +The more a user request will require intra-cluster requests to complete, the more its latency will increase. +This is especially true for sequential requests: requests that must wait the result of another request to be sent. +We designed Garage without consensus algorithms (eg. Paxos or Raft) to minimize the number of sequential and parallel requests. + +This serie of benchmarks quantifies the impact of this design choice. 
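+
+For readers who want to reproduce a comparable environment, the latency used in these benchmarks
+can be emulated with the standard Linux traffic-control tooling (the benchmarks below rely on
+mknet, which builds on the same mechanism). A minimal sketch, assuming a test interface named
+`eth0`:
+
+```bash
+# Add 100 ms of delay with +/- 20 ms of jitter on outgoing traffic
+sudo tc qdisc add dev eth0 root netem delay 100ms 20ms
+
+# Remove the emulated latency once the test is over
+sudo tc qdisc del dev eth0 root netem
+```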
+
+### On a simple simulated network
+
+We start with a controlled environment: all the instances are running on the same (powerful enough) machine.
+
+To control the network latency, we simulate the network with [mknet](https://git.deuxfleurs.fr/trinity-1686a/mknet) (a tool we developed, based on `tc` and the Linux network stack).
+To measure S3 endpoint latency, we use our own tool [s3lat](https://git.deuxfleurs.fr/quentin/s3lat/) to observe only the intra-cluster latency and not some contention on the nodes (CPU, RAM, disk I/O, network bandwidth, etc.).
+Compared to other benchmark tools, S3Lat sends only one (small) request at a time and measures its latency.
+We selected 5 standard endpoints that are often in the critical path: ListBuckets, ListObjects, GetObject, PutObject and RemoveObject.
+
+In this first benchmark, we consider 5 instances, each located in a different place. To simulate the distance, we configure mknet with an RTT of 100 ms +/- 20 ms of jitter between each pair of nodes. We get the following graph, where the colored bars represent the mean latency while the error bars show the minimum and maximum:
+
+![Comparison of endpoints latency for minio and garage](./endpoint-latency.png)
+
+Compared to garage, minio latency drastically increases on 3 endpoints: GetObject, PutObject, RemoveObject.
+
+We suppose that these requests on minio make transactions over Raft, involving 4 sequential requests: 1) sending the message to the leader, 2) having the leader dispatch it to the other nodes, 3) waiting for the confirmation of followers and finally 4) committing it. With our current configuration, one Raft transaction will take around 400 ms. GetObject seems to correlate to 1 transaction while PutObject and RemoveObject seem to correlate to 2 or 3. Reviewing minio's code would be required to confirm this hypothesis.
+
+Conversely, garage uses an architecture similar to DynamoDB and never requires global cluster coordination to answer a request.
+Instead, garage can always contact the right node in charge of the requested data, and can answer with as little as one request in the case of GetObject and PutObject. We also observed that Garage's latency, while often lower than minio's, is more dispersed: garage is still in beta and has not received any performance optimization yet.
+
+As a conclusion, Garage performs well in such a setup, while minio will be hard to use, especially for interactive use cases.
+
+### On a complex simulated network
+
+This time we consider a more heterogeneous network with 6 servers spread across 3 datacenters, giving us 2 servers per datacenter.
+We consider that intra-DC communications are now very cheap, with a latency of 0.5 ms and without any jitter.
+Inter-DC communication remains costly, with the same values as before (100 ms +/- 20 ms of jitter).
+We plot a similar graph as before:
+
+![Comparison of endpoints latency for minio and garage with 6 nodes in 3 DC](./endpoint-latency-dc.png)
+
+This new graph is very similar to the previous one: neither minio nor garage seems to benefit from this new topology, but they also do not suffer from it.
+
+Considering garage, this is expected: nodes in the same DC are put in the same zone, and data are then spread on different zones for resiliency and availability.
+Then, in the default mode, requesting data requires querying at least 2 zones to be sure that we have the most up to date information.
+These requests will involve at least one inter-DC communication.
+In other words, we prioritize data availability and synchronization over raw performances. + +Minio's case is a bit different as by default a minio cluster is not location aware, so we can't explain its performances through location awareness. +*We know that minio has a multi site mode but it is definitely not a first class citizen: data are asynchronously replicated from one minio cluster to another.* +We suppose that, due to the consensus, for many of its requests minio will wait for a response of the majority of the server, also involving inter-DC communications. + +As a conclusion, our new topology did not influence garage or minio performances, confirming that in presence of latency, garage is the best fit. + +### On a real world deployment + +*TODO* + + +## Performance stability + +A storage cluster will encounter different scenario over its life, many of them will not be predictable. +In this context, we argue that, more than peak performances, we should seek predictable and stable performances to ensure data availability. + +### Reference + +*TODO* + +### On a degraded cluster + +*TODO* + +### At scale + +*TODO* diff --git a/doc/book/design/img/endpoint-latency-dc.png b/doc/book/design/img/endpoint-latency-dc.png deleted file mode 100644 index 7c7411cd..00000000 Binary files a/doc/book/design/img/endpoint-latency-dc.png and /dev/null differ diff --git a/doc/book/design/img/endpoint-latency.png b/doc/book/design/img/endpoint-latency.png deleted file mode 100644 index 741539a7..00000000 Binary files a/doc/book/design/img/endpoint-latency.png and /dev/null differ diff --git a/doc/book/design/internals.md b/doc/book/design/internals.md index be531e97..05d852e2 100644 --- a/doc/book/design/internals.md +++ b/doc/book/design/internals.md @@ -17,7 +17,7 @@ In the meantime, you can find some information at the following links: - [this presentation (in French)](https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/main/doc/talks/2020-12-02_wide-team/talk.pdf) -- [an old design draft](/working_documents/design_draft.md) +- [an old design draft](@/documentation/working-documents/design-draft.md) ## Garbage collection diff --git a/doc/book/design/related-work.md b/doc/book/design/related-work.md new file mode 100644 index 00000000..da883c06 --- /dev/null +++ b/doc/book/design/related-work.md @@ -0,0 +1,80 @@ ++++ +title = "Related work" +weight = 15 ++++ + +## Context + +Data storage is critical: it can lead to data loss if done badly and/or on hardware failure. +Filesystems + RAID can help on a single machine but a machine failure can put the whole storage offline. +Moreover, it put a hard limit on scalability. Often this limit can be pushed back far away by buying expensive machines. +But here we consider non specialized off the shelf machines that can be as low powered and subject to failures as a raspberry pi. + +Distributed storage may help to solve both availability and scalability problems on these machines. +Many solutions were proposed, they can be categorized as block storage, file storage and object storage depending on the abstraction they provide. + +## Overview + +Block storage is the most low level one, it's like exposing your raw hard drive over the network. +It requires very low latencies and stable network, that are often dedicated. +However it provides disk devices that can be manipulated by the operating system with the less constraints: it can be partitioned with any filesystem, meaning that it supports even the most exotic features. 
+We can cite [iSCSI](https://en.wikipedia.org/wiki/ISCSI) or [Fibre Channel](https://en.wikipedia.org/wiki/Fibre_Channel). +Openstack Cinder proxy previous solution to provide an uniform API. + +File storage provides a higher abstraction, they are one filesystem among others, which means they don't necessarily have all the exotic features of every filesystem. +Often, they relax some POSIX constraints while many applications will still be compatible without any modification. +As an example, we are able to run MariaDB (very slowly) over GlusterFS... +We can also mention CephFS (read [RADOS](https://ceph.com/wp-content/uploads/2016/08/weil-rados-pdsw07.pdf) whitepaper), Lustre, LizardFS, MooseFS, etc. +OpenStack Manila proxy previous solutions to provide an uniform API. + +Finally object storages provide the highest level abstraction. +They are the testimony that the POSIX filesystem API is not adapted to distributed filesystems. +Especially, the strong concistency has been dropped in favor of eventual consistency which is way more convenient and powerful in presence of high latencies and unreliability. +We often read about S3 that pioneered the concept that it's a filesystem for the WAN. +Applications must be adapted to work for the desired object storage service. +Today, the S3 HTTP REST API acts as a standard in the industry. +However, Amazon S3 source code is not open but alternatives were proposed. +We identified Minio, Pithos, Swift and Ceph. +Minio/Ceph enforces a total order, so properties similar to a (relaxed) filesystem. +Swift and Pithos are probably the most similar to AWS S3 with their consistent hashing ring. +However Pithos is not maintained anymore. More precisely the company that published Pithos version 1 has developped a second version 2 but has not open sourced it. +Some tests conducted by the [ACIDES project](https://acides.org/) have shown that Openstack Swift consumes way more resources (CPU+RAM) that we can afford. Furthermore, people developing Swift have not designed their software for geo-distribution. + +There were many attempts in research too. I am only thinking to [LBFS](https://pdos.csail.mit.edu/papers/lbfs:sosp01/lbfs.pdf) that was used as a basis for Seafile. But none of them have been effectively implemented yet. + +## Existing software + +**[MinIO](https://min.io/):** MinIO shares our *Self-contained & lightweight* goal but selected two of our non-goals: *Storage optimizations* through erasure coding and *POSIX/Filesystem compatibility* through strong consistency. +However, by pursuing these two non-goals, MinIO do not reach our desirable properties. +Firstly, it fails on the *Simple* property: due to the erasure coding, MinIO has severe limitations on how drives can be added or deleted from a cluster. +Secondly, it fails on the *Internet enabled* property: due to its strong consistency, MinIO is latency sensitive. +Furthermore, MinIO has no knowledge of "sites" and thus can not distribute data to minimize the failure of a given site. + +**[Openstack Swift](https://docs.openstack.org/swift/latest/):** +OpenStack Swift at least fails on the *Self-contained & lightweight* goal. +Starting it requires around 8GB of RAM, which is too much especially in an hyperconverged infrastructure. +We also do not classify Swift as *Simple*. + +**[Ceph](https://ceph.io/ceph-storage/object-storage/):** +This review holds for the whole Ceph stack, including the RADOS paper, Ceph Object Storage module, the RADOS Gateway, etc. 
+At its core, Ceph has been designed to provide *POSIX/Filesystem compatibility* which requires strong consistency, which in turn +makes Ceph latency-sensitive and fails our *Internet enabled* goal. +Due to its industry oriented design, Ceph is also far from being *Simple* to operate and from being *Self-contained & lightweight* which makes it hard to integrate it in an hyperconverged infrastructure. +In a certain way, Ceph and MinIO are closer together than they are from Garage or OpenStack Swift. + +**[Pithos](https://github.com/exoscale/pithos):** +Pithos has been abandonned and should probably not used yet, in the following we explain why we did not pick their design. +Pithos was relying as a S3 proxy in front of Cassandra (and was working with Scylla DB too). +From its designers' mouth, storing data in Cassandra has shown its limitations justifying the project abandonment. +They built a closed-source version 2 that does not store blobs in the database (only metadata) but did not communicate further on it. +We considered there v2's design but concluded that it does not fit both our *Self-contained & lightweight* and *Simple* properties. It makes the development, the deployment and the operations more complicated while reducing the flexibility. + +**[Riak CS](https://docs.riak.com/riak/cs/2.1.1/index.html):** +*Not written yet* + +**[IPFS](https://ipfs.io/):** +*Not written yet* + +## Specific research papers + +*Not yet written* diff --git a/doc/book/design/related_work.md b/doc/book/design/related_work.md deleted file mode 100644 index da883c06..00000000 --- a/doc/book/design/related_work.md +++ /dev/null @@ -1,80 +0,0 @@ -+++ -title = "Related work" -weight = 15 -+++ - -## Context - -Data storage is critical: it can lead to data loss if done badly and/or on hardware failure. -Filesystems + RAID can help on a single machine but a machine failure can put the whole storage offline. -Moreover, it put a hard limit on scalability. Often this limit can be pushed back far away by buying expensive machines. -But here we consider non specialized off the shelf machines that can be as low powered and subject to failures as a raspberry pi. - -Distributed storage may help to solve both availability and scalability problems on these machines. -Many solutions were proposed, they can be categorized as block storage, file storage and object storage depending on the abstraction they provide. - -## Overview - -Block storage is the most low level one, it's like exposing your raw hard drive over the network. -It requires very low latencies and stable network, that are often dedicated. -However it provides disk devices that can be manipulated by the operating system with the less constraints: it can be partitioned with any filesystem, meaning that it supports even the most exotic features. -We can cite [iSCSI](https://en.wikipedia.org/wiki/ISCSI) or [Fibre Channel](https://en.wikipedia.org/wiki/Fibre_Channel). -Openstack Cinder proxy previous solution to provide an uniform API. - -File storage provides a higher abstraction, they are one filesystem among others, which means they don't necessarily have all the exotic features of every filesystem. -Often, they relax some POSIX constraints while many applications will still be compatible without any modification. -As an example, we are able to run MariaDB (very slowly) over GlusterFS... -We can also mention CephFS (read [RADOS](https://ceph.com/wp-content/uploads/2016/08/weil-rados-pdsw07.pdf) whitepaper), Lustre, LizardFS, MooseFS, etc. 
-OpenStack Manila proxy previous solutions to provide an uniform API. - -Finally object storages provide the highest level abstraction. -They are the testimony that the POSIX filesystem API is not adapted to distributed filesystems. -Especially, the strong concistency has been dropped in favor of eventual consistency which is way more convenient and powerful in presence of high latencies and unreliability. -We often read about S3 that pioneered the concept that it's a filesystem for the WAN. -Applications must be adapted to work for the desired object storage service. -Today, the S3 HTTP REST API acts as a standard in the industry. -However, Amazon S3 source code is not open but alternatives were proposed. -We identified Minio, Pithos, Swift and Ceph. -Minio/Ceph enforces a total order, so properties similar to a (relaxed) filesystem. -Swift and Pithos are probably the most similar to AWS S3 with their consistent hashing ring. -However Pithos is not maintained anymore. More precisely the company that published Pithos version 1 has developped a second version 2 but has not open sourced it. -Some tests conducted by the [ACIDES project](https://acides.org/) have shown that Openstack Swift consumes way more resources (CPU+RAM) that we can afford. Furthermore, people developing Swift have not designed their software for geo-distribution. - -There were many attempts in research too. I am only thinking to [LBFS](https://pdos.csail.mit.edu/papers/lbfs:sosp01/lbfs.pdf) that was used as a basis for Seafile. But none of them have been effectively implemented yet. - -## Existing software - -**[MinIO](https://min.io/):** MinIO shares our *Self-contained & lightweight* goal but selected two of our non-goals: *Storage optimizations* through erasure coding and *POSIX/Filesystem compatibility* through strong consistency. -However, by pursuing these two non-goals, MinIO do not reach our desirable properties. -Firstly, it fails on the *Simple* property: due to the erasure coding, MinIO has severe limitations on how drives can be added or deleted from a cluster. -Secondly, it fails on the *Internet enabled* property: due to its strong consistency, MinIO is latency sensitive. -Furthermore, MinIO has no knowledge of "sites" and thus can not distribute data to minimize the failure of a given site. - -**[Openstack Swift](https://docs.openstack.org/swift/latest/):** -OpenStack Swift at least fails on the *Self-contained & lightweight* goal. -Starting it requires around 8GB of RAM, which is too much especially in an hyperconverged infrastructure. -We also do not classify Swift as *Simple*. - -**[Ceph](https://ceph.io/ceph-storage/object-storage/):** -This review holds for the whole Ceph stack, including the RADOS paper, Ceph Object Storage module, the RADOS Gateway, etc. -At its core, Ceph has been designed to provide *POSIX/Filesystem compatibility* which requires strong consistency, which in turn -makes Ceph latency-sensitive and fails our *Internet enabled* goal. -Due to its industry oriented design, Ceph is also far from being *Simple* to operate and from being *Self-contained & lightweight* which makes it hard to integrate it in an hyperconverged infrastructure. -In a certain way, Ceph and MinIO are closer together than they are from Garage or OpenStack Swift. - -**[Pithos](https://github.com/exoscale/pithos):** -Pithos has been abandonned and should probably not used yet, in the following we explain why we did not pick their design. 
-Pithos was relying as a S3 proxy in front of Cassandra (and was working with Scylla DB too). -From its designers' mouth, storing data in Cassandra has shown its limitations justifying the project abandonment. -They built a closed-source version 2 that does not store blobs in the database (only metadata) but did not communicate further on it. -We considered there v2's design but concluded that it does not fit both our *Self-contained & lightweight* and *Simple* properties. It makes the development, the deployment and the operations more complicated while reducing the flexibility. - -**[Riak CS](https://docs.riak.com/riak/cs/2.1.1/index.html):** -*Not written yet* - -**[IPFS](https://ipfs.io/):** -*Not written yet* - -## Specific research papers - -*Not yet written* diff --git a/doc/book/development/miscellaneous-notes.md b/doc/book/development/miscellaneous-notes.md new file mode 100644 index 00000000..f0083ae5 --- /dev/null +++ b/doc/book/development/miscellaneous-notes.md @@ -0,0 +1,101 @@ ++++ +title = "Miscellaneous notes" +weight = 20 ++++ + +## Quirks about cargo2nix/rust in Nix + +If you use submodules in your crate (like `crdt` and `replication` in `garage_table`), you must list them in `default.nix` + +The Windows target does not work. it might be solvable through [overrides](https://github.com/cargo2nix/cargo2nix/blob/master/overlay/overrides.nix). Indeed, we pass `x86_64-pc-windows-gnu` but mingw need `x86_64-w64-mingw32` + +We have a simple [PR on cargo2nix](https://github.com/cargo2nix/cargo2nix/pull/201) that fixes critical bugs but the project does not seem very active currently. We must use [my patched version of cargo2nix](https://github.com/superboum/cargo2nix) to enable i686 and armv6l compilation. We might need to contribute to cargo2nix in the future. + + +## Nix + +Nix has no armv7 + musl toolchains but armv7l is backward compatible with armv6l. + +```bash +cat > $HOME/.awsrc < $HOME/.awsrc < what about Serf (used by Consul/Nomad) : https://www.serf.io/? Seems a huge library with many features so maybe overkill/hard to integrate +- `metadata/`: metadata management +- `blocks/`: block management, writing, GC and rebalancing +- `internal/`: server to server communication (HTTP server and client that reuses connections, TLS if we want, etc) +- `api/`: S3 API +- `web/`: web management interface + +#### Metadata tables + +**Objects:** + +- *Hash key:* Bucket name (string) +- *Sort key:* Object key (string) +- *Sort key:* Version timestamp (int) +- *Sort key:* Version UUID (string) +- Complete: bool +- Inline: bool, true for objects < threshold (say 1024) +- Object size (int) +- Mime type (string) +- Data for inlined objects (blob) +- Hash of first block otherwise (string) + +*Having only a hash key on the bucket name will lead to storing all file entries of this table for a specific bucket on a single node. At the same time, it is the only way I see to rapidly being able to list all bucket entries...* + +**Blocks:** + +- *Hash key:* Version UUID (string) +- *Sort key:* Offset of block in total file (int) +- Hash of data block (string) + +A version is defined by the existence of at least one entry in the blocks table for a certain version UUID. +We must keep the following invariant: if a version exists in the blocks table, it has to be referenced in the objects table. +We explicitly manage concurrent versions of an object: the version timestamp and version UUID columns are index columns, thus we may have several concurrent versions of an object. 
+Important: before deleting an older version from the objects table, we must make sure that we did a successfull delete of the blocks of that version from the blocks table. + +Thus, the workflow for reading an object is as follows: + +1. Check permissions (LDAP) +2. Read entry in object table. If data is inline, we have its data, stop here. + -> if several versions, take newest one and launch deletion of old ones in background +3. Read first block from cluster. If size <= 1 block, stop here. +4. Simultaneously with previous step, if size > 1 block: query the Blocks table for the IDs of the next blocks +5. Read subsequent blocks from cluster + +Workflow for PUT: + +1. Check write permission (LDAP) +2. Select a new version UUID +3. Write a preliminary entry for the new version in the objects table with complete = false +4. Send blocks to cluster and write entries in the blocks table +5. Update the version with complete = true and all of the accurate information (size, etc) +6. Return success to the user +7. Launch a background job to check and delete older versions + +Workflow for DELETE: + +1. Check write permission (LDAP) +2. Get current version (or versions) in object table +3. Do the deletion of those versions NOT IN A BACKGROUND JOB THIS TIME +4. Return succes to the user if we were able to delete blocks from the blocks table and entries from the object table + +To delete a version: + +1. List the blocks from Cassandra +2. For each block, delete it from cluster. Don't care if some deletions fail, we can do GC. +3. Delete all of the blocks from the blocks table +4. Finally, delete the version from the objects table + +Known issue: if someone is reading from a version that we want to delete and the object is big, the read might be interrupted. I think it is ok to leave it like this, we just cut the connection if data disappears during a read. + +("Soit P un problème, on s'en fout est une solution à ce problème") + +#### Block storage on disk + +**Blocks themselves:** + +- file path = /blobs/(first 3 hex digits of hash)/(rest of hash) + +**Reverse index for GC & other block-level metadata:** + +- file path = /meta/(first 3 hex digits of hash)/(rest of hash) +- map block hash -> set of version UUIDs where it is referenced + +Usefull metadata: + +- list of versions that reference this block in the Casandra table, so that we can do GC by checking in Cassandra that the lines still exist +- list of other nodes that we know have acknowledged a write of this block, usefull in the rebalancing algorithm + +Write strategy: have a single thread that does all write IO so that it is serialized (or have several threads that manage independent parts of the hash space). When writing a blob, write it to a temporary file, close, then rename so that a concurrent read gets a consistent result (either not found or found with whole content). + +Read strategy: the only read operation is get(hash) that returns either the data or not found (can do a corruption check as well and return corrupted state if it is the case). Can be done concurrently with writes. + +**Internal API:** + +- get(block hash) -> ok+data/not found/corrupted +- put(block hash & data, version uuid + offset) -> ok/error +- put with no data(block hash, version uuid + offset) -> ok/not found plz send data/error +- delete(block hash, version uuid + offset) -> ok/error + +GC: when last ref is deleted, delete block. +Long GC procedure: check in Cassandra that version UUIDs still exist and references this block. 
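+
+As an illustration of the on-disk layout described above, the two paths associated with a block
+could be derived from its hash as follows. This is a purely hypothetical sketch for this design
+draft, not Garage's actual code:
+
+```bash
+# Any 64-character hex digest works here; this value is just an example
+hash="563e1ac825ee3323aa441e72c26d1030d6d4414aeb3dd25287c531e7fc2bc95d"
+
+blob_path="/blobs/${hash:0:3}/${hash:3}"   # block contents
+meta_path="/meta/${hash:0:3}/${hash:3}"    # reverse index & block-level metadata
+
+echo "$blob_path" "$meta_path"
+```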
+ +Rebalancing: takes as argument the list of newly added nodes. + +- List all blocks that we have. For each block: +- If it hits a newly introduced node, send it to them. + Use put with no data first to check if it has to be sent to them already or not. + Use a random listing order to avoid race conditions (they do no harm but we might have two nodes sending the same thing at the same time thus wasting time). +- If it doesn't hit us anymore, delete it and its reference list. + +Only one balancing can be running at a same time. It can be restarted at the beginning with new parameters. + +#### Membership management + +Two sets of nodes: + +- set of nodes from which a ping was recently received, with status: number of stored blocks, request counters, error counters, GC%, rebalancing% + (eviction from this set after say 30 seconds without ping) +- set of nodes that are part of the system, explicitly modified by the operator using the web UI (persisted to disk), + is a CRDT using a version number for the value of the whole set + +Thus, three states for nodes: + +- healthy: in both sets +- missing: not pingable but part of desired cluster +- unused/draining: currently present but not part of the desired cluster, empty = if contains nothing, draining = if still contains some blocks + +Membership messages between nodes: + +- ping with current state + hash of current membership info -> reply with same info +- send&get back membership info (the ids of nodes that are in the two sets): used when no local membership change in a long time and membership info hash discrepancy detected with first message (passive membership fixing with full CRDT gossip) +- inform of newly pingable node(s) -> no result, when receive new info repeat to all (reliable broadcast) +- inform of operator membership change -> no result, when receive new info repeat to all (reliable broadcast) + +Ring: generated from the desired set of nodes, however when doing read/writes on the ring, skip nodes that are known to be not pingable. +The tokens are generated in a deterministic fashion from node IDs (hash of node id + token number from 1 to K). +Number K of tokens per node: decided by the operator & stored in the operator's list of nodes CRDT. Default value proposal: with node status information also broadcast disk total size and free space, and propose a default number of tokens equal to 80%Free space / 10Gb. (this is all user interface) + + +#### Constants + +- Block size: around 1MB ? --> Exoscale use 16MB chunks +- Number of tokens in the hash ring: one every 10Gb of allocated storage +- Threshold for storing data directly in Cassandra objects table: 1kb bytes (maybe up to 4kb?) +- Ping timeout (time after which a node is registered as unresponsive/missing): 30 seconds +- Ping interval: 10 seconds +- ?? + +#### Links + +- CDC: +- Erasure coding: +- [Openstack Storage Concepts](https://docs.openstack.org/arch-design/design-storage/design-storage-concepts.html) +- [RADOS](https://ceph.com/wp-content/uploads/2016/08/weil-rados-pdsw07.pdf) diff --git a/doc/book/working-documents/design_draft.md b/doc/book/working-documents/design_draft.md deleted file mode 100644 index 830496ee..00000000 --- a/doc/book/working-documents/design_draft.md +++ /dev/null @@ -1,165 +0,0 @@ -+++ -title = "Design draft" -weight = 25 -+++ - -**WARNING: this documentation is a design draft which was written before Garage's actual implementation. 
-The general principle are similar, but details have not been updated.** - - -#### Modules - -- `membership/`: configuration, membership management (gossip of node's presence and status), ring generation --> what about Serf (used by Consul/Nomad) : https://www.serf.io/? Seems a huge library with many features so maybe overkill/hard to integrate -- `metadata/`: metadata management -- `blocks/`: block management, writing, GC and rebalancing -- `internal/`: server to server communication (HTTP server and client that reuses connections, TLS if we want, etc) -- `api/`: S3 API -- `web/`: web management interface - -#### Metadata tables - -**Objects:** - -- *Hash key:* Bucket name (string) -- *Sort key:* Object key (string) -- *Sort key:* Version timestamp (int) -- *Sort key:* Version UUID (string) -- Complete: bool -- Inline: bool, true for objects < threshold (say 1024) -- Object size (int) -- Mime type (string) -- Data for inlined objects (blob) -- Hash of first block otherwise (string) - -*Having only a hash key on the bucket name will lead to storing all file entries of this table for a specific bucket on a single node. At the same time, it is the only way I see to rapidly being able to list all bucket entries...* - -**Blocks:** - -- *Hash key:* Version UUID (string) -- *Sort key:* Offset of block in total file (int) -- Hash of data block (string) - -A version is defined by the existence of at least one entry in the blocks table for a certain version UUID. -We must keep the following invariant: if a version exists in the blocks table, it has to be referenced in the objects table. -We explicitly manage concurrent versions of an object: the version timestamp and version UUID columns are index columns, thus we may have several concurrent versions of an object. -Important: before deleting an older version from the objects table, we must make sure that we did a successfull delete of the blocks of that version from the blocks table. - -Thus, the workflow for reading an object is as follows: - -1. Check permissions (LDAP) -2. Read entry in object table. If data is inline, we have its data, stop here. - -> if several versions, take newest one and launch deletion of old ones in background -3. Read first block from cluster. If size <= 1 block, stop here. -4. Simultaneously with previous step, if size > 1 block: query the Blocks table for the IDs of the next blocks -5. Read subsequent blocks from cluster - -Workflow for PUT: - -1. Check write permission (LDAP) -2. Select a new version UUID -3. Write a preliminary entry for the new version in the objects table with complete = false -4. Send blocks to cluster and write entries in the blocks table -5. Update the version with complete = true and all of the accurate information (size, etc) -6. Return success to the user -7. Launch a background job to check and delete older versions - -Workflow for DELETE: - -1. Check write permission (LDAP) -2. Get current version (or versions) in object table -3. Do the deletion of those versions NOT IN A BACKGROUND JOB THIS TIME -4. Return succes to the user if we were able to delete blocks from the blocks table and entries from the object table - -To delete a version: - -1. List the blocks from Cassandra -2. For each block, delete it from cluster. Don't care if some deletions fail, we can do GC. -3. Delete all of the blocks from the blocks table -4. 
Finally, delete the version from the objects table - -Known issue: if someone is reading from a version that we want to delete and the object is big, the read might be interrupted. I think it is ok to leave it like this, we just cut the connection if data disappears during a read. - -("Soit P un problème, on s'en fout est une solution à ce problème") - -#### Block storage on disk - -**Blocks themselves:** - -- file path = /blobs/(first 3 hex digits of hash)/(rest of hash) - -**Reverse index for GC & other block-level metadata:** - -- file path = /meta/(first 3 hex digits of hash)/(rest of hash) -- map block hash -> set of version UUIDs where it is referenced - -Usefull metadata: - -- list of versions that reference this block in the Casandra table, so that we can do GC by checking in Cassandra that the lines still exist -- list of other nodes that we know have acknowledged a write of this block, usefull in the rebalancing algorithm - -Write strategy: have a single thread that does all write IO so that it is serialized (or have several threads that manage independent parts of the hash space). When writing a blob, write it to a temporary file, close, then rename so that a concurrent read gets a consistent result (either not found or found with whole content). - -Read strategy: the only read operation is get(hash) that returns either the data or not found (can do a corruption check as well and return corrupted state if it is the case). Can be done concurrently with writes. - -**Internal API:** - -- get(block hash) -> ok+data/not found/corrupted -- put(block hash & data, version uuid + offset) -> ok/error -- put with no data(block hash, version uuid + offset) -> ok/not found plz send data/error -- delete(block hash, version uuid + offset) -> ok/error - -GC: when last ref is deleted, delete block. -Long GC procedure: check in Cassandra that version UUIDs still exist and references this block. - -Rebalancing: takes as argument the list of newly added nodes. - -- List all blocks that we have. For each block: -- If it hits a newly introduced node, send it to them. - Use put with no data first to check if it has to be sent to them already or not. - Use a random listing order to avoid race conditions (they do no harm but we might have two nodes sending the same thing at the same time thus wasting time). -- If it doesn't hit us anymore, delete it and its reference list. - -Only one balancing can be running at a same time. It can be restarted at the beginning with new parameters. 
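A rough Rust sketch of this rebalancing pass is given below; `Ring`, `LocalStore` and `Peer` are hypothetical stand-ins for the real cluster machinery (and the external `rand` crate provides the random listing order), so treat it as pseudocode under those assumptions rather than the actual implementation.

```rust
// Illustrative sketch of the rebalancing pass described above.
// All types and helpers here are assumed, not Garage's real API.

use rand::seq::SliceRandom; // external `rand` crate

type BlockHash = String;
type NodeId = u64;

struct Ring;
impl Ring {
    /// Nodes that should hold this block under the new layout.
    fn nodes_for(&self, _hash: &BlockHash) -> Vec<NodeId> { Vec::new() }
    /// Whether this node is one of the newly added ones.
    fn is_newly_added(&self, _node: &NodeId) -> bool { false }
}

trait LocalStore {
    fn list_local_blocks(&self) -> Vec<BlockHash>;
    fn read(&self, hash: &BlockHash) -> Vec<u8>;
    fn delete_block_and_refs(&self, hash: &BlockHash);
}

trait Peer {
    /// Returns true if the peer already has the block (reference recorded),
    /// false if it answers "not found, please send data".
    fn put_no_data(&self, node: NodeId, hash: &BlockHash) -> bool;
    fn put(&self, node: NodeId, hash: &BlockHash, data: &[u8]);
}

fn rebalance(my_id: NodeId, ring: &Ring, store: &impl LocalStore, rpc: &impl Peer) {
    // Random listing order so that two nodes rarely push the same block at once.
    let mut blocks = store.list_local_blocks();
    blocks.shuffle(&mut rand::thread_rng());

    for hash in blocks {
        let holders = ring.nodes_for(&hash);
        for node in &holders {
            if !ring.is_newly_added(node) {
                continue;
            }
            // Ask with "put with no data" first; only ship the bytes if needed.
            if !rpc.put_no_data(*node, &hash) {
                rpc.put(*node, &hash, &store.read(&hash));
            }
        }
        if !holders.contains(&my_id) {
            // The block no longer hits us: drop it and its reference list.
            store.delete_block_and_refs(&hash);
        }
    }
}
```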
- -#### Membership management - -Two sets of nodes: - -- set of nodes from which a ping was recently received, with status: number of stored blocks, request counters, error counters, GC%, rebalancing% - (eviction from this set after say 30 seconds without ping) -- set of nodes that are part of the system, explicitly modified by the operator using the web UI (persisted to disk), - is a CRDT using a version number for the value of the whole set - -Thus, three states for nodes: - -- healthy: in both sets -- missing: not pingable but part of desired cluster -- unused/draining: currently present but not part of the desired cluster, empty = if contains nothing, draining = if still contains some blocks - -Membership messages between nodes: - -- ping with current state + hash of current membership info -> reply with same info -- send&get back membership info (the ids of nodes that are in the two sets): used when no local membership change in a long time and membership info hash discrepancy detected with first message (passive membership fixing with full CRDT gossip) -- inform of newly pingable node(s) -> no result, when receive new info repeat to all (reliable broadcast) -- inform of operator membership change -> no result, when receive new info repeat to all (reliable broadcast) - -Ring: generated from the desired set of nodes, however when doing read/writes on the ring, skip nodes that are known to be not pingable. -The tokens are generated in a deterministic fashion from node IDs (hash of node id + token number from 1 to K). -Number K of tokens per node: decided by the operator & stored in the operator's list of nodes CRDT. Default value proposal: with node status information also broadcast disk total size and free space, and propose a default number of tokens equal to 80%Free space / 10Gb. (this is all user interface) - - -#### Constants - -- Block size: around 1MB ? --> Exoscale use 16MB chunks -- Number of tokens in the hash ring: one every 10Gb of allocated storage -- Threshold for storing data directly in Cassandra objects table: 1kb bytes (maybe up to 4kb?) -- Ping timeout (time after which a node is registered as unresponsive/missing): 30 seconds -- Ping interval: 10 seconds -- ?? - -#### Links - -- CDC: -- Erasure coding: -- [Openstack Storage Concepts](https://docs.openstack.org/arch-design/design-storage/design-storage-concepts.html) -- [RADOS](https://ceph.com/wp-content/uploads/2016/08/weil-rados-pdsw07.pdf) diff --git a/doc/book/working-documents/load-balancing.md b/doc/book/working-documents/load-balancing.md new file mode 100644 index 00000000..87298ae6 --- /dev/null +++ b/doc/book/working-documents/load-balancing.md @@ -0,0 +1,202 @@ ++++ +title = "Load balancing data" +weight = 10 ++++ + +**This is being yet improved in release 0.5. The working document has not been updated yet, it still only applies to Garage 0.2 through 0.4.** + +I have conducted a quick study of different methods to load-balance data over different Garage nodes using consistent hashing. 
+ +## Requirements + +- *good balancing*: two nodes that have the same announced capacity should receive close to the same number of items + +- *multi-datacenter*: the replicas of a partition should be distributed over as many datacenters as possible + +- *minimal disruption*: when adding or removing a node, as few partitions as possible should have to move around + +- *order-agnostic*: the same set of nodes (each associated with a datacenter name + and a capacity) should always return the same distribution of partition + replicas, independently of the order in which nodes were added/removed (this + is to keep the implementation simple) + +## Methods + +### Naive multi-DC ring walking strategy + +This strategy can be used with any ring-like algorithm to make it aware of the *multi-datacenter* requirement: + +In this method, the ring is a list of positions, each associated with a single node in the cluster. +Partitions contain all the keys between two consecutive items of the ring. +To find the nodes that store replicas of a given partition: + +- select the node for the position of the partition's lower bound +- go clockwise on the ring, skipping nodes that: + - we halve already selected + - are in a datacenter of a node we have selected, except if we already have nodes from all possible datacenters + +In this way the selected nodes will always be distributed over +`min(n_datacenters, n_replicas)` different datacenters, which is the best we +can do. + +This method was implemented in the first version of Garage, with the basic +ring construction from Dynamo DB that consists in associating `n_token` random positions to +each node (I know it's not optimal, the Dynamo paper already studies this). + +### Better rings + +The ring construction that selects `n_token` random positions for each nodes gives a ring of positions that +is not well-balanced: the space between the tokens varies a lot, and some partitions are thus bigger than others. +This problem was demonstrated in the original Dynamo DB paper. + +To solve this, we want to apply a better second method for partitionning our dataset: + +1. fix an initially large number of partitions (say 1024) with evenly-spaced delimiters, + +2. attribute each partition randomly to a node, with a probability + proportionnal to its capacity (which `n_tokens` represented in the first + method) + +For now we continue using the multi-DC ring walking described above. + +I have studied two ways to do the attribution of partitions to nodes, in a way that is deterministic: + +- Min-hash: for each partition, select node that minimizes `hash(node, partition_number)` +- MagLev: see [here](https://blog.acolyer.org/2016/03/21/maglev-a-fast-and-reliable-software-network-load-balancer/) + +MagLev provided significantly better balancing, as it guarantees that the exact +same number of partitions is attributed to all nodes that have the same +capacity (and that this number is proportionnal to the node's capacity, except +for large values), however in both cases: + +- the distribution is still bad, because we use the naive multi-DC ring walking + that behaves strangely due to interactions between consecutive positions on + the ring + +- the disruption in case of adding/removing a node is not as low as it can be, + as we show with the following method. + +A quick description of MagLev (backend = node, lookup table = ring): + +> The basic idea of Maglev hashing is to assign a preference list of all the +> lookup table positions to each backend. 
Then all the backends take turns +> filling their most-preferred table positions that are still empty, until the +> lookup table is completely filled in. Hence, Maglev hashing gives an almost +> equal share of the lookup table to each of the backends. Heterogeneous +> backend weights can be achieved by altering the relative frequency of the +> backends’ turns… + +Here are some stats (run `scripts/simulate_ring.py` to reproduce): + +``` +##### Custom-ring (min-hash) ##### + +#partitions per node (capacity in parenthesis): +- datura (8) : 227 +- digitale (8) : 351 +- drosera (8) : 259 +- geant (16) : 476 +- gipsie (16) : 410 +- io (16) : 495 +- isou (8) : 231 +- mini (4) : 149 +- mixi (4) : 188 +- modi (4) : 127 +- moxi (4) : 159 + +Variance of load distribution for load normalized to intra-class mean +(a class being the set of nodes with the same announced capacity): 2.18% <-- REALLY BAD + +Disruption when removing nodes (partitions moved on 0/1/2/3 nodes): +removing atuin digitale : 63.09% 30.18% 6.64% 0.10% +removing atuin drosera : 72.36% 23.44% 4.10% 0.10% +removing atuin datura : 73.24% 21.48% 5.18% 0.10% +removing jupiter io : 48.34% 38.48% 12.30% 0.88% +removing jupiter isou : 74.12% 19.73% 6.05% 0.10% +removing grog mini : 84.47% 12.40% 2.93% 0.20% +removing grog mixi : 80.76% 16.60% 2.64% 0.00% +removing grog moxi : 83.59% 14.06% 2.34% 0.00% +removing grog modi : 87.01% 11.43% 1.46% 0.10% +removing grisou geant : 48.24% 37.40% 13.67% 0.68% +removing grisou gipsie : 53.03% 33.59% 13.09% 0.29% +on average: 69.84% 23.53% 6.40% 0.23% <-- COULD BE BETTER + +-------- + +##### MagLev ##### + +#partitions per node: +- datura (8) : 273 +- digitale (8) : 256 +- drosera (8) : 267 +- geant (16) : 452 +- gipsie (16) : 427 +- io (16) : 483 +- isou (8) : 272 +- mini (4) : 184 +- mixi (4) : 160 +- modi (4) : 144 +- moxi (4) : 154 + +Variance of load distribution: 0.37% <-- Already much better, but not optimal + +Disruption when removing nodes (partitions moved on 0/1/2/3 nodes): +removing atuin digitale : 62.60% 29.20% 7.91% 0.29% +removing atuin drosera : 65.92% 26.56% 7.23% 0.29% +removing atuin datura : 63.96% 27.83% 7.71% 0.49% +removing jupiter io : 44.63% 40.33% 14.06% 0.98% +removing jupiter isou : 63.38% 27.25% 8.98% 0.39% +removing grog mini : 72.46% 21.00% 6.35% 0.20% +removing grog mixi : 72.95% 22.46% 4.39% 0.20% +removing grog moxi : 74.22% 20.61% 4.98% 0.20% +removing grog modi : 75.98% 18.36% 5.27% 0.39% +removing grisou geant : 46.97% 36.62% 15.04% 1.37% +removing grisou gipsie : 49.22% 36.52% 12.79% 1.46% +on average: 62.94% 27.89% 8.61% 0.57% <-- WORSE THAN PREVIOUSLY +``` + +### The magical solution: multi-DC aware MagLev + +Suppose we want to select three replicas for each partition (this is what we do in our simulation and in most Garage deployments). +We apply MagLev three times consecutively, one for each replica selection. +The first time is pretty much the same as normal MagLev, but for the following times, when a node runs through its preference +list to select a partition to replicate, we skip partitions for which adding this node would not bring datacenter-diversity. +More precisely, we skip a partition in the preference list if: + +- the node already replicates the partition (from one of the previous rounds of MagLev) +- the node is in a datacenter where a node already replicates the partition and there are other datacenters available + +Refer to `method4` in the simulation script for a formal definition. 
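For readers who do not want to dig into the simulation script, here is a simplified sketch of one assignment round with the datacenter-aware skip rule. The preference lists are assumed to be precomputed as in plain MagLev, capacity weighting is omitted, and every name is an illustrative assumption rather than something taken from `scripts/simulate_ring.py`.

```rust
// Simplified sketch of one round of multi-DC aware MagLev assignment.

use std::collections::HashSet;

struct Node {
    datacenter: String,
    preference: Vec<usize>, // partition indices, most preferred first
}

/// Assign each partition to one more node, skipping choices that would not
/// bring datacenter diversity. Call this once per desired replica.
fn assign_round(nodes: &[Node], n_partitions: usize, replicas: &mut [Vec<usize>]) {
    let n_datacenters = nodes
        .iter()
        .map(|n| n.datacenter.as_str())
        .collect::<HashSet<_>>()
        .len();
    let mut taken = vec![false; n_partitions];
    let mut cursors = vec![0usize; nodes.len()];
    let mut assigned = 0;

    while assigned < n_partitions {
        let mut progress = false;
        for (i, node) in nodes.iter().enumerate() {
            // This node walks its preference list until it finds an
            // acceptable partition for this turn.
            while cursors[i] < node.preference.len() {
                let p = node.preference[cursors[i]];
                cursors[i] += 1;
                if taken[p] {
                    continue;
                }
                // Skip if this node already replicates p...
                if replicas[p].contains(&i) {
                    continue;
                }
                // ...or if its datacenter already replicates p while other
                // datacenters are still available.
                let dcs_used: HashSet<&str> = replicas[p]
                    .iter()
                    .map(|&j| nodes[j].datacenter.as_str())
                    .collect();
                if dcs_used.contains(node.datacenter.as_str()) && dcs_used.len() < n_datacenters {
                    continue;
                }
                taken[p] = true;
                replicas[p].push(i);
                assigned += 1;
                progress = true;
                break;
            }
        }
        if !progress {
            break; // nothing more can be assigned under the constraints
        }
    }
}
```

Calling `assign_round` three times on the same `replicas` array yields the three replica sets; the skip rule is what spreads each partition's replicas across distinct datacenters whenever possible.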
+ +``` +##### Multi-DC aware MagLev ##### + +#partitions per node: +- datura (8) : 268 <-- NODES WITH THE SAME CAPACITY +- digitale (8) : 267 HAVE THE SAME NUM OF PARTITIONS +- drosera (8) : 267 (+- 1) +- geant (16) : 470 +- gipsie (16) : 472 +- io (16) : 516 +- isou (8) : 268 +- mini (4) : 136 +- mixi (4) : 136 +- modi (4) : 136 +- moxi (4) : 136 + +Variance of load distribution: 0.06% <-- CAN'T DO BETTER THAN THIS + +Disruption when removing nodes (partitions moved on 0/1/2/3 nodes): +removing atuin digitale : 65.72% 33.01% 1.27% 0.00% +removing atuin drosera : 64.65% 33.89% 1.37% 0.10% +removing atuin datura : 66.11% 32.62% 1.27% 0.00% +removing jupiter io : 42.97% 53.42% 3.61% 0.00% +removing jupiter isou : 66.11% 32.32% 1.56% 0.00% +removing grog mini : 80.47% 18.85% 0.68% 0.00% +removing grog mixi : 80.27% 18.85% 0.88% 0.00% +removing grog moxi : 80.18% 19.04% 0.78% 0.00% +removing grog modi : 79.69% 19.92% 0.39% 0.00% +removing grisou geant : 44.63% 52.15% 3.22% 0.00% +removing grisou gipsie : 43.55% 52.54% 3.91% 0.00% +on average: 64.94% 33.33% 1.72% 0.01% <-- VERY GOOD (VERY LOW VALUES FOR 2 AND 3 NODES) +``` diff --git a/doc/book/working-documents/load_balancing.md b/doc/book/working-documents/load_balancing.md deleted file mode 100644 index 87298ae6..00000000 --- a/doc/book/working-documents/load_balancing.md +++ /dev/null @@ -1,202 +0,0 @@ -+++ -title = "Load balancing data" -weight = 10 -+++ - -**This is being yet improved in release 0.5. The working document has not been updated yet, it still only applies to Garage 0.2 through 0.4.** - -I have conducted a quick study of different methods to load-balance data over different Garage nodes using consistent hashing. - -## Requirements - -- *good balancing*: two nodes that have the same announced capacity should receive close to the same number of items - -- *multi-datacenter*: the replicas of a partition should be distributed over as many datacenters as possible - -- *minimal disruption*: when adding or removing a node, as few partitions as possible should have to move around - -- *order-agnostic*: the same set of nodes (each associated with a datacenter name - and a capacity) should always return the same distribution of partition - replicas, independently of the order in which nodes were added/removed (this - is to keep the implementation simple) - -## Methods - -### Naive multi-DC ring walking strategy - -This strategy can be used with any ring-like algorithm to make it aware of the *multi-datacenter* requirement: - -In this method, the ring is a list of positions, each associated with a single node in the cluster. -Partitions contain all the keys between two consecutive items of the ring. -To find the nodes that store replicas of a given partition: - -- select the node for the position of the partition's lower bound -- go clockwise on the ring, skipping nodes that: - - we halve already selected - - are in a datacenter of a node we have selected, except if we already have nodes from all possible datacenters - -In this way the selected nodes will always be distributed over -`min(n_datacenters, n_replicas)` different datacenters, which is the best we -can do. - -This method was implemented in the first version of Garage, with the basic -ring construction from Dynamo DB that consists in associating `n_token` random positions to -each node (I know it's not optimal, the Dynamo paper already studies this). 
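To make the ring walk above concrete, here is a small Rust sketch; the `(position, node)` ring representation and the `dc_of` map are assumptions made for illustration, not the data structures Garage actually used.

```rust
// Illustrative sketch of the naive multi-DC ring walk described above.

use std::collections::{HashMap, HashSet};

/// Walk the ring clockwise from `start_idx` and pick `n_replicas` nodes,
/// skipping nodes already selected and datacenters already covered
/// (unless every datacenter is already represented).
fn walk_ring(
    ring: &[(u64, String)],          // sorted by position; String = node id
    dc_of: &HashMap<String, String>, // node id -> datacenter
    start_idx: usize,
    n_replicas: usize,
) -> Vec<String> {
    let all_dcs: HashSet<&String> = dc_of.values().collect();
    let mut selected: Vec<String> = Vec::new();
    let mut used_dcs: HashSet<String> = HashSet::new();

    for step in 0..ring.len() {
        if selected.len() == n_replicas {
            break;
        }
        let (_, node) = &ring[(start_idx + step) % ring.len()];
        if selected.contains(node) {
            continue; // already selected this node
        }
        let dc = &dc_of[node];
        // Skip a datacenter we already cover, except when all DCs are covered.
        if used_dcs.contains(dc) && used_dcs.len() < all_dcs.len() {
            continue;
        }
        used_dcs.insert(dc.clone());
        selected.push(node.clone());
    }
    selected
}
```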
- -### Better rings - -The ring construction that selects `n_token` random positions for each nodes gives a ring of positions that -is not well-balanced: the space between the tokens varies a lot, and some partitions are thus bigger than others. -This problem was demonstrated in the original Dynamo DB paper. - -To solve this, we want to apply a better second method for partitionning our dataset: - -1. fix an initially large number of partitions (say 1024) with evenly-spaced delimiters, - -2. attribute each partition randomly to a node, with a probability - proportionnal to its capacity (which `n_tokens` represented in the first - method) - -For now we continue using the multi-DC ring walking described above. - -I have studied two ways to do the attribution of partitions to nodes, in a way that is deterministic: - -- Min-hash: for each partition, select node that minimizes `hash(node, partition_number)` -- MagLev: see [here](https://blog.acolyer.org/2016/03/21/maglev-a-fast-and-reliable-software-network-load-balancer/) - -MagLev provided significantly better balancing, as it guarantees that the exact -same number of partitions is attributed to all nodes that have the same -capacity (and that this number is proportionnal to the node's capacity, except -for large values), however in both cases: - -- the distribution is still bad, because we use the naive multi-DC ring walking - that behaves strangely due to interactions between consecutive positions on - the ring - -- the disruption in case of adding/removing a node is not as low as it can be, - as we show with the following method. - -A quick description of MagLev (backend = node, lookup table = ring): - -> The basic idea of Maglev hashing is to assign a preference list of all the -> lookup table positions to each backend. Then all the backends take turns -> filling their most-preferred table positions that are still empty, until the -> lookup table is completely filled in. Hence, Maglev hashing gives an almost -> equal share of the lookup table to each of the backends. 
Heterogeneous -> backend weights can be achieved by altering the relative frequency of the -> backends’ turns… - -Here are some stats (run `scripts/simulate_ring.py` to reproduce): - -``` -##### Custom-ring (min-hash) ##### - -#partitions per node (capacity in parenthesis): -- datura (8) : 227 -- digitale (8) : 351 -- drosera (8) : 259 -- geant (16) : 476 -- gipsie (16) : 410 -- io (16) : 495 -- isou (8) : 231 -- mini (4) : 149 -- mixi (4) : 188 -- modi (4) : 127 -- moxi (4) : 159 - -Variance of load distribution for load normalized to intra-class mean -(a class being the set of nodes with the same announced capacity): 2.18% <-- REALLY BAD - -Disruption when removing nodes (partitions moved on 0/1/2/3 nodes): -removing atuin digitale : 63.09% 30.18% 6.64% 0.10% -removing atuin drosera : 72.36% 23.44% 4.10% 0.10% -removing atuin datura : 73.24% 21.48% 5.18% 0.10% -removing jupiter io : 48.34% 38.48% 12.30% 0.88% -removing jupiter isou : 74.12% 19.73% 6.05% 0.10% -removing grog mini : 84.47% 12.40% 2.93% 0.20% -removing grog mixi : 80.76% 16.60% 2.64% 0.00% -removing grog moxi : 83.59% 14.06% 2.34% 0.00% -removing grog modi : 87.01% 11.43% 1.46% 0.10% -removing grisou geant : 48.24% 37.40% 13.67% 0.68% -removing grisou gipsie : 53.03% 33.59% 13.09% 0.29% -on average: 69.84% 23.53% 6.40% 0.23% <-- COULD BE BETTER - --------- - -##### MagLev ##### - -#partitions per node: -- datura (8) : 273 -- digitale (8) : 256 -- drosera (8) : 267 -- geant (16) : 452 -- gipsie (16) : 427 -- io (16) : 483 -- isou (8) : 272 -- mini (4) : 184 -- mixi (4) : 160 -- modi (4) : 144 -- moxi (4) : 154 - -Variance of load distribution: 0.37% <-- Already much better, but not optimal - -Disruption when removing nodes (partitions moved on 0/1/2/3 nodes): -removing atuin digitale : 62.60% 29.20% 7.91% 0.29% -removing atuin drosera : 65.92% 26.56% 7.23% 0.29% -removing atuin datura : 63.96% 27.83% 7.71% 0.49% -removing jupiter io : 44.63% 40.33% 14.06% 0.98% -removing jupiter isou : 63.38% 27.25% 8.98% 0.39% -removing grog mini : 72.46% 21.00% 6.35% 0.20% -removing grog mixi : 72.95% 22.46% 4.39% 0.20% -removing grog moxi : 74.22% 20.61% 4.98% 0.20% -removing grog modi : 75.98% 18.36% 5.27% 0.39% -removing grisou geant : 46.97% 36.62% 15.04% 1.37% -removing grisou gipsie : 49.22% 36.52% 12.79% 1.46% -on average: 62.94% 27.89% 8.61% 0.57% <-- WORSE THAN PREVIOUSLY -``` - -### The magical solution: multi-DC aware MagLev - -Suppose we want to select three replicas for each partition (this is what we do in our simulation and in most Garage deployments). -We apply MagLev three times consecutively, one for each replica selection. -The first time is pretty much the same as normal MagLev, but for the following times, when a node runs through its preference -list to select a partition to replicate, we skip partitions for which adding this node would not bring datacenter-diversity. -More precisely, we skip a partition in the preference list if: - -- the node already replicates the partition (from one of the previous rounds of MagLev) -- the node is in a datacenter where a node already replicates the partition and there are other datacenters available - -Refer to `method4` in the simulation script for a formal definition. 
- -``` -##### Multi-DC aware MagLev ##### - -#partitions per node: -- datura (8) : 268 <-- NODES WITH THE SAME CAPACITY -- digitale (8) : 267 HAVE THE SAME NUM OF PARTITIONS -- drosera (8) : 267 (+- 1) -- geant (16) : 470 -- gipsie (16) : 472 -- io (16) : 516 -- isou (8) : 268 -- mini (4) : 136 -- mixi (4) : 136 -- modi (4) : 136 -- moxi (4) : 136 - -Variance of load distribution: 0.06% <-- CAN'T DO BETTER THAN THIS - -Disruption when removing nodes (partitions moved on 0/1/2/3 nodes): -removing atuin digitale : 65.72% 33.01% 1.27% 0.00% -removing atuin drosera : 64.65% 33.89% 1.37% 0.10% -removing atuin datura : 66.11% 32.62% 1.27% 0.00% -removing jupiter io : 42.97% 53.42% 3.61% 0.00% -removing jupiter isou : 66.11% 32.32% 1.56% 0.00% -removing grog mini : 80.47% 18.85% 0.68% 0.00% -removing grog mixi : 80.27% 18.85% 0.88% 0.00% -removing grog moxi : 80.18% 19.04% 0.78% 0.00% -removing grog modi : 79.69% 19.92% 0.39% 0.00% -removing grisou geant : 44.63% 52.15% 3.22% 0.00% -removing grisou gipsie : 43.55% 52.54% 3.91% 0.00% -on average: 64.94% 33.33% 1.72% 0.01% <-- VERY GOOD (VERY LOW VALUES FOR 2 AND 3 NODES) -``` diff --git a/doc/book/working-documents/migration-04.md b/doc/book/working-documents/migration-04.md new file mode 100644 index 00000000..d9d3ede1 --- /dev/null +++ b/doc/book/working-documents/migration-04.md @@ -0,0 +1,108 @@ ++++ +title = "Migrating from 0.3 to 0.4" +weight = 20 ++++ + +**Migrating from 0.3 to 0.4 is unsupported. This document is only intended to +document the process internally for the Deuxfleurs cluster where we have to do +it. Do not try it yourself, you will lose your data and we will not help you.** + +**Migrating from 0.2 to 0.4 will break everything for sure. Never try it.** + +The internal data format of Garage hasn't changed much between 0.3 and 0.4. +The Sled database is still the same, and the data directory as well. + +The following has changed, all in the meta directory: + +- `node_id` in 0.3 contains the identifier of the current node. In 0.4, this + file does nothing and should be deleted. It is replaced by `node_key` (the + secret key) and `node_key.pub` (the associated public key). A node's + identifier on the ring is its public key. + +- `peer_info` in 0.3 contains the list of peers saved automatically by Garage. + The format has changed and it is now stored in `peer_list` (`peer_info` + should be deleted). + +When migrating, all node identifiers will change. This also means that the +affectation of data partitions on the ring will change, and lots of data will +have to be rebalanced. + +- If your cluster has only 3 nodes, all nodes store everything, therefore nothing has to be rebalanced. + +- If your cluster has only 4 nodes, for any partition there will always be at + least 2 nodes that stored data before that still store it after. Therefore + the migration should in theory be transparent and Garage should continue to + work during the rebalance. + +- If your cluster has 5 or more nodes, data will disappear during the + migration. Do not migrate (fortunately we don't have this scenario at + Deuxfleurs), or if you do, make Garage unavailable until things stabilize + (disable web and api access). + + +The migration steps are as follows: + +1. Prepare a new configuration file for 0.4. For each node, point to the same + meta and data directories as Garage 0.3. 
Basically, the things that change + are the following: + + - No more `rpc_tls` section + - You have to generate a shared `rpc_secret` and put it in all config files + - `bootstrap_peers` has a different syntax as it has to contain node keys. + Leave it empty and use `garage node-id` and `garage node connect` instead (new features of 0.4) + - put the publicly accessible RPC address of your node in `rpc_public_addr` if possible (its optional but recommended) + - If you are using Consul, change the `consul_service_name` to NOT be the name advertised by Nomad. + Now Garage is responsible for advertising its own service itself. + +2. Disable api and web access for some time (Garage does not support disabling + these endpoints but you can change the port number or stop your reverse + proxy for instance). + +3. Do `garage repair -a --yes tables` and `garage repair -a --yes blocks`, + check the logs and check that all data seems to be synced correctly between + nodes. + +4. Save somewhere the output of `garage status`. We will need this to remember + how to reconfigure nodes in 0.4. + +5. Turn off Garage 0.3 + +6. Backup metadata folders if you can (i.e. if you have space to do it + somewhere). Backuping data folders could also be usefull but that's much + harder to do. If your filesystem supports snapshots, this could be a good + time to use them. + +7. Turn on Garage 0.4 + +8. At this point, running `garage status` should indicate that all nodes of the + previous cluster are "unavailable". The nodes have new identifiers that + should appear in healthy nodes once they can talk to one another (use + `garage node connect` if necessary`). They should have NO ROLE ASSIGNED at + the moment. + +9. Prepare a script with several `garage node configure` commands that replace + each of the v0.3 node ID with the corresponding v0.4 node ID, with the same + zone/tag/capacity. For example if your node `drosera` had identifier `c24e` + before and now has identifier `789a`, and it was configured with capacity + `2` in zone `dc1`, put the following command in your script: + +```bash +garage node configure 789a -z dc1 -c 2 -t drosera --replace c24e +``` + +10. Run your reconfiguration script. Check that the new output of `garage + status` contains the correct node IDs with the correct values for capacity + and zone. Old nodes should no longer be mentioned. + +11. If your cluster has 4 nodes or less, and you are feeling adventurous, you + can reenable Web and API access now. Things will probably work. + +12. Garage might already be resyncing stuff. Issue a `garage repair -a --yes + tables` and `garage repair -a --yes blocks` to force it to do so. + +13. Wait for resyncing activity to stop in the logs. Do steps 12 and 13 two or + three times, until you see that when you issue the repair commands, nothing + gets resynced any longer. + +14. Your upgraded cluster should be in a working state. Re-enable API and Web + access and check that everything went well. diff --git a/doc/book/working-documents/migration-06.md b/doc/book/working-documents/migration-06.md new file mode 100644 index 00000000..28e2c32e --- /dev/null +++ b/doc/book/working-documents/migration-06.md @@ -0,0 +1,53 @@ ++++ +title = "Migrating from 0.5 to 0.6" +weight = 15 ++++ + +**This guide explains how to migrate to 0.6 if you have an existing 0.5 cluster. 
+We don't recommend trying to migrate directly from 0.4 or older to 0.6.** + +**We make no guarantee that this migration will work perfectly: +back up all your data before attempting it!** + +Garage v0.6 (not yet released) introduces a new data model for buckets, +that allows buckets to have many names (aliases). +Buckets can also have "private" aliases (called local aliases), +which are only visible when using a certain access key. + +This new data model means that the metadata tables have changed quite a bit in structure, +and a manual migration step is required. + +The migration steps are as follows: + +1. Disable api and web access for some time (Garage does not support disabling + these endpoints but you can change the port number or stop your reverse + proxy for instance). + +2. Do `garage repair -a --yes tables` and `garage repair -a --yes blocks`, + check the logs and check that all data seems to be synced correctly between + nodes. + +4. Turn off Garage 0.5 + +5. **Backup your metadata folders!!** + +6. Turn on Garage 0.6 + +7. At this point, `garage bucket list` should indicate that no buckets are present + in the cluster. `garage key list` should show all of the previously existing + access key, however these keys should not have any permissions to access buckets. + +8. Run `garage migrate buckets050`: this will populate the new bucket table with + the buckets that existed previously. This will also give access to API keys + as it was before. + +9. Do `garage repair -a --yes tables` and `garage repair -a --yes blocks`, + check the logs and check that all data seems to be synced correctly between + nodes. + +10. Check that all your buckets indeed appear in `garage bucket list`, and that + keys have the proper access flags set. If that is not the case, revert + everything and file a bug! + +11. Your upgraded cluster should be in a working state. Re-enable API and Web + access and check that everything went well. diff --git a/doc/book/working-documents/migration_04.md b/doc/book/working-documents/migration_04.md deleted file mode 100644 index d9d3ede1..00000000 --- a/doc/book/working-documents/migration_04.md +++ /dev/null @@ -1,108 +0,0 @@ -+++ -title = "Migrating from 0.3 to 0.4" -weight = 20 -+++ - -**Migrating from 0.3 to 0.4 is unsupported. This document is only intended to -document the process internally for the Deuxfleurs cluster where we have to do -it. Do not try it yourself, you will lose your data and we will not help you.** - -**Migrating from 0.2 to 0.4 will break everything for sure. Never try it.** - -The internal data format of Garage hasn't changed much between 0.3 and 0.4. -The Sled database is still the same, and the data directory as well. - -The following has changed, all in the meta directory: - -- `node_id` in 0.3 contains the identifier of the current node. In 0.4, this - file does nothing and should be deleted. It is replaced by `node_key` (the - secret key) and `node_key.pub` (the associated public key). A node's - identifier on the ring is its public key. - -- `peer_info` in 0.3 contains the list of peers saved automatically by Garage. - The format has changed and it is now stored in `peer_list` (`peer_info` - should be deleted). - -When migrating, all node identifiers will change. This also means that the -affectation of data partitions on the ring will change, and lots of data will -have to be rebalanced. - -- If your cluster has only 3 nodes, all nodes store everything, therefore nothing has to be rebalanced. 
- -- If your cluster has only 4 nodes, for any partition there will always be at - least 2 nodes that stored data before that still store it after. Therefore - the migration should in theory be transparent and Garage should continue to - work during the rebalance. - -- If your cluster has 5 or more nodes, data will disappear during the - migration. Do not migrate (fortunately we don't have this scenario at - Deuxfleurs), or if you do, make Garage unavailable until things stabilize - (disable web and api access). - - -The migration steps are as follows: - -1. Prepare a new configuration file for 0.4. For each node, point to the same - meta and data directories as Garage 0.3. Basically, the things that change - are the following: - - - No more `rpc_tls` section - - You have to generate a shared `rpc_secret` and put it in all config files - - `bootstrap_peers` has a different syntax as it has to contain node keys. - Leave it empty and use `garage node-id` and `garage node connect` instead (new features of 0.4) - - put the publicly accessible RPC address of your node in `rpc_public_addr` if possible (its optional but recommended) - - If you are using Consul, change the `consul_service_name` to NOT be the name advertised by Nomad. - Now Garage is responsible for advertising its own service itself. - -2. Disable api and web access for some time (Garage does not support disabling - these endpoints but you can change the port number or stop your reverse - proxy for instance). - -3. Do `garage repair -a --yes tables` and `garage repair -a --yes blocks`, - check the logs and check that all data seems to be synced correctly between - nodes. - -4. Save somewhere the output of `garage status`. We will need this to remember - how to reconfigure nodes in 0.4. - -5. Turn off Garage 0.3 - -6. Backup metadata folders if you can (i.e. if you have space to do it - somewhere). Backuping data folders could also be usefull but that's much - harder to do. If your filesystem supports snapshots, this could be a good - time to use them. - -7. Turn on Garage 0.4 - -8. At this point, running `garage status` should indicate that all nodes of the - previous cluster are "unavailable". The nodes have new identifiers that - should appear in healthy nodes once they can talk to one another (use - `garage node connect` if necessary`). They should have NO ROLE ASSIGNED at - the moment. - -9. Prepare a script with several `garage node configure` commands that replace - each of the v0.3 node ID with the corresponding v0.4 node ID, with the same - zone/tag/capacity. For example if your node `drosera` had identifier `c24e` - before and now has identifier `789a`, and it was configured with capacity - `2` in zone `dc1`, put the following command in your script: - -```bash -garage node configure 789a -z dc1 -c 2 -t drosera --replace c24e -``` - -10. Run your reconfiguration script. Check that the new output of `garage - status` contains the correct node IDs with the correct values for capacity - and zone. Old nodes should no longer be mentioned. - -11. If your cluster has 4 nodes or less, and you are feeling adventurous, you - can reenable Web and API access now. Things will probably work. - -12. Garage might already be resyncing stuff. Issue a `garage repair -a --yes - tables` and `garage repair -a --yes blocks` to force it to do so. - -13. Wait for resyncing activity to stop in the logs. Do steps 12 and 13 two or - three times, until you see that when you issue the repair commands, nothing - gets resynced any longer. - -14. 
Your upgraded cluster should be in a working state. Re-enable API and Web - access and check that everything went well. diff --git a/doc/book/working-documents/migration_06.md b/doc/book/working-documents/migration_06.md deleted file mode 100644 index 28e2c32e..00000000 --- a/doc/book/working-documents/migration_06.md +++ /dev/null @@ -1,53 +0,0 @@ -+++ -title = "Migrating from 0.5 to 0.6" -weight = 15 -+++ - -**This guide explains how to migrate to 0.6 if you have an existing 0.5 cluster. -We don't recommend trying to migrate directly from 0.4 or older to 0.6.** - -**We make no guarantee that this migration will work perfectly: -back up all your data before attempting it!** - -Garage v0.6 (not yet released) introduces a new data model for buckets, -that allows buckets to have many names (aliases). -Buckets can also have "private" aliases (called local aliases), -which are only visible when using a certain access key. - -This new data model means that the metadata tables have changed quite a bit in structure, -and a manual migration step is required. - -The migration steps are as follows: - -1. Disable api and web access for some time (Garage does not support disabling - these endpoints but you can change the port number or stop your reverse - proxy for instance). - -2. Do `garage repair -a --yes tables` and `garage repair -a --yes blocks`, - check the logs and check that all data seems to be synced correctly between - nodes. - -4. Turn off Garage 0.5 - -5. **Backup your metadata folders!!** - -6. Turn on Garage 0.6 - -7. At this point, `garage bucket list` should indicate that no buckets are present - in the cluster. `garage key list` should show all of the previously existing - access key, however these keys should not have any permissions to access buckets. - -8. Run `garage migrate buckets050`: this will populate the new bucket table with - the buckets that existed previously. This will also give access to API keys - as it was before. - -9. Do `garage repair -a --yes tables` and `garage repair -a --yes blocks`, - check the logs and check that all data seems to be synced correctly between - nodes. - -10. Check that all your buckets indeed appear in `garage bucket list`, and that - keys have the proper access flags set. If that is not the case, revert - everything and file a bug! - -11. Your upgraded cluster should be in a working state. Re-enable API and Web - access and check that everything went well. -- cgit v1.2.3