Diffstat (limited to 'doc/book')
 doc/book/connect/apps/index.md                 |   8
 doc/book/connect/backup.md                     |   2
 doc/book/connect/repositories.md               |   4
 doc/book/cookbook/real-world.md                |  11
 doc/book/operations/durability-repairs.md      |  11
 doc/book/operations/multi-hdd.md               | 101
 doc/book/operations/upgrading.md               |   2
 doc/book/quick-start/_index.md                 |   2
 doc/book/reference-manual/configuration.md     |  58
 doc/book/reference-manual/s3-compatibility.md  |  32
10 files changed, 199 insertions, 32 deletions
diff --git a/doc/book/connect/apps/index.md b/doc/book/connect/apps/index.md
index 83aadec2..7bad9d09 100644
--- a/doc/book/connect/apps/index.md
+++ b/doc/book/connect/apps/index.md
@@ -37,7 +37,7 @@ Second, we suppose you have created a key and a bucket.
As a reminder, you can create a key for your nextcloud instance as follow:
```bash
-garage key new --name nextcloud-key
+garage key create nextcloud-key
```
Keep the Key ID and the Secret key in a pad, they will be needed later.
@@ -139,7 +139,7 @@ a reasonable trade-off for some instances.
Create a key for Peertube:
```bash
-garage key new --name peertube-key
+garage key create peertube-key
```
Keep the Key ID and the Secret key in a pad, they will be needed later.
@@ -253,7 +253,7 @@ As such, your Garage cluster should be configured appropriately for good perform
This is the usual Garage setup:
```bash
-garage key new --name mastodon-key
+garage key create mastodon-key
garage bucket create mastodon-data
garage bucket allow mastodon-data --read --write --key mastodon-key
```
@@ -379,7 +379,7 @@ Supposing you have a working synapse installation, you can add the module with p
Now create a bucket and a key for your matrix instance (note your Key ID and Secret Key somewhere, they will be needed later):
```bash
-garage key new --name matrix-key
+garage key create matrix-key
garage bucket create matrix
garage bucket allow matrix --read --write --key matrix-key
```
diff --git a/doc/book/connect/backup.md b/doc/book/connect/backup.md
index d20c3c96..585ec469 100644
--- a/doc/book/connect/backup.md
+++ b/doc/book/connect/backup.md
@@ -54,7 +54,7 @@ how to configure this.
Create your key and bucket:
```bash
-garage key new my-key
+garage key create my-key
garage bucket create backup
garage bucket allow backup --read --write --key my-key
```
diff --git a/doc/book/connect/repositories.md b/doc/book/connect/repositories.md
index 4b14bb46..66365d64 100644
--- a/doc/book/connect/repositories.md
+++ b/doc/book/connect/repositories.md
@@ -23,7 +23,7 @@ You can configure a different target for each data type (check `[lfs]` and `[att
Let's start by creating a key and a bucket (your key id and secret will be needed later, keep them somewhere):
```bash
-garage key new --name gitea-key
+garage key create gitea-key
garage bucket create gitea
garage bucket allow gitea --read --write --key gitea-key
```
@@ -118,7 +118,7 @@ through another support, like a git repository.
As a first step, we will need to create a bucket on Garage and enabling website access on it:
```bash
-garage key new --name nix-key
+garage key create nix-key
garage bucket create nix.example.com
garage bucket allow nix.example.com --read --write --key nix-key
garage bucket website nix.example.com --allow
diff --git a/doc/book/cookbook/real-world.md b/doc/book/cookbook/real-world.md
index 7061069f..a8fbb371 100644
--- a/doc/book/cookbook/real-world.md
+++ b/doc/book/cookbook/real-world.md
@@ -75,16 +75,11 @@ to store 2 TB of data in total.
- For the metadata storage, Garage does not do checksumming and integrity
verification on its own. If you are afraid of bitrot/data corruption,
- put your metadata directory on a BTRFS partition. Otherwise, just use regular
+ put your metadata directory on a ZFS or BTRFS partition. Otherwise, just use regular
EXT4 or XFS.
-- Having a single server with several storage drives is currently not very well
- supported in Garage ([#218](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/218)).
- For an easy setup, just put all your drives in a RAID0 or a ZFS RAIDZ array.
- If you're adventurous, you can try to format each of your disk as
- a separate XFS partition, and then run one `garage` daemon per disk drive,
- or use something like [`mergerfs`](https://github.com/trapexit/mergerfs) to merge
- all your disks in a single union filesystem that spreads load over them.
+- Servers with multiple HDDs are supported natively by Garage without resorting
+ to RAID; see [our dedicated documentation page](@/documentation/operations/multi-hdd.md).
## Get a Docker image
diff --git a/doc/book/operations/durability-repairs.md b/doc/book/operations/durability-repairs.md
index 498c8fda..b0d2c78a 100644
--- a/doc/book/operations/durability-repairs.md
+++ b/doc/book/operations/durability-repairs.md
@@ -91,6 +91,16 @@ is definitely lost, then there is no other choice than to declare your S3 object
as unrecoverable, and to delete them properly from the data store. This can be done
using the `garage block purge` command.
+## Rebalancing data directories
+
+In [multi-HDD setups](@/documentation/operations/multi-hdd.md), to ensure that
+data blocks are well balanced between storage locations, you may run a
+rebalance operation using `garage repair rebalance`. This is useful when
+adding storage locations or when the capacities of the storage locations have
+been changed. Once this is finished, Garage will know of a single possible
+location for each block, which can increase access speed. This operation will
+also move all data out of locations marked as read-only.
+
# Metadata operations
@@ -114,4 +124,3 @@ in your cluster, you can run one of the following repair procedures:
- `garage repair versions`: checks that all versions belong to a non-deleted object, and purges any orphan version
- `garage repair block_refs`: checks that all block references belong to a non-deleted object version, and purges any orphan block reference (this will then allow the blocks to be garbage-collected)
-
diff --git a/doc/book/operations/multi-hdd.md b/doc/book/operations/multi-hdd.md
new file mode 100644
index 00000000..36445b0a
--- /dev/null
+++ b/doc/book/operations/multi-hdd.md
@@ -0,0 +1,101 @@
++++
+title = "Multi-HDD support"
+weight = 15
++++
+
+
+Since v0.9, Garage natively supports nodes that have several storage drives
+for storing data blocks (not for metadata storage).
+
+## Initial setup
+
+To set up a new Garage storage node with multiple HDDs,
+format and mount all your drives in different directories,
+and use a Garage configuration as follows:
+
+```toml
+data_dir = [
+ { path = "/path/to/hdd1", capacity = "2T" },
+ { path = "/path/to/hdd2", capacity = "4T" },
+]
+```
+
+Garage will automatically balance all blocks stored by the node
+among the different specified directories, proportionally to the
+specified capacities.
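+
+For reference, the drives themselves could be prepared as follows before
+starting Garage. This is only a sketch: the device names `/dev/sdb` and
+`/dev/sdc` are placeholders for your actual drives, and any filesystem
+supported by your system can be used (remember to also add matching entries
+to `/etc/fstab` so the mounts persist across reboots):
+
+```bash
+# Hypothetical device names; adapt to your hardware and preferred filesystem
+mkfs.xfs /dev/sdb
+mkfs.xfs /dev/sdc
+
+# Mount points matching the data_dir entries above
+mkdir -p /path/to/hdd1 /path/to/hdd2
+mount /dev/sdb /path/to/hdd1
+mount /dev/sdc /path/to/hdd2
+```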
+
+## Updating the list of storage locations
+
+If you add new storage locations to your `data_dir`,
+Garage will not rebalance existing data between storage locations.
+Newly written blocks will be balanced proportionally to the specified capacities,
+and existing data may be moved between drives to improve balancing,
+but only opportunistically when a data block is re-written (e.g. an object
+is re-uploaded, or an object with a duplicate block is uploaded).
+
+To understand precisely what is happening, we need to dive into how Garage
+splits data among the different storage locations.
+
+First of all, Garage divides the set of all possible block hashes
+into a fixed number of slices (currently 1024), and assigns
+to each slice a primary storage location among the specified data directories.
+The number of slices having their primary location in each data directory
+is proportional to the capacity specified in the config file.
+
+When Garage receives a block to write, it will always write it in the primary
+directory of the slice that contains its hash.
+
+Now, to avoid losing existing data blocks when storage locations
+are added, Garage also keeps a list of secondary data directories
+for all of the hash slices. The secondary data directories of a slice indicate
+storage locations that once were primary directories for that slice, i.e. where
+Garage knows that data blocks of that slice might be stored.
+When Garage is requested to read a certain data block,
+it will first look in the primary storage directory of its slice,
+and if it doesn't find it there, it will go through all of the secondary storage
+locations until it finds it. This allows Garage to continue operating
+normally when storage locations are added, without having to shuffle
+files between drives to place them in the correct location.
+
+This relatively simple strategy works well but does not ensure that data
+is correctly balanced among drives according to their capacity.
+To rebalance data, two strategies can be used:
+
+- Lazy rebalancing: when a block is re-written (e.g. the object is re-uploaded),
+ Garage checks whether the existing copy is in the primary directory of the slice
+ or in a secondary directory. If the current copy is in a secondary directory,
+ Garage re-writes a copy in the primary directory and deletes the one from the
+ secondary directory. This might never end up rebalancing everything if there
+ are data blocks that are only read and never written.
+
+- Active rebalancing: an operator of a Garage node can explicitly launch a repair
+ procedure that rebalances the data directories, moving all blocks to their
+ primary location. Once done, all secondary locations for all hash slices are
+ removed so that they won't be checked anymore when looking for a data block
+ (see the example below).
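+
+In practice, active rebalancing is launched with the `garage repair rebalance`
+command documented on the page about durability and repairs. A minimal sketch
+(depending on your version, `garage repair` may require additional
+confirmation or node-selection flags; check `garage repair --help`):
+
+```bash
+# Explicitly move all blocks to the primary directory of their slice
+garage repair rebalance
+
+# Afterwards, disk usage should roughly follow the configured capacities
+du -sh /path/to/hdd1 /path/to/hdd2
+```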
+
+## Read-only storage locations
+
+If you would like to move all data blocks from an existing data directory to one
+or several new data directories, mark the old directory as read-only:
+
+```toml
+data_dir = [
+ { path = "/path/to/old_data", read_only = true },
+ { path = "/path/to/new_hdd1", capacity = "2T" },
+ { path = "/path/to/new_hdd2", capacity = "4T" },
+]
+```
+
+Garage will be able to read requested blocks from the read-only directory.
+Garage will also move data out of the read-only directory either progressively
+(lazy rebalancing) or when explicitly requested (active rebalancing).
+
+Once an active rebalancing has finished, your read-only directory should be empty:
+it might still contain subdirectories, but no data files. You can check that
+it contains no files using:
+
+```bash
+find /path/to/old_data -type f  # should not print anything
+```
+
+Once it is confirmed to be empty, the directory can be removed from the `data_dir` list in your config file.
diff --git a/doc/book/operations/upgrading.md b/doc/book/operations/upgrading.md
index e8919a19..9a738282 100644
--- a/doc/book/operations/upgrading.md
+++ b/doc/book/operations/upgrading.md
@@ -80,6 +80,6 @@ The entire procedure would look something like this:
5. If any specific migration procedure is required, it is usually in one of the two cases:
- It can be run on online nodes after the new version has started, during regular cluster operation.
- - it has to be run offline
+ - It has to be run offline, in which case you will have to take all nodes offline again, one after the other, to run the repair
For this last step, please refer to the specific documentation pertaining to the version upgrade you are doing.
diff --git a/doc/book/quick-start/_index.md b/doc/book/quick-start/_index.md
index 4f974ea5..bd64e3eb 100644
--- a/doc/book/quick-start/_index.md
+++ b/doc/book/quick-start/_index.md
@@ -209,7 +209,7 @@ one key can access multiple buckets, multiple keys can access one bucket.
Create an API key using the following command:
```
-garage key new --name nextcloud-app-key
+garage key create nextcloud-app-key
```
The output should look as follows:
diff --git a/doc/book/reference-manual/configuration.md b/doc/book/reference-manual/configuration.md
index 08c013f7..f07fb1e0 100644
--- a/doc/book/reference-manual/configuration.md
+++ b/doc/book/reference-manual/configuration.md
@@ -10,6 +10,8 @@ Here is an example `garage.toml` configuration file that illustrates all of the
```toml
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
+metadata_fsync = true
+data_fsync = false
db_engine = "lmdb"
@@ -90,6 +92,19 @@ This folder can be placed on an HDD. The space available for `data_dir`
should be counted to determine a node's capacity
when [adding it to the cluster layout](@/documentation/cookbook/real-world.md).
+Since `v0.9.0`, Garage supports multiple data directories with the following syntax:
+
+```toml
+data_dir = [
+ { path = "/path/to/old_data", read_only = true },
+ { path = "/path/to/new_hdd1", capacity = "2T" },
+ { path = "/path/to/new_hdd2", capacity = "4T" },
+]
+```
+
+See [the dedicated documentation page](@/documentation/operations/multi-hdd.md)
+on how to operate Garage in such a setup.
+
### `db_engine` (since `v0.8.0`)
By default, Garage uses the Sled embedded database library
@@ -131,6 +146,49 @@ convert-db -a <input db engine> -i <input db path> \
Make sure to specify the full database path as presented in the table above,
and not just the path to the metadata directory.
+### `metadata_fsync`
+
+Whether to enable synchronous mode for the database engine or not.
+This is disabled (`false`) by default.
+
+This reduces the risk of metadata corruption in case of power failures,
+at the cost of a significant drop in write performance,
+as Garage will have to pause to sync data to disk much more often
+(several times for API calls such as PutObject).
+
+Using this option reduces the risk of simultaneous metadata corruption on several
+cluster nodes, which could lead to data loss.
+
+If multi-site replication is used, this option is most likely not necessary, as
+it is extremely unlikely that two nodes in different locations will have a
+power failure at the exact same time.
+
+(Metadata corruption on a single node is not an issue, as the corrupted data file
+can always be deleted and reconstructed from the other nodes in the cluster.)
+
+Here is how this option impacts the different database engines:
+
+| Database | `metadata_fsync = false` (default) | `metadata_fsync = true` |
+|----------|------------------------------------|-------------------------------|
+| Sled | default options | *unsupported* |
+| Sqlite | `PRAGMA synchronous = OFF` | `PRAGMA synchronous = NORMAL` |
+| LMDB | `MDB_NOMETASYNC` + `MDB_NOSYNC` | `MDB_NOMETASYNC` |
+
+Note that the Sqlite database is always run in `WAL` mode (`PRAGMA journal_mode = WAL`).
+
+### `data_fsync`
+
+Whether to `fsync` data blocks and their containing directory after they are
+saved to disk.
+This is disabled (`false`) by default.
+
+This might reduce the risk that a data block is lost in rare
+situations such as several nodes simultaneously losing power,
+at the cost of a moderate drop in write performance.
+
+Similarly to `metadata_fsync`, this is likely not necessary
+if geographical replication is used.
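+
+As an illustration, a single-site deployment that favours metadata durability
+over raw write speed might combine the two options as follows (a sketch only;
+tune this to your own durability and performance requirements):
+
+```toml
+metadata_fsync = true   # accept slower writes to protect metadata
+data_fsync = false      # rely on replication to recover lost data blocks
+```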
+
### `block_size`
Garage splits stored objects in consecutive chunks of size `block_size`
diff --git a/doc/book/reference-manual/s3-compatibility.md b/doc/book/reference-manual/s3-compatibility.md
index 15b29bd1..1bcfd123 100644
--- a/doc/book/reference-manual/s3-compatibility.md
+++ b/doc/book/reference-manual/s3-compatibility.md
@@ -75,16 +75,13 @@ but these endpoints are documented in [Red Hat Ceph Storage - Chapter 2. Ceph Ob
| Endpoint | Garage | [Openstack Swift](https://docs.openstack.org/swift/latest/s3_compat.html) | [Ceph Object Gateway](https://docs.ceph.com/en/latest/radosgw/s3/) | [Riak CS](https://docs.riak.com/riak/cs/2.1.1/references/apis/storage/s3/index.html) | [OpenIO](https://docs.openio.io/latest/source/arch-design/s3_compliancy.html) |
|------------------------------|----------------------------------|-----------------|---------------|---------|-----|
-| [AbortMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html) | ✅ Implemented | ✅ | ✅ | ✅ | ✅ |
-| [CompleteMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html) | ✅ Implemented (see details below) | ✅ | ✅ | ✅ | ✅ |
-| [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html) | ✅ Implemented | ✅| ✅ | ✅ | ✅ |
-| [ListMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUpload.html) | ✅ Implemented | ✅ | ✅ | ✅ | ✅ |
-| [ListParts](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html) | ✅ Implemented | ✅ | ✅ | ✅ | ✅ |
-| [UploadPart](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html) | ✅ Implemented (see details below) | ✅ | ✅| ✅ | ✅ |
-| [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html) | ✅ Implemented | ✅ | ✅ | ✅ | ✅ |
-
-Our implementation of Multipart Upload is currently a bit more restrictive than Amazon's one in some edge cases.
-For more information, please refer to our [issue tracker](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/204).
+| [AbortMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html) | ✅ Implemented | ✅ | ✅ | ✅ | ✅ |
+| [CompleteMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html) | ✅ Implemented | ✅ | ✅ | ✅ | ✅ |
+| [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html) | ✅ Implemented | ✅| ✅ | ✅ | ✅ |
+| [ListMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUpload.html) | ✅ Implemented | ✅ | ✅ | ✅ | ✅ |
+| [ListParts](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html) | ✅ Implemented | ✅ | ✅ | ✅ | ✅ |
+| [UploadPart](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html) | ✅ Implemented | ✅ | ✅| ✅ | ✅ |
+| [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html) | ✅ Implemented | ✅ | ✅ | ✅ | ✅ |
### Website endpoints
@@ -127,15 +124,22 @@ If you need this feature, please [share your use case in our dedicated issue](ht
| Endpoint | Garage | [Openstack Swift](https://docs.openstack.org/swift/latest/s3_compat.html) | [Ceph Object Gateway](https://docs.ceph.com/en/latest/radosgw/s3/) | [Riak CS](https://docs.riak.com/riak/cs/2.1.1/references/apis/storage/s3/index.html) | [OpenIO](https://docs.openio.io/latest/source/arch-design/s3_compliancy.html) |
|------------------------------|----------------------------------|-----------------|---------------|---------|-----|
-| [DeleteBucketLifecycle](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketLifecycle.html) | ❌ Missing | ❌| ✅| ❌| ✅|
-| [GetBucketLifecycleConfiguration](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLifecycleConfiguration.html) | ❌ Missing | ❌| ✅ | ❌| ✅|
-| [PutBucketLifecycleConfiguration](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycleConfiguration.html) | ❌ Missing | ❌| ✅ | ❌| ✅|
+| [DeleteBucketLifecycle](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketLifecycle.html) | ✅ Implemented | ❌| ✅| ❌| ✅|
+| [GetBucketLifecycleConfiguration](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLifecycleConfiguration.html) | ✅ Implemented | ❌| ✅ | ❌| ✅|
+| [PutBucketLifecycleConfiguration](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycleConfiguration.html) | ⚠ Partially implemented (see below) | ❌| ✅ | ❌| ✅|
| [GetBucketVersioning](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html) | ❌ Stub (see below) | ✅| ✅ | ❌| ✅|
| [ListObjectVersions](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html) | ❌ Missing | ❌| ✅ | ❌| ✅|
| [PutBucketVersioning](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html) | ❌ Missing | ❌| ✅| ❌| ✅|
+**PutBucketLifecycleConfiguration:** The only actions supported are
+`AbortIncompleteMultipartUpload` and `Expiration` (without the
+`ExpiredObjectDeleteMarker` field). All other actions depend on either bucket
+versioning or storage classes, neither of which Garage currently implements.
+The deprecated `Prefix` member directly in the `Rule` structure/XML tag is not
+supported; specified prefixes must be inside the `Filter` structure/XML tag.
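+
+As a hypothetical illustration, a lifecycle configuration that only uses the
+supported actions could be applied with the AWS CLI as follows (the bucket
+name, endpoint URL, prefixes and durations are placeholders, and credentials
+for your Garage key are assumed to be configured):
+
+```bash
+aws s3api put-bucket-lifecycle-configuration \
+  --endpoint-url http://localhost:3900 \
+  --bucket my-bucket \
+  --lifecycle-configuration '{
+    "Rules": [
+      {
+        "ID": "abort-stale-multipart-uploads",
+        "Status": "Enabled",
+        "Filter": { "Prefix": "" },
+        "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
+      },
+      {
+        "ID": "expire-tmp-objects",
+        "Status": "Enabled",
+        "Filter": { "Prefix": "tmp/" },
+        "Expiration": { "Days": 30 }
+      }
+    ]
+  }'
+```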
-**GetBucketVersioning:** Stub implementation (Garage does not yet support versionning so this always returns "versionning not enabled").
+**GetBucketVersioning:** Stub implementation which always returns "versioning not enabled", since Garage does not yet support bucket versioning.
### Replication endpoints