-rw-r--r--  content/blog/2022-v0.7-released.md  185
1 files changed, 180 insertions, 5 deletions
diff --git a/content/blog/2022-v0.7-released.md b/content/blog/2022-v0.7-released.md
index 8c34703..be2939a 100644
--- a/content/blog/2022-v0.7-released.md
+++ b/content/blog/2022-v0.7-released.md
@@ -1,5 +1,5 @@
+++
-title="Garage v0.7: a tour of the new features"
+title="Garage v0.7: Kubernetes and OpenTelemetry"
date=2022-04-04
+++
@@ -11,17 +11,192 @@ date=2022-04-04
Two months ago, we were impressed by the success of our open beta launch at FOSDEM and on Hacker News: [our initial post](https://garagehq.deuxfleurs.fr/blog/2022-introducing-garage/) led to more than 40k views in 10 days, peaking at 100 views/minute.
Since this event, we continued improving Garage, and 2 months after the initial release, we are happy to announce a new version: v0.7.0.
-We would like to thank all the contributors that made this new release possible: Alex, Jill, Max Audron, Maximilien, Quentin, Rune Henrisken, Steam, and trinity-1686a.
-This is also for the first time for Garage that we have contributions from outside of our organization: we are very proud and we want to renew our commitment to foster an open community around Garage.
+First of all, we would like to thank all the contributors who made this new release possible: Alex, Jill, Max Audron, Maximilien, Quentin, Rune Henrisken, Steam, and trinity-1686a.
+This is also the first time that Garage receives contributions from outside our organization: we are very happy about it, as we want to build a community-driven project.
If you want to test this new version, you have two options: use the binaries we provide or the ones packaged by your OS.
We ship [statically compiled binaries](https://garagehq.deuxfleurs.fr/download/) for Linux (amd64, i386, aarch64 and armv6) and their associated [Docker containers](https://hub.docker.com/u/dxflrs).
Garage is also packaged by some OSes/distributions; we are currently aware of [FreeBSD](https://cgit.freebsd.org/ports/tree/www/garage/Makefile) and [AUR for Arch Linux](https://aur.archlinux.org/packages/garage).
Feel free to [reach us](mailto:garagehq@deuxfleurs.fr) if you are packaging or planning to package Garage: we are willing to adapt our software to make packaging easier, and we plan to reference your work in our documentation.
-Obviously, this new version includes many bug fixes that are listed in our [changelogs](https://git.deuxfleurs.fr/Deuxfleurs/garage/releases), but also 2 new features: Kubernetes integration and OpenTelemetry support, we review them in the following.
+
+Speaking of the changes in this new version, it obviously includes many bug fixes.
+We listed them in our [changelogs](https://git.deuxfleurs.fr/Deuxfleurs/garage/releases); take a look, we might have fixed something that annoyed you!
+In this blog post, we want to focus on another aspect of this release: its two new features, a better Kubernetes integration and support for OpenTelemetry.
## Kubernetes integration
+Before Garage v0.7.0, you had to deploy a Consul cluster or spawn a coordinating pod to deploy Garage on Kubernetes.
+In this new version, Garage integrates a way to discover other peers by using Kubernetes [Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/), easing your deployments.
+Garage is even able to automatically create the [Custom Resource Definition](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) (CRD) on startup before using it for peer discovery.
+
+Let's see how it works in practice with a minimal example (neither secured nor suitable for production).
+You can run it on [minikube](https://minikube.sigs.k8s.io) if you want a more interactive reading.
+
+Start by creating a [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) containing Garage's configuration (let's name it `config.yaml`):
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: garage-config
+  namespace: default
+data:
+  garage.toml: |-
+    metadata_dir = "/mnt/fast"
+    data_dir = "/mnt/slow"
+
+    replication_mode = "3"
+
+    rpc_bind_addr = "[::]:3901"
+    rpc_secret = "<secret>"
+
+    bootstrap_peers = []
+
+    kubernetes_namespace = "default"
+    kubernetes_service_name = "garage-daemon"
+    kubernetes_skip_crd = false
+
+    [s3_api]
+    s3_region = "garage"
+    api_bind_addr = "[::]:3900"
+    root_domain = ".s3.garage.tld"
+
+    [s3_web]
+    bind_addr = "[::]:3902"
+    root_domain = ".web.garage.tld"
+    index = "index.html"
+```
+
+The three important parameters are `kubernetes_namespace`, `kubernetes_service_name`, and `kubernetes_skip_crd`.
+Configure them according to your planned deployment.
+The last one controls whether you want to create the CRD manually or allow Garage to create it automatically on boot.
+In this example, we keep it set to `false`, which means we allow Garage to automatically create the CRD.
+
+Apply this configuration on your cluster:
+
+```bash
+kubectl apply -f config.yaml
+```
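+
+If you want to double-check the result, you can inspect the ConfigMap we just applied:
+
+```bash
+kubectl get configmap garage-config -n default -o yaml
+```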
+
+Allowing Garage to create the CRD is not enough: the process must also have sufficient permissions to do so.
+A quick and insecure way to grant them is to create a [ClusterRoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) that gives admin rights to the `default` service account, effectively breaking Kubernetes' security model (we name this file `admin.yaml`):
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: garage-admin
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: cluster-admin
+subjects:
+- apiGroup: rbac.authorization.k8s.io
+  kind: User
+  name: system:serviceaccount:default:default
+```
+
+Apply it:
+
+```bash
+kubectl apply -f admin.yaml
+```
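+
+If you would rather not hand out `cluster-admin`, a more restrictive setup is possible. The sketch below only grants what Garage should need for discovery: creating the CRD and managing its own custom resources. The `deuxfleurs.fr` API group and the `garagenodes` resource name are assumptions about the CRD Garage creates, so verify them (for instance with `kubectl get crd`) before relying on this:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: garage-discovery
+rules:
+# Allow Garage to register its CRD at startup
+- apiGroups: [ "apiextensions.k8s.io" ]
+  resources: [ "customresourcedefinitions" ]
+  verbs: [ "get", "create" ]
+# Allow Garage to publish and read peer discovery entries
+# (API group and resource name are assumptions, check the CRD on your cluster)
+- apiGroups: [ "deuxfleurs.fr" ]
+  resources: [ "garagenodes" ]
+  verbs: [ "get", "list", "watch", "create", "update", "patch" ]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: garage-discovery
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: garage-discovery
+subjects:
+- kind: ServiceAccount
+  name: default
+  namespace: default
+```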
+
+Finally, we create a [StatefulSet](https://kubernetes.io/fr/docs/concepts/workloads/controllers/statefulset/) to run our service (`service.yaml`):
+
+```yaml
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: garage
+spec:
+  selector:
+    matchLabels:
+      app: garage
+  serviceName: "garage"
+  replicas: 3
+  template:
+    metadata:
+      labels:
+        app: garage
+    spec:
+      terminationGracePeriodSeconds: 10
+      containers:
+      - name: garage
+        image: dxflrs/amd64_garage:v0.7.0
+        ports:
+        - containerPort: 3900
+          name: s3-api
+        - containerPort: 3902
+          name: web-api
+        volumeMounts:
+        - name: fast
+          mountPath: /mnt/fast
+        - name: slow
+          mountPath: /mnt/slow
+        - name: etc
+          mountPath: /etc/garage.toml
+          subPath: garage.toml
+      volumes:
+      - name: etc
+        configMap:
+          name: garage-config
+  volumeClaimTemplates:
+  - metadata:
+      name: fast
+    spec:
+      accessModes: [ "ReadWriteOnce" ]
+      resources:
+        requests:
+          storage: 100Mi
+  - metadata:
+      name: slow
+    spec:
+      accessModes: [ "ReadWriteOnce" ]
+      resources:
+        requests:
+          storage: 100Mi
+```
+
+Garage is a stateful program, so it needs a stable place to store its data and metadata.
+This is provided by Kubernetes' [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/); among Kubernetes workload objects, only a [StatefulSet](https://kubernetes.io/fr/docs/concepts/workloads/controllers/statefulset/) can claim one persistent volume per replica through its `volumeClaimTemplates`, hence the choice of this K8S object to deploy our service.
+
+Kubernetes has many "drivers" for Persistent Volumes; for production use, we recommend **only** the `local` driver.
+Using other drivers may lead to huge performance issues or data corruption, probably both in practice.
+
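+For reference, a `local` volume is typically declared in two parts: a StorageClass without a dynamic provisioner, and a PersistentVolume pinned to the node that physically holds the disk. This is only a sketch; the class name, path, and node name are placeholders to adapt to your machines.
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: local-storage              # placeholder class name
+provisioner: kubernetes.io/no-provisioner
+volumeBindingMode: WaitForFirstConsumer
+---
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: garage-fast-node1          # placeholder volume name
+spec:
+  capacity:
+    storage: 100Mi
+  accessModes: [ "ReadWriteOnce" ]
+  persistentVolumeReclaimPolicy: Retain
+  storageClassName: local-storage
+  local:
+    path: /mnt/ssd/garage          # placeholder path on the node's disk
+  nodeAffinity:
+    required:
+      nodeSelectorTerms:
+      - matchExpressions:
+        - key: kubernetes.io/hostname
+          operator: In
+          values: [ "node1" ]      # placeholder node name
+```
+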
+In the example, we are claiming two volumes of 100MB each.
+We use two volumes instead of one because Garage stores its metadata separately from its data.
+With two volumes, you can reserve a smaller capacity on an SSD for the metadata and a larger capacity on a regular HDD for the data.
+Do not forget to change the reserved capacity: 100MB is only suitable for testing.
+
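+If you go for that SSD/HDD split, the claim templates of the StatefulSet above could point to two different storage classes; `local-ssd` and `local-hdd` are hypothetical class names and the sizes are only an illustration.
+
+```yaml
+  volumeClaimTemplates:
+  - metadata:
+      name: fast
+    spec:
+      accessModes: [ "ReadWriteOnce" ]
+      storageClassName: local-ssd    # hypothetical class backed by SSDs
+      resources:
+        requests:
+          storage: 10Gi              # metadata stays comparatively small
+  - metadata:
+      name: slow
+    spec:
+      accessModes: [ "ReadWriteOnce" ]
+      storageClassName: local-hdd    # hypothetical class backed by HDDs
+      resources:
+        requests:
+          storage: 500Gi             # bulk data goes on the larger disks
+```
+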
+*Note how we are mounting our ConfigMap: we need to set the `subPath` property to mount only the `garage.toml` file and not the whole `/etc` folder, which would prevent K8S from writing its own files in `/etc` and make the pod fail.*
+
+You can apply this file with:
+
+```bash
+kubectl apply -f service.yaml
+```
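+
+Before querying Garage, you can wait for the three replicas to be scheduled and running:
+
+```bash
+kubectl rollout status statefulset/garage
+kubectl get pods -l app=garage
+```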
+
+Now you are ready to interact with your cluster; each instance should have discovered the others:
+
+```bash
+kubectl exec -it garage-0 --container garage -- /garage status
+# ==== HEALTHY NODES ====
+# ID                Hostname  Address                    Tags  Zone  Capacity
+# e6284331c321a23c  garage-0  172.17.0.5:3901                        NO ROLE ASSIGNED
+# 570ff9b0ed3648a7  garage-2  [::ffff:172.17.0.7]:3901               NO ROLE ASSIGNED
+# e1990a2069429428  garage-1  [::ffff:172.17.0.6]:3901               NO ROLE ASSIGNED
+```
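+
+You can also check that Garage registered its discovery objects in the Kubernetes API. The exact names depend on the Garage version, so treat the resource names below as assumptions and adapt them to what `kubectl get crd` actually reports:
+
+```bash
+# list CRDs and look for the one created by Garage (its name is an assumption here)
+kubectl get crd | grep -i garage
+# list the per-node custom resources used for peer discovery
+# ("garagenodes" is an assumed resource name, use the one reported above)
+kubectl get garagenodes -n default
+```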
+
+Of course, for a full deployment, you will probably want to deploy a [Service](https://kubernetes.io/docs/concepts/services-networking/service/) and/or a reverse proxy in front of your cluster.
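+
+As an illustration, a minimal in-cluster Service exposing the S3 and web ports could look like the following; the Service name is a placeholder, and you would still need an Ingress or a reverse proxy to expose it outside the cluster:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: garage-s3             # placeholder Service name
+spec:
+  selector:
+    app: garage               # matches the label set on the StatefulSet pods
+  ports:
+  - name: s3-api
+    port: 3900
+    targetPort: 3900
+  - name: web-api
+    port: 3902
+    targetPort: 3902
+```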
+
+If Kubernetes is not your thing, know that we are running Garage on a Nomad+Consul cluster.
+We have not documented it yet, but you can take a look at [our Nomad service](https://git.deuxfleurs.fr/Deuxfleurs/infrastructure/src/commit/1e5e4af35c073d04698bb10dd4ad1330d6c62a0d/app/garage/deploy/garage.hcl).
+
## OpenTelemetry support
-T
+## And next?
+
+Our roadmap includes K2V, an allocation simulator, S3 compatibility work, gathering community feedback, and a whitepaper.
+