From 538a72d1982e1fd2e3cf582b427e8d84e7edf708 Mon Sep 17 00:00:00 2001 From: Quentin Date: Sat, 23 Jan 2021 18:28:29 +0100 Subject: Refactor website --- src/Guide/index.md | 0 "src/Technique/D\303\251veloppement/Garage.md" | 372 --------------------- .../D\303\251veloppement/Garage/Internals.md" | 156 +++++++++ .../D\303\251veloppement/Garage/Quickstart.md" | 140 ++++++++ .../D\303\251veloppement/Garage/Related Work.md" | 38 +++ .../D\303\251veloppement/Garage/index.md" | 28 ++ "src/Technique/D\303\251veloppement/index.md" | 0 src/_mixin/menu.pug | 3 +- src/_next/css/main.css | 82 +++++ src/_next/fonts/IMFellEnglish-Italic.ttf | Bin 0 -> 198756 bytes src/_next/fonts/IMFellEnglish-Regular.ttf | Bin 0 -> 191256 bytes src/_next/img/illus.jpg | Bin 0 -> 563445 bytes src/_next/img/paper-clip.png | Bin 0 -> 26353 bytes src/_next/index.pug | 39 +++ src/next/css/main.css | 82 ----- src/next/fonts/IMFellEnglish-Italic.ttf | Bin 198756 -> 0 bytes src/next/fonts/IMFellEnglish-Regular.ttf | Bin 191256 -> 0 bytes src/next/img/illus.jpg | Bin 563445 -> 0 bytes src/next/img/paper-clip.png | Bin 26353 -> 0 bytes src/next/index.pug | 39 --- 20 files changed, 485 insertions(+), 494 deletions(-) create mode 100644 src/Guide/index.md delete mode 100644 "src/Technique/D\303\251veloppement/Garage.md" create mode 100644 "src/Technique/D\303\251veloppement/Garage/Internals.md" create mode 100644 "src/Technique/D\303\251veloppement/Garage/Quickstart.md" create mode 100644 "src/Technique/D\303\251veloppement/Garage/Related Work.md" create mode 100644 "src/Technique/D\303\251veloppement/Garage/index.md" create mode 100644 "src/Technique/D\303\251veloppement/index.md" create mode 100644 src/_next/css/main.css create mode 100644 src/_next/fonts/IMFellEnglish-Italic.ttf create mode 100644 src/_next/fonts/IMFellEnglish-Regular.ttf create mode 100644 src/_next/img/illus.jpg create mode 100644 src/_next/img/paper-clip.png create mode 100644 src/_next/index.pug delete mode 100644 src/next/css/main.css delete mode 100644 src/next/fonts/IMFellEnglish-Italic.ttf delete mode 100644 src/next/fonts/IMFellEnglish-Regular.ttf delete mode 100644 src/next/img/illus.jpg delete mode 100644 src/next/img/paper-clip.png delete mode 100644 src/next/index.pug (limited to 'src') diff --git a/src/Guide/index.md b/src/Guide/index.md new file mode 100644 index 0000000..e69de29 diff --git "a/src/Technique/D\303\251veloppement/Garage.md" "b/src/Technique/D\303\251veloppement/Garage.md" deleted file mode 100644 index d80dae8..0000000 --- "a/src/Technique/D\303\251veloppement/Garage.md" +++ /dev/null @@ -1,372 +0,0 @@ -# Garage: a S3-like object storage - -Store pile of bytes in your garage. - -## Quickstart on an existing deployment - -First, chances are that your garage deployment is secured by TLS. -All your commands must be prefixed with their certificates. -I will define an alias once and for all to ease future commands. -Please adapt the path of the binary and certificates to your installation! - -``` -alias grg="/garage/garage --ca-cert /secrets/garage-ca.crt --client-cert /secrets/garage.crt --client-key /secrets/garage.key" -``` - -Now we can check that everything is going well by checking our cluster status: - -``` -grg status -``` - -Don't forget that `help` command and `--help` subcommands can help you anywhere, the CLI tool is self-documented! 
Two examples: - -``` -grg help -grg bucket allow --help -``` - -Fine, now let's create a bucket (we imagine that you want to deploy nextcloud): - -``` -grg bucket create nextcloud-bucket -``` - -Check that everything went well: - -``` -grg bucket list -grg bucket info nextcloud-bucket -``` - -Now we will generate an API key to access this bucket. -Note that API keys are independent of buckets: one key can access multiple buckets, multiple keys can access one bucket. - -Now, let's start by creating a key only for our PHP application: - -``` -grg key new --name nextcloud-app-key -``` - -You will have the following output (this one is fake, `key_id` and `secret_key` were generated with the openssl CLI tool): - -``` -Key { key_id: "GK3515373e4c851ebaad366558", secret_key: "7d37d093435a41f2aab8f13c19ba067d9776c90215f56614adad6ece597dbb34", name: "nextcloud-app-key", name_timestamp: 1603280506694, deleted: false, authorized_buckets: [] } -``` - -Check that everything works as intended (be careful, info works only with your key identifier and not with its friendly name!): - -``` -grg key list -grg key info GK3515373e4c851ebaad366558 -``` - -Now that we have a bucket and a key, we need to give permissions to the key on the bucket! - -``` -grg bucket allow --read --write nextcloud-bucket --key GK3515373e4c851ebaad366558 -``` - -You can check at any times allowed keys on your bucket with: - -``` -grg bucket info nextcloud-bucket -``` - -Now, let's move to the S3 API! -We will use the `s3cmd` CLI tool. -You can install it via your favorite package manager. -Otherwise, check [their website](https://s3tools.org/s3cmd) - -We will configure `s3cmd` with its interactive configuration tool, be careful not all endpoints are implemented! -Especially, the test run at the end does not work (yet). - -``` -$ s3cmd --configure - -Enter new values or accept defaults in brackets with Enter. -Refer to user manual for detailed description of all options. - -Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables. -Access Key: GK3515373e4c851ebaad366558 -Secret Key: 7d37d093435a41f2aab8f13c19ba067d9776c90215f56614adad6ece597dbb34 -Default Region [US]: garage - -Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3. -S3 Endpoint [s3.amazonaws.com]: garage.deuxfleurs.fr - -Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used -if the target S3 system supports dns based buckets. -DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: garage.deuxfleurs.fr - -Encryption password is used to protect your files from reading -by unauthorized persons while in transfer to S3 -Encryption password: -Path to GPG program [/usr/bin/gpg]: - -When using secure HTTPS protocol all communication with Amazon S3 -servers is protected from 3rd party eavesdropping. This method is -slower than plain HTTP, and can only be proxied with Python 2.7 or newer -Use HTTPS protocol [Yes]: - -On some networks all internet access must go through a HTTP proxy. 
-Try setting it here if you can't connect to S3 directly -HTTP Proxy server name: - -New settings: - Access Key: GK3515373e4c851ebaad366558 - Secret Key: 7d37d093435a41f2aab8f13c19ba067d9776c90215f56614adad6ece597dbb34 - Default Region: garage - S3 Endpoint: garage.deuxfleurs.fr - DNS-style bucket+hostname:port template for accessing a bucket: garage.deuxfleurs.fr - Encryption password: - Path to GPG program: /usr/bin/gpg - Use HTTPS protocol: True - HTTP Proxy server name: - HTTP Proxy server port: 0 - -Test access with supplied credentials? [Y/n] n - -Save settings? [y/N] y -Configuration saved to '/home/quentin/.s3cfg' -``` - -Now, if everything works, the following commands should work: - -``` -echo hello world > hello.txt -s3cmd put hello.txt s3://nextcloud-bucket -s3cmd ls s3://nextcloud-bucket -s3cmd rm s3://nextcloud-bucket/hello.txt -``` - -That's all for now! - -## Context - -Data storage is critical: it can lead to data loss if done badly and/or on hardware failure. -Filesystems + RAID can help on a single machine but a machine failure can put the whole storage offline. -Moreover, it put a hard limit on scalability. Often this limit can be pushed back far away by buying expensive machines. -But here we consider non specialized off the shelf machines that can be as low powered and subject to failures as a raspberry pi. - -Distributed storage may help to solve both availability and scalability problems on these machines. -Many solutions were proposed, they can be categorized as block storage, file storage and object storage depending on the abstraction they provide. - -## Related work - -Block storage is the most low level one, it's like exposing your raw hard drive over the network. -It requires very low latencies and stable network, that are often dedicated. -However it provides disk devices that can be manipulated by the operating system with the less constraints: it can be partitioned with any filesystem, meaning that it supports even the most exotic features. -We can cite [iSCSI](https://en.wikipedia.org/wiki/ISCSI) or [Fibre Channel](https://en.wikipedia.org/wiki/Fibre_Channel). -Openstack Cinder proxy previous solution to provide an uniform API. - -File storage provides a higher abstraction, they are one filesystem among others, which means they don't necessarily have all the exotic features of every filesystem. -Often, they relax some POSIX constraints while many applications will still be compatible without any modification. -As an example, we are able to run MariaDB (very slowly) over GlusterFS... -We can also mention CephFS (read [RADOS](https://ceph.com/wp-content/uploads/2016/08/weil-rados-pdsw07.pdf) whitepaper), Lustre, LizardFS, MooseFS, etc. -OpenStack Manila proxy previous solutions to provide an uniform API. - -Finally object storages provide the highest level abstraction. -They are the testimony that the POSIX filesystem API is not adapted to distributed filesystems. -Especially, the strong concistency has been dropped in favor of eventual consistency which is way more convenient and powerful in presence of high latencies and unreliability. -We often read about S3 that pioneered the concept that it's a filesystem for the WAN. -Applications must be adapted to work for the desired object storage service. -Today, the S3 HTTP REST API acts as a standard in the industry. -However, Amazon S3 source code is not open but alternatives were proposed. -We identified Minio, Pithos, Swift and Ceph. 
-Minio/Ceph enforces a total order, so properties similar to a (relaxed) filesystem. -Swift and Pithos are probably the most similar to AWS S3 with their consistent hashing ring. - -There were many attempts in research too. I am only thinking to [LBFS](https://pdos.csail.mit.edu/papers/lbfs:sosp01/lbfs.pdf) that was used as a basis for Seafile. - ----- - -## Random notes - - -- Cassandra (ScyllaDB) for metadata -- Own system using consistent hashing for data chunks - -**Quentin:** - -- pas d'erasure coding mais des checksums à côté des fichiers (ou dans les meta données) -- 2 ou 3 copies, configurable, potentiellement on a per bucket or per file basis -- on ne setup pas à la main en effet, je pensais au système qui scan sa partition de stockage et qui fait stockage géré = min(stockage de la partition - stockage que je ne gère pas, stockage alloué) -- La DHT/Ring à la dynamo, on doit pouvoir repomper un millier de truc sur leur papier. Surtout que je me le suis déja cogné deux fois. Le nombre d'entrées que tu mets est un multiple de ton stockage. eg: 500 Go, on fait des tranches de 10 Go --> 50 entrées dans le ring. -- un protocole de maintenance pompé sur le papier dynamo avec de l'anti entropy qui vérifie les blobs et leurs checksums (en plus de la vérification réalisée à la lecture) -- une interface web qui te donne en presque direct la santé de ton cluster (noeuds en vie, états de la réplication des données, problèmes de checksums) - -**Other ideas:** - -- split objects in constant size blocks or use the SeaFile strategy for better de-duplication? (Content Defined Chunking, Rabin's algorithm etc) - -_Remark 1_ I really like the Rabin fingerprinting approach however deduplication means we need to implement reference counting. How do we implement it? If we suppose a CRDT counter, if we do +1, +1, -1 but counter is registered as +1, -1, +1, we are at zero at one point and lost ou chunk. ---> we need to be careful in our implementation if we want to play. - -_Remark 2_ Seafile idea has been stolen from this article: https://pdos.csail.mit.edu/papers/lbfs:sosp01/lbfs.pdf - - - ---> we should not talk about block. It is the abstraction that manipulate your FS to interact with your hard drive. "Chunk" is probably more appropriate. Block storage are a class of distributed storage where you expose the abstraction of your hard drive over the network, mainly SATA over ethernet, thinking to SCSI, FiberChannel, and so on - -### Questions à résoudre - - 1. est-ce que cassandra support de mettre certaines tables sur un SSD et d'autres sur un disque rotatif ? - 2. est-ce que cassandra/scylladb a un format de table on disk qui ne s'écroule pas complètement losque tu as des gros blobs ? (les devs de sqlite ont écrit tout un article pour dire que même avec leur lib qui est quand même sacrément optimisés, ils considèrent qu'à partir de je crois 4ko c'est plus efficace de mettre les blobs dans des fichiers séparés) - https://www.sqlite.org/intern-v-extern-blob.html - 3. Quelle taille de blocs ? L'idée c'est qu'on a quand même des liens en WAN avec des débits pas forcéments incroyables. Et ça serait bien que le temps de répliquer un bloc soit de l'ordre de la seconde maxi. En cas de retry, pour pouvoir mieux monitorer la progression, etc. Exoscale utilise 16Mo. LX propose 1Mo. - -#### Modules - -- `membership/`: configuration, membership management (gossip of node's presence and status), ring generation --> what about Serf (used by Consul/Nomad) : https://www.serf.io/? 
Seems a huge library with many features so maybe overkill/hard to integrate -- `metadata/`: metadata management -- `blocks/`: block management, writing, GC and rebalancing -- `internal/`: server to server communication (HTTP server and client that reuses connections, TLS if we want, etc) -- `api/`: S3 API -- `web/`: web management interface - -#### Metadata tables - -**Objects:** - -- *Hash key:* Bucket name (string) -- *Sort key:* Object key (string) -- *Sort key:* Version timestamp (int) -- *Sort key:* Version UUID (string) -- Complete: bool -- Inline: bool, true for objects < threshold (say 1024) -- Object size (int) -- Mime type (string) -- Data for inlined objects (blob) -- Hash of first block otherwise (string) - -*Having only a hash key on the bucket name will lead to storing all file entries of this table for a specific bucket on a single node. At the same time, it is the only way I see to rapidly being able to list all bucket entries...* - -**Blocks:** - -- *Hash key:* Version UUID (string) -- *Sort key:* Offset of block in total file (int) -- Hash of data block (string) - -A version is defined by the existence of at least one entry in the blocks table for a certain version UUID. -We must keep the following invariant: if a version exists in the blocks table, it has to be referenced in the objects table. -We explicitly manage concurrent versions of an object: the version timestamp and version UUID columns are index columns, thus we may have several concurrent versions of an object. -Important: before deleting an older version from the objects table, we must make sure that we did a successfull delete of the blocks of that version from the blocks table. - -Thus, the workflow for reading an object is as follows: - -1. Check permissions (LDAP) -2. Read entry in object table. If data is inline, we have its data, stop here. - -> if several versions, take newest one and launch deletion of old ones in background -3. Read first block from cluster. If size <= 1 block, stop here. -4. Simultaneously with previous step, if size > 1 block: query the Blocks table for the IDs of the next blocks -5. Read subsequent blocks from cluster - -Workflow for PUT: - -1. Check write permission (LDAP) -2. Select a new version UUID -3. Write a preliminary entry for the new version in the objects table with complete = false -4. Send blocks to cluster and write entries in the blocks table -5. Update the version with complete = true and all of the accurate information (size, etc) -6. Return success to the user -7. Launch a background job to check and delete older versions - -Workflow for DELETE: - -1. Check write permission (LDAP) -2. Get current version (or versions) in object table -3. Do the deletion of those versions NOT IN A BACKGROUND JOB THIS TIME -4. Return succes to the user if we were able to delete blocks from the blocks table and entries from the object table - -To delete a version: - -1. List the blocks from Cassandra -2. For each block, delete it from cluster. Don't care if some deletions fail, we can do GC. -3. Delete all of the blocks from the blocks table -4. Finally, delete the version from the objects table - -Known issue: if someone is reading from a version that we want to delete and the object is big, the read might be interrupted. I think it is ok to leave it like this, we just cut the connection if data disappears during a read. 
- -("Soit P un problème, on s'en fout est une solution à ce problème") - -#### Block storage on disk - -**Blocks themselves:** - -- file path = /blobs/(first 3 hex digits of hash)/(rest of hash) - -**Reverse index for GC & other block-level metadata:** - -- file path = /meta/(first 3 hex digits of hash)/(rest of hash) -- map block hash -> set of version UUIDs where it is referenced - -Usefull metadata: - -- list of versions that reference this block in the Casandra table, so that we can do GC by checking in Cassandra that the lines still exist -- list of other nodes that we know have acknowledged a write of this block, usefull in the rebalancing algorithm - -Write strategy: have a single thread that does all write IO so that it is serialized (or have several threads that manage independent parts of the hash space). When writing a blob, write it to a temporary file, close, then rename so that a concurrent read gets a consistent result (either not found or found with whole content). - -Read strategy: the only read operation is get(hash) that returns either the data or not found (can do a corruption check as well and return corrupted state if it is the case). Can be done concurrently with writes. - -**Internal API:** - -- get(block hash) -> ok+data/not found/corrupted -- put(block hash & data, version uuid + offset) -> ok/error -- put with no data(block hash, version uuid + offset) -> ok/not found plz send data/error -- delete(block hash, version uuid + offset) -> ok/error - -GC: when last ref is deleted, delete block. -Long GC procedure: check in Cassandra that version UUIDs still exist and references this block. - -Rebalancing: takes as argument the list of newly added nodes. - -- List all blocks that we have. For each block: -- If it hits a newly introduced node, send it to them. - Use put with no data first to check if it has to be sent to them already or not. - Use a random listing order to avoid race conditions (they do no harm but we might have two nodes sending the same thing at the same time thus wasting time). -- If it doesn't hit us anymore, delete it and its reference list. - -Only one balancing can be running at a same time. It can be restarted at the beginning with new parameters. 
- -#### Membership management - -Two sets of nodes: - -- set of nodes from which a ping was recently received, with status: number of stored blocks, request counters, error counters, GC%, rebalancing% - (eviction from this set after say 30 seconds without ping) -- set of nodes that are part of the system, explicitly modified by the operator using the web UI (persisted to disk), - is a CRDT using a version number for the value of the whole set - -Thus, three states for nodes: - -- healthy: in both sets -- missing: not pingable but part of desired cluster -- unused/draining: currently present but not part of the desired cluster, empty = if contains nothing, draining = if still contains some blocks - -Membership messages between nodes: - -- ping with current state + hash of current membership info -> reply with same info -- send&get back membership info (the ids of nodes that are in the two sets): used when no local membership change in a long time and membership info hash discrepancy detected with first message (passive membership fixing with full CRDT gossip) -- inform of newly pingable node(s) -> no result, when receive new info repeat to all (reliable broadcast) -- inform of operator membership change -> no result, when receive new info repeat to all (reliable broadcast) - -Ring: generated from the desired set of nodes, however when doing read/writes on the ring, skip nodes that are known to be not pingable. -The tokens are generated in a deterministic fashion from node IDs (hash of node id + token number from 1 to K). -Number K of tokens per node: decided by the operator & stored in the operator's list of nodes CRDT. Default value proposal: with node status information also broadcast disk total size and free space, and propose a default number of tokens equal to 80%Free space / 10Gb. (this is all user interface) - - -#### Constants - -- Block size: around 1MB ? --> Exoscale use 16MB chunks -- Number of tokens in the hash ring: one every 10Gb of allocated storage -- Threshold for storing data directly in Cassandra objects table: 1kb bytes (maybe up to 4kb?) -- Ping timeout (time after which a node is registered as unresponsive/missing): 30 seconds -- Ping interval: 10 seconds -- ?? - -#### Links - -- CDC: -- Erasure coding: -- [Openstack Storage Concepts](https://docs.openstack.org/arch-design/design-storage/design-storage-concepts.html) -- [RADOS](https://ceph.com/wp-content/uploads/2016/08/weil-rados-pdsw07.pdf) diff --git "a/src/Technique/D\303\251veloppement/Garage/Internals.md" "b/src/Technique/D\303\251veloppement/Garage/Internals.md" new file mode 100644 index 0000000..dd98246 --- /dev/null +++ "b/src/Technique/D\303\251veloppement/Garage/Internals.md" @@ -0,0 +1,156 @@ +#### Modules + +- `membership/`: configuration, membership management (gossip of node's presence and status), ring generation --> what about Serf (used by Consul/Nomad) : https://www.serf.io/? 
Seems a huge library with many features, so maybe overkill/hard to integrate
+- `metadata/`: metadata management
+- `blocks/`: block management, writing, GC and rebalancing
+- `internal/`: server-to-server communication (HTTP server and client that reuses connections, TLS if we want, etc.)
+- `api/`: S3 API
+- `web/`: web management interface
+
+#### Metadata tables
+
+**Objects:**
+
+- *Hash key:* Bucket name (string)
+- *Sort key:* Object key (string)
+- *Sort key:* Version timestamp (int)
+- *Sort key:* Version UUID (string)
+- Complete: bool
+- Inline: bool, true for objects smaller than a threshold (say 1024 bytes)
+- Object size (int)
+- Mime type (string)
+- Data for inlined objects (blob)
+- Hash of first block otherwise (string)
+
+*Having only a hash key on the bucket name will lead to storing all file entries of this table for a specific bucket on a single node. At the same time, it is the only way I see to be able to rapidly list all bucket entries...*
+
+**Blocks:**
+
+- *Hash key:* Version UUID (string)
+- *Sort key:* Offset of block in total file (int)
+- Hash of data block (string)
+
+A version is defined by the existence of at least one entry in the blocks table for a certain version UUID.
+We must keep the following invariant: if a version exists in the blocks table, it has to be referenced in the objects table.
+We explicitly manage concurrent versions of an object: the version timestamp and version UUID columns are index columns, thus we may have several concurrent versions of an object.
+Important: before deleting an older version from the objects table, we must make sure that we did a successful delete of the blocks of that version from the blocks table.
+
+Thus, the workflow for reading an object is as follows:
+
+1. Check permissions (LDAP)
+2. Read entry in object table. If data is inline, we have its data, stop here.
+  -> if several versions, take the newest one and launch deletion of the old ones in the background
+3. Read first block from cluster. If size <= 1 block, stop here.
+4. Simultaneously with the previous step, if size > 1 block: query the blocks table for the IDs of the next blocks
+5. Read subsequent blocks from cluster
+
+Workflow for PUT:
+
+1. Check write permission (LDAP)
+2. Select a new version UUID
+3. Write a preliminary entry for the new version in the objects table with complete = false
+4. Send blocks to cluster and write entries in the blocks table
+5. Update the version with complete = true and all of the accurate information (size, etc.)
+6. Return success to the user
+7. Launch a background job to check and delete older versions
+
+Workflow for DELETE:
+
+1. Check write permission (LDAP)
+2. Get current version (or versions) in object table
+3. Do the deletion of those versions NOT IN A BACKGROUND JOB THIS TIME
+4. Return success to the user if we were able to delete blocks from the blocks table and entries from the object table
+
+To delete a version:
+
+1. List the blocks from Cassandra
+2. For each block, delete it from cluster. Don't care if some deletions fail, we can do GC.
+3. Delete all of the blocks from the blocks table
+4. Finally, delete the version from the objects table
+
+Known issue: if someone is reading from a version that we want to delete and the object is big, the read might be interrupted. I think it is OK to leave it like this; we just cut the connection if data disappears during a read.
+
+("Let P be a problem; 'we don't care' is a solution to P.")
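+
+To make the two metadata tables above concrete, here is a minimal Rust sketch of what an entry of each table could look like. This is illustrative only: names, types and encodings are assumptions, not Garage's actual code.
+
+```
+// Illustrative sketch of the two metadata tables described above.
+
+struct ObjectEntry {
+    bucket: String,       // hash key: all entries of a bucket live together
+    key: String,          // sort key
+    timestamp: u64,       // sort key: version timestamp
+    version_uuid: String, // sort key: distinguishes concurrent versions
+    complete: bool,       // false until all blocks have been written
+    size: u64,
+    mime_type: String,
+    data: ObjectData,
+}
+
+enum ObjectData {
+    // Small objects (below the inlining threshold) are stored directly.
+    Inline(Vec<u8>),
+    // Larger objects only store the hash of their first block.
+    FirstBlockHash(String),
+}
+
+struct BlockEntry {
+    version_uuid: String, // hash key: groups all blocks of one version
+    offset: u64,          // sort key: position of the block in the object
+    block_hash: String,   // used to locate the block on the storage nodes
+}
+```
+
+With these types, the invariant above reads: every `BlockEntry.version_uuid` must also appear as the `version_uuid` of some `ObjectEntry`.
+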
+#### Block storage on disk
+
+**Blocks themselves:**
+
+- file path = /blobs/(first 3 hex digits of hash)/(rest of hash)
+
+**Reverse index for GC & other block-level metadata:**
+
+- file path = /meta/(first 3 hex digits of hash)/(rest of hash)
+- map block hash -> set of version UUIDs where it is referenced
+
+Useful metadata:
+
+- list of versions that reference this block in the Cassandra table, so that we can do GC by checking in Cassandra that the lines still exist
+- list of other nodes that we know have acknowledged a write of this block, useful in the rebalancing algorithm
+
+Write strategy: have a single thread that does all write I/O so that it is serialized (or have several threads that manage independent parts of the hash space). When writing a blob, write it to a temporary file, close it, then rename it, so that a concurrent read gets a consistent result (either not found, or found with the whole content). A sketch of this write path is given at the end of this section.
+
+Read strategy: the only read operation is get(hash), which returns either the data or "not found" (it can do a corruption check as well and return a "corrupted" state if that is the case). It can be done concurrently with writes.
+
+**Internal API:**
+
+- get(block hash) -> ok+data/not found/corrupted
+- put(block hash & data, version uuid + offset) -> ok/error
+- put with no data(block hash, version uuid + offset) -> ok/not found please send data/error
+- delete(block hash, version uuid + offset) -> ok/error
+
+GC: when the last ref is deleted, delete the block.
+Long GC procedure: check in Cassandra that the version UUIDs still exist and reference this block.
+
+Rebalancing: takes as argument the list of newly added nodes.
+
+- List all blocks that we have. For each block:
+- If it hits a newly introduced node, send it to them.
+  Use put with no data first to check whether it still has to be sent to them or not.
+  Use a random listing order to avoid race conditions (they do no harm, but we might have two nodes sending the same thing at the same time, thus wasting time).
+- If it doesn't hit us anymore, delete it and its reference list.
+
+Only one rebalancing can be running at the same time. It can be restarted at the beginning with new parameters.
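+
+As an illustration of the write-then-rename strategy above, here is a minimal Rust sketch (paths and function names are assumptions for the example, not Garage's actual code):
+
+```
+use std::fs;
+use std::io::Write;
+use std::path::PathBuf;
+
+// /blobs/(first 3 hex digits of hash)/(rest of hash)
+fn block_path(root: &str, hash_hex: &str) -> PathBuf {
+    PathBuf::from(root)
+        .join("blobs")
+        .join(&hash_hex[..3])
+        .join(&hash_hex[3..])
+}
+
+fn write_block(root: &str, hash_hex: &str, data: &[u8]) -> std::io::Result<()> {
+    let final_path = block_path(root, hash_hex);
+    fs::create_dir_all(final_path.parent().unwrap())?;
+    // Write to a temporary file first, then rename it into place: a
+    // concurrent get() either sees no file at all or the complete block,
+    // never a partially written one.
+    let tmp_path = final_path.with_extension("tmp");
+    let mut tmp = fs::File::create(&tmp_path)?;
+    tmp.write_all(data)?;
+    tmp.sync_all()?; // flush to disk before the rename makes it visible
+    fs::rename(&tmp_path, &final_path)
+}
+```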
+
+#### Membership management
+
+Two sets of nodes:
+
+- the set of nodes from which a ping was recently received, with status: number of stored blocks, request counters, error counters, GC%, rebalancing%
+  (eviction from this set after, say, 30 seconds without a ping)
+- the set of nodes that are part of the system, explicitly modified by the operator using the web UI (persisted to disk);
+  it is a CRDT using a version number for the value of the whole set
+
+Thus, three states for nodes:
+
+- healthy: in both sets
+- missing: not pingable but part of the desired cluster
+- unused/draining: currently present but not part of the desired cluster; empty if it contains nothing, draining if it still contains some blocks
+
+Membership messages between nodes:
+
+- ping with current state + hash of current membership info -> reply with the same info
+- send & get back membership info (the IDs of the nodes that are in the two sets): used when there has been no local membership change in a long time and a membership info hash discrepancy is detected with the first message (passive membership fixing with full CRDT gossip)
+- inform of newly pingable node(s) -> no result; when receiving new info, repeat it to all (reliable broadcast)
+- inform of operator membership change -> no result; when receiving new info, repeat it to all (reliable broadcast)
+
+Ring: generated from the desired set of nodes; however, when doing reads/writes on the ring, skip nodes that are known to be not pingable.
+The tokens are generated in a deterministic fashion from the node IDs (hash of node ID + token number from 1 to K).
+Number K of tokens per node: decided by the operator & stored in the operator's list-of-nodes CRDT. Default value proposal: with the node status information, also broadcast total disk size and free space, and propose a default number of tokens equal to 80% of free space / 10 GB. (This is all user interface.)
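+
+A minimal Rust sketch of this deterministic token generation (the hash function and the exact input encoding are assumptions for the example):
+
+```
+use sha2::{Digest, Sha256};
+
+// token i = hash(node_id || i), for i in 1..=k: the same node ID always
+// yields the same K positions on the ring.
+fn node_tokens(node_id: &str, k: u32) -> Vec<Vec<u8>> {
+    (1..=k)
+        .map(|i| {
+            let mut h = Sha256::new();
+            h.update(node_id.as_bytes());
+            h.update(i.to_be_bytes());
+            h.finalize().to_vec()
+        })
+        .collect()
+}
+
+// Proposed default K: one token per 10 GB, on 80% of the free space.
+fn default_token_count(free_space_bytes: u64) -> u32 {
+    const TEN_GB: u64 = 10 * 1024 * 1024 * 1024;
+    (free_space_bytes * 8 / 10 / TEN_GB) as u32
+}
+```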
+
+#### Constants
+
+- Block size: around 1 MB? --> Exoscale uses 16 MB chunks
+- Number of tokens in the hash ring: one for every 10 GB of allocated storage
+- Threshold for storing data directly in the Cassandra objects table: 1 KB (maybe up to 4 KB?)
+- Ping timeout (time after which a node is registered as unresponsive/missing): 30 seconds
+- Ping interval: 10 seconds
+- ??
+
+#### Links
+
+- CDC: 
+- Erasure coding: 
+- [Openstack Storage Concepts](https://docs.openstack.org/arch-design/design-storage/design-storage-concepts.html)
+- [RADOS](https://ceph.com/wp-content/uploads/2016/08/weil-rados-pdsw07.pdf)
diff --git "a/src/Technique/D\303\251veloppement/Garage/Quickstart.md" "b/src/Technique/D\303\251veloppement/Garage/Quickstart.md"
new file mode 100644
index 0000000..6d0993a
--- /dev/null
+++ "b/src/Technique/D\303\251veloppement/Garage/Quickstart.md"
@@ -0,0 +1,140 @@
+# Quickstart on an existing deployment
+
+First, chances are that your Garage deployment is secured by TLS.
+All your commands must then carry the paths to your certificates.
+I will define an alias once and for all to ease the commands that follow.
+Please adapt the path of the binary and certificates to your installation!
+
+```
+alias grg="/garage/garage --ca-cert /secrets/garage-ca.crt --client-cert /secrets/garage.crt --client-key /secrets/garage.key"
+```
+
+Now we can check that everything is going well by checking our cluster status:
+
+```
+grg status
+```
+
+Don't forget that the `help` command and the `--help` flag on subcommands can help you anywhere; the CLI tool is self-documented! Two examples:
+
+```
+grg help
+grg bucket allow --help
+```
+
+Fine, now let's create a bucket (we imagine that you want to deploy Nextcloud):
+
+```
+grg bucket create nextcloud-bucket
+```
+
+Check that everything went well:
+
+```
+grg bucket list
+grg bucket info nextcloud-bucket
+```
+
+Now we will generate an API key to access this bucket.
+Note that API keys are independent of buckets: one key can access multiple buckets, and multiple keys can access one bucket.
+
+Let's start by creating a key only for our PHP application:
+
+```
+grg key new --name nextcloud-app-key
+```
+
+You will get an output like the following (this one is fake; `key_id` and `secret_key` were generated with the openssl CLI tool):
+
+```
+Key { key_id: "GK3515373e4c851ebaad366558", secret_key: "7d37d093435a41f2aab8f13c19ba067d9776c90215f56614adad6ece597dbb34", name: "nextcloud-app-key", name_timestamp: 1603280506694, deleted: false, authorized_buckets: [] }
+```
+
+Check that everything works as intended (be careful: `info` works only with your key identifier, not with its friendly name!):
+
+```
+grg key list
+grg key info GK3515373e4c851ebaad366558
+```
+
+Now that we have a bucket and a key, we need to give the key permissions on the bucket!
+
+```
+grg bucket allow --read --write nextcloud-bucket --key GK3515373e4c851ebaad366558
+```
+
+You can check the keys allowed on your bucket at any time with:
+
+```
+grg bucket info nextcloud-bucket
+```
+
+Now, let's move on to the S3 API!
+We will use the `s3cmd` CLI tool.
+You can install it via your favorite package manager; otherwise, check [their website](https://s3tools.org/s3cmd).
+
+We will configure `s3cmd` with its interactive configuration tool. Be careful: not all endpoints are implemented!
+In particular, the test run at the end does not work (yet).
+
+```
+$ s3cmd --configure
+
+Enter new values or accept defaults in brackets with Enter.
+Refer to user manual for detailed description of all options.
+
+Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
+Access Key: GK3515373e4c851ebaad366558
+Secret Key: 7d37d093435a41f2aab8f13c19ba067d9776c90215f56614adad6ece597dbb34
+Default Region [US]: garage
+
+Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
+S3 Endpoint [s3.amazonaws.com]: garage.deuxfleurs.fr
+
+Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
+if the target S3 system supports dns based buckets.
+DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: garage.deuxfleurs.fr
+
+Encryption password is used to protect your files from reading
+by unauthorized persons while in transfer to S3
+Encryption password: 
+Path to GPG program [/usr/bin/gpg]: 
+
+When using secure HTTPS protocol all communication with Amazon S3
+servers is protected from 3rd party eavesdropping. This method is
+slower than plain HTTP, and can only be proxied with Python 2.7 or newer
+Use HTTPS protocol [Yes]: 
+
+On some networks all internet access must go through a HTTP proxy.
+Try setting it here if you can't connect to S3 directly
+HTTP Proxy server name: 
+
+New settings:
+  Access Key: GK3515373e4c851ebaad366558
+  Secret Key: 7d37d093435a41f2aab8f13c19ba067d9776c90215f56614adad6ece597dbb34
+  Default Region: garage
+  S3 Endpoint: garage.deuxfleurs.fr
+  DNS-style bucket+hostname:port template for accessing a bucket: garage.deuxfleurs.fr
+  Encryption password: 
+  Path to GPG program: /usr/bin/gpg
+  Use HTTPS protocol: True
+  HTTP Proxy server name: 
+  HTTP Proxy server port: 0
+
+Test access with supplied credentials? [Y/n] n
+
+Save settings? [y/N] y
+Configuration saved to '/home/quentin/.s3cfg'
+```
+
+Now, if everything works, the following commands should work:
+
+```
+echo hello world > hello.txt
+s3cmd put hello.txt s3://nextcloud-bucket
+s3cmd ls s3://nextcloud-bucket
+s3cmd rm s3://nextcloud-bucket/hello.txt
+```
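+
+If you prefer to talk to the S3 API from code, here is a minimal Rust sketch using the `rusoto_s3` client. It is only an illustration: the crate choice is an assumption (any S3-compatible client should work), and the endpoint, key and secret are the example values configured above.
+
+```
+use rusoto_core::{HttpClient, Region};
+use rusoto_credential::StaticProvider;
+use rusoto_s3::{PutObjectRequest, S3, S3Client};
+
+#[tokio::main]
+async fn main() -> Result<(), Box<dyn std::error::Error>> {
+    // Garage is not AWS, so we point the client at a custom region/endpoint.
+    let region = Region::Custom {
+        name: "garage".to_string(),
+        endpoint: "https://garage.deuxfleurs.fr".to_string(),
+    };
+    let credentials = StaticProvider::new_minimal(
+        "GK3515373e4c851ebaad366558".to_string(),
+        "7d37d093435a41f2aab8f13c19ba067d9776c90215f56614adad6ece597dbb34".to_string(),
+    );
+    let client = S3Client::new_with(HttpClient::new()?, credentials, region);
+
+    // Same operation as `s3cmd put hello.txt s3://nextcloud-bucket`.
+    client
+        .put_object(PutObjectRequest {
+            bucket: "nextcloud-bucket".to_string(),
+            key: "hello.txt".to_string(),
+            body: Some(b"hello world".to_vec().into()),
+            ..Default::default()
+        })
+        .await?;
+    Ok(())
+}
+```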
+
+That's all for now!
+
diff --git "a/src/Technique/D\303\251veloppement/Garage/Related Work.md" "b/src/Technique/D\303\251veloppement/Garage/Related Work.md"
new file mode 100644
index 0000000..c1a4eed
--- /dev/null
+++ "b/src/Technique/D\303\251veloppement/Garage/Related Work.md"
@@ -0,0 +1,38 @@
+## Context
+
+Data storage is critical: doing it badly can lead to data loss, and so can hardware failure.
+Filesystems + RAID can help on a single machine, but a machine failure can put the whole storage offline.
+Moreover, they put a hard limit on scalability. This limit can often be pushed back by buying expensive machines,
+but here we consider non-specialized, off-the-shelf machines that can be as low-powered and failure-prone as a Raspberry Pi.
+
+Distributed storage may help to solve both availability and scalability problems on such machines.
+Many solutions have been proposed; they can be categorized as block storage, file storage and object storage, depending on the abstraction they provide.
+
+## Related work
+
+Block storage is the lowest-level one: it is like exposing your raw hard drive over the network.
+It requires very low latency and a stable network, which are often dedicated.
+However, it provides disk devices that the operating system can manipulate with the fewest constraints: they can be formatted with any filesystem, meaning that even the most exotic features are supported.
+We can cite [iSCSI](https://en.wikipedia.org/wiki/ISCSI) or [Fibre Channel](https://en.wikipedia.org/wiki/Fibre_Channel).
+OpenStack Cinder proxies these solutions to provide a uniform API.
+
+File storage provides a higher-level abstraction: such systems are one filesystem among others, which means they don't necessarily have all the exotic features of every filesystem.
+Often, they relax some POSIX constraints, while many applications remain compatible without any modification.
+As an example, we are able to run MariaDB (very slowly) over GlusterFS...
+We can also mention CephFS (read the [RADOS](https://ceph.com/wp-content/uploads/2016/08/weil-rados-pdsw07.pdf) whitepaper), Lustre, LizardFS, MooseFS, etc.
+OpenStack Manila proxies these solutions to provide a uniform API.
+
+Finally, object storage provides the highest-level abstraction.
+It is a testimony that the POSIX filesystem API is not adapted to distributed filesystems.
+In particular, strong consistency has been dropped in favor of eventual consistency, which is far more convenient and powerful in the presence of high latency and unreliability.
+S3, which pioneered the concept, is often described as a filesystem for the WAN.
+Applications must be adapted to the desired object storage service.
+Today, the S3 HTTP REST API acts as a de facto standard in the industry.
+However, the Amazon S3 source code is not open, but alternatives have been proposed.
+We identified Minio, Pithos, Swift and Ceph.
+Minio and Ceph enforce a total order, so they have properties similar to a (relaxed) filesystem.
+Swift and Pithos are probably the most similar to AWS S3, with their consistent hashing ring.
+However, Pithos is not maintained anymore. More precisely, the company that published Pithos version 1 has developed a version 2 but has not open-sourced it.
+Some tests conducted by the [ACIDES project](https://acides.org/) have shown that OpenStack Swift consumes way more resources (CPU+RAM) than we can afford. Furthermore, the people developing Swift have not designed their software for geo-distribution.
+
+There have been many attempts in research too; one that comes to mind is [LBFS](https://pdos.csail.mit.edu/papers/lbfs:sosp01/lbfs.pdf), which was used as a basis for Seafile. But none of them has seen an effective implementation yet.
diff --git "a/src/Technique/D\303\251veloppement/Garage/index.md" "b/src/Technique/D\303\251veloppement/Garage/index.md"
new file mode 100644
index 0000000..d710788
--- /dev/null
+++ "b/src/Technique/D\303\251veloppement/Garage/index.md"
@@ -0,0 +1,28 @@
+# Garage
+
+Garage is a lightweight, AWS S3-compatible distributed object store with the following goals:
+
+ - As self-contained as possible
+ - Easy to set up
+ - Highly resilient to network failures, network latency, disk failures and sysadmin failures
+ - Relatively simple
+ - Made for multi-datacenter deployments
+
+Non-goals include:
+
+ - Extremely high performance
+ - Complete implementation of the S3 API
+ - Erasure coding (our replication model is simply to copy the data as-is on several nodes)
+
+Currently, Garage is deployed on our cluster (this very website is hosted on Garage!) but should be considered a technical preview.
+
+If you want to learn more about Garage, you can check our documentation:
+ - [Quickstart](/Technique/Développement/Garage/Quickstart.html): learn how to quickly interact with Garage.
+ - [Related Work](/Technique/Développement/Garage/Related%20Work.html): understand why we decided to build new software instead of using existing solutions.
+ - [Internals](/Technique/Développement/Garage/Internals.html): a quick description of the data models used in Garage.
+
+External links:
+ - [Repository](https://git.deuxfleurs.fr/Deuxfleurs/garage/): Garage is free software, developed on our own Gitea instance
+
+Talks:
+ - [(fr, informal) 2020-12-02 Why we decided to reinvent the wheel](https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/master/doc/20201202_talk/talk.pdf)
diff --git "a/src/Technique/D\303\251veloppement/index.md" "b/src/Technique/D\303\251veloppement/index.md"
new file mode 100644
index 0000000..e69de29
diff --git a/src/_mixin/menu.pug b/src/_mixin/menu.pug
index 939f6e4..d724eaf 100644
--- a/src/_mixin/menu.pug
+++ b/src/_mixin/menu.pug
@@ -7,7 +7,8 @@ mixin menu(o, element)
         a(href=val.url, class=val.index.url == element.url ? 'selected' : null)= val.nice_name
       - else
         span= val.nice_name
-      +menu(val, element)
+      - if (element.url.includes(val.url))
+        +menu(val, element)
     - else if (val.type == 'file' && val.tags.includes('document') && !val.tags.includes('is_index'))
       li
         a(href=val.url, class=val.url == element.url ?
'selected' : null)= val.nice_name diff --git a/src/_next/css/main.css b/src/_next/css/main.css new file mode 100644 index 0000000..fbbaa8a --- /dev/null +++ b/src/_next/css/main.css @@ -0,0 +1,82 @@ +/* some reset */ +html, body { + margin: 0; + padding: 0; + border: 0; + font-size: 100%; + font: inherit; + vertical-align: baseline; + height: 100%; + font-size: 24px; +} + +/* fonts */ +@font-face { + font-family: 'IM Fell English'; + font-style: italic; + font-weight: 400; + font-display: swap; + src: url(../fonts/IMFellEnglish-Italic.ttf) format('truetype'); + unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD; +} +@font-face { + font-family: 'IM Fell English'; + font-style: normal; + font-weight: 400; + font-display: swap; + src: url(../fonts/IMFellEnglish-Regular.ttf) format('truetype'); + unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD; +} + +html { + background-color: #fbfcf8; +} + +body { + max-width: 900px; + padding: 1em; + margin: auto; + font-family: 'IM Fell English', Times, Serif; + color: #5c544a; +} + +.hero { + background-image: url("../img/illus.jpg"); + transform: rotate(-2deg); + height: 530px; + background-size: cover; + background-position: 50% 0%; + border-radius: 10px; + margin: -1.8em -1em 3em -1em; +} + +.hero img { + position: absolute; + top: -14px; + left: 25px; + height: 100px; +} + +header { + padding: 1em 0em 2em 0em; +} + +article { + background-color: #f4eccd; + padding: 1em; + border-radius: 10px; +} + +.brand { + font-size: 2em; + float: left; +} + +nav { + float: right; +} + +nav li { + padding: 0.5em; + display: inline; +} diff --git a/src/_next/fonts/IMFellEnglish-Italic.ttf b/src/_next/fonts/IMFellEnglish-Italic.ttf new file mode 100644 index 0000000..57a7338 Binary files /dev/null and b/src/_next/fonts/IMFellEnglish-Italic.ttf differ diff --git a/src/_next/fonts/IMFellEnglish-Regular.ttf b/src/_next/fonts/IMFellEnglish-Regular.ttf new file mode 100644 index 0000000..c8481a1 Binary files /dev/null and b/src/_next/fonts/IMFellEnglish-Regular.ttf differ diff --git a/src/_next/img/illus.jpg b/src/_next/img/illus.jpg new file mode 100644 index 0000000..6eefb8a Binary files /dev/null and b/src/_next/img/illus.jpg differ diff --git a/src/_next/img/paper-clip.png b/src/_next/img/paper-clip.png new file mode 100644 index 0000000..255912f Binary files /dev/null and b/src/_next/img/paper-clip.png differ diff --git a/src/_next/index.pug b/src/_next/index.pug new file mode 100644 index 0000000..edfa5c4 --- /dev/null +++ b/src/_next/index.pug @@ -0,0 +1,39 @@ +doctype html +html + head + meta(charset="utf-8") + title df #{title} + link(rel="stylesheet", href="css/main.css") + meta(name="viewport", content="width=device-width, initial-scale=1.0") + body + header + .brand Deuxfleurs + nav + ul + li L'association + li Nos services + li Documentation + li + a(href="https://guichet.deuxfleurs.fr") Mon compte + br(style="clear:both") + + main + article + .hero + img(src="img/paper-clip.png") + h1 Cher·e internaute, un autre internet est possible + p + | Arrêtez-vous cinq minutes. Prenez le temps de l'imaginer avec moi. + | Imaginez pouvoir accéder à l'information sans avoir à se demander si c'est une publicité ou non ? 
+                    | Imaginez que vos communications avec vos proches ne soient pas espionnées pour prédire votre prochain malus d'assurance ?
+                    | Imaginez un lieu où vos amis verront le contenu que vous publiez et non un obscur contenu viral ?
+                    // Peut-être rajouter des petits dessins genre gribouillage carnet.
+
+            h2 Des cabanes pour se protéger du sombre nuage
+            p Blabla
+
+            h2 Rejoignez-nous
+
+        footer
+            p Un footer
+
diff --git a/src/next/css/main.css b/src/next/css/main.css
deleted file mode 100644
index fbbaa8a..0000000
--- a/src/next/css/main.css
+++ /dev/null
@@ -1,82 +0,0 @@
-/* some reset */
-html, body {
-    margin: 0;
-    padding: 0;
-    border: 0;
-    font-size: 100%;
-    font: inherit;
-    vertical-align: baseline;
-    height: 100%;
-    font-size: 24px;
-}
-
-/* fonts */
-@font-face {
-  font-family: 'IM Fell English';
-  font-style: italic;
-  font-weight: 400;
-  font-display: swap;
-  src: url(../fonts/IMFellEnglish-Italic.ttf) format('truetype');
-  unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;
-}
-@font-face {
-  font-family: 'IM Fell English';
-  font-style: normal;
-  font-weight: 400;
-  font-display: swap;
-  src: url(../fonts/IMFellEnglish-Regular.ttf) format('truetype');
-  unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;
-}
-
-html {
-    background-color: #fbfcf8;
-}
-
-body {
-    max-width: 900px;
-    padding: 1em;
-    margin: auto;
-    font-family: 'IM Fell English', Times, Serif;
-    color: #5c544a;
-}
-
-.hero {
-    background-image: url("../img/illus.jpg");
-    transform: rotate(-2deg);
-    height: 530px;
-    background-size: cover;
-    background-position: 50% 0%;
-    border-radius: 10px;
-    margin: -1.8em -1em 3em -1em;
-}
-
-.hero img {
-    position: absolute;
-    top: -14px;
-    left: 25px;
-    height: 100px;
-}
-
-header {
-    padding: 1em 0em 2em 0em;
-}
-
-article {
-    background-color: #f4eccd;
-    padding: 1em;
-    border-radius: 10px;
-}
-
-.brand {
-    font-size: 2em;
-    float: left;
-}
-
-nav {
-    float: right;
-}
-
-nav li {
-    padding: 0.5em;
-    display: inline;
-}
diff --git a/src/next/fonts/IMFellEnglish-Italic.ttf b/src/next/fonts/IMFellEnglish-Italic.ttf
deleted file mode 100644
index 57a7338..0000000
Binary files a/src/next/fonts/IMFellEnglish-Italic.ttf and /dev/null differ
diff --git a/src/next/fonts/IMFellEnglish-Regular.ttf b/src/next/fonts/IMFellEnglish-Regular.ttf
deleted file mode 100644
index c8481a1..0000000
Binary files a/src/next/fonts/IMFellEnglish-Regular.ttf and /dev/null differ
diff --git a/src/next/img/illus.jpg b/src/next/img/illus.jpg
deleted file mode 100644
index 6eefb8a..0000000
Binary files a/src/next/img/illus.jpg and /dev/null differ
diff --git a/src/next/img/paper-clip.png b/src/next/img/paper-clip.png
deleted file mode 100644
index 255912f..0000000
Binary files a/src/next/img/paper-clip.png and /dev/null differ
diff --git a/src/next/index.pug b/src/next/index.pug
deleted file mode 100644
index edfa5c4..0000000
--- a/src/next/index.pug
+++ /dev/null
@@ -1,39 +0,0 @@
-doctype html
-html
-    head
-        meta(charset="utf-8")
-        title df #{title}
-        link(rel="stylesheet", href="css/main.css")
-        meta(name="viewport", content="width=device-width, initial-scale=1.0")
-    body
-        header
-            .brand Deuxfleurs
-            nav
-                ul
-                    li L'association
-                    li Nos services
-                    li Documentation
-                    li
-                        a(href="https://guichet.deuxfleurs.fr") Mon compte
-            br(style="clear:both")
-
-        main
-            article
-
.hero - img(src="img/paper-clip.png") - h1 Cher·e internaute, un autre internet est possible - p - | Arrêtez-vous cinq minutes. Prenez le temps de l'imaginer avec moi. - | Imaginez pouvoir accéder à l'information sans avoir à se demander si c'est une publicité ou non ? - | Imaginez que vos communications avec vos proches ne soient pas espionnées pour prédire votre prochain malus d'assurance ? - | Imaginez un lieux où vos amis verrez le contenu que vous publiez et non un obscure contenu viral ? - // Peut être rajouter des petits dessins genre gribouillage carnet. - - h2 Des cabanes pour se protéger du sombre nuage - p Blabla - - h2 Rejoignez-nous - - footer - p Un footer - -- cgit v1.2.3