From ce75a7795d092c198d2b8cca78c2680ee3637ac8 Mon Sep 17 00:00:00 2001
From: Alex Auvolat
Date: Fri, 8 Jul 2022 13:55:16 +0200
Subject: fixes on fixes

---
 content/blog/2022-ipfs/index.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

(limited to 'content/blog/2022-ipfs')

diff --git a/content/blog/2022-ipfs/index.md b/content/blog/2022-ipfs/index.md
index 87e9753..65e0297 100644
--- a/content/blog/2022-ipfs/index.md
+++ b/content/blog/2022-ipfs/index.md
@@ -61,7 +61,7 @@ servers from different clusters can't collaborate to serve together the same dat
 
 ➡️ **Garage is designed to durably store content.**
 
-In this blog post, we will explore whether we can combine delivary and durability by connecting an IPFS node to a Garage cluster.
+In this blog post, we will explore whether we can combine efficient delivery and strong durability by connecting an IPFS node to a Garage cluster.
 
 ## Try #1: Vanilla IPFS over Garage
 
@@ -223,7 +223,7 @@ as there are IPFS blocks in the object to be read.
 On the receiving end, this means that any fully-fledged IPFS node has to answer large numbers
 of requests for blocks required by users everywhere on the network, which is what we observed in
 our experiment above. We were however surprised to observe that many requests coming from the IPFS network were for blocks
-which our node didn't had a copy for: this means that somewhere in the IPFS protocol, an overly optimistic
+which our node didn't have a copy of: this means that somewhere in the IPFS protocol, an overly optimistic
 assumption is made on where data could be found in the network, and this ends up translating into many
 requests between nodes that return negative results.
 When IPFS blocks are stored on a local filesystem, answering these requests fast might be possible.
-- 
cgit v1.2.3