Diffstat (limited to 'content/blog/2022-ipfs/index.md')
-rw-r--r--  content/blog/2022-ipfs/index.md | 13
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/content/blog/2022-ipfs/index.md b/content/blog/2022-ipfs/index.md
index b56eeb1..03ce073 100644
--- a/content/blog/2022-ipfs/index.md
+++ b/content/blog/2022-ipfs/index.md
@@ -198,20 +198,19 @@ Even if all requests have not the same cost on the cluster, processing a request
## Conclusion
Running IPFS over an S3 backend does not quite work out of the box in terms of performance yet.
-We have identified some possible measures for improvement (disabling the DHT server, keeping an in-memory index of the blocks, using the S3 backend only for your data)
-that might allow you to still run an IPFS node over Garage.
+We have identified that the main problem is linked to the DHT service,
+and proposed some mitigations (disabling the DHT server, keeping an in-memory index of the blocks, using the S3 backend only for your data).
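As a minimal sketch of the first mitigation, a kubo (go-ipfs) node can be switched from a full DHT server to a DHT client, so it stops answering DHT queries on behalf of other peers; the exact performance gain on a Garage backend is an assumption, not something measured here.

```shell
# Run the node as a DHT client only, so it no longer serves
# DHT requests for other peers (it can still resolve content itself).
# Assumes a standard kubo (go-ipfs) installation.
ipfs config Routing.Type dhtclient

# Restart the daemon for the change to take effect.
ipfs daemon
```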
From a design perspective, however, it seems that the numerous small blocks created by IPFS
do not map trivially onto efficient S3 requests, and thus could be a limiting factor for any optimization work.
-As part of our test journey, we read some posts about performance issues on IPFS (eg. [#6283 - Reduce the impact of the DHT](https://github.com/ipfs/go-ipfs/issues/6283)) that are not
+As part of our test journey, we also read some posts about performance issues on IPFS (e.g. [#6283](https://github.com/ipfs/go-ipfs/issues/6283)) that are not
linked to the S3 connector. We might be negatively influenced by our failure to connect IPFS to S3,
-but we are tempted to think that in any case, IPFS will be ressource intensive for your hardware.
+but we are tempted to think that IPFS is intrinsically resource-intensive.
-On our side, we will continue our investigations towards more *minimalist* software that tends to limit the
-number of requests they send.
+On our side, we will continue to investigate more *minimalist* software.
This choice makes sense for us as we want to reduce the ecological impact of our services
-by deploying optimized software on a limited number of second-hand servers.
+by deploying fewer servers that use less energy and are renewed less frequently.
*Yes we are aware of the existence of Nextcloud, Owncloud, Owncloud Infinite Scale, Seafile, Filestash, Pydio, SOLID, Remote Storage, etc.
We might even try one of them in a future post, so stay tuned!*