Diffstat (limited to 'content/blog/2022-perf')
-rw-r--r--  content/blog/2022-perf/index.md             |  19 ++++++++++++++-----
-rw-r--r--  content/blog/2022-perf/schema-streaming.png | Bin 0 -> 51925 bytes
2 files changed, 14 insertions(+), 5 deletions(-)
diff --git a/content/blog/2022-perf/index.md b/content/blog/2022-perf/index.md
index da78a76..3f597a8 100644
--- a/content/blog/2022-perf/index.md
+++ b/content/blog/2022-perf/index.md
@@ -1,10 +1,10 @@
+++
-title="Bringing theoretical design and real word performances face to face"
+title="Bringing theoretical design and observed performances face to face"
date=2022-09-26
+++
-*For the past years, we have extensively analyzed possible design decisions and their theoretical tradeoffs on Garage, being it on the network, data structure, or scheduling side. And it worked well enough for our production cluster at Deuxfleurs, but we also knew that people started discovering some unexpected behaviors. We thus started a round of benchmark and performance improvement to make Garage more versatile and better understand what we can expect from it.*
+*Over the past years, we have extensively analyzed possible design decisions and their theoretical tradeoffs on Garage, be it on the network, data structure, or scheduling side. And it worked well enough for our production cluster at Deuxfleurs, but we also knew that people were starting to discover some unexpected behaviors. We thus started a round of benchmarks and performance measurements to see how Garage behaves compared to our expectations.*
<!-- more -->
@@ -20,7 +20,7 @@ It must also be noted that Garage and Minio are systems with different feature s
The impact of the testing environment is also not evaluated (kernel patches, configuration, parameters, filesystem, hardware, etc.); some of these factors could favor one setup or software over another. In particular, most of the tests were done on a consumer-grade computer with an SSD only, which will differ from most production setups. Finally, our results are provided without statistical tests to check their significance, and thus might not be statistically significant.
-When reading this post, please keep in mind that **we are not making any business or technical recommendation here**, we only share bits of our development process.
+When reading this post, please keep in mind that **we are not making any business or technical recommendations here, nor is this a scientific paper**; we only share bits of our development process.
Read [benchmarking crimes](https://gernot-heiser.org/benchmarking-crimes.html), run your own tests if you need to make a decision, and remain supportive and caring with your peers...
## About our testing environment
@@ -31,10 +31,19 @@ To reproduce some environments locally, we have a small set of Python scripts na
## Efficient I/O
-- streaming
+**Time To First Byte** - One specificity of Garage is that we implemented S3 web endpoints, with the idea of making it the platform of choice to publish your static website. When publishing a website, one metric you observe is the Time To First Byte (TTFB), as it impacts the perceived reactivity of your website. On Garage, the time to first byte was a bit high, especially for objects of 1MB and more. This is not surprising: until now, the smallest level of granularity was the block, which is set to at most 1MB by default. Hence, when you sent a GET request, the block had to be fully retrieved by the gateway node from the storage node before any data could be sent to the client. With Garage v0.8, we integrated a block streaming logic that allows the gateway to send the beginning of a block without having to wait for the full block from the storage node. We can visually represent the difference as follows:
-![](ttfb.png)
+![A diagram depicting how streaming improves the delivery of a block](schema-streaming.png)
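+
+To make the difference more concrete, here is a minimal, purely illustrative sketch of the two strategies. Garage is written in Rust and this is not its actual code; the `fetch_block` and `fetch_block_chunks` helpers are hypothetical stand-ins for the RPC between the gateway node and the storage node.
+
+```python
+def proxy_block_buffered(send_to_client, fetch_block, block_id):
+    """v0.7-style behavior: wait for the whole block before sending anything."""
+    block = fetch_block(block_id)    # returns only once the full block (up to 1MB) has arrived
+    send_to_client(block)            # the first byte leaves the gateway only now
+
+
+def proxy_block_streaming(send_to_client, fetch_block_chunks, block_id):
+    """v0.8-style behavior: forward each chunk as soon as the storage node sends it."""
+    for chunk in fetch_block_chunks(block_id):   # e.g. chunks of a few kilobytes
+        send_to_client(chunk)        # the first byte leaves right after the first chunk
+```
+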
+As our default block size is only 1MB, the difference will be very small on fast networks: it takes only 8ms to transfer 1MB on a 1Gbps network. However, on a very slow network (or a very congested link handling many parallel requests), the impact can be much more significant: at 5Mbps, it takes 1.6 seconds to transfer our 1MB block, and streaming can heavily improve the user experience.
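+
+These orders of magnitude come from simply dividing the block size, expressed in bits, by the link rate; a quick sanity check:
+
+```python
+def transfer_time_s(size_bytes: float, rate_bps: float) -> float:
+    """Time needed to push `size_bytes` over a link of `rate_bps` bits per second."""
+    return size_bytes * 8 / rate_bps
+
+print(transfer_time_s(1_000_000, 1_000_000_000))  # 1MB at 1Gbps -> 0.008s (8ms)
+print(transfer_time_s(1_000_000, 5_000_000))      # 1MB at 5Mbps  -> 1.6s
+```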
+
+We wanted to see if this theory holds in practice: we simulated a low-latency but slow network on mknet and made some requests with (Garage v0.8 beta) and without (Garage v0.7) block streaming. We also added Minio as a reference.
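+
+If you want to run a similar measurement yourself, the sketch below (not the exact script we used) shows one way to time the first byte of a GET request with nothing but the Python standard library; the URL points to a hypothetical 1MB object exposed through the web endpoint under test.
+
+```python
+import time
+import urllib.request
+
+def measure_ttfb(url: str) -> float:
+    """Seconds elapsed from issuing the GET request until the first body byte is read."""
+    start = time.monotonic()
+    with urllib.request.urlopen(url) as response:
+        response.read(1)             # reading a single byte is enough to measure TTFB
+    return time.monotonic() - start
+
+# Hypothetical endpoint and object name, to be adapted to your own setup.
+print(measure_ttfb("http://my-bucket.web.garage.localhost:3902/1MB.bin"))
+```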
+
+![Plot showing the TTFB observed on Garage v0.8, v0.7 and Minio](ttfb.png)
+
+As expected, Garage v0.7, which does not support block streaming, features a TTFB between 1.6s and 2s, which corresponds to the computed time to transfer the full block. On the other side of the plot, we see Garage v0.8 with a very low TTFB thanks to our streaming approach (the lowest value is 43ms). Minio sits between our two implementations: we suppose that it does some form of batching, but on less than 1MB.
+
+**Read/write throughput** -
- fsync, semaphore, timeouts, etc.
![](io.png)
diff --git a/content/blog/2022-perf/schema-streaming.png b/content/blog/2022-perf/schema-streaming.png
new file mode 100644
index 0000000..7d4c51c
--- /dev/null
+++ b/content/blog/2022-perf/schema-streaming.png
Binary files differ