Trying to separate:
1. Stuff for handling the swarm of nodes and generic table data replication
2. Stuff for the object store core application: metadata tables and block management
3. Stuff for the S3 API
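
A minimal sketch of how that three-way split could look at the module level;
all module names below are hypothetical placeholders, not the project's actual
layout:

```rust
// Hypothetical top-level module layout mirroring the separation above.
mod cluster {
    pub mod membership {} // 1. handling the swarm of nodes
    pub mod table_sync {} //    generic table data replication
}
mod store {
    pub mod metadata {} // 2. metadata tables (objects, versions, block refs)
    pub mod block {}    //    block management
}
mod api {
    pub mod s3 {} // 3. S3 API endpoints
}

fn main() {}
```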
|
...if the syncer doesn't need them because it is going to delete the partition anyway.
Also, fix the block resync queue.
|
TODOs:
- ensure sync goes both ways
- finish sending blocks to other nodes that need them before deleting them
|
Issue: the RC (reference count) also increases when the block ref entry is first put by the actual client.
At that point the client is probably already sending us the block content,
so we don't need to do a get...
We should add a delay before the task is added, or find some other solution.
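
One way to realize the suggested delay is a time-keyed resync queue, where the
check for a freshly referenced block only becomes eligible after a grace
period. This is only a sketch under that assumption; the type and function
names are hypothetical:

```rust
use std::collections::BTreeMap;
use std::time::{Duration, Instant};

// Time-keyed resync queue: an entry becomes eligible only after its deadline.
// A real implementation would also disambiguate entries sharing a deadline.
struct ResyncQueue {
    queue: BTreeMap<Instant, [u8; 32]>,
}

impl ResyncQueue {
    fn new() -> Self {
        Self { queue: BTreeMap::new() }
    }

    // Called when a block_ref entry is first inserted: the client is probably
    // already uploading the block, so schedule the check for later instead of
    // doing a get right away.
    fn schedule_after(&mut self, hash: [u8; 32], delay: Duration) {
        self.queue.insert(Instant::now() + delay, hash);
    }

    // Pop the next hash whose deadline has passed, if any; the resync worker
    // would then fetch the block only if it is still missing locally.
    fn pop_due(&mut self) -> Option<[u8; 32]> {
        let now = Instant::now();
        let deadline = *self.queue.keys().next().filter(|d| **d <= now)?;
        self.queue.remove(&deadline)
    }
}

fn main() {
    let mut q = ResyncQueue::new();
    q.schedule_after([0u8; 32], Duration::from_secs(0));
    assert!(q.pop_due().is_some());
}
```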
|
Advantages:
- reads don't prevent preparing writes
- can be followed from other parts of the system by cloning the receiver
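
The commit message does not name the primitive, but both advantages match a
watch-style channel, where a single writer publishes snapshots and any number
of readers follow them by cloning the receiver. A minimal sketch using
tokio::sync::watch (tokio 1.x with the "full" feature set, assumed here purely
for illustration):

```rust
use std::time::Duration;

use tokio::sync::watch;

// Hypothetical shared state; in the real code this would be something like a
// cluster ring or configuration snapshot.
#[derive(Clone, Debug, Default)]
struct SharedState {
    version: u64,
}

#[tokio::main]
async fn main() {
    // Single writer, arbitrarily many cheap readers.
    let (tx, rx) = watch::channel(SharedState::default());

    // Another part of the system follows updates just by cloning the receiver.
    let mut follower = rx.clone();
    let task = tokio::spawn(async move {
        while follower.changed().await.is_ok() {
            println!("observed state: {:?}", *follower.borrow());
        }
    });

    // The writer can take as long as it needs to prepare the next value;
    // readers keep borrowing the previously published one in the meantime.
    tx.send(SharedState { version: 1 }).unwrap();

    tokio::time::sleep(Duration::from_millis(50)).await;
    drop(tx); // closing the channel ends the follower loop
    task.await.unwrap();
}
```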
|
Hyper's HTTP client future is not Sync, so the stream that reads blocks
wasn't either. However, Hyper's default Body type requires the stream to be
Sync for wrap_stream. Solution: reimplement a custom HTTP body type.
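
A sketch of what such a custom body type could look like, assuming hyper 0.14
(whose hyper::body::HttpBody trait requires only poll_data and poll_trailers);
the struct name and error type are placeholders. The point is that the wrapped
stream only needs to be Send, not Sync:

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

use bytes::Bytes;
use futures::Stream;
use hyper::body::HttpBody;
use hyper::header::HeaderMap;

// Wraps a stream of byte chunks that is Send but not necessarily Sync,
// which is what Body::wrap_stream would reject.
pub struct StreamBody {
    stream: Pin<Box<dyn Stream<Item = Result<Bytes, std::io::Error>> + Send>>,
}

impl StreamBody {
    pub fn new<S>(stream: S) -> Self
    where
        S: Stream<Item = Result<Bytes, std::io::Error>> + Send + 'static,
    {
        Self { stream: Box::pin(stream) }
    }
}

impl HttpBody for StreamBody {
    type Data = Bytes;
    type Error = std::io::Error;

    // Each item of the wrapped stream becomes one chunk of the HTTP body.
    fn poll_data(
        mut self: Pin<&mut Self>,
        cx: &mut Context<'_>,
    ) -> Poll<Option<Result<Self::Data, Self::Error>>> {
        self.stream.as_mut().poll_next(cx)
    }

    // This body carries no trailers.
    fn poll_trailers(
        self: Pin<&mut Self>,
        _cx: &mut Context<'_>,
    ) -> Poll<Result<Option<HeaderMap>, Self::Error>> {
        Poll::Ready(Ok(None))
    }
}

fn main() {
    // Usage sketch: any Send stream of Result<Bytes, _> can back a response.
    let chunks = futures::stream::iter(vec![Ok(Bytes::from_static(b"hello"))]);
    let body = StreamBody::new(chunks);
    let _response = hyper::Response::new(body);
}
```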