Ugarit
Ticket Hash: 97b608385736f7f946bc3b3b2a037b0b8945306e
Title: Streaming extracts - faster over high latency links
Status: Open Type: Feature_Request
Severity: Cosmetic Priority: 3_Low
Subsystem: Backends Resolution: Open
Last Modified: 2020-05-28 14:31:47
Version Found In:
Description:
Currently, extraction can be slow over a high-latency link as it tends to follow this pattern:

1) Request a block full of pointers to data blocks
1a) Wait for round trip to back-end, doing nothing
2) Request first data block
2a) Wait for round trip to back-end, doing nothing
3) Request second data block
3a) Wait for round trip to back-end, doing nothing
...
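
To put rough numbers on it: a file stored as one indirect block pointing at N data blocks costs about N+1 round trips of pure waiting, so at a 100ms RTT a 1000-block file spends on the order of 100 seconds idle, however fast the link is. (Illustrative figures, not measurements.)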

The client, the backend, and the data link all sit idle for prolonged periods. That sucks.

If the backend had the encryption keys and knew about vault formats, it could spot the loading of an indirect block, look up the keys inside, and start sending the data blocks to the client pre-emptively, via an extension to the protocol allowing pre-emptive blocks. The client would send its next request and try to read a response, but find one or more pre-emptive blocks waiting ahead of the response, so it would loop, reading them off, until the actual response arrived. They'd go into a local in-memory cache so that subsequent requests could be satisfied without another round trip.
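
As a sketch of the client's side of such an exchange, assuming s-expression framing with invented (pushed-block KEY DATA) and (response VALUE) message shapes - none of this is the real Ugarit wire protocol:

    (import scheme (chicken base))

    ;; In-memory cache of pre-emptively pushed blocks: an alist of (key . data).
    (define block-cache '())

    (define (cache-put! key data)
      (set! block-cache (cons (cons key data) block-cache)))

    (define (cache-get key)
      (cond ((assoc key block-cache) => cdr)
            (else #f)))

    ;; Send a request, then read replies until the real response arrives,
    ;; stashing any pre-emptive blocks that precede it in the cache.
    (define (request in out msg)
      (write msg out)
      (flush-output out)
      (let loop ()
        (let ((reply (read in)))
          (case (car reply)
            ((pushed-block)
             (cache-put! (cadr reply) (caddr reply))
             (loop))
            ((response) (cadr reply))
            (else (error "unexpected message" reply))))))

    ;; A get that is satisfied locally when the block was already pushed.
    (define (get-block in out key)
      (or (cache-get key)
          (request in out (list 'get key))))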

But, the backend does not have that information.

However, it *could* notice that storing the blocks followed the opposite pattern - N data blocks are written, then an indirect block (and the backend *does* have access to block types). Given a list of which block types are indirect, it could spot the temporal pattern of a string of data blocks followed by an indirect block, record the list of data block keys against the indirect block's key in its metadata, and use that as a guide to pre-fetch blocks on extraction.
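
A sketch of that detection, guessing at the indirect type names and using an alist in place of the real metadata store:

    (import scheme (chicken base))

    ;; Keys of data blocks stored since the last indirect block, newest first.
    (define recent-data-keys '())

    ;; Alist mapping an indirect block's key to its probable children.
    (define child-lists '())

    ;; Guess: treat "di" and "fi" as the indirect block types.
    (define (indirect-type? type)
      (memq type '(di fi)))

    (define (on-block-stored! key type)
      (if (indirect-type? type)
          (begin
            ;; Record the run of data blocks, oldest first, against this key.
            (set! child-lists
                  (cons (cons key (reverse recent-data-keys)) child-lists))
            (set! recent-data-keys '()))
          (set! recent-data-keys (cons key recent-data-keys))))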

Or, better, the backend protocol could be extended so that, when put!-ing a block, the client may provide an optional list of "child keys" that are likely to be requested in order after an explicit get of that block.
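
At the wire level (again with invented framing), the hint could ride along as an optional trailing field:

    (import scheme (chicken base))

    ;; Today (invented framing):    (put! KEY TYPE DATA)
    ;; Extended with the hint:      (put! KEY TYPE DATA (child-keys K1 K2 ...))
    (define (put-block! out key type data #!optional (child-keys '()))
      (write (if (null? child-keys)
                 (list 'put! key type data)
                 (list 'put! key type data (cons 'child-keys child-keys)))
             out)
      (flush-output out))

Since the trailing field is optional, a backend that doesn't understand it can simply ignore it, so the hint stays purely advisory.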

Is this a security issue, the hint being, essentially, known plaintext about the block's contents, particularly for "di" or "fi" blocks? I don't think so - the encryption algorithm should be strong enough to resist known-plaintext attacks.

One issue would be how many blocks to pre-send. The backend could keep a queue into which the prefetch list is placed, sending the first N. If the client's next request is for the (N+1)th block, now at the head of the queue, reply with it and the next N blocks from the queue; otherwise, clear the queue and replace it with the prefetch list of the newly requested block.
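
A sketch of that queue discipline, with send-response!, send-block!, and prefetch-list-for as hypothetical hooks (stubbed out here so the sketch runs):

    (import scheme (chicken base))

    (define (prefetch-list-for key) '())   ; stub: look up the stored child keys
    (define (send-response! key) (void))   ; stub: send the block that was asked for
    (define (send-block! key) (void))      ; stub: push a block pre-emptively

    (define queue '())     ; predicted keys not yet sent to the client
    (define batch-size 8)  ; N

    ;; Pop and pre-send up to N blocks from the head of the queue.
    (define (presend-batch!)
      (let loop ((n batch-size))
        (when (and (> n 0) (pair? queue))
          (send-block! (car queue))
          (set! queue (cdr queue))
          (loop (sub1 n)))))

    (define (handle-get! key)
      (if (and (pair? queue) (equal? key (car queue)))
          ;; The client is following the prediction: this is the (N+1)th block.
          (set! queue (cdr queue))
          ;; The client went off-script: restart from this block's own hint.
          (set! queue (prefetch-list-for key)))
      (send-response! key)   ; the reply that was actually asked for
      (presend-batch!))      ; plus the next N, pre-emptively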

Also, we'd need to make the protocol client read in and cache any pending pre-fetched blocks before sending each request, as the block it's about to request will often be the next one pending on the socket. This will need to be done with non-blocking I/O of some kind.
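
With s-expression framing, char-ready? is enough to sketch that drain (reusing cache-put! from the client sketch above; a real implementation would need proper framing, since char-ready? only promises that the first character won't block):

    (import scheme (chicken base))

    ;; Drain any pushed blocks already sitting in the socket buffer before
    ;; sending the next request. char-ready? answers without blocking, so
    ;; we only read when input has already arrived. In this sketch the only
    ;; unsolicited traffic is pushed blocks, so anything else is dropped.
    (define (drain-pushed-blocks! in)
      (let loop ()
        (when (char-ready? in)
          (let ((msg (read in)))
            (when (and (pair? msg) (eq? (car msg) 'pushed-block))
              (cache-put! (cadr msg) (caddr msg))))
          (loop))))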

Of course, this would be done as a proxy backend, like backend-cache but for the *remote* end of a high-latency link - rather than building the logic into every storage backend!

User Comments:
alaric added on 2020-05-28 14:31:47:

If we use bokbok as a backend access protocol, we can overlap requests to the backend.

Given that, we could parallelise when iterating over the contents of a directory, and effectively extract (or snapshot, for that matter) up to N things at once, for some configurable N.

For indirect blocks, we could also parallelise fetching the data blocks (probably not worth parallelising fetching indirect blocks themselves), although we'll have to put them back into order if they arrive out of order.

To do this nicely, we should write a nice parallel-for-each for the directory case, and a parallel-for-map that maps closure 1 over some list in parallel, then executes closure 2 on each result *in sequence*, taking the maximum parallelism from a parameter. Make sure that nested parallel-* things share the same parallelism limit (implement it with a global mutex-protected counter or something) so we don't go crazy with recursion. A sketch follows.
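
A sketch of those helpers, assuming Chicken Scheme with the srfi-18 egg; the names (spawn-bounded, acquire-slot!, and so on) and the counting-semaphore shape are invented, not existing Ugarit code:

    (import scheme (chicken base) srfi-18)

    (define max-parallelism 8)   ; the configurable N
    (define slot-mutex (make-mutex))
    (define slot-condvar (make-condition-variable))
    (define slots-in-use 0)

    ;; Block until a slot is free under the global limit, then claim it.
    (define (acquire-slot!)
      (mutex-lock! slot-mutex)
      (let loop ()
        (when (>= slots-in-use max-parallelism)
          (mutex-unlock! slot-mutex slot-condvar)  ; atomically unlock + wait
          (mutex-lock! slot-mutex)
          (loop)))
      (set! slots-in-use (add1 slots-in-use))
      (mutex-unlock! slot-mutex))

    (define (release-slot!)
      (mutex-lock! slot-mutex)
      (set! slots-in-use (sub1 slots-in-use))
      (condition-variable-signal! slot-condvar)
      (mutex-unlock! slot-mutex))

    ;; Start thunk in a thread that first claims a slot, releasing it even
    ;; if thunk raises. Threads are created eagerly; only max-parallelism
    ;; of them run at once.
    (define (spawn-bounded thunk)
      (thread-start!
       (make-thread
        (lambda ()
          (acquire-slot!)
          (dynamic-wind void thunk release-slot!)))))

    ;; Apply proc to each element in parallel, bounded by the shared limit.
    (define (parallel-for-each proc lst)
      (for-each thread-join!
                (map (lambda (x) (spawn-bounded (lambda () (proc x)))) lst)))

    ;; Map worker over lst in parallel, then feed the results to consumer
    ;; strictly in list order, however the workers actually finished.
    (define (parallel-for-map worker consumer lst)
      (for-each (lambda (t) (consumer (thread-join! t)))
                (map (lambda (x) (spawn-bounded (lambda () (worker x)))) lst)))

One caveat the sketch glosses over: a worker that itself calls parallel-for-* holds its slot while joining its children, so deep recursion could still deadlock once every slot is held by a waiting parent; a real implementation should release the caller's slot while it blocks in thread-join!.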