Streaming extracts - faster over high latency links
|User & Date:||alaric 2020-05-28 14:31:47|
- Change icomment to:
If we use bokbok as a backend access protocol, we can overlap requests to the backend.
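A rough back-of-the-envelope illustration of why overlapping helps on a high-latency link (all numbers here are invented for the example, not measured):

```python
# Hypothetical numbers, purely illustrative: fetching many small blocks one
# request at a time pays one full round trip per block, while keeping
# several requests in flight divides that latency cost.
rtt_ms = 200      # assumed round-trip time on a high-latency link
blocks = 1000     # assumed number of blocks to fetch
in_flight = 8     # assumed number of overlapped requests

serial_ms = blocks * rtt_ms              # one request at a time
overlapped_ms = serial_ms // in_flight   # 8 requests in flight
print(serial_ms, overlapped_ms)          # 200 s of latency vs 25 s
```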
Given that, we could parallelise when iterating over contents of a directory, and effectively extract (or snapshot, for that matter) up to N things at once, for some configurable N.
For indirect blocks, we could also parallelise fetching the data blocks (it's probably not worth parallelising the fetches of the indirect blocks themselves), although we'll need to put the data blocks back into order if they arrive out of order.
To do this nicely, we should write a nice parallel-for-each for the directory case, and a parallel-for-map that applies closure 1 to each element of a list in parallel, then executes closure 2 on each result *in sequence*, taking the maximum parallelism from a parameter. Nested parallel-* constructs must share the same parallelism limit (implemented with a global mutex-protected counter or something similar), so that recursion doesn't multiply the number of concurrent operations.
- Change login to "alaric"
- Change mimetype to "text/x-fossil-wiki"