Ugarit
Ticket Hash: bb6d227d92cbfb3931d17cb49e0763a05030082d
Title: Pipelined put!
Status: Open
Type: Feature_Request
Severity: UNSPECIFIED
Priority: 2_Medium
Subsystem: Backends
Resolution: Open
Last Modified: 2020-05-28 14:33:57
Version Found In:
Description:
Improve performance over high-latency links by making the <code>import-storage</code> procedure not block for the response to each <code>put!</code> request, but instead increment a "pending responses" counter.

Then make all calls *other* than <code>put!</code> first call a procedure that loops once per pending response, reading each one and checking that it is not an error (returning the error as usual if it is).

That will enable us to pipeline <code>put!</code> requests, improving the speed of dumping to very remote archives, as long as a cache is helping to speed up <code>exists?</code>. It might be worth extending this behavior to other <code>(void)</code>-returning requests - except, of course, <code>flush!</code>, <code>lock-tag!</code>, <code>unlock-tag!</code> and <code>close!</code> - but I doubt it.
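Below is a minimal sketch of the idea in Scheme, assuming a hypothetical backend connection with <code>send-request!</code> and <code>read-response</code> procedures plus <code>error-response?</code> / <code>signal-backend-error</code> helpers (all names invented for illustration; they are not Ugarit's actual internals): <code>put!</code> just sends its request and bumps the counter, while every other operation drains the pending responses first.

<pre>
;; Sketch only: send-request!, read-response, error-response? and
;; signal-backend-error are hypothetical stand-ins for the real
;; backend-protocol plumbing.
(define (make-pipelined-storage connection)
  (let ((pending 0))   ; put! requests whose responses we have not read yet

    ;; Loop once per pending response, raising the first error seen.
    (define (drain-pending!)
      (let loop ()
        (when (> pending 0)
          (let ((response (read-response connection)))
            (set! pending (- pending 1))
            (if (error-response? response)
                (signal-backend-error response)
                (loop))))))

    ;; put! fires off its request and returns immediately.
    (define (put! key data)
      (send-request! connection `(put! ,key ,data))
      (set! pending (+ pending 1)))

    ;; Every other operation drains outstanding responses first, so
    ;; errors from earlier put!s surface here, then does its own
    ;; blocking request/response round trip.
    (define (exists? key)
      (drain-pending!)
      (send-request! connection `(exists? ,key))
      (read-response connection))

    (values put! exists? drain-pending!)))
</pre>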
User Comments:
alaric added on 2020-05-28 14:33:57: (text/x-fossil-wiki)
If we use bokbok as a backend access protocol, we can have multiple put! requests in flight. In that case, the techniques in this ticket can be applied to snapshotting as well:

https://www.kitten-technologies.co.uk/project/ugarit/tktview/97b608385736f7f946bc3b3b2a037b0b8945306e

OR we could continue with the original plan, more or less, and make put! fire off a thread (subject to a concurrency limit) to do the actual put, pushing any errors returned onto an error queue to be reported from the next operation on that storage that returns a value.
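A rough sketch of that threaded variant, written for CHICKEN 5 with SRFI-18 threads; do-put! stands in for the real blocking backend call, and the names and structure here are assumptions rather than actual Ugarit code. put! starts a worker thread, the oldest worker is joined once the concurrency limit is reached, and captured errors queue up until the next value-returning operation collects them.

<verbatim>
;; Sketch only: do-put! is a hypothetical blocking backend call.
(import srfi-18 (chicken condition))

(define concurrency-limit 8)

(define (make-threaded-put! do-put!)
  (let ((in-flight '())              ; oldest-first list of running workers
        (errors '())                 ; errors captured from finished put!s
        (errors-mutex (make-mutex)))

    (define (record-error! exn)
      (mutex-lock! errors-mutex)
      (set! errors (cons exn errors))
      (mutex-unlock! errors-mutex))

    ;; Enforce the concurrency limit by joining the oldest worker
    ;; before starting another one.  in-flight is only touched from
    ;; the calling thread, so it needs no mutex of its own.
    (define (reserve-slot!)
      (when (>= (length in-flight) concurrency-limit)
        (thread-join! (car in-flight))
        (set! in-flight (cdr in-flight))))

    ;; put! returns as soon as its worker thread has been started.
    (define (put! key data)
      (reserve-slot!)
      (let ((worker (make-thread
                     (lambda ()
                       (handle-exceptions exn (record-error! exn)
                         (do-put! key data))))))
        (thread-start! worker)
        (set! in-flight (append in-flight (list worker)))))

    ;; Called by the next operation that returns a value: hand back
    ;; (and clear) any queued errors so they can be reported.
    (define (pending-errors!)
      (mutex-lock! errors-mutex)
      (let ((errs errors))
        (set! errors '())
        (mutex-unlock! errors-mutex)
        errs))

    (values put! pending-errors!)))
</verbatim>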