Overview
Comment:     wolfram: FIXME about lightweight job generation interface
SHA1:        e6c37344e928750b5b6ae023e182135d
User & Date: alaric 2012-07-26 08:36:41
Context
2012-11-30
  11:51  Mainly NITROGEN - documenting the node lifecycle state machine.
         Updated other sections to refer properly to it. Removed the
         bootstrap code from the ARGON page as it's all been eaten up by
         NITROGEN. (check-in: 309ad96ecf, user: alaric, tags: trunk)
2012-07-26
  08:36  wolfram: FIXME about lightweight job generation interface
         (check-in: e6c37344e9, user: alaric, tags: trunk)
2012-07-23
  16:31  Noted that one can schedule SINGLE jobs as well as job generators
         in HELIUM and WOLFRAM. (check-in: 5cfd191570, user: alaric,
         tags: trunk)
Changes
Changes to intro/wolfram.wiki.
︙
  all of the jobs have completed, the distributed job as a whole is
  complete. The job generator system basically implements a lazy list of
  jobs to do, allowing it to be computed on the fly.

  And for parallelisable tasks that aren't generated in bulk, it will be
  possible to submit a single job to be run, as a LITHIUM handler ID and
  arguments, which may be distributed to a lightly-loaded node in the
  cluster to be run if the local node is busy.

> FIXME: Is it worth having a lighter-weight job generator interface
> where you provide a local next-job-please closure, and remote nodes
> call back via WOLFRAM to ask for new jobs? Actually distributing job
> generation (causing the job generator state to be synchronised at the
> TUNGSTEN level) might be too heavyweight.

  <h1>Distributed caching</h1>

  WOLFRAM also provides a distributed cache mechanism. There are several
  caches in the system. As well as one cache created implicitly per node,
  which is stored only on that node, and a global
︙
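
The FIXME added by this check-in weighs two designs for handing out
generated jobs. Purely as an illustration of the lighter-weight,
pull-based option it proposes: WOLFRAM and LITHIUM define no Python API,
and every name below (LocalJobGenerator, next_job, worker) is
hypothetical. The sketch keeps the generator's state on the submitting
node only; workers, local or remote, call back to ask for the next job,
so nothing has to be synchronised at the TUNGSTEN level.

```python
# Illustrative only: WOLFRAM/LITHIUM expose no Python API. This models
# the control flow of the pull-based design sketched in the FIXME.
import threading

class LocalJobGenerator:
    """Job generator whose state lives only on the submitting node.

    Workers (remote ones via a WOLFRAM callback, in the real design)
    pull jobs one at a time; the generator state is never replicated,
    so nothing needs synchronising at the TUNGSTEN level.
    """

    def __init__(self, make_jobs):
        self._jobs = make_jobs()       # lazy sequence, computed on the fly
        self._lock = threading.Lock()  # serialise concurrent callbacks

    def next_job(self):
        """The 'next-job-please' closure: a job, or None when exhausted."""
        with self._lock:
            return next(self._jobs, None)

def worker(generator, results):
    # In the real system each job would be a LITHIUM handler ID plus
    # arguments, run on whichever node pulled it.
    while (job := generator.next_job()) is not None:
        handler_id, args = job
        results.append((handler_id, args))

if __name__ == "__main__":
    gen = LocalJobGenerator(lambda: (("render-tile", (x,)) for x in range(10)))
    done = []
    threads = [threading.Thread(target=worker, args=(gen, done))
               for _ in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    assert len(done) == 10  # every job handed out exactly once
```

The appeal of this shape is that only the jobs themselves cross the
network. The trade-off, presumably what makes the fully distributed
alternative worth considering despite its weight, is that the submitting
node becomes the single source of new work: if it goes away, the
generator goes with it.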