ForkAndReturn implements a couple of methods that simplify running a block of code in a subprocess. The result (Ruby object or exception) of the block will be available in the parent process.

ForkAndReturn also enriches Enumerable with a couple of methods (e.g. Enumerable#concurrent_collect), in order to simplify the concurrent execution of a block for a collection of objects.

The intermediate return value (or exception) will be Marshal’led to disk. This means that it is possible to (concurrently) run thousands of child processes, with a relatively low memory footprint. Just gather the results once all child processes are done. ForkAndReturn will handle the writing, reading and deleting of the temporary files.
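The round trip can be sketched with plain Process.fork, Marshal and Tempfile (an illustration of the mechanism, not ForkAndReturn’s actual code):

```ruby
require "tempfile"

# Child: serialize the block's value to a temporary file, then exit hard.
# Parent: wait, deserialize, delete. (Illustrative sketch only.)
file = Tempfile.new("fork_and_return")

pid = Process.fork do
  File.binwrite(file.path, Marshal.dump(6 * 7))   # the block's result
  Process.exit!(0)
end

Process.waitpid(pid)
result = Marshal.load(File.binread(file.path))
file.close!                                       # deletes the temporary file
result   # ===> 42
```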

The core of these methods is fork_and_return_core(). It returns some nested lambdas, which are handled by the other methods and by Enumerable#concurrent_collect. These lambdas handle the WAITing, LOADing and RESULTing (explained in fork_and_return_core()).
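A hedged sketch of that structure (illustrative names and details, not the gem’s own code): the outer lambda WAITs for the child, the next one LOADs the marshalled value and the innermost one RESULTs, returning the value or re-raising the exception:

```ruby
require "tempfile"

def fork_and_return_core_sketch
  file = Tempfile.new("fork_and_return")
  pid = Process.fork do
    value =
      begin
        [true, yield]                 # success: [ok, result]
      rescue Exception => e
        [false, e]                    # failure: [ok, exception]
      end
    File.binwrite(file.path, Marshal.dump(value))
    Process.exit!(0)
  end
  lambda do                           # WAIT for the child to exit
    Process.waitpid(pid)
    lambda do                         # LOAD the marshalled value
      ok, value = Marshal.load(File.binread(file.path))
      file.close!
      lambda do                       # RESULT: return or re-raise
        ok ? value : raise(value)
      end
    end
  end
end

waiter = fork_and_return_core_sketch { 2 * 21 }
waiter.call.call.call   # ===> 42
```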

The child process exits with Process.exit!(), so at_exit() blocks are skipped in the child process. However, both $stdout and $stderr will be flushed.
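The difference is easy to demonstrate with plain fork (the marker file is just for this illustration):

```ruby
require "tmpdir"

marker = File.join(Dir.tmpdir, "at_exit_marker_#{Process.pid}")
at_exit { File.write(marker, "ran") }   # the child inherits this handler

pid = Process.fork do
  $stdout.flush
  $stderr.flush
  Process.exit!(0)   # hard exit: the inherited at_exit block is skipped
end
Process.waitpid(pid)

File.exist?(marker)   # ===> false, because the child never ran its at_exit
```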

Only Marshal’lable Ruby objects can be returned.
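For example, Procs and IO objects cannot be dumped by Marshal, so they cannot be returned from the block:

```ruby
Marshal.load(Marshal.dump([1, "two", :three]))   # ===> [1, "two", :three]

begin
  Marshal.dump(lambda { 42 })   # Procs have no serializable representation
rescue TypeError => e
  e   # Marshal raises TypeError for non-Marshal'lable objects
end
```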

ForkAndReturn uses Process.fork(), so it only runs on platforms where Process.fork() is implemented.
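Availability can be checked up front, since Process.respond_to?(:fork) returns false on platforms where fork is not implemented (the sequential fallback here is just an illustration):

```ruby
mode =
  if Process.respond_to?(:fork)   # false on e.g. Windows
    :concurrent
  else
    :sequential                   # illustrative fallback
  end
```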

Example: (See example.txt for another example.)

[1, 2, 3, 4].collect do |object|
  Thread.fork do
    ForkAndReturn.fork_and_return do
      2*object
    end
  end
end.collect do |thread|
  thread.value
end   # ===> [2, 4, 6, 8]

This runs each “2*object” in a separate process. Hopefully, the processes are spread over all available CPUs. That’s a simple way of parallel processing! Although Enumerable#concurrent_collect is even simpler:

[1, 2, 3, 4].concurrent_collect do |object|
  2*object
end   # ===> [2, 4, 6, 8]
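Such an Enumerable#concurrent_collect can be sketched on top of fork and Marshal like this (an illustration under simplified assumptions, without the gem’s error handling; the _sketch suffix marks it as hypothetical):

```ruby
require "tempfile"

module Enumerable
  def concurrent_collect_sketch
    # Fork all children first, so they run concurrently...
    handles = map do |object|
      file = Tempfile.new("concurrent_collect")
      pid  = Process.fork do
        File.binwrite(file.path, Marshal.dump(yield(object)))
        Process.exit!(0)
      end
      [pid, file]
    end
    # ...then gather the marshalled results in order.
    handles.map do |pid, file|
      Process.waitpid(pid)
      result = Marshal.load(File.binread(file.path))
      file.close!
      result
    end
  end
end

[1, 2, 3, 4].concurrent_collect_sketch { |object| 2 * object }   # ===> [2, 4, 6, 8]
```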

Note that the code in the block is run in a separate process, so updating objects and variables in the block won’t affect the parent process:

count = 0
[...].concurrent_collect do
  count += 1
end
count   # ===> 0
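The same behaviour is visible with plain fork: the child works on a copy of the parent’s memory, so its writes never reach the parent:

```ruby
count = 0

pid = Process.fork do
  count += 1         # increments the child's private copy only
  Process.exit!(0)
end
Process.waitpid(pid)

count   # ===> 0
```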

Enumerable#concurrent_collect() is suitable for handling a couple of very CPU-intensive jobs, like parsing large XML files.

Enumerable#clustered_concurrent_collect() is suitable for handling a lot of less CPU-intensive jobs: situations where the overhead of forking a process per job would be too expensive, but where you still want to use all available CPUs.
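The clustering idea can be sketched as follows (illustrative code, not the gem’s implementation; Etc.nprocessors stands in for the CPU count): fork once per worker, give each child a whole slice of the collection, and merge the marshalled partial results:

```ruby
require "etc"
require "tempfile"

def clustered_collect_sketch(objects, workers = Etc.nprocessors)
  # One slice per worker, so the fork overhead is paid once per slice,
  # not once per job.
  slices = objects.each_slice((objects.size / workers.to_f).ceil).to_a
  handles = slices.map do |slice|
    file = Tempfile.new("cluster")
    pid  = Process.fork do
      File.binwrite(file.path, Marshal.dump(slice.map { |o| yield(o) }))
      Process.exit!(0)
    end
    [pid, file]
  end
  handles.flat_map do |pid, file|
    Process.waitpid(pid)
    results = Marshal.load(File.binread(file.path))
    file.close!
    results
  end
end

clustered_collect_sketch([1, 2, 3, 4]) { |n| 2 * n }   # ===> [2, 4, 6, 8]
```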