profit.run.local

Local Runner & Memory-map Interface

  • LocalRunner: start Workers locally via the shell (subprocess.Popen)

  • ForkRunner: start Workers locally with forking (multiprocessing.Process)

  • MemmapInterface: share data using a memory-mapped, structured numpy array

Module Contents

Classes

LocalRunner

start Workers locally via the shell (subprocess.Popen)

ForkRunner

start Workers locally using forking (multiprocessing.Process)

MemmapRunnerInterface

Runner-Worker Interface using a memory-mapped numpy array

MemmapWorkerInterface

Runner-Worker Interface using a memory-mapped numpy array

class profit.run.local.LocalRunner(command='profit-worker', parallel='all', **kwargs)[source]

Bases: profit.run.runner.Runner

start Workers locally via the shell

property config
__repr__()[source]

Return repr(self).

spawn(params=None, wait=False)[source]

spawn a single run

Parameters:
  • params – a mapping which defines input parameters to be set

  • wait – whether to wait for the run to complete

poll(run_id)[source]

check the status of the run directly

cancel(run_id)[source]
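
The pattern behind LocalRunner can be pictured with plain subprocess.Popen: spawn launches the worker command through the shell, poll checks whether the child process has exited, and cancel terminates it. The following standalone sketch shows only this mechanism and is not proFit's implementation; the PROFIT_RUN_ID environment variable and the sequential run ids are assumptions for illustration, and in proFit the input parameters reach the Worker via the Runner-Worker interface rather than the command line.

    import os
    import subprocess

    processes = {}
    next_run_id = 0

    def spawn(command="profit-worker", wait=False):
        # launch one Worker via the shell; the run id is passed in the
        # environment (the variable name is an assumption)
        global next_run_id
        run_id, next_run_id = next_run_id, next_run_id + 1
        proc = subprocess.Popen(command, shell=True,
                                env={**os.environ, "PROFIT_RUN_ID": str(run_id)})
        processes[run_id] = proc
        if wait:
            proc.wait()
        return run_id

    def poll(run_id):
        return processes[run_id].poll()  # None while the run is still active

    def cancel(run_id):
        processes[run_id].terminate()
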
class profit.run.local.ForkRunner(parallel='all', **kwargs)[source]

Bases: profit.run.runner.Runner

start Workers locally using forking (multiprocessing.Process)

spawn(params=None, wait=False)[source]

spawn a single run

Parameters:
  • params – a mapping which defines input parameters to be set

  • wait – whether to wait for the run to complete

poll(run_id)[source]

check the status of the run directly

cancel(run_id)[source]
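
ForkRunner follows the same contract but forks the running process with multiprocessing.Process instead of going through the shell, so no new interpreter has to start. A standalone sketch of that variant; worker_main is a hypothetical stand-in for the Worker entry point:

    from multiprocessing import Process

    def worker_main(run_id):
        print(f"run {run_id} executing")  # hypothetical Worker entry point

    processes = {}

    def spawn(run_id, wait=False):
        proc = Process(target=worker_main, args=(run_id,))
        proc.start()
        processes[run_id] = proc
        if wait:
            proc.join()

    def poll(run_id):
        return processes[run_id].exitcode  # None while the fork is still alive

    def cancel(run_id):
        processes[run_id].terminate()
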
class profit.run.local.MemmapRunnerInterface(size, input_config, output_config, *, path: str = 'interface.npy', logger_parent: logging.Logger = None)[source]

Bases: profit.run.interface.RunnerInterface

Runner-Worker Interface using a memory-mapped numpy array

  • expected to be very fast with the local Runner as each Worker can access the array directly (unverified)

  • expected to be inefficient if used on a cluster with a shared filesystem (unverified)

  • reliable

  • known issue: resizing the array (to add more runs) is dangerous, needs a workaround (e.g. several arrays in the same file)

property config
resize(size)[source]

Resizing the Interface

Attention: this is dangerous and may lead to unexpected errors! The problem is that the memory-mapped file is overwritten. Any Workers which have this file mapped will run into severe problems. Possible future workarounds: multiple files or multiple headers in one file.

clean()[source]
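
Under the hood the interface is an ordinary structured numpy array backed by a single file, which the Runner creates and every Worker maps. A minimal runnable sketch of the Runner side; the field names and the DONE/TIME bookkeeping columns are assumptions for illustration (proFit derives the dtype from the input and output configuration):

    import numpy as np

    size = 4  # number of runs the interface can hold
    dtype = [("u", float), ("v", float),     # inputs (assumed names)
             ("f", float),                   # output (assumed name)
             ("DONE", bool), ("TIME", int)]  # bookkeeping (assumed layout)

    # Runner side: create the file-backed array which the Workers will map.
    array = np.lib.format.open_memmap("interface.npy", mode="w+",
                                      dtype=dtype, shape=(size,))
    array["u"] = np.random.random(size)  # set the input parameters
    array.flush()  # push the data to disk before Workers are spawned

This also makes the resize hazard above concrete: reopening the file with mode='w+' and a larger shape rewrites the header and truncates the old data, so any Worker still holding the old mapping reads inconsistent records.
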
class profit.run.local.MemmapWorkerInterface(run_id: int, *, path='interface.npy', logger_parent: logging.Logger = None)[source]

Bases: profit.run.interface.WorkerInterface

Runner-Worker Interface using a memory-mapped numpy array

counterpart to MemmapRunnerInterface

property config
property time
retrieve()[source]

retrieve the input

  1. connect to the Runner-Interface

  2. retrieve the input data and store it in .input

transmit()[source]

transmit the output

  1. transmit the output and time data (.output and .time)

  2. signal the Worker has finished

  3. close the connection to the Runner-Interface

clean()[source]
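
Continuing the illustrative dtype from the Runner sketch above, the Worker side maps the same file, reads its own record in retrieve and writes its results in transmit (the field names remain assumptions):

    import numpy as np

    run_id = 0  # assigned by the Runner

    # retrieve: map the shared file and read this run's inputs
    array = np.load("interface.npy", mmap_mode="r+")
    u, v = array[run_id]["u"], array[run_id]["v"]

    result = u + v  # stand-in for the actual simulation

    # transmit: write output and run time, then signal completion
    array[run_id]["f"] = result
    array[run_id]["TIME"] = 1  # e.g. elapsed seconds
    array[run_id]["DONE"] = True
    array.flush()

    # clean: drop the reference so the mapping is closed
    del array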