Python usage notes/joblib

From Helpful

semi-sorted

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

joblib.Memory is disk-backed memoization, which you can

decorate functions with
wrap more explicitly, to do the occasional checkpoint
wrap into every Parallel call

https://joblib.readthedocs.io/en/latest/auto_examples/nested_parallel_memory.html


It's numpy-aware (and should deal okayish with large arrays).
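A minimal sketch of the decorator use (the cache directory here is just a temporary one for illustration; any writable path works):

```python
import tempfile
from joblib import Memory

# point the cache at a directory; results are stored there on disk
memory = Memory(tempfile.mkdtemp(), verbose=0)

@memory.cache
def slow_square(x):
    return x * x

slow_square(4)  # computed, and the result is written to the cache dir
slow_square(4)  # same arguments, so served from the on-disk cache
```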


joblib.Parallel uses threading or multiprocessing or loky (joblib's own backend, the default, and itself capable of both threading and multiprocessing) to parallelize.

(threading is often lower overhead when the thing you're calling is a compiled extension anyway and releases the GIL while doing its work; multiprocessing has more overhead, but is the better choice when the thing you're calling is basically sequential Python, extension or not)
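A short sketch of picking a backend; `add_one` is just a stand-in workload, and the results come back in input order either way:

```python
from joblib import Parallel, delayed

def add_one(x):
    return x + 1

# default backend (loky): separate worker processes, robust but more overhead
results = Parallel(n_jobs=2)(delayed(add_one)(i) for i in range(5))

# hint that threads are fine, e.g. when the work releases the GIL
results_threaded = Parallel(n_jobs=2, prefer="threads")(
    delayed(add_one)(i) for i in range(5)
)
```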


joblib.dump() and joblib.load() help serialize numpy data (and in general handle more than plain pickle does).

This is not a portable format, because the underlying pickling is only guaranteed to work in the exact same version of python. (verify) In particular, creating in one major version and loading in another is likely to break (ValueError: unsupported pickle protocol: 3 means you created it in py3 and are loading it in py2).
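A minimal round-trip sketch (the path is just a temporary file for illustration):

```python
import os
import tempfile

import numpy as np
from joblib import dump, load

arr = np.arange(10)
path = os.path.join(tempfile.mkdtemp(), "arr.joblib")

dump(arr, path)        # numpy-aware serialization to disk
restored = load(path)  # read it back
```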



delayed() is basically a cleaner-looking way to pass in the function and its arguments to Parallel, without accidentally doing the work in the main interpreter.

For example, the example from [1] is to parallelize

[sqrt(i ** 2) for i in range(10)]

which works out as something like

from math import sqrt
from joblib import Parallel, delayed

Parallel(n_jobs=2)(delayed(sqrt)(i ** 2) for i in range(10))

(Note that i**2 is still computed in the main process(verify))



https://joblib.readthedocs.io/en/latest/