Isolating shell and/or package environments

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.


The problem

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.


Software installed by your OS package manager tends not to create conflicts within the set of software that same package manager installed - in part because packages have to follow specific rules to become packages at all, in part because the package manager knows what it has installed.

(until you start mixing package managers, but surely that's not everyday practice, right? /s)


Niche software or custom installations, though?

  • A lot of them have the habit of just putting their own binaries, libraries, and/or package paths in front of everything else.
Which can break other software, occasionally even system utilities.
  • Alternatively, they create a mess of instructions
("yeah it overrides the system package directory, so install everything into that as well. Oh, the fix is simple, just learn this environment hacking tool", or "yeah you need to have it not be distracted by seeing a system MPI at install time, but then it will still work correctly at runtime; maybe we should fix that, but other than an error that isn't an error it isn't broken, sooo", or "oh it works on my system, which distribution do you use? Hmmm, which version? Hmm, what did you install before? Hmm, yeah I don't know, maybe try uninstall and reinstall?")


And you might even manage that on your one workstation, in a "fiddle until it works" way.

But when your everyday involves distributing software to varied computers, a shared production environment where new versions come in steadily, a bunch of niche/custom software, or clusters?

Good luck with your sanity, and/or your helpdesk's.


What are the concrete moving parts here? The bits we might want to patch up?

For linux:

  • most of the things that get resolved at runtime from your system install are executables (found via PATH) and shared libraries (found via the dynamic loader, and things like LD_LIBRARY_PATH)
  • there are further things, often picked up in their own ways. Consider:
    • compilers
    • other runtimes external to programs (mpi, java, nvidia stuff)
  • Scripting language runtimes could count as either one
and hashbangs may make things better - or worse, because they refer into an environment you also have to control (you can count on a distro being consistent-ish, but not in the long run)

We would like predictability for each of these.
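For a given program, a quick way to see what would actually get picked up right now is something like the following (standard tools; python3 here is just an example):

which python3              # which executable the shell would run
ldd "$(which python3)"     # which shared libraries that executable would load
echo "$PATH"
echo "$LD_LIBRARY_PATH"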


From a wider view, we might want to give a unique, controlled, environment to individual programs, and development environments for projects, nodes in clusters / swarms, and more.

Such setups are sometimes called virtual environments.


This solves varied needs, but most commonly:

  • isolate specific libraries to just the program that needs them
and not accidentally conflict with others (as they might when installed system-wide)
  • the ability to install things into just a specific project, without needing permissions to do so system-wide
  • get build tools to create a specific environment (or run in one), making your dev setup more easily transplanted
  • make software that relies on very specific (often older) versions a lot less fragile than installing everything system-wide and hoping for the best


The above is intentionally still abstract/vague, because implementations vary.

For example,

  • C is decent for shared libraries when people adhere to a versioning scheme,
and relies on things like LD_LIBRARY_PATH when people do not.
  • Python has the concept of system-shared libraries, but no good versioning of them.
You can isolate an environment by (roughly) pointing the interpreter at your own set instead of the system's.
  • Java only has "load the extra classes you need from here" (the classpath), which is essentially manual. By default nothing is shared; every app is larger but independent.

(...yes, I know, all of those come with footnotes)

A quick fix

When the issue is mostly paths and libraries, and you can't oversee when they get hooked in, there is a quick fix: making the user responsible for doing that explicitly.


Say, you've probably thought about writing things like:

function activate-myprog {
  export PATH=$PATH:/opt/myprog/bin
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/myprog/lib
}
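
Used like the following (assuming that function is in your shell startup, e.g. ~/.bashrc, and myprog is the hypothetical program from the example above):

# in a fresh shell:
activate-myprog
which myprog          # now found, via the added /opt/myprog/bin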


This is a halfway decent fix already. Sure it's manual, sure it can still have issues, yet:

  • as an admin writing these on purpose, you've thought about the order within the paths
  • it loads nothing by default, while making it easy for users to choose one at a time, avoiding most conflicts
  • you can install multiple versions (mostly) without conflict
...or at least centralize your knowledge of the conflicts


Upsides

For personal use, this works well for something so basic.

For coworkers, the explanation is now at worst, "activate what you need, and please start a new shell for each of these" (maybe adding "You don't always need a new shell, but it avoids potential problems")

(Also, on various shells, if you start the name with something like activate-, you get tab completion of all your activate- things just because these functions exist).


Limitations

It doesn't solve cases like

  • running things on cluster nodes, because you probably can't cleanly do this from the batch scripts that its queue manager wants.
Particularly if it's not necessarily the same shell.
  • where there are deeper, external dependencies on (varying implementations and/or version of) system-ish things, like MPI or a compiler
  • depending on specific versions of software can get hairy
  • ...and anywhere where "don't touch my workstation y'bastards" does not work.

It may also not be trivial to explain to other people how to do this well.


So people have thought up some frameworks that stay cleaner over time.

Language agnostic

environment modules

Environment modules are the more flexible and configurable version of the basic duct-tape fix mentioned above.

The module files themselves are scripts, in a scripting language (Tcl plus some helper commands).


Each module you write has a well defined set of operations for it to be loaded. End users mostly just need to know:

module load progname[/version]
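
Day to day, that mostly looks like the following (module names and versions purely as examples):

module avail               # what can be loaded
module load gcc/12.2       # load one, optionally pinning a version
module list                # what is currently loaded
module unload gcc/12.2     # undo that load again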


Upsides

  • module load is easy to explain
  • makes it easy to
    • have different versions of modules
    • give specific subsets of modules to different people,
    • help deal with specific dependencies, in that you can write those as other modules to be loaded.
  • can make sense to also do on cluster nodes, so putting specific module loads in your scripts is a lot more controlled


Limitations

It still only changes the environment that it's run from, so can't fix nonsense like scripts hardcoding their hashbangs.



first-time setup

You need to hook a module command into your shell that calls your installed modulecmd (usually a small shell function rather than a literal alias - see the sketch below).


If you want only a few users to use this, look at the add.modules command, which edits your personal shell files to hook it in (can deal with a few different shells).


Sysadmins may want to put that in (these days) /etc/profile.d/modules.sh, so that user shells get it automatically.

Scripts can source the same thing explicitly, which can be useful/necessary on queue systems.

[1]
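
For reference, that hook usually boils down to something like the following (a sketch - the init script and modulecmd paths vary per distro and install):

# e.g. in /etc/profile.d/modules.sh, or sourced explicitly from a batch script:
source /usr/share/Modules/init/bash

# ...which roughly amounts to defining a function along these lines:
module() { eval "$(/usr/bin/modulecmd bash "$@")"; }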


MODULEPATH, which controls the places where modulecmd looks for module files, will often be set in the same central place (unless you want more control of the sets of modules).


Notes:

  • admins sometimes wish to vary MODULEPATH, for example to expose different sets of modules to different kinds of users.
  • yes, you can create your own modules, and hook them in by adding something like the following to your shell startup:
export MODULEPATH=$MODULEPATH:~/modulefiles/
(or, if only used sometimes, alias something to module use --append ~/modulefiles)
(...though this makes less sense on clusters)


By example
using
Writing module files
Bending the rules

Can I get the result of a command within a modulefile?

Yes, though it's advisable to keep this minimal, and/or read things like [2] on making this more robust.

Example:

set  PROCESSORS   [exec cat /proc/cpuinfo | grep rocessor | wc -l]



My software says to source a script, can I just do that?

There are roughly two reasons you might not want to.

  • it will not work on all shells
  • you can't unload that

If you don't care about unloading or shells beyond the one it's made for, or it's too annoying to transplant what it does to the module file (note there are tools to help here), then here's how to cheat:

puts stdout "source /path/to/script;"

This works because stdout goes to the calling shell, which evals it (that is how envmod applies all of its changes).

Note that the source command exists in bash and csh (though it will only source successfully if the script fits that shell), but not in various other shells.



Can I automatically load other modules?

Yes, you can cheat by sending module load commands to the shell.

puts stdout "module load thing"

Try to avoid doing this more than necessary, because it can get you into conflicts that are confusing to users, and that you may not really be able to solve within envmod.

Note also that in shell scripts you can do things like:

module is-loaded foo || module load foo



Can I print things to users? Yes. (Note you must use stderr, because stdout would be evaluated by the shell)

puts stderr "foo"

Note that if you want it to print only during load operations, you'll want something like:

if { [module-info mode load] } {
  puts stderr "foo"
}

Avoid doing anything more than printing messages, unless you understand the internals of module loading logic.

Versioning
Dependencies and conflicts
setting up
more notes
See also

lmod

Newer variation on environment modules, with a few more features. Based on Lua.


https://lmod.readthedocs.io/en/latest/

direnv

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

(not to be confused with dotenv, and its .env)

direnv gives your shell directory-specific environments.


When you hook direnv into your shell (e.g. via ~/.bashrc if you use bash), every directory change checks for an .envrc file. If that contains things like:

PATH_add ~/myscripts
export PYTHONPATH=~/mymodules

it would run that in a sub-shell(verify) and add the resulting changes to your current shell.
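
The hook itself, and approving a directory's .envrc, look like this (assuming bash):

# in ~/.bashrc:
eval "$(direnv hook bash)"

# direnv refuses to load an .envrc until you've approved it once:
cd ~/myproject
direnv allow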


You can also hook in executables. This is not recommended unless you need it, and it's still recommended you avoid side effects, and accept that sometimes it will make the prompt slow.


Upsides

  • specific projects can be made to automatically get a proper environment - well isolated if you cared to do that
makes it easy to have environment modules, virtualenvs etc. without any typing
  • makes it easier to put the above into code versioning
...as it's already in a file in the same directory
  • potentially extending to automated package setups
consider e.g. integration with nix
  • can also unload variables
and seems smart enough to restore the values that were set before


Limitations

  • anything not a shell will not pick this up - think of cron, services, scripts, exec*() from programs
workarounds usually amount to "run a shell that runs direnv that runs the thing that needs this environment"
which can be fine for things like services and cron
which can be awkward when you have to hardcode it (e.g. the subprocess case)




https://direnv.net/

nix

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.


Practically, you can use nix for as much or as little as you want.

It seems a lot of people just use it for a predictable, portable virtual environment for their projects.

...for development possibly integrated with direnv so that changing to a project dir automatically gets you the installs and the virtual environment that project needs

The abstractions behind it, or even the fact that it has its own package store, is not something you necessarily care about.


What is it? What does it give me?

Nix has its own package store, and its own way of pulling them into each project.

The fact that it resolves them only within each environment means having different versions in each project becomes a non-issue, and is a large part of avoiding installing into a single dependency graph of modules that becomes less and less solvable as it grows (you can still create package dependency hell but only isolated to each project).
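
For a quick taste of that without any project setup, nix-shell can hand you a throwaway environment (package names purely as examples):

# a shell in which these packages are available, without installing them system-wide:
nix-shell -p python3 jq

# exiting that shell leaves the system as it was (the store keeps a cached copy)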


Beyond that,

it also supports non-destructive and atomic updates to that environment
...also meaning you can roll each update back (and it's well defined what that actually does)
also makes it a lot easier to try things out in a way that has zero effect on your system once removed
can be used as a build tool (has to be for its own packages)
builds happen to be deterministic, making it easier to parallelize those builds without side effects
...plus some further implementation details you may or may not care about


The more of those things you need, the more it can be a single tool that is cleaner than the set of existing tools it would replace.

If you don't need any of that, it's overkill, but may still be nice.


On the more technical side (that you may care about less), it also makes a big point of being 'purely functional' meaning its own builds are immutable, and free of side effects, and never pick up stuff from the filesystem implicitly.



Limitations and arguables

If you want to understand it thoroughly there is a steep learning curve. You can make things (unnecessarily) hard on yourself.

The command line is pretty spartan. It will not hold your hand.

Nix commands may be a bit slow - various take a handful of seconds

Builds can take hours

in particular when it is actually wrapping another, more granular, packaging system (like JS's)
If you approach it like "each must be a nix package to keep versioning well defined", which is sort of the whole point, then each JS package needs a full build of everything it depends on.
in practice, you may end up making nix packages for the major components, and whatever code you threw at that. This defeats some of the well-defined-versioning point of nix, but even then it may still be a nice build tool.

Nix wants a big cache of results, or will take that time each time

that cache doesn't carry to other systems - so initial nix builds still do take a long time. Which isn't great inside containers, where it means "every time".
that build cache can grow large, for reasons similar to why docker build caches can get ridiculous

Nix doesn't integrate so well with services(verify) due to the way those are run.

Your security audits may be a little messier with nix in place

the build stuff means it's hard to evaluate code in isolation
builds are deterministic only when using versioning
you're still trusting external code, via nixpkgs or your own
and when that wraps another package manager, it's one more layer to audit


It's yet another system, introducing yet another layer of abstraction

any complexity it introduces better not be structural complexity,
we better not make the abstraction leaky by common practice,
we better have thought of all the problems, rather than just pushing the problem around


It only solves dependency issues if you're precise about it

(say, there's a reason that a lot of Dockerfiles don't build today, which has nothing to do with docker itself)
and nix requiring you to be precise is sometimes also the reason you can't cheat your way around problems
that problem may be "no solution"
...this is a classical tradeoff in dependency systems, and tends to be a reason people bend them as much as they feel necessary



Technically, nix can refer to

  • a bunch of tools
userspace installs - users have distinct stores, and installing into your own profile(/project) is easier
  • a language that lets you specify builds/dependencies [3] for the nix tools to consume.

Additionally/optionally, there are

  • NixOS - seems to be an attempt to do system packages using Nix, which basically makes it its own software distribution.
You'd almost call it a linux distribution - though nix itself also runs on macOS
  • NixOps[4] to deploy on multiple hosts


nix-env
nix-shell
nix glossary
nix expression language
nix-daemon

nix-daemon is required for multi-user installs; it runs build actions on behalf of users.


It performs build actions and other operations on the Nix store on behalf of non-root users. Usually you don't run the daemon directly; instead it's managed by a service management framework such as systemd.


https://nixos.org/manual/nix/stable/command-ref/new-cli/nix3-daemon.html

See also

Language specific

Python

virtualenv (python2, python3)
This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

(see also the similar but distinct conda virtual environments)


virtualenv allows you to create a directory that represents

  • a specific python interpreter
often used to settle things on a specific python version
  • a distinct set of packages
(in older virtualenv these sat on top of the system's site-packages by default; modern versions isolate by default, with --system-site-packages to opt back in)
  • setuptools and pip that will install into this environment, instead of into the system

This makes it useful for user-installed apps where you need supporting libraries, and where that shouldn't conflict with other things.


Example - creating

Assuming that

  • the default python is python2.6
  • you run virtualenv NAME

Then you'll now have at least:

  • ./NAME/lib/python2.6/distutils
  • ./NAME/lib/python2.6/site-packages
  • ./NAME/include
  • ./NAME/bin
    • ./NAME/bin/python
    • ./NAME/bin/python2.6
    • ./NAME/bin/easy_install (installs into this environment)
    • ./NAME/bin/activate (lets you use the environment in the shell -- must be sourced through bash)

...which is site-packages, setuptools, a copy of the interpreter that uses this environment, and a few other things (e.g. recently also pip, wheel).


Example - using

It's useful context to know where import looks (see also [5]) - e.g. that sys.path is initialized from the script's containing path (if run via a script/hashbang) or the python executable's (if invoked directly), then PYTHONPATH, then the site stuff.


There are various ways to use that resulting file tree:

  • run source NAME/bin/activate - the most typical
prepends the path to that python binary, meaning 'python' will resolve to this one over others
  • run the python binary in there directly (NAME/bin/python), which uses the environment without touching your shell
  • from within python code, use activate_this.py:
activate_this = '/path/to/env/bin/activate_this.py'
exec(open(activate_this).read(), dict(__file__=activate_this))   # execfile(...) on python2

See also

https://www.dabapps.com/blog/introduction-to-pip-and-virtualenv-python/


Reproducing the same set of packages elsewhere isn't a virtualenv feature, but something you typically want to do.

This is often done via pip freeze and pip install -r
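
Which, as a sketch, looks like:

pip freeze > requirements.txt       # in the environment you want to reproduce
pip install -r requirements.txt     # in the (activated) new environment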





https://docs.python-guide.org/dev/virtualenvs/


"No module named 'virtualenv.seed.via_app_data'"

Seems to indicate conflicting versions of virtualenv (yes, the virtualenv module that has been around since python2(verify). It's still perfectly usable in python3).

https://github.com/pypa/virtualenv/issues/1875

venv (python3)
This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

venv is a module and tool introduced in py3.3.


Much like virtualenv, but became standard library (and cleaned up some details)
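
Basic use looks much the same (a sketch; the directory name .venv is just a common convention):

python3 -m venv .venv
source .venv/bin/activate
pip install requests        # installs into .venv rather than the system
deactivate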


https://docs.python.org/3/library/venv.html

virtualenv/venv and packaging
Finding what virtual environments you have lying around
pipenv
This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Conceptually, pipenv is the combination of...

pip
and the virtualenv concept,

...giving you installs into isolated projects.


You may care that

  • the project directories it sets up will not be as cluttered as with virtualenv/venv.
  • It also considers code versioning, in that
    • Users need only care about the Pipfile in a directory
    • the software you install is stored elsewhere, and potentially shared
~/.local/share/virtualenvs/ (rather than in env/lib within each project, as with virtualenv/venv), which e.g. keeps things cleaner around code versioning.


You can start a new one like:

mkdir myproj
cd myproj
pipenv --python 3.6   # create project in curdir.  Optional (`pipenv install` creates a Pipfile too), but this way you control py version
pipenv install numpy  # install software in this environment

The main things it puts in this directory are the Pipfile (and, after installs, Pipfile.lock); the environment itself is hidden in your homedir, because it's potentially shared state.


You can then start a subshell, for the environment implied by the current directory, like:

pipenv shell


See also:

Anaconda, miniconda, conda (python and more)
This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

These amount to its own package manager and its own environment isolation.

With the aims to

  • more easily reproduce environments
  • more controllable versions of things
  • more portability within win+lin+osx
  • do more than pure python (or python at all)
  • (avoid having a compilation environment - see the last two points)

...and seems aimed somewhat at academia


anaconda is a download-a-few-gigabytes-up-front base of the most common stuff, which you then select from.

which may be most or all that you'll use

miniconda is a 'start installing from scratch' variant of the same thing

it mainly bootstraps the repository system, and downloads everything on demand


conda is the package manager they share


Conda environments

A conda environment is a distinct installation of everything, including python itself (if it's in there, which it usually is. But if not, you may get a system python, pyenv shim, or such).


Consider e.g.:

$ which python3
/usr/bin/python3
$ conda env list
# conda environments:
#
base                  *  /home/me/miniconda3 
foo                      /home/me/miniconda3/envs/foo
$ conda activate base
(base) $ which python3
/home/me/miniconda3/bin/python3

"activating" a conda environment just places it first in resolving its bin/, which includes the executables of conda packages installed into that environment.

Notes:

this also helps separate it from anything relying on your system python being on the PATH
and is why you generally wouldn't add the conda bin to your path directly


as to cleaning: https://stackoverflow.com/questions/56266229/is-it-safe-to-manually-delete-all-files-in-pkgs-folder-in-anaconda-python


Getting conda in your shell

The above assumed that you can already run your own conda, but that is something you have to set up. There are two practical parts to that:

  • getting conda into your PATH
  • whether or not it should activate the base conda environment by default
in a new install it will do this, because auto_activate_base is true

The thing that conda init hooks in does both.

During the install there's a question whether to do that. If you said no but want this later, get to your conda command and do conda init.

If you want it to just put conda in the path and not activate the base environment, you'll want to:

conda config --set auto_activate_base false



Workflow stuff - environments and dependency files

You could treat conda as one overall environment, but you probably want to isolate projects:

conda create -n yourenvname python=x.x anaconda

difference between conda create and conda env create?

conda activate yourenvname


Note that conda environments are not really compatible with virtualenv or pipenv due to specific features.

So yeah, if you previously had the virtualenv idea in your project workflow to recreate environments elsewhere, you'll need to switch to conda for that. Consider e.g.

conda env export > environment.yml

and

conda env create -f environment.yml


You can use pip within a conda environment, and can hook pip installs into such conda environment YAML files[6]

https://jakevdp.github.io/blog/2016/08/25/conda-myths-and-misconceptions/#Myth-#5:-conda-doesn't-work-with-virtualenv,-so-it's-useless-for-my-workflow
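
On the shell side, that can be as simple as the following sketch (package name purely illustrative; installing conda's own pip keeps it contained to that environment):

conda activate myenv
conda install pip
pip install some-pypi-only-package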


See also:

pyenv
This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Lets you

  • install and select user-specific versions of python,
    • and packages
  • do virtual environments (optional)
so until you use them, things using the same python version will share that version's site-packages


Implementation-wise, it intercepts python commands (via PATH trickery) and determines which python executable (and related commands, like pip) should be used, depending on context.

The way it does that is *nix-specific, so the windows implementation[7] is a little different.


This lets you

  • set preferred python version per user (before you first set this, you will probably be using system)
pyenv global
  • set preferred python version for current shell
pyenv shell
  • set preferred python version under directory
pyenv local


There is a special version name, system, meaning "whatever's on the path". Before you install any of your own, pyenv versions would only mention system.


All the python variants pyenv knows how to install:

pyenv install -l

Installing one:

pyenv install 3.9     # which might e.g. install 3.9.16
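
A typical first-time flow might then look like (version numbers just as an example):

pyenv install 3.11          # installs under $PYENV_ROOT/versions/
pyenv global 3.11           # make it this user's default python
cd ~/myproject
pyenv local 3.11            # and/or pin it for this directory (writes .python-version)
python --version            # the shim now resolves to the chosen version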



pyenv and virtual environments
This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

If you also wanted virtual environments, this is something you do separately [8].

Consider

pyenv virtualenv 3.9 NAME

This creates a new environment by that name (within $PYENV_ROOT, not the current directory), which you can now select with e.g. pyenv local.

It's good practice to name it clearly, and possibly include the python version it's based on.
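
For example (names purely illustrative):

pyenv virtualenv 3.9 myproj-3.9    # create the environment (under $PYENV_ROOT)
cd ~/myproj
pyenv local myproj-3.9             # select (and, with the shell hook, auto-activate) it here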


I had questions
This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

"Where does it install python versions? (and packages)"

Under $PYENV_ROOT/versions/

which in user setups (probably most) will be ~/.pyenv/versions/.
and which before installing anything is empty (you're still using system)

Note that it will add shims (which pretend to be the main executables) in $PYENV_ROOT/shims (e.g. ~/.pyenv/shims), which is placed directly in the PATH.


So if you pyenv install 3.9.2 and 3.9.16, python3.9 is a shim that resolves that.

And if none is considered activated, that shim will fail.


"Where does it store version/env preferences?"

pyenv global goes to ~/.python-version

pyenv local goes to .python-version in the directory you execute that.

pyenv shell goes to PYENV_VERSION environment variable


"What if there are multiple preferences set?"

More specific overrides more general - PYENV_VERSION over ./.python-version over ~/.python-version - and if nothing is set you get the system python. (verify)


"Does it pick up system-installed packages?"

Unless it ends up picking system, no.

It seems to be pyenv's position that this is a mistake, that the whole point of pyenv is to have your own, that cannot conflict with or break your system python install.


You can, in theory, symlink the system python under ~/.pyenv/versions/ but this might be dangerous around the uninstall command(verify).


https://github.com/pyenv/pyenv

Some overlap with packagers

See python packaging



Ruby

Rust

R

Unsorted

Stow, graft

These help centralize software

  • help install in isolated directories
  • make them appear to be installed in the same hierarchy
  • lets you manage multiple versions.


"Stow is a symlink farm manager which takes distinct packages of software and/or data 
located in separate directories on the filesystem, 
and makes them appear to be installed in the same place.

This is particularly useful for keeping track of system-wide and per-user installations
of software built from source, but can also facilitate a more controlled approach 
to management of configuration files in the user's home directory,
especially when coupled with version control systems."


"Graft lets users manage multiple packages under a single directory hierarchy.
It was inspired by Depot (from Carnegie Mellon University) and Stow (by Bob Glickstein).
It installs packages in self-contained directory trees 
and makes symbolic links from a common area to the package files."
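
As a quick sketch of the stow workflow (paths and names here are just the classical example):

# install software into its own tree under the stow directory
./configure --prefix=/usr/local/stow/myprog-1.2 && make && make install

# let stow symlink it into /usr/local/bin, /usr/local/lib, and so on
cd /usr/local/stow
stow myprog-1.2

# and remove those symlinks again when you want it gone
stow -D myprog-1.2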


See also:




https://www.gnu.org/software/stow/