Python usage notes - importing, modules, packages

Syntaxish: syntax and language · changes and py2/3 · decorators · importing, modules, packages · iterable stuff · concurrency

IO: networking and web · filesystem

Data: Numpy, scipy · pandas, dask · struct, buffer, array, bytes, memoryview · Python database notes

Image, Visualization: PIL · Matplotlib, pylab · seaborn · bokeh · plotly

Tasky: Concurrency (threads, processes, more) · joblib · pty and pexpect

Stringy: strings, unicode, encodings · regexp · command line argument parsing · XML

date and time


speed, memory, debugging, profiling · Python extensions · semi-sorted

Import related notes

Specifying import fallbacks

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me)

You've probably seen fallback import tricks like:

try:
    import cStringIO as StringIO
except ImportError:
    import StringIO


try:
    set
except NameError:
    from sets import Set as set


try:
    import cElementTree as ElementTree
except ImportError:
    import ElementTree

For ElementTree you may want something fancier; see Python notes - XML#Lineage.2C_variants.2C_and_importing
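The same pattern is still useful today, e.g. preferring an optional third-party drop-in over the stdlib (simplejson here is a real package, but may or may not be installed):

```python
try:
    import simplejson as json   # third-party drop-in, sometimes faster
except ImportError:
    import json                 # stdlib fallback, always there

print(json.dumps({'ok': True}))   # → {"ok": true}
```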

Reference to current module

There are a few ways of getting a reference to the current module object (which is rarely truly necessary, and note that if you need only the names of the members, you can use dir() without arguments).

The generally preferred way is to evaluate sys.modules[__name__], because it needs no knowledge of where you put that code, and can be copy-pasted directly. (The variable __name__ is defined in each module and package; it will be '__main__' if the python file itself is run as a script, or when you are running python interactively.)

Another way is to import the current module by its own name, which actually just binds the by-then-already-loaded module to a name that happens to be in its own scope (this also works for __main__).

There are a few details to this, including:

  • you shouldn't do this at module-global scope(verify), since the module won't be fully loaded at that point
  • it will work for packages, both by name and via __init__, but there is a difference between those two (possible confusion you may want to avoid): the former is only a bind, while the latter is a new name so may cause a load, which might pick the pyc file that Python created; so while it should be the same code, it may not be id()-identical (in case that matters to your use)

Importing and binding, runtime-wise

In general, importing may include:

  • explicit module imports: you typing
    import something
    in your code
  • implicit module imports: anything imported by modules, and package-specific details (see __init__.py and __all__)
  • binding the module, or some part of it, as a local name

Module imports are recorded in sys.modules, which allows Python to import everything only once.

All later imports fetch the reference to the module from that cache and only bind it in the importing scope.
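You can see that caching directly:

```python
import sys
import math

first = math
import math          # second import: only a cache lookup plus a rebind
print(math is first)                  # → True : same object, not re-executed
print(sys.modules['math'] is math)    # → True : the cache entry itself
```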

Binding specific names from a module

You can also specify that you want to bind a few names from within a module, leaving you in control of (avoiding) name conflicts.

None of this changes what is imported or why, it's only different in what names get bound into your scope.

So aside from avoiding some messy things, it's mainly personal taste.

Say you are interested in the function comma() from format.lists (package format, module lists). You can do:

import format.lists
# binds 'format' (with 'lists' as an attribute of it), so usable like:
format.lists.comma()

import format.lists as fm       # people seem to like aliases for things like numpy, because it saves typing for something you reference a lot
# binds 'fm' (and not 'format'), so:
fm.comma()

from format import lists
# binds 'lists' (and not 'format'), so:
lists.comma()

from format import lists as L
# binds 'L' (and neither 'format' nor 'lists'), so:
L.comma()

import format.lists as L
# same as the last

from format.lists import *
# binds all public names from lists, so:
comma()      # and who knows what else is now in your scope

from format.lists import comma
# binds only that specific member:
comma()

from format.lists import comma as C
# like the last, but binds to an alias you give it:
C()


For context, a module is a python file that can be imported, and almost any file can be a module. The requirements amount to:

  • the filesystem name must not be a syntax error when mentioned in code
  • it needs a .py extension

(the import system is more complex and you can get other behaviour, but the default behaviour mostly looks for .py)

Packages are a little extra structure on top of modules, an optional way to organize modules.

A package is a directory with an __init__.py file, and that directory will typically contain some module files it is organizing.

A package doesn't have much special status (it's mostly just a module with a funny name) except in how the import system handles further importing and name resolution.
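As a minimal sketch, using the format/lists names the examples below also use:

```
format/             # the package: a directory with an __init__.py
    __init__.py     # runs when 'format' (or anything in it) is first imported
    lists.py        # a module inside it, importable as format.lists
```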

What should be in __init__.py

If you are only using packages to collect modules in a namespacing tree structure sort of way, you can leave __init__.py empty.

However, sometimes this only means more typing and not more structure. A lot of people like keeping things as flat as sensible.

(you might still like to have a docstring, an __author__, maybe __version__ (a quasi-standard) in there)

If you like to run some code when the package is first imported (usually for some initial setup), that can go in __init__.py

however, as this amounts to module state, this is sort of a shared-global situation that most libraries intentionally avoid

You could put all code in __init__.py -- but that's just an awkward way to make what amounts to a module, with a weird internal name.

If you want to use a package as an API to more complex code, and/or selective loading of package contents, read on

You can drag in some PEPs about clean style, but in the end these are mostly just suggestions.

The most convincing argument I've seen is centered around "think about what API(s) you are providing".

  • If you leave __init__.py empty, users have to explicitly import all modules.
very selective loading
lots of typing
users have to know module names already, as the package won't tell them
  • If __init__.py itself mostly imports modules' contents into the package scope:
doesn't require users to import parts before use
lets that package be an API of sorts, cleaner in that it shields some private code users don't care about
for the same reason, makes it easier to reorganize without changing that API
doesn't allow users to load only a small part
  • If __init__.py imports parts from submodules:
can be cleaner, e.g. letting you mask 'private' parts from the package scope
can be messier, in that it's less explicit about where things come from

  • If __init__.py uses __all__:
see below, but roughly:
upsides: lets users have some control over whether to load submodules or not
downsides: a little black-box-magical before you understand the details

Relative imports
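A quick self-contained sketch of the dot syntax, building a throwaway package on disk first (all package/module names here are made up for the demo):

```python
import os
import sys
import tempfile

# build a tiny throwaway package to demonstrate relative imports
root = tempfile.mkdtemp()
pkg = os.path.join(root, 'format')
os.mkdir(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()
with open(os.path.join(pkg, 'helpers.py'), 'w') as f:
    f.write('def comma(seq):\n    return ", ".join(seq)\n')
with open(os.path.join(pkg, 'lists.py'), 'w') as f:
    # one leading dot means 'this package', so this names a sibling module
    f.write('from .helpers import comma\n')

sys.path.insert(0, root)
import format.lists
print(format.lists.comma(['a', 'b']))   # → a, b
```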

importing *, and __all__

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me)

Using from something import * asks python to import all public names from a module or package, and bind them in the scope you said that in.

Within user code, importing * may be bad style, mostly because you cannot easily know what names you are importing into your scope (they are not mentioned in your code, and may change externally), so in the long run you won't know what name collisions that may lead to. This problem is not actually common, but that itself is why it can lead to very confusing, well-hidden bugs once it happens (particularly as same-named things will often do similar things).
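A classic stdlib example of such a collision, where the shadowing is silent and the two names do similar-but-different things:

```python
from math import *    # binds sqrt (real-valued), among many other names
from cmath import *   # silently rebinds sqrt to the complex version

# no error raised, but perhaps not the sqrt you thought you had:
print(sqrt(-1))       # → 1j  (math.sqrt(-1) would have raised ValueError)
```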

It should be much less of an issue for packages to import * from their own modules, because it should be well known to the package's programmer what names that implies.

Side note: the package's code doesn't tell you what names will be there at runtime. This can be annoying when users are trying to e.g. find a particular function's code.

import * around modules

  • if there is no __all__ member in a module, all the module-global names that do not start with an underscore (_) are bound
which means that basic modules can use underscore names to have global-like things that won't pollute others

  • if there is an __all__ member in a module, it will bind exactly the names listed in it
lets a programmer minimize namespace cluttering from their own modules. If people want to throw a lot of names onto a pile, they have to work for it
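A self-contained way to see both rules, building a throwaway module in memory (all the names here are made up for the demo):

```python
import sys
import types

# construct a module object by hand instead of from a file
mod = types.ModuleType('demo')
exec(
    "__all__ = ['public_func']\n"
    "def public_func(): return 42\n"
    "def _helper(): return 'hidden'\n"      # underscore: never exported by *
    "def unlisted(): return 'skipped'\n",   # public name, but not in __all__
    mod.__dict__)
sys.modules['demo'] = mod

from demo import *           # with __all__ present, binds exactly those names

print(public_func())             # → 42
print('unlisted' in dir())       # → False : public, but not in __all__
print('_helper' in dir())        # → False
```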

import * around packages

  • if there is no __all__ member, it only picks up things already bound in the package scope (except underscore names)
  • if there is an __all__ member, it seems to go through that list and:
if a name is not an attribute bound in the package, try to import it as a module
if a name is an attribute already bound in the package, don't do anything extra
binds only names in that list

You can mix and match, if you want to confuse yourself.

For example, consider a package whose __init__.py looks like:

__all__ = ['sub1', 'myvar']
__version__ = '0.1'
import sys
myvar    = 'foo'
othervar = 'bar'
import sub2


Importing this package will:

  • always import sub2 (it's module-global code in __init__.py)
  • import sub1 only when you do an import * from this package, not when you import the package itself
  • on import *, bind sub1 and myvar, but not othervar or sub2 or sys
  • if __all__ includes a name that is neither bound nor importable, raise an AttributeError

This ties into the "are you providing distinct things" discussion above, in that you probably don't want explicit imports then.

importing from packages
This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me)

The examples below assume a package named format with a module in it called lists.

To experiment yourself to see when things happen, try:

mkdir format
echo 'print("format, __init__")'  > format/__init__.py
echo 'print("format, lists")'     > format/lists.py

In an import, everything up to the last dot has to be a package/subpackage, and the last part must be a module.

The package itself can also be imported: its __init__.py file is a module that gets imported when you import the package (or something from it), and aliased as the directory name. With the test modules from the last section:

>>> import format.lists
format, __init__
format, lists

The import above bound 'format' at local scope, within which a member 'lists' was also bound:

>>> format
<module 'format' from 'format/__init__.py'>
>>> dir(format)
['__builtins__', '__doc__', '__file__', '__name__', '__path__', 'lists']
>>> format.lists
<module 'format.lists' from 'format/lists.py'>

Modules in packages are not imported unless you (or its __init__ module) explicitly do so, so:

>>> import format
format, __init__
>>> dir(format)
['__builtins__', '__doc__', '__file__', '__name__', '__path__']

...which did not import lists.

Note that when you create subpackages, inter-package references are resolved first from the context of the importing package's directory, and if that fails from the top package.

where import looks

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me)




Freezing

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me)

(note: this is unrelated to package managers freezing a package, which is basically just listing packages and their versions, usually to duplicate elsewhere)

Freezing means wrapping your code so that it does not depend on things already being installed, and runs anywhere. This usually means a copy of a python interpreter and all external modules (it's vaguely analogous to static linking), plus some duct tape to make it work (and make it independent from whatever pythons you have installed).

It often creates a relatively large directory, and doesn't really let you alter it later.

The main reason to do this is to have a self-contained copy that should run anywhere (in particular, it does not rely on an installed version of python) so is nice for packaging a production version of your desktop app.

You could do this yourself, but it's easier to use other people's tools.

Options I've tried:

  • cx_freeze
lin, win, osx
  • PyInstaller
lin, win, osx
can pack into single file


lin, win, osx
  • py2exe [2] (a distutils extension)
windows (only)
can pack into single file
inactive project now?
  • Python's (*nix) (I don't seem to have it, though)
mac OSX (only)
  • Gordon McMillan's Installer (discontinued, developed on into PyInstaller)

See also:

TODO: read:

Installation in user environments, and in dev environments; packaging; projects

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me)

TODO: merge with Isolating_shell_environments#Python

Doing package installs


  • for system installs
pip (or similar) will install into the same dist-packages your system package manager uses
the system package manager should mix decently with pip installs
but it can get confusing when you have one install things the other isn't aware of. So you might want to prefer using just one as much as possible
  • for distinct stacks (dev, fragile apps)
consider virtualenv
consider pipenv, conda, and similar -- having something virtualenv-like done for you is often simpler/cleaner
  • creating and uploading packages
look at distribute (basically a nicer setuptools)
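As a minimal sketch of the virtualenv-like idea, using the stdlib venv module (with_pip=False just keeps the demo quick; normally you would want pip in there):

```python
import os
import tempfile
import venv

# create an isolated environment with its own interpreter and site-packages
target = os.path.join(tempfile.mkdtemp(), 'env')
venv.create(target, with_pip=False)

# every venv gets a pyvenv.cfg describing it, on all platforms
print(os.path.exists(os.path.join(target, 'pyvenv.cfg')))   # → True
```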

Making python packages

Python's packaging history is a bit of a mess.

Packaging was initially minimal, with later features pasted on.

And it initially covered just installation, which is cool and useful and all.

...yet in development (and rolling-delivery production), you want updateability, clean dependency management, and more.

This led to people making a few too many alternatives, leading to some confusion.

💤 History we can now mostly forget about

We had

  • distutils (2000ish)
standard library
  • PEP-273 introduced zip imports (2001)
can be copied into place
is then mostly equivalent to having that thing unzipped in the same location
...with some footnotes related to import's internals.
  • PyPI (2003)
meant as a central repository
initially just a repository of links to zips elsewhere, which you would manually download, unpack, and either install (distutil stuff) (or sometimes just copy the contents to site-packages)
  • setuptools (2004)
introduced eggs
introduced easy_install (which these days is no longer used)
  • egg (2004, see previous point. Never put into a PEP)
eggs are zip imports that adhere to some extra details, mostly for packaging systems, e.g. making them easier to discover, their dependencies resolved, and installed.
there are some variants. A good readup involves the how and why of setuptools, pkg_resources, EasyInstall / pip, and more
Ideally, you can now skip eggs
  • distribute (2008)
fork of setuptools, so also provides setuptools
had a newer variant of easy_install (from distribute, so ~2008)
(how relevant is this one?)
  • distutils2 (~2010) - made useful contributions, apparently not interesting as its own thing[4]

  • pip (2008)
easy_install replacement
more aware of dependencies (verify)
can uninstall; easy_install could not
cannot install eggs (seemingly because we wanted to replace egg with wheel?(verify))
doesn't isolate different versions (verify)
limited to python - C dependencies are still ehhh
  • wheel format is introduced (2013; PEP-427, PEP 491 ) as replacement for egg format.
intended to be a cleaner, better defined thing, for just installs - see On wheels
  • PEP-518 introduced pyproject.toml (2016)
specifies what build tools you require to build a wheel
better defined than setup.cfg was(verify), and not dependent on whatever version of setuptools someone happened to have installed, which you had no control over
(and using TOML format, presumably because it's easier than ini/configparser)
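For a sense of scale, a minimal pyproject.toml build-system table as PEP-518 specifies it might look like this (the setuptools version pin here is just an example):

```toml
[build-system]
requires = ["setuptools>=61"]            # what's needed to build a wheel of this project
build-backend = "setuptools.build_meta"  # the backend pip will call to do so
```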

On wheels

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me)

any of this and docker

Creating packages



Tries to make it easier for you to publish to PyPI


Hatchling / Hatch

Hatch is a project manager.

Hatchling is its build backend.



Project here is meant in a vague sense, of 'a directory which contains a single project that you may want to handle in a specific way'.

This suggests things like:

  • having a per-project environment (more specific than one for the entire user)
to isolate to
to run in
to install into
  • possibly with specific package dependency management
  • building packages for distribution

pyproject.toml (PEP-518) specifies how a python project declares its build dependencies.

See also

pip notes

showing package dependencies

For installed packages,

pip show spacy

...will show something like:

Name: spacy
Version: 3.4.1
Summary: Industrial-strength Natural Language Processing (NLP) in Python
Author: Explosion
License: MIT
Location: /usr/local/lib/python3.8/dist-packages
Requires: typer, langcodes, spacy-loggers, catalogue, packaging, requests, setuptools, thinc, spacy-legacy, wasabi, numpy, pydantic, 
          cymem, tqdm, srsly, preshed,  jinja2, pathy, murmurhash
Required-by: spacy-transformers, spacy-fastlang, nl-core-news-sm, nl-core-news-md, nl-core-news-lg, en-core-web-trf, en-core-web-sm, 
             en-core-web-md, en-core-web-lg, collocater

Note that Required-by only lists things that require it and that you have installed, not all possible things, so it will vary between installations.


python -m pip

The advice to use

python -m pip 

instead of


comes from it being more obvious which of the installed python versions you're referencing.

It's otherwise identical.
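A related trick inside scripts: run pip through the interpreter that is currently executing, via sys.executable (a sketch; assumes pip is installed for that interpreter):

```python
import subprocess
import sys

# sys.executable is the python running this script, so there is
# no ambiguity about which installation this pip belongs to
out = subprocess.run([sys.executable, '-m', 'pip', '--version'],
                     capture_output=True, text=True)
print(out.stdout.strip())
```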

pip search is dead, long live the alternatives

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me)

pip search never did much more than a substring search in name and summary,

but the API it relied on was always considered experimental, and there was a constant (and possibly accidental) DDoS going on that made hosting costs high, so they shut it down.

pip may be working on a local replacement based on pip index (verify)

In the meantime, alternatives include:

  • pip_search (seems to scrape pypi website)
pip install pip_search
pip_search scikit
  • pypisearch (seems to require py>=3.8, though)
git clone the repo, then: cd pypisearch && pip install .
python -m pypisearch scikit

Install from git

Can you update all packages?

There are a few hacks, but perhaps easiest is a third party tool:

pip install pip-review
pip-review --interactive


Reproducing the same set of packages elsewhere

The common way to do this is asking pip for a list of installed packages.

This is only sensible to do within a virtualenv-like environment (or you'll get every system package) -- which you want to be doing anyway when you're developing a package.

pip freeze > requirements.txt

And elsewhere do:

pip install -r requirements.txt

The convention of calling that file requirements.txt is a de facto standard.

...and projects typically put that file in code versioning as well (this still has a manual step or two; e.g. pipenv comes in part from wanting to simplify this)

Editable installs

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me)


DBus error on python package installs

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me)
No such interface “org.freedesktop.DBus.Properties” on object at path /org/freedesktop/secrets/collection/login

This can happen when you use something like pip, or something more complex like poetry or twine; you'll probably see packages like keyring and secretstorage involved.

If you didn't actually need auth storage, then prepending

PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring

to your command should be a good test of whether keyring is the problem - and a good temporary workaround.