Systemd notes



For context, see also Init_systems_and_service_management


Beyond its original goal of a better init, and a more flexible event/dependency system for services, systemd would like to handle most of the base system, including:

  • file system mount points [1]
  • log processing [2]
  • passwords
  • logins and terminals [3]
including process cleanup after logout (which turns screen / tmux into more involved exception cases)
  • power management [4]
  • kernel DBus [5]
  • networking config [6]
  • local DNS[7]
  • date, time, time sync [8] [9]
  • virtualisation / containers [10]
  • sandboxing services [11]
  • sandboxing apps, wrapping apps into images
  • stateless systems, factory reset [12]


So, there are roughly two takes on this.

One camp complains that this goes so far beyond a better init that it breaks with the "do one thing well, and make it easy to combine" unix philosophy. More specifically, that systemd:

  • is reinventing a few bugs that the systems it replaces solved decades ago
  • is somewhat tied to its origin distros, mainly RedHat (though amusingly that's where you often find older, less capable systemd versions, due to typical server update policies)
  • doesn't always care to play well with other subsystems ("I AM the system, everyone else better adapt"), or with the linux ecosystem as a whole (consider e.g. systemd dismissing bug reports from the kernel people)
  • is aggressively linux-only, diverging from the UNIX approach of tools that run on any *nix variant, and basically locking linux into a monoculture
  • is disruptive change without asking anyone
  • doesn't provide APIs to anything else ("I AM the system")
  • is making some responsibilities vaguer, and sometimes makes overview harder
  • has a steep learning curve (to do things right)
  • has minimal and often somewhat unclear documentation (leading to trial and error and misinformation)
  • sometimes has a less than ideal interface, with commands that are longer and harder to remember

It is currently easy to point at its rough edges, which is not a great look for something that wants to be most of your core system.

The other main camp points out that the goal was never just a better init: what we needed was a system layer that talks to its parts and to you in a more coherent manner.

Because what we call the system ("all the stuff that supports all your programs") had grown considerably in size and complexity over many years: from some do-once-and-forget stuff (mounting and network config at boot, runlevels to start services, and you do the rest), via a bunch of awkward parallel solutions (e.g. having some services one way and inetd for other services; bootstrapping network config but then handing it off to something else), to a lot of adaptive systems (udev, automount, etc.).

And much of that was a little more magical than we like to remember. The pieces were great on their own but never talked to each other very well, and certainly didn't act in a unified way towards users or programs. That was getting worse at an increasing pace, and invited all sorts of duct-taped bodginess (like "okay, just sleep for a minute and hope the network is up, then run and forget", rather than the thing you probably want, saying "hey, please do this thing as soon as that interface comes up, thanks").

It doesn't have to be that way, and improving that was always going to involve a scary thing called change.

And sure, there's an argument that you can solve most of that just with better defined APIs or, more convincingly to me, that systemd could have been a spec instead (much like the freedesktop specs or POSIX itself). But people tried that and found this wasn't on the table in any real way. So actually just implementing it and seeing what happens was basically the best option.

So if you want to see a unified communicative layer as a good idea, then systemd is at best a decent solution, and at worst is still a good push towards one.

Overall, we'll see.

Personally, I've both gotten over much of my skepticism, and found some of the rest was well founded (in part due to current bugs, the inability to even find out how it's supposed to work, it often taking an hour to figure out which particular example doesn't work in which version, and/or whether I'm stuck with that on any particular server). It's easy to be "how is this better?"-grumpy when I know better workarounds for the decades-older systems.

Unit files

Units are the varied system resources that can interdepend.

They come in various types, including

  • devices,
  • services,
  • timers (time-based triggering),
  • mount paths,
  • targets (often 'a set of dependencies useful to name')

...and a handful more.

You will probably mostly deal with services.

Also timers, if you want to try to replace cron.

And maybe automounts if you want that.

Unit files are ini-style text config files that describe resources, and what they depend on; systemd itself figures out what the combination means and when to do something.

Note that not all units have explicit files - there's a bunch of automatic generation going on.

On unit names

Because it's sometimes useful to have arbitrary strings (anything except NUL) as part of unit names (in particular when they are paths and/or autogenerated, and there is value in reversing them), there is a reversible string escaping.

You can play with systemd-escape to get some sense of it.
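To get a concrete feel for the path rules, here is a simplified reimplementation as a sketch (illustrative only - the real tool is systemd-escape; this handles slash collapsing, the "/" special case, "-", and a leading ".", but not general "\xhh" escaping of other bytes):

```shell
#!/bin/sh
# Simplified sketch of systemd's *path* escaping, per the rules above.
# Not the real implementation; use systemd-escape --path for real work.
escape_path() {
    # collapse duplicate /, strip leading and trailing /
    p=$(printf '%s' "$1" | sed -e 's:/\{1,\}:/:g' -e 's:^/::' -e 's:/$::')
    if [ -z "$p" ]; then
        printf '%s\n' -            # the path "/" itself becomes "-"
        return
    fi
    # escape "-" and a leading ".", then turn the remaining "/" into "-"
    printf '%s\n' "$p" | sed -e 's:-:\\x2d:g' -e 's:^\.:\\x2e:' -e 's:/:-:g'
}

escape_path /mnt/first        # -> mnt-first
escape_path /data/my-stuff    # -> data-my\x2dstuff
escape_path /                 # -> -
```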

As far as I can gather from man systemd.unit:

  • if it's a path:
    • the path / itself is a special case, returned as -; otherwise...
    • duplicate / are removed
    • leading and trailing / are removed
  • every "/" character is replaced by "-"
  • all other characters that are not ASCII alphanumeric or "_" (note this includes -) are replaced by "\xhh"-style escapes.
  • if the string starts with . that is replaced with \x2e

This is reversible.

Note that the path variant only deals with absolute paths - the inverse mapping assumes the leading / it stripped.
Where unit files go

Varies somewhat, for reasons listed in the documentation.

Basically, where systemctl daemon-reload can find them.

For context: systemctl daemon-reload is the thing that figures out the dependencies. For all targets listed in config files, it creates a cache, made of symlinks to the actual unit files (cache in the sense of 'most recent computed state'. Boot uses this too(verify)).

Sooo that's not an answer. Where does systemctl daemon-reload read unit files from?

The directories mentioned below are hardcoded at compile time, so are constant for a system (and typically for a distro) though not always between systems.

In --system mode (mostly the subject here)

  • Units from installed packages
can be either /lib/systemd/system or /usr/lib/systemd/system
  • Runtime units, in /run/systemd/system
takes precedence over the above(verify)
  • Local configuration in /etc/systemd/system (basically for any admin customizations)
if it has the same name as a packaged unit file, it overrides that (verify)
takes precedence over the above(verify)

Note that if you actually use more than one of these, you want to know about systemctl cat (which shows the file(s) actually in effect)

You can also get per-user systemd, allowing e.g. user services.

(it seems not all distros allow this, though(verify))

Launched via PAM. There is at most one per user - not per session. And only while they are logged in.

This adds a few directories

  • /usr/lib/systemd/user/ - from installed packages
  • /etc/systemd/user/ - user units from the admin
  • ~/.local/share/systemd/user/ - things you've installed to your homedir
  • ~/.config/systemd/user/ - your own

So what does systemctl enable do?

Creates a symlink in the appropriate target's .wants/ directory, which represents the most recent dependency state.

This cache is typically in /etc/systemd/system/*.target.wants/ (and /etc/systemd/system/*.target.requires/?)

Note: after adding or editing unit files, do a systemctl daemon-reload before e.g. an enable.


Create a file named something.service, usually with at least the sections [Unit], [Install], and [Service] (the first two are generic to unit files, the third is specific to services), e.g. starting with:

Description=Run a thing
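Filled out, a minimal sketch of such a file might be (the ExecStart path and the target are illustrative):

```ini
[Unit]
Description=Run a thing

[Service]
ExecStart=/usr/local/bin/thing

[Install]
WantedBy=multi-user.target
```

Drop it in /etc/systemd/system/, run systemctl daemon-reload, and it exists as a unit you can start and enable.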



Service types

Type determines when systemd considers the service actually started -- which matters to units that list this one as a dependency, and when inspecting status.

  • simple
the started process won't quit and won't fork, so is the process we care to track
the service is considered started as long as that process runs.
any child processes are ignored.
  • idle
mostly like simple; additionally tries to delay startup to the end of the current transaction (when that involves multiple services), and by at most 5 seconds.
nice for some debug/cosmetics, not meant as reliable ordering or serializing
  • forking
is considered started once the process we start has forked off and the parent has exited (classical daemons often do this)
if the service can write a pidfile, you may wish to tell systemd about it(verify) (PIDFile=) so it can tell you something about the status
  • dbus - wait for a name to appear on DBus
the name to look for is set by BusName=
  • notify - something will signal systemd itself, specifically via the sd_notify call

  • oneshot
blocks dependencies until the first-started process stops. Then goes to inactive.
not meant for services, but for the occasional one-time command that needs to happen at a specific place
discouraged unless you need it, as you can't really tell status, or whether it worked or failed

The default is

simple when ExecStart= is specified (and Type= and BusName= are not)
oneshot when ExecStart= isn't specified (and Type= is not)

On unit states

On service states

On dependencies and ordering
This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.
  • Requires=
units will be started when this is. If any fail, we fail as well.
  • Wants=
units will be started when this is. If any fail, we ignore it.
recommended for most services, in that most single failed services should not stop the entire system from booting

  • Requisite=
like Requires, but a dependency not currently started means we fail, instead of starting them (verify)

  • BindsTo=
like Requires, plus when a dependency is stopped, everything that depends on it is stopped as well
  • PartOf=
when the unit listed here is stopped or restarted, this unit is as well.
  • PropagatesReloadTo=
  • Conflicts=
list units that are mutually exclusive with this one. Starting any unit in a conflict means the others are stopped.

You can define a want/requirement in both relevant units of a relationship:

Wants / Wantedby
Requires/ RequiredBy
PartOf / ConsistsOf
Requisite / RequisiteOf
PropagatesReloadTo / PropagatesReloadFrom

This is mostly pragmatics, e.g. in what happens when you remove your unit. For example, you'd often use Wants for other services, whereas you'd use WantedBy to become part of the target you want to be part of.

It seems like systemd will parallelize all dependency startups. Often you need something to be running before you start, which is why you'd regularly also add:

  • After=
  • Before=

Note that it often makes sense to use After when using Wants, and Before when using WantedBy.

It looks like you could use these without Wants/Requires, for logic like "if a syslogger service is enabled, load after it; if it isn't, just go on" (verify)

Note also that certain cases make for automatic dependencies [13]

e.g. services with Type=dbus automatically get Requires=dbus.socket and After=dbus.socket
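For example, a service that should wait for the network, sketched with the Wants+After pairing mentioned above (the service itself is illustrative; network-online.target only means much if a wait-online service is enabled):

```ini
[Unit]
Description=My web app
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/bin/mywebapp

[Install]
WantedBy=multi-user.target
```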
Execution and environment

The Exec* directives expect the executable to be an absolute path, mostly to avoid ambiguity.

In some cases you have to cheat, and can probably do so with

/bin/bash -c 'the thing you want'

  • ExecReload - command to run on reload (if absent, the unit does not support reload)

Restarting after crashes

  • Restart [14]
    • no (default) - don't try
    • always
    • on-success - clean exit code
    • on-failure - unclean exit code, signal, timeout, or watchdog
    • on-abnormal - signal, timeout, or watchdog
    • on-abort - signal
    • on-watchdog - watchdog
  • RestartSec (default 100ms) - sleep before attempting restart

Also there is StartLimitIntervalSec and StartLimitBurst [15]

This is meant both as rate limiting, and as "if it doesn't start up after a bunch of tries, give up completely". Basically, if a unit attempts starting more than burst times within interval, the unit will no longer try to restart (note that a later manual restart resets this)
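As a sketch, a service that retries a few times and then gives up (values illustrative; note that the StartLimit* settings are [Unit] options in current systemd):

```ini
[Unit]
Description=Flaky thing
StartLimitIntervalSec=60
StartLimitBurst=5

[Service]
ExecStart=/usr/local/bin/flaky-thing
Restart=on-failure
RestartSec=2
```

This restarts on unclean exits two seconds after each crash, but gives up once it has attempted five starts within a minute.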

Periodic forced reload

E.g. when you have something with a bit of a memory leak you can easily restart at 4AM or whatnot.

Not a direct feature, but it can be imitated if you can change to Type=notify (and don't actually notify), and set the timeout to however often you want it to happen, e.g.:
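A sketch of that trick (times and path illustrative): since nothing ever calls sd_notify, startup "times out" after TimeoutStartSec, and Restart=always then starts it fresh:

```ini
[Service]
Type=notify
ExecStart=/usr/local/bin/leaky-daemon
# we never send the notification, so this acts as a maximum lifetime
TimeoutStartSec=4h
Restart=always
```

Newer systemd also has RuntimeMaxSec=, which terminates the service after a fixed runtime and is arguably the cleaner way to get the same effect.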



Security and sandboxing

You probably want to use User= and Group= to set the effective user (name or ID), because it defaults to root.

There are a lot of things you can lock down; see man systemd.exec.
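For example, a moderately locked-down service might add (the directive names are real systemd.exec options; the values, and which of them your service tolerates, are another matter):

```ini
[Service]
User=myapp
Group=myapp
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=strict
ProtectHome=yes
```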


Instanced services

If the unit name contains an @, like vpn@username.service, then the rest of the name becomes the instance name, which you can fetch inside the unit config using

%i escaped form
%I unescaped form

(note that the escaped instance name is filesystem-safe(verify), so can also e.g. be used for pidfile names)

You might e.g. use this to pass through a single parameter. [16]
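As a sketch, such a template unit would live in a file named vpn@.service (the script path is hypothetical):

```ini
[Unit]
Description=VPN tunnel for %I

[Service]
ExecStart=/usr/local/bin/start-vpn %i
```

systemctl start vpn@alice.service would then run start-vpn alice.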

Multiple-process services
Conditions and assertions

See also:

Debugging when services don't work

List of failed units:

systemctl --failed

Basic reason why:

systemctl status unitname

More log:

journalctl -u unitname

Keep in mind that there may be services that are not working but not considered failed.

For example, consider:

● mytest.service - my test
     Loaded: loaded (/etc/systemd/system/mytest.service; enabled; vendor preset: enabled)
     Active: activating (auto-restart) (Result: exit-code) since Fri 2023-10-06 14:44:22 CEST; 10s ago
   Main PID: 1593863 (code=exited, status=203/EXEC)
      Tasks: 0 (limit: 38195)
     Memory: 0B
     CGroup: /system.slice/mytest.service

This is a Type=simple, Restart=always, so it's trying continuously,

but the above will not show it, because seemingly Restart=always means it cannot be considered failed or any other bad thing?

It took me a while to find that the following may be more useful to also do:

sudo systemctl list-units --type=service --state=activating
This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Timer units

A timer unit is used to trigger another unit (often a service unit) on events that relate to time.

Events that can be specified in .timer units include:

  • OnCalendar - a cron-like expression
  • OnBootSec - time after system boot
  • OnStartupSec - time after systemd start (mainly useful for per-user systemd, because the system one is much like OnBootSec)
  • OnActiveSec - time after timer unit activation
  • OnUnitActiveSec - time after the Unit=-referenced unit's last activation
  • OnUnitInactiveSec - time after the Unit=-referenced unit's last deactivation
  • OnClockChange - when system clock (CLOCK_REALTIME) jumps relative to the monotonic clock (CLOCK_MONOTONIC)
  • OnTimezoneChange - when system timezone is changed

Examples of relative times:

OnBootSec=5h 30min

On OnCalendar's format

OnCalendar=DayOfWeek Year-Month-Day Hour:Minute:Second

Most parts are optional because, roughly speaking,

  • DOW defaults to any/all
  • ymd defaults to no restrictions
  • time defaults to midnight

You can use

  • numbers
  • names
  • , to list values, and .. for an inclusive range
  • / for "every-so-many" expressions (like cron)
  • shorthands like
    minutely  →  *-*-* *:*:00
      hourly  →  *-*-* *:00:00
       daily  →  *-*-* 00:00:00
     monthly  →  *-*-01 00:00:00
      weekly  →  Mon *-*-* 00:00:00
      yearly  →  *-01-01 00:00:00
   quarterly  →  *-01,04,07,10-01 00:00:00
semiannually  →  *-01,07-01 00:00:00


4AM every day
*-*-* 4:00:00
8PM Every Saturday and Sunday
Sat,Sun 20:00
every ten seconds (note that you need to up the default resolution as well)
*-*-* *:*:0/10

Note that the DOW and day restrictions are effectively ANDed together, e.g.

Friday the thirteenth, (12AM):
Fri *-*-13
First Monday of the month, (12AM):
Mon *-*-1..7
one of the first four days of the month only when that's a monday or tuesday (so can execute once, twice, or not at all depending on the month), (12AM):
Mon,Tue *-*-1..4


  • If you don't specify a Unit= in the .timer, it will try to find a unit of the same name as that timer.
  • AccuracySec is basically how often systemd checks timers for something to do.
defaults to one minute ("if it's good enough for cron").
If you specify shorter intervals anywhere and want that to work, you may want e.g. AccuracySec=1sec
(can be set lower, down to ~1us, for more precision but more CPU load)
  • OnUnitActiveSec and OnCalendar can do very similar jobs
differences include unit state, and whether running time counts(verify)
  • You can have one-time timers, like the *nix at command, like
systemd-run --on-active="12h 30m" --unit someunit.service
systemd-run --on-active=30 /bin/touch /tmp/foo
  • listing timers
systemctl list-timers --all --no-pager                                                  
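For example, a pair that runs a cleanup job at 4AM daily (names and command illustrative):

```ini
# cleanup.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/cleanup

# cleanup.timer
[Timer]
OnCalendar=*-*-* 04:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable the timer, not the service: systemctl enable --now cleanup.timer. Persistent=true makes it catch up a run that was missed while the machine was off, and systemd-analyze calendar can check your OnCalendar expression.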

See also:

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Mount units

These unit files are a relatively straight imitation of fstab entries.

Where fstab says e.g.

/dev/sda /mnt/first ext4 defaults 0 0

a .mount file (relevant section) might say:
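For that fstab line, the mount-specific section could look like this sketch (the unit would need to be named mnt-first.mount, the escaped mount path):

```ini
[Mount]
What=/dev/sda
Where=/mnt/first
Type=ext4
Options=defaults
```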


While you could write these yourself, it may be easier to have systemd pick them up from fstab (via systemd-fstab-generator) so you can keep using fstab.

Note that

  • if you write them yourself, the unit name must be the (systemd-encoded) mount path.
  • If both exist, the unit file takes precedence.

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Automount units

You need

  • a .mount unit file
  • a .automount unit file
  • to name them using the systemd-escaped path. You probably want to use systemd-escape for this.

There is an alternative, to have this runtime-generated from fstab (done by systemd-fstab-generator, presumably at daemon-reload time). The least you need for this is adding x-systemd.automount to options.

There are further options, like idle-disconnection (TODO: find an actual list, can't find it in documentation)

Explicit unit file

Alongside a (same-named(verify)) .mount

Automount means the filesystem is mounted only once it is first accessed. (does it mean you should disable the mount and enable the automount?)

Supports parallelized or automatic mounting, when other units require it.

It also means that slow or missing (e.g. network) mounts don't hold up boot (unless required by something) but still get mounted eventually if they can.
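A sketch of such a pair for /data/mystuff (so the files would be data-mystuff.mount and data-mystuff.automount; the NFS source is illustrative):

```ini
# data-mystuff.mount
[Mount]
What=server:/export/mystuff
Where=/data/mystuff
Type=nfs

# data-mystuff.automount
[Automount]
Where=/data/mystuff
TimeoutIdleSec=120

[Install]
WantedBy=multi-user.target
```

The fstab equivalent would be an options field like x-systemd.automount,x-systemd.idle-timeout=120.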


Ordering information from fstab is discarded, so to do things like union mounts or bind mounts or such you need to use x-systemd.requires(verify), or write explicit unit files.

Debugging when mounts don't work

For me, items show up in

systemctl list-units --type=automount --all


loaded inactive dead

And specific status showed:

Loaded: loaded (/etc/fstab; bad; vendor preset: disabled)

The logs showed nothing for the automount unit.

Logs did show:

systemd[1]: Dependency failed for Remote File Systems.
systemd[1]: Job failed with result 'dependency'.

Which really only seems to mean "didn't work"

Changing to the directory shows:

Couldn't chdir to /data/mystuff: No such device

This seems to be because systemd seems to inject its own mounty layer of weirdness; mount shows

systemd-1 on /data/mystuff type autofs (rw,relatime,fd=34,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=568076)

The actual error, which only showed when I switched back to manual mounting, was that the password had expired.

Soooooo yeah with regards to getting a useful error, it looks like you're on your own here.


Target units

Used to create dependencies; mostly used to represent the state of the whole system, and of subsystems that others may depend on.

Often "a group of things you care to name".

A good number of the targets already there are checkpoints during boot (local-fs, remote-fs), or for specific subsystems (bluetooth, sound). To get an idea, see

systemctl list-units --type=target --all

You can use them whatever way you like, though - it's fairly easy to imitate a runlevel system with systemd.

Sometimes they symbolize specific actions. See e.g. the special-cased targets also mentioned below.

Note that people who write higher-level services mostly care about:

  • multi-user.target (imitation runlevel 3(-ish))

and maybe

  • graphical.target (in imitation of runlevel 5)

Special system units:

Related to power state and/or boot state:

  • rescue.target - like emergency.target, but also pulls in basic boot and filesystem mounts (so single user mode without most services). Can be used with kernel option 1, shorthand for systemd.unit=rescue.target
  • emergency.target - starts an emergency shell on the console without anything else. Can be used with kernel option emergency, shorthand for systemd.unit=emergency.target
  • initrd-root-device.target - reached when the root filesystem device is available, but before it has been mounted.
  • default.target - the target systemd tries to go to, often a symlink to graphical.target or multi-user.target (can be overridden with the systemd.unit= kernel option)
  • basic.target - basic boot-up.
  • cryptsetup.target - for encrypted block devices
  • remote-cryptsetup.target - like cryptsetup.target, but for those from _netdev entries
  • local-fs.target - systemd-fstab-generator creates mount units that this depends on
  • swap.target - like local-fs.target, but for swap partitions/files. See also #swap
  • remote-fs.target - like local-fs, for remote mountpoints
note that remote mountpoints automatically pull this in

  • kbrequest.target - systemd starts this target whenever Alt+ArrowUp is pressed on the console. Note that any user with physical access to the machine will be able to do this, without authentication, so this should be used carefully.
  • ctrl-alt-del.target - used when C+A+D is seen on console. Often a symlink to reboot.target
  • halt.target - shutdown; basically the same as poweroff.target
  • shutdown.target - apparently the 'terminate services' part. By default services are hooked into this (see DefaultDependencies=yes)
  • kexec.target - shutdown / rebooting via kexec
  • sigpwr.target - usually for UPS signals
  • suspend.target - a special target unit for suspending the system.
  • hibernate.target -
  • hybrid-sleep.target -
  • sleep.target - pulled in by suspend.target, hibernate.target, and hybrid-sleep.target, to centralize shared logic
  • multi-user.target - multiuser system, but non-graphical. Usually a step towards graphical.target
  • graphical.target - graphical login screen. Pulls in multi-user.target
  • system-update-cleanup.service
used for offline system updates. See also systemd-system-update-generator
  • runlevel0.target -> poweroff.target
  • runlevel1.target -> rescue.target
  • runlevel2.target -> multi-user.target
  • runlevel3.target -> multi-user.target
  • runlevel4.target -> multi-user.target
  • runlevel5.target -> graphical.target
  • runlevel6.target -> reboot.target

Boot ordering, otherwise passive:

  • getty.target - local TTYs

Device-related - started when a relevant device becomes available [17]


Setup for other unit types:

see #paths, #slice, and #timer


  • machines.target - for containers/VMs

Path units

systemd can monitor paths (using inotify) for path-based activation of services.


Often alongside a service

Socket units

Special system units:

  • dbus.socket
note that units with Type=dbus automatically depend on this unit.
  • syslog.socket
userspace log messages will be made available on this socket
see also


Device units

For devices (think udev, sysfs) - specifically just the ones where ordering or mounting may be relevant, so some hooks into systemd are necessary.


Swap units

Describe system swap files/devices.


Snapshot units

Can save the state of systemd units, to later be restored by activating this unit.


Slice units

Basically a model around cgroup isolation.

Special system units:

  • -.slice
  • system.slice
  • user.slice
  • machine.slice


Scope units

Special system units:

  • init.scope - (active as long as the system is running)

Enabling, disabling; start, stop

Putting a .service file in /etc/systemd/system/ (and doing a systemctl daemon-reload) makes it one that exists, that you could use.

Start or stop is the manual way of doing things:

systemctl start servicename
systemctl stop servicename

(Also restart, reload, condrestart)

This is independent of enabling and disabling, which is about whether it takes an active part in the trigger/dependency system:

systemctl enable servicename
systemctl disable servicename

Enabling adds it into the active configuration according to the [Install] section -- basically systemd figures out the dependencies (specifically which targets require which services) and updates the symlinks accordingly.

Since you're most likely using the multi-user target, that will mean a symlink is made at /etc/systemd/system/multi-user.target.wants/your.service pointing to /etc/systemd/system/your.service


This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Status of a service, including recent log:

systemctl status autossh.service -l

All units:

systemctl list-units

Failed units:

systemctl --failed

init.d wrapping

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

systemd can map /etc/init.d/* into unit files at runtime, through systemd-sysv-generator, which is run at daemon-reload time.

The precise behaviour (fetching useful details from the LSB header, and what the fallbacks are, and where that changed and isn't entirely in line with the documentation) takes some digging to find out, see e.g.


This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.
Inspecting Logs

All logs:

journalctl
And you may like:

journalctl --no-pager    
journalctl -f           # follow

Filtering examples:

Errors from this boot:

journalctl -b -p err

A time window:

journalctl --since "2 week ago"
journalctl --since 12:00 --until 12:30

Kernel messages:

journalctl -k

Logs for a service (/unit):

journalctl -u unitname
# which seems truncated so perhaps (1000 lines from this boot)
journalctl -u unitname -b -n1000 --no-pager

For the last it's possibly useful to see which units have logged stuff:

journalctl -F _SYSTEMD_UNIT

Some things you might expect to be special as they were in syslog may not be, and you may need to get at them like:

journalctl _COMM=cron

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Overall size

Total log size:

journalctl --disk-usage

Gives something like

Archived and active journals take up 704.0M in the file system.

Keep in mind it will only report what it can read, so you probably want to sudo that.

Per-unit size (to see which is hogging all your space)

not a feature.

Setting journal limits

Per-unit limits

not really
you could isolate it into its own namespace (which gets its own journald process and files), and then set a system maximum within that. But you basically can only view one namespace at a time, so while this sometimes makes sense to separate your main project, or one misbehaving app, it's not a great fix for each unit.

Clean up archived logs (not active logs(verify)). To archive:

journalctl --rotate --flush
--flush: "Flush all data from /run into /var"
--rotate: "Request immediate rotation of the journal files"

Then you can clean up archived logs, by backlog time:

journalctl --vacuum-time=2d

or total size

journalctl --vacuum-size=500M

Per-unit clean

not a feature.
one workaround to filtering out by unit is a third party script that
opens system.journal, copies out only the entries you want
...but that seems a bad idea on one it's currently writing to?

Check journald corruption

If your system hung or crashed, journal files can be corrupt. You can check this via:

journalctl --verify

It seems that it can't clean up corrupted files, so rm them manually.

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

The basic config file is

/etc/systemd/journald.conf

though note that it is overridden by drop-in files in

/etc/systemd/journald.conf.d/
Disk or ram

  • Storage=volatile
goes to /run/log/journal (created if necessary)
  • Storage=persistent
goes to /var/log/journal (created if necessary), falling back to the above in certain cases
  • Storage=auto
if the target location (/var/log/journal) exist, go for persistent, otherwise fall back to volatile

Note that journald keeps file size limited.

Disk space limit

  • SystemMaxUse (default is 10% of size, capped at 4GB)
  • RuntimeMaxUse
  • SystemKeepFree (default is 15% of size, capped at 4GB)
  • RuntimeKeepFree

...and more settings like it (some of which also effectively control rotation).
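For example, a drop-in limiting disk use might look like (path and values illustrative):

```ini
# /etc/systemd/journald.conf.d/limits.conf
[Journal]
Storage=persistent
SystemMaxUse=500M
SystemKeepFree=1G
```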

Note that log settings starting with

System apply to persistent logs (/var/log/journal),
Runtime apply to /run/log/journal
On how systemd does logging

Where from

Systemd listens to various sources, including:

  • libc syslog()
  • /dev/log
  • kernel messages (printk())
  • dmesg
  • its own services' stdout/stderr
  • the native protocol of its journal daemon.

Where to

Disk or RAM, according to journald.conf, which on most distros seems to default to Storage=auto, meaning that

  • if /var/log/journal exists as a directory, it does persistent logging there
  • otherwise it'll go to RAM (specifically via /run/log/journal)

...but may vary with distro.


It'll store the text messages you're used to, plus a bunch of extra fields. See

In particular, _TRANSPORT is the best indicator of source. When that's

  • journal - it came in via the native journal protocol
it'll probably have a UNIT / _SYSTEMD_UNIT
  • stdout it probably came from a systemd service's stdout
it'll probably have a _SYSTEMD_UNIT or SYSLOG_IDENTIFIER
  • driver - internally generated
  • syslog - syslog interface
and it'll probably have a SYSLOG_IDENTIFIER
  • kernel - it seems a bare message
and it'll probably have a _KERNEL_SUBSYSTEM, or SYSLOG_IDENTIFIER
  • audit - kernel audit

Note that many fields are there only under certain conditions.

The docs don't really mention which fields are present for which transports, but...

The more standard and user-controllable fields include:

  • MESSAGE - text as you know it.
  • MESSAGE_ID - 128-bit identifier, recommended to be UUID.
  • PRIORITY - as in syslog: integer between 0 ("emerg") and 7 ("debug"),
  • SYSLOG_FACILITY - facility number
  • SYSLOG_IDENTIFIER - tag text
  • SYSLOG_PID - client's Process ID
  • CODE_FILE, CODE_LINE, CODE_FUNC - code where the message originates, if known and relevant
  • ERRNO - errno-style error number, if any

Fields starting with an underscore cannot be altered by client code, and are added by journald itself.

  • _TRANSPORT - how the message came here - syslog, journal (sysd's protocol), stdout (service output), kernel, driver (internal), audit
  • _PID, _UID, _GID - process, user, and group ID of the source process
  • _COMM, _EXE, _CMDLINE - name, executable path, and the command line of the source process
  • _BOOT_ID - boot ID the message came from
  • _MACHINE_ID - see machine-id(5)
  • _HOSTNAME - source host. Not so relevant unless you're aggregating
  • _STREAM_ID - (for stdout records), meant to make each service instantiation's stream of output identifiable
  • _LINE_BREAK - (for stdout records), whether the message ends with a \n
  • _SYSTEMD_UNIT - unit name
  • _SOURCE_REALTIME_TIMESTAMP - earliest trusted timestamp of the message. In microseconds since the epoch UTC

There are also a few more when the source is the kernel:


Fields starting with double underscores are related to internal addressing, useful for serialization, aggregating:

  • __CURSOR
  • __REALTIME_TIMESTAMP - wall-clock time of initial reception
  • __MONOTONIC_TIMESTAMP - basically, read [19]

Notes on cooperating with other loggers:

Watching logs

e.g. "how do I get logwatch to work again?"

Options include:

  • poll journal contents via journalctl, e.g.
journalctl --since "1 day ago"
this can make sense for anything that reports per day-or-such, rather than live.

  • follow (stream) it with
journalctl -f
If you want it in a parseable form,
journalctl -f -o json

  • interface like python-systemd.
(That particular one didn't understand rotations when I used it, so was not fit for streaming)

Some tools

Time taken for boot:

systemd-analyze time
...per service:

systemd-analyze blame

...with some summary of which dependencies are holding others up

systemd-analyze critical-chain

Warnings and errors

Failed to execute operation: Too many levels of symbolic links

Apparently systemd used to refuse unit/service files that are symbolic links.

Update systemd.

If you can't, consider hardlinks. Or just copy files in.

service is not loaded properly: Exec format error

Usually means the file being executed is not actually runnable on this system - classically a script missing its #! (shebang) line, or a binary for the wrong architecture.
systemd units like systemd-tmpfiles fail, logs show "Unsafe symlinks encountered"

Usually means you messed up the ownership/permissions on some system directories.

Given the services most likely to trip over this, look at