In this post, we are going to compare SystemD timers with in-code loops for running periodic tasks, from a computation and energy consumption perspective.
We’ll focus specifically on Linux, since features such as timerfd are only available there, although BSD and OSX offer similar mechanisms, such as kqueue.
The inspiration for this post came from this StackOverflow thread.
SystemD makes use of the Linux Kernel’s timerfd timers, a POSIX timer alternative that is more event-loop friendly, since it notifies time expiration via file descriptors (hence the fd suffix), which allows for the use of things such as epoll.
Being event-loop based, it also facilitates multi-threading and is very resource friendly.
Also, POSIX timers are considered by some to have a terrible API.
If you add `MemoryAccounting=yes` to your service’s `[Service]` block (or have accounting enabled by default), and also have CGroup accounting enabled (usually the default), you are able to know exactly how many resources your service unit consumed, without having to resort to custom code or external tools.
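For reference, turning accounting on explicitly looks something like this (the unit and binary names here are made up for illustration):

```ini
# my-task.service -- hypothetical example unit
[Service]
ExecStart=/usr/local/bin/my-task
# Expose per-unit memory and CPU usage via systemctl status and cgroup stats
MemoryAccounting=yes
CPUAccounting=yes
```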
By running `systemctl status` on such services, you get an output similar to the one below.
```
● docker.service - Docker Application Container Engine
     Loaded: loaded (/nix/store/j0y2wmaywsvf8hs7y4pqd4jhll0ncsa8-docker-19.03.12/etc/systemd/system/docker.service; enabled; vendor preset: enabled)
    Drop-In: /nix/store/ig74rh79479nq89dd20fjhsn82kf0xdh-system-units/docker.service.d
             └─overrides.conf
     Active: active (running) since Tue 2021-03-16 08:34:15 -03; 56min ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 2830 (dockerd)
         IP: 0B in, 0B out
      Tasks: 29 (limit: 4915)
     Memory: 163.3M   <<<<< Memory Accounting
        CPU: 9.710s   <<<<< CPU Accounting
     CGroup: /system.slice/docker.service
             ├─2830 /nix/store/j0y2wmaywsvf8hs7y4pqd4jhll0ncsa8-docker-19.03.12/libexec/docker/dockerd --group=docker --host=fd:// --log-driver=journald --live-restore
             └─2844 containerd --config /var/run/docker/containerd/containerd.toml --log-level info
```
With SystemD, by running `systemctl list-timers` you are able to conveniently check when the timer was last run, when it’s scheduled to run next and how much time is left before that happens. You are also able to see when the timer last succeeded.
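For reference, a timer is just a unit pair: a `.timer` unit that schedules a matching `.service`. A minimal sketch (the unit name is made up; it would activate a matching `backup.service`):

```ini
# backup.timer -- hypothetical example
[Timer]
OnCalendar=hourly
# Catch up on runs missed while the machine was off
Persistent=true

[Install]
WantedBy=timers.target
```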
When you run a timer, any change you make to your program or script will, of course, be picked up the next time it runs.
If your service were to loop in its own code, you would need to restart it in order to apply the changes.
A trivial matter, but worth considering.
A timer needs to pay the cost of starting up the application every time it runs.
The impact of this obviously depends on your specific use case.
In a particular case of mine where I use `nix-shell`, this cost is not irrelevant, although I mitigated it by using cached-nix-shell.
Each time the unit starts or stops, a new log entry is created on the journal.
For very frequent tasks, this may fill your journal with useless information.
Apparently this can be mitigated by setting `LogLevelMax=alert` in the service definition, but I haven’t tested this.
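For reference, that directive (which, again, I haven’t tested) would go in the `[Service]` section:

```ini
[Service]
# Drop this unit's journal messages that are less severe than "alert"
LogLevelMax=alert
```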
Not exactly a con, since it can easily be configured around, but by default SystemD timers have an accuracy of 1 minute. This means that, much like Cron, your minimum interval by default is 1 minute.
This can be fixed by setting
AccuracySec to something lower.
For scripts that I want to run every 2 seconds or so, I set `AccuracySec=1s`, but you can go as low as `AccuracySec=1us`, although that’s likely overkill and will generate many more wake-ups than you actually need, consuming more battery.
Set it to a sane amount based on your exact need.
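Put together, a hypothetical every-2-seconds timer might look like this (the unit name is made up):

```ini
# poll.timer -- hypothetical example
[Timer]
# First activation 2 s after the timer starts
OnActiveSec=2s
# Then fire 2 s after each run of the activated service
OnUnitActiveSec=2s
# Tighten the default 1-minute coalescing window
AccuracySec=1s
```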
This part is extremely difficult to summarize, since each and every programming language has its very own way of handling timers.
On Python, for example, you can use linuxfd to interface with the same timerfd SystemD uses, achieving a very similar result. By default, if I’m not mistaken, calling functions such as
sleep makes use of the default POSIX Timer interface.
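To make that concrete, here is a minimal Linux-only sketch that drives timerfd directly through ctypes instead of linuxfd; the constants are copied from the kernel headers, and a real event loop would register the descriptor with epoll rather than select:

```python
import ctypes
import os
import select
import struct

# Pull libc symbols from the running process (glibc/musl both work)
libc = ctypes.CDLL(None, use_errno=True)

CLOCK_MONOTONIC = 1      # from <time.h>
TFD_CLOEXEC = 0o2000000  # from <sys/timerfd.h>

class timespec(ctypes.Structure):
    _fields_ = [("tv_sec", ctypes.c_long), ("tv_nsec", ctypes.c_long)]

class itimerspec(ctypes.Structure):
    _fields_ = [("it_interval", timespec), ("it_value", timespec)]

# A timer that first fires after 100 ms, then every 100 ms
fd = libc.timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC)
spec = itimerspec(it_interval=timespec(0, 100_000_000),
                  it_value=timespec(0, 100_000_000))
libc.timerfd_settime(fd, 0, ctypes.byref(spec), None)

# The timer is just a file descriptor, so it plugs into select/epoll
readable, _, _ = select.select([fd], [], [], 1.0)
# Reading yields an 8-byte count of expirations since the last read
expirations = struct.unpack("Q", os.read(fd, 8))[0]
os.close(fd)
```

This descriptor-based design is exactly what lets SystemD multiplex many timers on a single event loop.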
On C, you obviously have easy access to any syscall you need, allowing you to use the implementation that suits you best.
For JVM-based languages, such as Java, Clojure, Kotlin (the list goes on…), it mainly depends on the specific JVM implementation, but it most likely ends up mapping to the default POSIX Timer on Unix systems.
In Shell, since each and every instruction is a command, you basically have startup costs for every line anyway, so it doesn’t matter.
As a clear pro, you are able to have much more granular control of when and how your code loops.
Resource-consumption-wise, in-code loops are usually lighter, since you avoid the startup costs.
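As a sketch of that granular control (the helper name is made up), an in-code loop can anchor each iteration on a monotonic deadline, so time spent in the task itself doesn’t drift the schedule the way a plain trailing `sleep(interval)` would:

```python
import time

def run_every(interval: float, task, iterations: int) -> list:
    """Call `task` every `interval` seconds, correcting for drift."""
    deadline = time.monotonic()
    results = []
    for _ in range(iterations):
        results.append(task())
        deadline += interval                   # anchor on the schedule, not on "now"
        delay = deadline - time.monotonic()
        if delay > 0:                          # skip sleeping if the task overran
            time.sleep(delay)
    return results

# Record three monotonic timestamps roughly 50 ms apart
ticks = run_every(0.05, time.monotonic, 3)
```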
But in conclusion, use whatever suits you best.
If you just need to run a simple script every few seconds or so and don’t want to worry about handling errors etc., just put it under a SystemD timer, configure `Restart` accordingly, and forget about it.
If you otherwise need more complex scenarios, use in-code loops as you normally would, and consider implementing
systemd-notify for tighter integration.
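The protocol behind `systemd-notify` is tiny: plain-text datagrams such as `READY=1` or `WATCHDOG=1` sent to the Unix socket named by the `NOTIFY_SOCKET` environment variable (see sd_notify(3)). A hand-rolled sketch, with the helper name made up:

```python
import os
import socket

def sd_notify(state: str) -> bool:
    """Send a notification datagram to the systemd notify socket."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False  # not running under systemd; silently do nothing
    if addr.startswith("@"):
        addr = "\0" + addr[1:]  # Linux abstract-namespace socket
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.connect(addr)
        sock.send(state.encode())
    return True

# Typical usage: sd_notify("READY=1") once initialized, then
# sd_notify("WATCHDOG=1") periodically if WatchdogSec= is set.
```

For SystemD to pay attention to these messages, the service unit needs `Type=notify`.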