
bemyak

Useful Linux daemons for a better PC experience

Hi, dev.to!

Linux is a great OS, but its default behavior is usually optimized for servers. Since more and more developers use it as their main operating system, I thought it would be nice to share several useful daemons that make the experience a little smoother.

I intentionally didn't include any installation instructions, to stay distro-agnostic. If you are interested in one of them and want it on your system, please go through the installation and configuration instructions carefully yourself. I don't want to break your system :)

Irqbalance

Irqbalance / irqbalance on GitHub

The irqbalance source tree - The new official site for irqbalance

What is Irqbalance

Irqbalance is a daemon to help balance the CPU load generated by interrupts across all of a system's CPUs. Irqbalance identifies the highest-volume interrupt sources and isolates each of them to a single unique CPU, so that load is spread as much as possible over the entire processor set, while minimizing cache miss rates for IRQ handlers.

Building and Installing

./autogen.sh
./configure [options]
make
make install

Developing Irqbalance

Irqbalance is currently hosted on GitHub, so developers are welcome to use the issue/pull request infrastructure found there.

Bug reporting

When something goes wrong, feel free to send us a bug report via one of the channels described above. Your report should include:

  • Irqbalance version you've been using (or commit hash)
  • /proc/interrupts output
  • irqbalance --debug output
  • content of smp_affinity files - can be obtained by e.g. $ for i in $(seq 0 300); do grep . /proc/irq/$i/smp_affinity /dev/null 2>/dev/null; done

What does the description above mean in practice? Suppose we are running IntelliJ IDEA and it decides it is a good time to update its indices - right in the middle of a compilation, while we are watching a nice film or a PeerTube video.

The irqbalance daemon will spread interrupt handling across cores instead of letting it pile up on the ones already busy with that "heavy" work. This means that despite the CPU being overloaded the system won't "hang": the cursor will remain responsive and the video will keep playing.

Nice thing to have, right?
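You can see how interrupts are currently distributed yourself. A quick sketch (the IRQ numbers and device names will of course differ on your machine):

```shell
# Per-IRQ counters: how many interrupts each CPU has handled so far
head -n 5 /proc/interrupts

# Which CPUs each IRQ is allowed to be delivered to
# (smp_affinity_list is a human-readable CPU list, e.g. "0-3")
for irq in /proc/irq/[0-9]*; do
    printf '%s: %s\n' "${irq##*/}" "$(cat "$irq"/smp_affinity_list 2>/dev/null)"
done | head
```

Running this before and after starting irqbalance is an easy way to watch it rebalance the affinity masks.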

haveged

jirka-h / haveged on GitHub

Entropy daemon

Haveged, an entropy source

IMPORTANT UPDATE

Starting from Linux kernel v5.4, the HAVEGED inspired algorithm has been included in the Linux kernel (see the LKML article and the Linux Kernel commit). Additionally, since v5.6, as soon as the CRNG (the Linux cryptographic-strength random number generator) gets ready, /dev/random does not block on reads anymore (see this commit).

I'm happy that these changes made it into the mainline kernel. It's pleasing to see that the main idea behind HAVEGED has stood the test of time - it was already published in 2003 here. I'm also glad that the HAVEGE algorithm is being further explored and examined - see the CPU Jitter Random Number Generator.

Please note that while the mainline Linux Kernel and HAVEGED are using the same concept to generate the entropy (utilizing the CPU jitter) the implementation is completely different. In this sense, HAVEGED can be viewed as another…


In Linux we have two random number generators: /dev/random and /dev/urandom. The second one is really fast because it never blocks, but it was traditionally considered weaker for security-related things. The first one was considered more reliable, but could be very slow at times.

The reason is that the kernel must collect enough entropy from external events (interrupt timings, keyboard and mouse input, disk activity and so on) to give you a truly random number. Even long after boot the system may run out of entropy, and processes will have to wait until there is enough of it again.

I faced this several times, for example when I tried to connect via ssh right after the system booted. The process just "hung".

To avoid this kind of issue you can install the haveged daemon. It uses an additional algorithm (based on CPU timing jitter) to fill the entropy pool faster and ensures that there is always enough of it.
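You can check how starved the kernel is at any moment. The procfs path below is standard; on kernels ≥ 5.6 the pool is effectively always reported as full:

```shell
# Current size of the kernel entropy pool, in bits.
# Low values (tens or hundreds) on older kernels meant
# /dev/random readers could block.
cat /proc/sys/kernel/random/entropy_avail
```

Watching this number while moving the mouse on an old kernel is a nice demonstration of where the entropy actually comes from.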

fstrim

Due to the way SSDs work, data cannot be overwritten in place: a flash block must be erased before it can be written again, and each block survives only a limited number of erase cycles. The drive's controller spreads the wear across blocks itself, but it can only do that well if it knows which blocks the filesystem no longer uses.

That is what the TRIM operation does: it tells the controller which blocks are free, so they can be erased in advance and reused evenly. Running it periodically keeps write performance up and can seriously prolong the life of your drive. More details and instructions can be found here
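In practice you rarely need a custom daemon for this: util-linux ships an fstrim binary, and most systemd-based distros ship a weekly timer for it. A sketch (unit names assume a systemd distro):

```shell
# Trim all mounted filesystems that support it, once, verbosely
# (requires root)
sudo fstrim --all --verbose

# Or just enable the weekly timer that ships with util-linux
sudo systemctl enable --now fstrim.timer
systemctl list-timers fstrim.timer
```

The timer approach is usually preferred over continuous TRIM (the `discard` mount option), since batching the work once a week is cheaper.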

earlyoom

rfjakob / earlyoom on GitHub

earlyoom - Early OOM Daemon for Linux

earlyoom - The Early OOM Daemon


The oom-killer generally has a bad reputation among Linux users. This may be part of the reason Linux invokes it only when it has absolutely no other choice. It will swap out the desktop environment, drop the whole page cache and empty every buffer before it will ultimately kill a process. At least that's what I think it would do; I have yet to be patient enough to wait for it, sitting in front of an unresponsive system.

This made me and other people wonder if the oom-killer could be configured to step in earlier: reddit r/linux, superuser.com, unix.stackexchange.com.

As it turns out, no, it can't - at least not the in-kernel oom-killer. In user space, however, we can do whatever we want.

earlyoom wants to be simple and solid. It is written in pure C with no dependencies. An…


Remember when we decided to launch IntelliJ IDEA while watching a video? Let's imagine we have only 8 GB of RAM - an amount that is totally insufficient for this (ARGH!).

IDEA's JVM will first eat its initial heap (-Xms), then grow to the maximum (-Xmx); we raise -Xmx to 6 GB, but there is also the PermGen/Metaspace area eating space, plus the movie, and you probably want a browser running too. The result is simple: we are hopelessly running out of memory.

For cases like this, Linux has a special mechanism called the OOM-killer. When things go bad, as in the example above, Linux finds the "greediest" process and kills it, so that the other processes stay safe and alive. If it didn't, the computer would keep servicing a "heavy" request it is unable to satisfy, with no resources left for anything else: the system would just hang.

So, the OOM-killer is your friend. The problem is that it usually comes too late. Linux will first try to move your memory pages out to the swap partition on disk, and from that moment your desktop environment will freeze and the whole system will become unresponsive. Much later, when Linux is sure there is only one way left, it will call the killer. Unfortunately this behavior is not configurable (though you can invoke the killer manually; see the SysRq section below).
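The kernel picks its victim by a per-process "badness" score, which you can inspect and bias yourself through procfs. A small sketch (the paths are standard; the adjustment range is -1000..1000, and raising your own score needs no root):

```shell
# Score the kernel currently assigns to this shell (higher = killed first)
cat /proc/self/oom_score

# The bias knob: -1000 makes a process unkillable, +1000 the first victim
cat /proc/self/oom_score_adj

# Volunteer the current shell as a preferred victim
echo 500 > /proc/self/oom_score_adj
cat /proc/self/oom_score_adj
```

Desktop environments and systemd use exactly this knob to shield critical processes from the killer.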

earlyoom can help in this case. From the docs:

earlyoom checks the amount of available memory and free swap up to 10 times a second (less often if there is a lot of free memory). By default if both are below 10%, it will kill the largest process.
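The "available memory" earlyoom looks at is the same figure that free reports. A quick sketch of the check it performs (the 10% figure matches earlyoom's default threshold; the awk columns assume modern procps output where column 7 of the Mem: row is "available"):

```shell
# Percentage of memory still available - roughly what earlyoom monitors
# up to 10 times a second before deciding whether to kill anything
free -m | awk '/^Mem:/ { printf "%d%% available\n", $7 * 100 / $2 }'
```

If that number regularly dips below 10% on your machine, earlyoom is likely to be doing real work for you.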

SysRq

Well, technically it is not a daemon at all, but it still fits the list of "preventing your system from becoming unresponsive" tricks.

Missing Ctrl+Alt+Del from Windows? It is a life-saver when things go bad and you want to recover the system somehow. In Linux we've got a better solution, but you have to enable it first.

SysRq key

Remember that strange SysRq key on your keyboard? It is magical! No, really - shortcuts that involve it are called Magic SysRq keys :). You have two ways to enable it:

  • Add sysrq_always_enabled=1 to kernel boot parameters
  • Add kernel.sysrq=1 to sysctl configuration (usually /etc/sysctl.conf or /etc/sysctl.d/99-sysctl.conf)
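You can also flip the switch for the current boot without editing any files. The sysctl knob is standard: 1 enables all functions, 0 disables them, and other values form a bitmask of allowed functions:

```shell
# Check the current setting (1 = everything allowed, 0 = disabled)
cat /proc/sys/kernel/sysrq

# Enable all SysRq functions until the next reboot (requires root)
echo 1 | sudo tee /proc/sys/kernel/sysrq
# equivalently:
sudo sysctl kernel.sysrq=1
```

Only the sysctl/boot-parameter routes from the list above survive a reboot.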

After that, the whole list of commands from the link in the section header becomes available. The most useful ones:

  • SysRq + F: invokes the OOM-killer
  • SysRq + R,E,I,S,U,B: safely syncs data to disk and performs a reboot.

These key combinations are handled directly by the kernel and will help you to recover (or safely reboot) if nothing else helps.
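If the key combos are awkward (or you are logged in over ssh), the same commands can be triggered through procfs. The commands are left commented out here on purpose - they act immediately, and `b` reboots without syncing:

```shell
# Invoke the OOM-killer right now, same as SysRq+F (requires root):
# echo f | sudo tee /proc/sysrq-trigger

# The "safe reboot" sequence R,E,I,S,U,B, one letter at a time:
# for key in r e i s u b; do
#     echo "$key" | sudo tee /proc/sysrq-trigger
#     sleep 1
# done
```

(The keyboard R step only matters on a local console; over ssh the E,I,S,U,B letters are the ones doing the work.)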

fail2ban

fail2ban / fail2ban on GitHub

Daemon to ban hosts that cause multiple authentication errors

                     __      _ _ ___ _               
                    / _|__ _(_) |_  ) |__  __ _ _ _  
                   |  _/ _` | | |/ /| '_ \/ _` | ' \ 
                   |_| \__,_|_|_/___|_.__/\__,_|_||_|
                   v1.1.0.dev1            20??/??/??

Fail2Ban: ban hosts that cause multiple authentication errors

Fail2Ban scans log files like /var/log/auth.log and bans IP addresses conducting too many failed login attempts. It does this by updating system firewall rules to reject new connections from those IP addresses, for a configurable amount of time. Fail2Ban comes out-of-the-box ready to read many standard log files, such as those for sshd and Apache, and is easily configured to read any log file of your choosing, for any error you wish.

Though Fail2Ban is able to reduce the rate of incorrect authentication attempts, it cannot eliminate the risk presented by weak authentication. Set up services to use only two-factor or public/private key authentication mechanisms if you really want to…


Once, at work, I was developing a web application and had the web server running in developer mode, logging every request. I forgot to turn it off before heading home, and when I came back to the office in the morning I was surprised! The log file was filled with tons of strange requests like:
404 GET /phpmyadmin/index.php
404 GET /ldap-account-manager/index.php
404 GET /nextcloud/index.php

Also, the ssh log was filled with invalid authentication attempts. I notified our security expert, and he confessed that it was he who had scanned the network looking for weak points :)

Anyway, I realized the situation was dangerous: you can be sitting in a Moonbucks drinking coffee while someone bruteforces your ssh password. To prevent this kind of attack, meet fail2ban.

This daemon monitors the logs of various applications (Apache, ssh and many more) for invalid authentication attempts. If their count from one specific IP crosses a threshold, that IP is blocked with an iptables rule for some time. Stay safe :)
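To give a feel for the configuration: local overrides go into /etc/fail2ban/jail.local, and enabling the ssh jail looks roughly like this sketch (the numbers below are illustrative, not recommendations):

```ini
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5      ; failed attempts before a ban
findtime = 10m    ; window in which the attempts are counted
bantime  = 1h     ; how long the IP stays blocked
```

After restarting the daemon, `fail2ban-client status sshd` shows the jail's counters and the currently banned addresses.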

Conclusion

Hope you enjoyed this (my first) post. If you find any typos or mistakes, please PM me. If you have suggestions or something to add, leave a comment. Have a nice day!
