Linux is a great OS, but its default behavior is usually optimized for servers. Since more and more developers use it as their main operating system, I thought that it would be nice to share several useful daemons that will make this experience a bit smoother.
I intentionally didn’t include any installation instructions to stay distro-agnostic. If you are interested in one of them and want to have it on your system, please go through the installation and configuration instructions carefully yourself. I don’t want to break your system :)
Irqbalance is a daemon that helps balance the CPU load generated by interrupts across all of a system's CPUs. It identifies the highest-volume interrupt sources and isolates each of them to a single CPU, so that the load is spread as evenly as possible over the entire processor set, while minimizing cache-miss rates for IRQ handlers.
What does the description above mean in practice? Suppose we are running IntelliJ IDEA, and it decides that it is a good time to update its indices - right in the middle of a compilation, while we are watching a video in the background. All of that activity generates a storm of disk and other interrupts, and irqbalance spreads the work of handling them across cores instead of letting it pile up on the one playing our video. This means that even though the CPU is overloaded, the system won't feel like it is hanging: the cursor stays responsive, and the video keeps playing.
Nice thing to have, right?
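You can watch how interrupts are actually distributed across cores through the standard /proc/interrupts interface (the device names in the second command, like eth0 and nvme, are just examples and will differ on your machine):

```shell
# Each row is an interrupt source; the per-CPU columns show how many
# times each core has handled it. With irqbalance running, busy
# sources (disk, network) should not all pile up on CPU0.
cat /proc/interrupts

# Watch it live to see where new interrupts land
# (eth0/nvme are example device names - adjust to your hardware):
watch -n1 'grep -E "CPU0|eth0|nvme" /proc/interrupts'
```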
UPDATE: Starting from Linux kernel v5.4, a HAVEGED-inspired algorithm has been included in the kernel itself, so you can skip this one. Leaving it here for historical reasons.
In Linux, we have two random generators: /dev/urandom and /dev/random. The first one is really fast, but it is predictable: one should not use it for any security-related things. The second one is more reliable, but it can be very slow sometimes.
The reason for that is that the kernel must collect enough entropy from external sources (like CPU temperature, fan RPM, and so on) to give you a truly random number. Even long after boot, the system may run out of entropy, and processes will have to wait until there is enough of it.
I faced this several times, for example when I tried to connect via ssh right after the system booted: the process just hung.
To avoid this kind of issue, you can install the haveged daemon. It uses an additional algorithm (HAVEGE) to fill the entropy pool faster and ensures that there is always enough of it.
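You can check the kernel's current entropy estimate at any time through this standard proc file:

```shell
# The kernel exposes its entropy estimate here (in bits).
# Before kernel 5.18 this could drop near zero under load;
# with haveged running it should stay comfortably high.
cat /proc/sys/kernel/random/entropy_avail
```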
Due to the architecture of SSDs, their memory cells wear out as they are written, so the drive's controller tries to spread writes evenly across all cells (wear leveling). It can only do this efficiently if it knows which blocks the filesystem no longer uses, so we have to tell the drive about freed blocks periodically.
This operation is called TRIM, and it can seriously prolong the life of your drive. More details and instructions can be found here
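On systemd-based distros, periodic TRIM usually ships out of the box as the fstrim.timer unit from util-linux; a sketch of running it by hand and enabling the timer (unit name and flags as documented for those tools):

```shell
# Run TRIM once, by hand, on the root filesystem
# (-v prints how much space was trimmed):
sudo fstrim -v /

# Enable the weekly timer shipped with util-linux/systemd:
sudo systemctl enable --now fstrim.timer
systemctl status fstrim.timer
```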
UPDATE: systemd-oomd is now available, which looks like a better alternative to me. It ships together with systemd and integrates better with the system.
Remember when we decided to launch IntelliJ IDEA while watching a video? Let's imagine that we have only 8 GB of RAM - an amount that is insufficient for this (ARGH!).
IDEA's JVM will first eat its initial heap (-Xms), then grow to the maximum (-Xmx); we raise -Xmx up to 6 GB, but there is also PermGen storage eating space, and a movie, and you might also want to run a browser. The result is simple - we are running out of memory.
In cases like this, Linux has a special mechanism called the OOM-killer. When things go bad, like in the example above, Linux finds the most "greedy" process and kills it, so that the other processes stay safe and alive. If this weren't done, the computer would keep serving "heavy" requests that it is unable to satisfy, leaving no resources for anything else: the system would just hang.
So, the OOM-killer is your friend. The problem is that it usually comes too late. Linux will first try to move your memory pages to the swap partition on a disk drive, and from that moment your desktop environment will freeze and the whole system will become unresponsive. Only much later, when Linux is sure there is no other way out, will it call the killer. Unfortunately, the timing of this behavior is not configurable (but you can call the killer manually - refer to the SysRq section below).
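While you cannot change when the killer strikes, you can influence which process it picks, via the standard /proc/&lt;pid&gt;/oom_score_adj interface (values range from -1000, never kill, up to 1000, kill first):

```shell
# Make the current shell a preferred OOM victim.
# Raising the score needs no privileges; lowering it below 0 requires root.
echo 500 > /proc/$$/oom_score_adj
cat /proc/$$/oom_score_adj
```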
earlyoom can help you in this case. From docs:
earlyoom checks the amount of available memory and free swap up to 10 times a second (less often if there is a lot of free memory). By default, if both are below 10%, it will kill the largest process.
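The thresholds are configurable via command-line flags documented in earlyoom's README; the process patterns below are only illustrative examples, not recommendations:

```shell
# Kill when available memory AND free swap both drop below 5%;
# prefer killing Chromium processes, never touch the X server.
earlyoom -m 5 -s 5 --prefer '(^|/)chromium$' --avoid '(^|/)Xorg$'
```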
Well, technically it is not a daemon at all, but it still fits this list of tricks for preventing your system from becoming unresponsive.
Remember Ctrl+Alt+Del from Windows? It is a life saver when things go bad and we want to somehow recover the system. In Linux, we've got an even better solution, but you have to enable it first.
Have you ever noticed the SysRq key on your keyboard? It is magical! Shortcuts that include it are called Magic SysRq keys :). You have two ways to enable it:
add sysrq_always_enabled=1 to the kernel boot parameters, or
set kernel.sysrq=1 via sysctl.
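A sketch of the sysctl route, made persistent across reboots (the file name under /etc/sysctl.d is my choice; the kernel.sysrq key itself is standard):

```shell
# Enable all SysRq functions immediately...
sudo sysctl kernel.sysrq=1
# ...and persist the setting across reboots:
echo 'kernel.sysrq = 1' | sudo tee /etc/sysctl.d/90-sysrq.conf
```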
After that the whole list of commands from the section header link becomes available. The most useful ones:
F: calls the OOM-killer.
S: syncs dirty caches to disk.
B: immediately reboots the system. Note that B alone does not sync anything - press S first, or use the classic safe-reboot sequence R-E-I-S-U-B while holding Alt+SysRq.
These key combinations are handled directly by the kernel and will help you to recover (or safely reboot) if nothing else helps.
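When the key combos are awkward to type (for example over ssh), the same commands can be sent through the kernel's /proc/sysrq-trigger interface - but be careful, they take effect immediately:

```shell
# As root: invoke the OOM-killer right now (the 'f' command):
echo f > /proc/sysrq-trigger

# 'b' reboots instantly WITHOUT syncing - send 's' (sync) first:
# echo s > /proc/sysrq-trigger
# echo b > /proc/sysrq-trigger
```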
Once, at work, I was developing a web application and had a web server running in developer mode, logging all requests. I forgot to turn it off before heading home, and when I came back to the office in the morning - I was surprised! The log file was filled with tons of strange requests like:
404 GET /phpmyadmin/index.php
404 GET /ldap-account-manager/index.php
404 GET /nextcloud/index.php
Also, the ssh log was filled with invalid authentication attempts. I notified our security expert, and he confessed that it was he who had scanned the network to find weak points :)
Anyway, I thought that this situation is dangerous - you can sit in Moonbucks drinking coffee while, at the same time, someone brute-forces your ssh password. To prevent this kind of attack, meet fail2ban.
This daemon monitors the logs of various applications (apache, ssh, and many more) for invalid auth attempts. If their count from one specific IP crosses a threshold, that IP is blocked with an iptables rule for some time. Stay safe :)
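A minimal sketch of a jail configuration: fail2ban reads local overrides from /etc/fail2ban/jail.local, and an sshd jail ships with it out of the box; the retry and ban values below are just example numbers:

```shell
# Protect sshd: ban an IP for 1 hour after 5 failed logins within 10 minutes.
sudo tee /etc/fail2ban/jail.local <<'EOF'
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
EOF
sudo systemctl restart fail2ban
sudo fail2ban-client status sshd
```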