How to convert WebP images to JPG and PNG using ffmpeg

I came across a WebP image the other day and wanted to convert it to a regular JPG. It turns out that’s rather easy on a Linux-based operating system; all you need is ffmpeg.

Here’s an example:

$ ffmpeg -i input_file.webp output_file.jpg

It’s also possible to convert it to a PNG.
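ffmpeg picks the output format from the output file’s extension, so the same invocation works for PNG. Here’s a small sketch of how I’d batch-convert a whole directory of WebP images (the filenames are hypothetical):

```shell
# Convert every WebP image in the current directory to PNG.
# "${f%.webp}" strips the .webp suffix before .png is appended.
for f in *.webp; do
    ffmpeg -i "$f" "${f%.webp}.png"
done
```

Swap `.png` for `.jpg` in the output name and you get JPGs instead.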

My proposals for a healthy Christmas

Christmas is approaching fast, just like it does every year. I myself am as excited as always about this holiday! It’s by far my favourite holiday. Well, it’s actually the only holiday I actively celebrate, in the traditional Swedish non-Christian way, like most people here in Sweden.

For me, Christmas is all about spending time with friends and family, eating too much food, snacks and candy, drinking (non-alcoholic) glögg and julmust, doing crafts, baking and, of course, watching a bunch of Christmas-themed movies. I know, I’m a sucker for Christmas.

Christmas can be a stressful time for some

For a lot of people, Christmas is unfortunately anything but how I just described it. For some, it’s a period filled with stress and anxiety; perhaps it has even become some sort of mandatory service rather than a cheerful, voluntary holiday. They’re stressed by all the things they “have to” do, buy and prepare, all the places they have to visit and be at, and all the friends and family they have to spend both time and money on.

And let’s not forget all the unwritten rules and scheduled traditions some slavishly follow without even questioning them. That’s not really healthy. Christmas should be a cheerful holiday, or nothing at all if you choose not to celebrate it. Because that’s just it: Christmas should be what you want it to be, so why not make it something that you actually enjoy and look forward to each year?

We’re overconsuming

A large issue with Christmas is the fact that we’re overconsuming food and presents. I’m from the relatively small country of Sweden, with about 10 million people, yet last year (according to HUI) we spent 7.8 billion euro on Christmas shopping, an average of 769.8 euro per person. It’s estimated that we’re going to break those numbers this year, just like we seem to do every year.

All this is rather ironic considering that (according to Svensk Handel) 41% of Swedes consider the environment and sustainability to be the most important aspect. If that’s true, then why are we overconsuming our planet to a rapid death?

Svensk Handel also mentioned that we rated Christmas presents the seventh most important thing about Christmas, after friends and family, food, Christmas decorations, snow, the Christmas tree and Christmas music. They also said that 79% of Swedes like giving away presents, while only 56% of us actually care about receiving gifts. Why on earth are we giving presents to people who don’t even want them, and why are these people not saying anything?

I’m not completely surprised though; this is fairly typical of us Swedes. We’re so afraid of conflict and of offending someone that we’d rather waste money on each other than talk to each other about it. Wouldn’t it be easier to tell our family and friends that we don’t want any presents (or at least not as many) next Christmas? To me, that sounds like a much better option than sitting there pretending to be happy about presents you didn’t really want in the first place.

And think about it: we’re all adults, so if we really need something, we’ll just buy it! We don’t need to wait up to a year for the next Christmas for someone to maybe give us that specific item we need.

We’re working hard, but for what reason?

Most of us work long, hard days. We’re tired after work, too tired to care about what makes life worth living, and we neglect time with our friends, our family and ourselves. And for what? If it’s only so that we can afford all those luxurious and excessive presents and food, is it really worth it?

We’re wasting our planet’s highly limited resources on producing all the food that we “must have” for Christmas, only to eat a fraction of it and throw the rest in the garbage, then spend what money remains on presents that no one really needed (or asked for) in the first place. It might sound like a harsh thing to say, but it’s the truth for a lot of people.

My proposal for a healthy Christmas

With all that said, I’m not implying that we should skip Christmas or any part of it; quite the opposite. I want us all to focus on what’s truly important and what makes Christmas so great. Instead of stressing through a hundred things, I want everyone to focus on the very core of Christmas: the family and friends we want to spend time with, not the ones we’re expected to spend time with.

I would like to see all of us just drop all the “rules” and “musts”. If you don’t like something, don’t bother doing it. If you don’t want to spend five weeks preparing food, then don’t. End of story.

I would also like to see less of that overconsumption from everyone. And no, I promise, it’s not the end of the world if we can’t buy “all the things”; there’s still a lot for us to do, and even to buy.

Make your own presents

I’m not joking. Making your own presents from things you already own, can source in nature or can find secondhand is an affordable and highly personal way of showing someone that you care about them.

And if you’re not a handy or crafty person, there are always other options, like a framed personal photograph, or a book or calendar with multiple photos. It’s not as environmentally friendly and cheap as making something from things that already exist, but at least it’s not expensive, and it should be something that lasts a long time.

Buy secondhand presents

We should really not underestimate secondhand shopping. It’s a very good option for both your wallet and our planet. Just be aware that it might not be the best choice for last-minute shopping, as it can take a bit of luck and time to find the exact things you’re looking for.

I like browsing around in secondhand stores, and something I have learned is that people are basically crazy: they give away and sell all kinds of things, in perfectly good condition, all the time, just because they’re not in fashion anymore or because they’re bored with “the old” and want shiny new things.

Buying secondhand is also good for our planet. If we buy more used items, use what we have, and fix what we already own when it breaks or needs maintenance, instead of throwing it away as soon as possible, we don’t have to produce as much new stuff as we do today. Our excessive carbon dioxide emissions, caused by our irresponsible overconsumption, are snowballing into a fast and brutal death for our planet. Why waste our frail emissions budget on things we don’t really need or care that much about to begin with?

Even if you don’t care about our planet, consider this

Even if you don’t care about our planet, consider the fact that spending less money can (and most likely will) improve both your health and your quality of life. The less money you spend, the more you can save; and the fewer expenses you have, the less of your life you have to spend working.

With more time on your hands, you could spend more of it on things that you actually care about, things that bring real value and happiness to your life. Hopefully in an environmentally friendly manner.

Give away experiences and consumables

It doesn’t have to be an expensive trip somewhere; it can be something as simple as an afternoon tea party at a cozy tavern or café. It often requires little to no effort and money. Make your own card, or buy one at a secondhand store like I do. I actually buy and stock them secondhand for about 0.1 euro each; I really don’t see why you would need to spend 2-3 euro on new cards.

Another option is consumables. If you’re not a tea party person, you can give away anything from an environmentally friendly soap or some fancy coffee beans to perhaps a simple chocolate bar.

A less common present, but an equally appreciated one, is a donation to a non-profit organisation or charity. If you want to give away something truly caring and unselfish, consider making a donation to something that does good for this world. If you’re not sure where to donate your money, there’s always GiveWell, an American non-profit charity evaluator that searches for the charities that save or improve lives the most per dollar.

Consider no presents at all

No presents at all is actually a valid option as well, as boring as it might sound. Just don’t forget that spending time with your friends and family should be everyone’s number one priority anyway. I really think that skipping presents is something everyone should understand and be okay with, no questions asked.

If you’re simply not fortunate enough to afford presents and don’t want anyone to know about it, you could use the argument about our excessive carbon dioxide emissions as an excuse for not wanting to buy any.

Consider potluck Christmas dinners

A potluck is a gathering where each guest (or group) contributes a different, often homemade, dish to be shared. This means you can all enjoy the things you like to eat on Christmas, while equally sharing the burden, the cost and the time it takes to prepare the food. This should leave everyone with more time for Christmas.

Just don’t forget to think twice about what food you really want for Christmas. Consider including only the most important things, and in moderate portions. If you throw away food, you’re also throwing away your own money, money you most likely worked hard for by trading away a portion of your highly limited life.

And if potlucks are not your cup of tea and you choose something else, please make sure that everyone is okay with how the responsibility for the food is distributed. Unfortunately, it’s still not uncommon for it to be an involuntary one-person job in some homes.

Why does Thunderbird add ‘\A0’ and other strange-looking strings in e-mails I send?

15/11/2020 | Source: Fitzcarraldo's Blog

I use Linux and have used the Thunderbird e-mail client since 2008. I used to use DavMail to enable Thunderbird to access various company Microsoft Exchange WebMail accounts but, several years ago, DavMail would no longer work with a particular Microsoft Exchange account so I switched to the Thunderbird add-on ExQuilla, for which I pay […]

My Desktop - November 2020

I just noticed that I haven’t done a “my desktop” post here on my website, so why not make one now?

I probably didn’t have to take any new screenshots; I’m so boring that I could probably have used any old screenshot from the last year or so, because my setup doesn’t see much (if any) change these days. Believe it or not, I’m pretty happy with what I’ve got.

I tried to make the screenshots as “non-montage-y” as possible. My primary workspace (the second image) features a couple of extra applications that I normally wouldn’t have visible at the same time. I often only have my web browser visible and maybe one more application. I like to distribute my clients across my 10 workspaces, so I can focus on one or two things at the same time.

Check out my post “I went from a multi monitor setup to just a single monitor setup” if you want to read about how I improved my productivity by going from three monitors to just one monitor.

My third workspace (the third image) often works as my secondary workspace where I have my newsreaders, e-mail client and todo-list easily accessible when I want to check if something new has happened.

Here’s some information about my setup and some of the more common software that I use:

Operating system: Gentoo Linux
Window manager: i3
System panel: Polybar
Shell: zsh
Terminal emulator: URxvt
Terminal typeface: Terminus
Terminal colour scheme: Solarized
Application launcher: Rofi
Notification daemon: Dunst
Text editor: Neovim
File manager: Ranger
Web browser: qutebrowser
E-mail client: NeoMutt
Web feed reader: Newsboat
Bookmarks manager: Buku
Media player: mpv
Image viewer: sxiv
Instant messaging client: WeeChat + bitlbee
Document reader: Zathura
Calendar: Khal
Contact book: Khard
CalDAV/CardDAV-sync: vdirsyncer

I added links to software that I have talked about in the past. If you want me to talk about any of the others, just let me know. I will probably talk about them eventually, when I feel inspired to do so.

How I got started with Vim

A long time ago my web browser of choice was Mozilla Firefox, and like most people I was using it primarily with my mouse and occasionally with some simple keybindings. That all changed back in 2011 when I got really invested in tiling window managers (and not long after, a lot of text-based applications), simply because I found them to be the most efficient alternative for me. With the keyboard I could perform swift actions in a matter of milliseconds, compared to the mouse, where some actions could take multiple seconds to execute.

My desktop from 2011 running the tiling window manager WMFS.

The more I used my new setup with WMFS, the more conscious I got about my workflow in general, almost to the point where I became obsessed with optimizing everything in regard to how I used my computer. Bothered by how I was using Firefox, I had no choice but to look into how I could make my web browsing workflow more efficient.

I started out with the most obvious thing I could think of: I read the Firefox documentation to see what keybindings it supported. And, well… it turns out there are actually quite a few commands! The issue was that they only made sense for a person with three arms.

Seriously speaking though, some shortcuts require both of my hands. I understand that they exist for people with disabilities, but I didn’t have any disability; I was just looking to make my experience more efficient from a power-user perspective.

Thankfully this was back when ‘good old’ Firefox was still supporting the old type of add-ons (for better and worse). I therefore decided to go out and look for any add-on that could hopefully solve this issue for me, or at least make it better in any way possible. This was a long time ago and I can’t really remember what I was looking for, or what I was expecting to find, but I somehow ended up with an add-on called Vimperator, an add-on designed to provide a more “efficient user interface for keyboard-fluent users”. The design of Vimperator was heavily inspired by the well known text editor Vim.

I was well aware of what Vim was back then; that’s exactly why I had no intention of trying it out myself. Younger, grumpier me was perfectly happy with my modeless editor, which was probably Geany at the time.

The reason I decided to give this “Vim-like add-on” a chance was simply that it seemed like a decently simple and straightforward add-on when I read about it. While it added a bunch of features, they were all optional to use. One of the features that caught my attention was some really neat and simple one-handed keybindings.

Here’s a few examples comparing the default keybindings in Firefox and how they worked in Vimperator:

Action                              Firefox default     Vimperator
Close tab                           Ctrl+W              d
Undo closed tab                     Ctrl+Shift+T        u
Search                              Ctrl+F              /
Next search hit                     Ctrl+G              n
Previous search hit                 Ctrl+Shift+G        p
Go to top                           Ctrl+↑              gg
Go to bottom                        Ctrl+↓              Shift+g
Refresh page                        Ctrl+R              r
Open search in new foreground tab   Ctrl+Shift+Enter    Shift+o

It took me literally minutes to get used to Vimperator and it instantly made my workflow a lot more efficient! I was really pleased with it from day one, even as a grumpy non-Vim user.

Over time, I picked up and used even more shortcuts and features. It eventually came to the point where I ended up hiding all the graphical elements in Firefox, which included the navigation buttons and the address bar. I was still using the mouse at this time, but I was efficient enough with the keyboard shortcuts to not need any of the visual buttons anymore. When I wanted to open a website I just used the key o to open the address field and O to open the website in a new tab.

My Firefox setup with Vimperator and my custom modded userstyle.

I’m a person who has always been interested in minimalism in some way or another, and having a clean looking web browser like this felt like I had visually achieved my end-game setup.

Over time, I started to depend less on the mouse and more on the keyboard, even though some things would always require the mouse in Firefox. After all, Firefox is designed around the philosophy of a graphical ‘point and click’ interface and an add-on is limited in how much magic it can do.

How does anyone browse the web without a mouse? It’s simple: you use something called hints. Hints work by activating ‘visual hints’ with the key f and then typing the combination of alphanumeric characters shown for the link you wish to visit. The first full match opens automatically, so there’s no need to confirm with the Enter key.

This is how it looks when browsing with hints in qutebrowser.

And if you want to open a link in a new tab, you just use the key F. There might be even more ways to use hints than this, but that depends a lot on which web browser and/or add-on you’re using. A cool feature that I like in qutebrowser (my current web browser, but more on that later) is called “rapid hints”. I activate it with ;r and it shows me all the hints; the difference is that it opens all links in new tabs by default and the hints stay open, which means I can rapidly open multiple links.

Vimperator eventually convinced me to try Vim

At one point I remember asking myself: “If the Vimperator add-on can make Firefox as good as it is right now, perhaps that Vim editor isn’t so bad after all?” So I decided it was time to give Vim a fair chance!

The first few days with Vim were honestly pretty slow and awkward for me. I kept forgetting about the different modes, and instead of typing text I constantly kept screwing it up by selecting parts of the text, randomly jumping all over the place and performing all kinds of impressive accidents in swift motions.

Once I learned the very basics of Vim, it became really fun to use. My hands and my wrists were also thankful for not having to reach for the mouse all the time. I had never really considered how much time I actually spend in a text editor before I started using Vim.

And while I wasn’t magically 9999% more efficient overnight, it did make editing text about 9999% more enjoyable for me.

Vim changed my life

It didn’t take long for me to conclude that Vim is the best text editor ever, for me. With my new-found awareness of Vim and its magic, I also started to notice that there are a lot of Vim-like applications that support keybindings and features similar to Vim’s. This led me on a quest to Vim-ify both my desktop and my workflow. Years later, I can safely say I did quite well with that task. There’s not much that doesn’t work like Vim on my computer these days.

At one point I also replaced Firefox with qutebrowser, a keyboard-focused web browser with a minimal graphical user interface, heavily inspired by (but not limited to) the Vimperator add-on and the Vi-like web browser dwb.

qutebrowser with its minimal user interface.

With qutebrowser I don’t have to use the mouse at all. Well, some websites are so poorly designed that they don’t work without a pointing device. And yes, boohoo me, but let’s not forget the people with disabilities who actually have no choice but to rely entirely on the keyboard to access the web. All websites must be keyboard compatible.

My web browser is pretty much the only graphical application that I still use on a regular basis. Other graphical applications that I also use, but less frequently, are the PDF viewer Zathura and the image viewer sxiv, which both use Vi-like keybindings, have a minimal graphical user interface and don’t require any pointing device.

I even use the mouse so little that I let it automatically hide after a few seconds of inactivity with Unclutter. It’s really nice to be able to hide the mouse when you rarely use it.

Vim even inspired me to learn proper touch typing

Vim itself can’t take all the credit for it though; it was more precisely efficient Vim users and the keyboard community who inspired me to learn proper touch typing. And by “proper touch typing” I mean touch typing with all ten fingers in the correct placement on the keyboard.

I have been a touch typist for a long time, but I was self-taught, using my own style with 2.5 fingers, unaware of any proper technique for the longest time. When I eventually found out about the right way, I was too used to my own technique to bother starting over.

Eventually, that all changed when I saw some real-world touch typists (from the keyboard community) typing really fast without breaking a sweat or even lifting their hands to reach all the alphanumeric keys. It was that, and some inspiring Vim wizards doing their magic, that finally inspired me to re-learn touch typing the proper way.

I started out by doing typing tests on the website 10FastFingers. I was frustratingly slow the first day, typing at around 20 words per minute. Any attempt to participate in any form of real-time conversation drove me nuts, but what kept me going was the fact that my hands felt rather relaxed when I typed. They didn’t rush all over the keyboard anymore; they just gently rested on the wrist rest and my fingers did most, if not all, of the job.

Things did take a humorous turn when I fired up Vim for the first time though. My muscle memory was completely reset in more than one way; it turned out I was no longer capable of finding the punctuation characters. Why didn’t I just peek at the keyboard? Well, I use blank keycaps on my keyboard.

My custom-built keyboard. [Read more]

Re-learning to touch-type properly was one of the most frustrating things I have ever done in my life, but at the same time it was also the most rewarding. I think it took my muscle memory about a month to get back to my old typing speed, and by then I was also a lot more accurate. It didn’t take long after that for my typing speed to increase further; today I’m actually more than 20 words per minute faster than before.

You should really consider learning to touch type properly if you haven’t already. It changed my life for the better, and I’m forever thankful that I eventually did it.

I have also replaced Vim with Neovim

As much as I love Vim, there are a few things I like less about it. The development cycle of Vim is slow, like really slow: a new version is released every two or three years. This is usually not an issue for me, as I don’t care about bleeding-edge software, but one day I needed a feature in Vim that didn’t exist yet and had to look for solutions. The options were to either install a plugin or try Neovim, which had already implemented the feature I was looking for.

Neovim is a fork of Vim that strives to improve the extensibility and maintainability of Vim. Neovim looked like a breath of fresh air, and I decided to try it out. It turned out to be quite a nice surprise. While it wasn’t a huge change in terms of functionality, at least not in the way I used it, I did notice that Neovim performed a bit faster. Not that I ever had any real issues with how Vim performed.

Another thing I noticed with Neovim was its better defaults. It made me aware of small quality-of-life features that I previously didn’t know about. Because of this I could also remove a surprisingly large part of my configuration file, while still gaining new functionality.

vi, Vim, Neovim, what?

Are you confused by all the names? Hopefully this will clear things up for you:

  • Neovim is a fork of Vim from 1991. Neovim itself is from 2014.
  • Vim is based on Stevie from 1987.
  • Stevie is based on vi from 1976.
  • vi was derived from a sequence of UNIX command-line editors, starting with ed back in 1973, which was a line editor designed to work well on teleprinters, rather than display terminals.

As you can see, the ‘Vi-family’ have quite the history dating all the way back to 1973.

New or old to the vi-family, I can highly recommend you checking out these links:

Probably in that order as well. I have a few more links to some interesting articles in my bookmarks section if you want to do some more reading.

Anyway. I hope you enjoyed reading my post about how I got started with Vim!

My plaintext todo list

This is take two on my plaintext todo list. I actually wrote about my old setup almost two years ago now.

I have made some changes to it since then: I have dropped both the Bash script and the Supercat tool, and I have replaced Syncthing with Nextcloud for the synchronisation of this file.

My current setup consists of a plaintext file called todo.txt, located in the folder $HOME/nextcloud/notes/. The content of the file uses Markdown formatting (as always) and looks like this:

# todo

## Monday

* [x] Plan dinners for the week
* [x] Grocery shopping

## Tuesday

* [ ] Laundry

<Note: I did not include the rest of the weekdays here for the sake of this demo>

## Unspecified

* [ ] Something that can be done any day

When I have completed a task, I cross it off with a checkmark. I intentionally leave the list intact for the whole week, so I can see what I have done over the week and feel good about it. It actually helps with motivation!

On my phone I can then access and edit the file using Nextcloud Notes. I can also preview the file ‘properly’ using the built-in preview mode, and I can even interact with the checkboxes by tapping on them.

Nextcloud Notes with my todo list.

I find this way of managing a todo list a lot simpler than my previous setup. I can’t really see why I would need to make things any more complicated than this.
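Another perk of plain text is that ordinary shell tools work on the list too. As a small sketch (the path matches my setup above; adjust it to wherever your file lives), this one-liner counts how many tasks are still open:

```shell
# Count open tasks: every unchecked item starts with "* [ ]"
grep -c '^\* \[ \]' "$HOME/nextcloud/notes/todo.txt"
```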

Post updates


To make the management of my list easier, I have decided to install a Neovim plugin called Bullets.vim. It’s a fairly simple plugin for automated bullet lists. As a nice bonus, I can now jump to any task and press <Leader key>+x to mark it as completed or uncompleted.

Init System Features and Benefits

23/10/2020 | Source: Daniel Menelkir

I've found a very good read about init systems here.

Troubleshooters.Com®, Linux Library
and Init System Choices Present:

Init System Features and Benefits

Copyright © 2015 by Steve Litt



Just so we're all on the same page: a feature is a trait or property of the system. A potential benefit is a change in your life. A benefit is an improvement in your life. A potential benefit becomes a benefit if and only if the change makes your life materially better. For instance, fast booting is a potential benefit, but it becomes a benefit if and only if at least one of the following holds:

  1. You do a lot of booting.
  2. You must quickly set up for presentations.
  3. You must maintain very high availability.
  4. You're doing troubleshooting or experimentation that involves a boot.

In other words, if you boot your personal desktop once a week, you don't really care whether it boots in four seconds or four minutes. If you boot it every morning, you don't care whether it takes 4 seconds or 30 seconds. Now let's add in features.

Features are traits or properties of the system, presumably for the purpose of bestowing benefits on the user or owner. A specific potential benefit can be realized by alternate features. For instance, parallel daemon instantiation can make for faster booting. And so can not running scripts and daemons unimportant to you. And, if you're booting straight to GUI, so can a lightweight window manager.

Here's an example: Both a magnesium paddle shifter in a car, and parallel process starting in an init system, are features. The potential benefit of the magnesium paddle shifter is faster shifting of an automatic transmission, while the potential benefit of parallel process starting is faster boots. The paddle shifter becomes a true benefit if your engine has too little oomph to correctly accelerate you when your transmission shifts for itself, or if you're trying to impress somebody. Faster boots become a true benefit when you need to perform a lot of boots, or when you need very high availability, when you need to boot for a presentation right now, or when you want to impress someone.

The bottom line is this: A potential benefit is relevant if and only if it substantially betters your life. A feature is relevant if and only if it produces potential benefits leading to true benefits. Always remember this when you hear others extol the benefits of their chosen init system.

Init System Feature Matrix

Examine the following feature matrix, with features going down the left side, and the most common inits going across the top:

True PID1 init: Y Y N ? Y Y Y Y Y Y
Respawning: Y Y N ? Y Y Y Y Y Y
Parallel daemon startup: N Y Y ? Y Y Y N Y Y
Process dependency model: numeric script calc script script script calc numeric ? calc
Event based?: N ?[2] Y ? ? ? Y N Y Y
OS toolkit: N N N N N N Y sort-of N N
Socket activation: N Y[2] N N N N Y N Y? Y?
Daemontools inspired: N Y N Y Y Y N N N N
(Subjective) grade for online documentation[4]: A- C- B[1,5] D C+ C-[3] C[5] C[5] B-[5] F
Declarative syntax: Y N N N N N Y?[6] Y Y
Can natively run one-time processes: Y N Y N N N Y Y? Y? Y?
License: unlicense ?[8] BSD[10,11] ?[9] BSD[7] ISC LGPL[11] GPL GPL[11] LGPL ?
Primary install method: Compile Comp/Package Comp/Package Package Package Compile
Cgroups: N ? Y ? ? ? Y ? ? Y

  1. Based on Gentoo docs for OpenRC.
  2. Based on the "socket services" described in the nosh docs, I'd assume it has "Socket Activation" and is "Event Based".
  3. S6's online general init documentation is wonderful. Its s6 specific online documentation is either hard to find or nonexistent.
  4. Online docs are what really count, because few people will download and untar on the chance that the distributed docs will be better than the online docs.
  5. This documentation grade is based on the init system coming installed on the distro. The documentation grade would be much lower if you actually had to install/configure this init.
  6. Is /etc/inittab declarative, or script based? Your guess is as good as mine.
  7. Three-clause BSD-like license
  8. Unable to find license after search of website and source
  9. Perp has a home-grown, permissive license with disclaimer of liability
  10. 2 clause BSD license
  11. Info obtained from Wikipedia

About Documentation

I only counted online documentation: few people will download a project in hopes that docs from the tarball will be better or more available than what's available on the 'Net. Findability counts: docs linked straight from the project's home page get the nod over longer navigations, and get even more of a nod over third-party documentation (like what you're reading right now). Inits likely to be installed by a package manager were given a pass on explaining installation and configuration, so it's somewhat of an apples-and-oranges situation. Docs requiring trips through GitHub were rated lower: who wants to do that?

My perception is that the greatest single failure of most of these init systems was insufficient documentation for a mere mortal to easily get them up and running.

Process Dependency Models

Completely apart from "event driven", there are three common process dependency models:

  1. Numeric
  2. Calculated
  3. Script based

Numeric process dependency models rely on the admin or packager to guess the order in which processes should be run. Numeric process dependency is appropriate only when process startup is sequential (not parallel) and there are not a great many processes being run. In other words, you wouldn't use Epoch to init a machine spawning fifty daemons. Why you'd want fifty daemons is quite another question.

Calculated dependencies are when the init system calculates the run order on the basis of each process' "requires", "after", and "provides". If events are not a consideration, a fairly simple Python program could convert calculated dependencies to numeric dependencies, but why bother? The power of calculated dependencies is that with the right init system, they can be merged with events to greatly reduce race conditions. Calculated dependencies are excellent for event driven inits with many spawned processes.
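
To make that "fairly simple program" claim concrete, here's a sketch using the standard Unix tsort(1) utility, which performs exactly this topological sort; the service names and dependency pairs below are made up for illustration:

```shell
#!/bin/sh
# Turn "requires/after" pairs into a sequential (numeric) start order.
# Each input line reads "<dependency> <dependent>"; tsort prints a
# valid order with every dependency listed before its dependents.
tsort <<'EOF'
udev network
network sshd
network ntpd
sshd webapp
EOF
```

Running this prints the services one per line, dependencies first, which is precisely the run order a numeric-dependency init would want.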

Script based dependencies are good in the same use cases as calculated dependencies. They're especially useful in daemontools-inspired inits such as runit, s6, nosh and perp, all of which retry spawning the process until it succeeds. Script based dependencies are based on this approximate algorithm:

if the dependent process is not running:
    spawn the dependent process
    return failure for the current process
spawn the current process

Note that if the current process has several dependencies, there will be several of those tests, spawns and failure returns. Note also that in certain use cases you can't use the preceding logic, because it will spawn the dependent process over and over again. The runit init system has a special per-process file, called ./check, whose purpose is to detect a fully functional dependency and do the right thing otherwise.
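
On a runit system, for instance, that algorithm typically collapses to a couple of lines in the service's ./run script. The service and program names below are hypothetical; sv -w7 check waits up to seven seconds for the dependency's ./check test to pass:

```shell
#!/bin/sh
# Hypothetical runit ./run script for a web app depending on PostgreSQL.
# If the dependency's check fails, exit nonzero; runit re-runs this
# script about a second later, giving us the polling retry loop.
sv -w7 check postgresql || exit 1
exec /usr/local/bin/my-web-app --foreground
```

If postgresql isn't up, the script exits with failure and the supervisor retries, which is exactly the polling behavior described above.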

Obviously, the dependency model I just described depends on polling, whereas event-driven dependency models don't poll: an advantage for event-driven dependency. That being said, depending on the use case, it's not that much of an advantage, and of course polling is usually simpler and easier to debug than events.

About Native One Time Processes

Daemontools-inspired inits are all built to manage (respawn and control) processes, meaning that when such a process ends, it gets started again. Most of the daemontools-inspired inits I'm aware of can run single-shot scripts in their stage 1 (startup) and stage 3 (shutdown) stages, but not their stage 2 (management) stage, which runs whenever the computer isn't either booting up or shutting down.

This presents a problem, because sometimes you want a crashed process to stay crashed (with appropriate notification), and sometimes you want a one-time process, that would normally be done during boot, to happen after some stage 2 processes are up and running.

I can think of several ways to make stage 2 processes run once. They're not aesthetically beautiful, they'd hand over great talking points to anti-daemontools people, but they'll work. Better news still, right now, as I write this, smarter people than I are working on ways to solve this problem elegantly.

Both Epoch and sysvinit can intermix respawning and one-time processes, and OpenRC runs only one-time processes. My assumption is that systemd, uselessd, and upstart can intermingle respawning and one time processes, but I have no documentary proof.

About Daemontools-Inspired Inits

Runit, s6, perp and nosh were all inspired by daemontools, a respawning process management tool with a surprisingly simple and understandable architecture based primarily on the Unix filesystem. Daemontools, along with the inits it inspired, employs very simple run scripts to daemonize a foreground process. Daemontools and the inits it inspired have kludges to try to daemonize software that cannot be run in the foreground, but the right way to use these inits is to run the program in the foreground: so, for instance, sshd -D or cron -f.
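
As an illustration of how short these run scripts are, a complete, typical runit-style ./run script for cron (the cron path varies by distro) can be this small:

```shell
#!/bin/sh
# Run cron in the foreground (-f); the supervisor handles respawning
# and logging, so the daemon must not fork into the background.
exec /usr/sbin/cron -f
```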

Inits like sysvinit and OpenRC give init scripts a bad name, but in fact daemontools-inspired programs usually have incredibly simple run scripts that should not in any way be compared to those of sysvinit and OpenRC. They're simple enough that a halfway intelligent admin could write them from scratch. But writing them from scratch isn't usually necessary, because of facilities like supervision-scripts, a set of process run scripts for many common programs, portable between runit, s6, and daemontools itself. Avery Payne told me on 1/2/2015 to include the following warning when describing supervision-scripts:


Please note the scripts are still pre-0.1, and have not been 100% tested. Many still have an "untested" marker in their service definitions; use at your own risk. I can only say that I run a subset of these on my home server under runit and they work for me. I am still working on testing them under all three frameworks.

The preceding warning doesn't subdue my enthusiasm, because once supervision-scripts is tested and working, people will be able to use runit or s6 without worrying about "translating sysvinit scripts." Not only this, but I'd guess this will become very popular with "upstreams" because they'll no longer be responsible for their software's init scripts, at least not if the init system is runit or s6. This is how things always should have been.

Daemontools-inspired inits have script based process dependency handling. This is superior to numeric dependency handling, more difficult to set up than calculation based dependency handling, but also more versatile than calculation style dependency handling.

The daemontools-inspired inits are simple, admin-friendly, very efficient, fast booters, easy to install without a package manager, versatile, DIY friendly, and rock solid.

About Respawning

Your mileage may vary, but in my opinion respawning should be an option. With Epoch, it is. You can declare anything to be either respawned or not. Same with sysvinit. I'd assume it's the same with systemd, upstart and uselessd.

OpenRC cannot respawn at all, and the daemontools-inspired inits are designed to respawn, so special steps must be taken to get them not to respawn.

Feature to Benefit Mapping

Socket activation and event based init

The main rationale for socket activation and event based init is the modern Linux kernel's parallel and indeterminate instantiation of various things. Apparently the kernel issues events after each instantiation is complete, at which time an init process depending on that instantiation can launch.

The potential benefit is that race conditions don't cause failures. However, there are many other features, both within the init system and without, that can almost completely eliminate such race-condition-caused failures. An obvious one is a short sleep. I haven't tested this on huge numbers of computers and use cases, but I have a strong feeling that a simple ten-second sleep at the beginning of init would allow the kernel to complete all its instantiations, except those that depend on something the init spawns. So, if you can tolerate adding ten seconds to the boot (the time to sip your coffee and chew one bite of a Danish roll), you don't need socket activation or event based init. I have a hunch that in most situations it will end up being more like two seconds, but even ten seconds is an amount whose only detriment is degradation of bragging rights.


Keep in mind that /sbin/init or /usr/bin/init doesn't run until after the kernel runs the init in the initramfs, and that the initramfs init calls the hard disk's init, which usually happens at the very end of the initramfs init. If there's a time-consuming fsck, it happens during the initramfs init, before the call to the hard disk's init. So the bottom line is this: there's plenty of sequential activity before what you think of as your init even starts, and this should provide a buffer against kernel/init race conditions.

Another substitute for socket activation and event based init is daemontools style run scripts that test for a kernel process being fully functional. If the kernel process is designed not to run until it gets a message from something spawned by init, such a daemontools like run script could even signal the kernel.

The other thing is, if an edge case is causing a kernel process to take several seconds to get running, perhaps that edge case itself should be investigated. To get more information, I just rebooted my Epoch-initted CentOS box, a four year old box with an Asus M4A785-M mobo, an AMD Athlon(tm) II X2 250 processor, and 4GB RAM. In other words, it's no racehorse. Nevertheless, it took only four seconds to get from Grub to the start of Epoch, and another four seconds for Epoch to boot it to a complete CLI system. In other words, the kernel and the initramfs init program took four seconds to do their job. If the kernel is still trying to start processes, let's say, 10 seconds after it takes control, I'd sure consider the possibility that something needs investigation.

Socket activation/event based, plus parallel process instantiation, is how you do it when your top three priorities are boot speed, boot speed and boot speed. Otherwise, there are many ways to avoid race conditions with modern, semi-indeterminate Linux kernels.


Documentation

There are two ways you can obtain your init system: the package manager, and compile-it-yourself. In 2015, on up-to-date distros, the inits bestowed by your package manager will pretty much be limited to systemd, OpenRC and sysvinit. All the rest, at least on most distros, you'll need to compile, install and configure yourself. When you do it yourself, good documentation can speed the install/configure process by an order of magnitude or more. This is why, in the Manjaro Experiments, I succeeded in installing Epoch and runit, but failed with s6 and nosh: Epoch and runit have docs that better cover their installation and configuration procedures.

Speaking of documentation, Manjaro Experiments itself is excellent documentation on inits in general, and on the process of becoming expert with them. Read it. I also highly suggest you start with an experimental Manjaro/OpenRC setup, and experiment.

Respawning and One-Time Processes

Respawning is the ability to automatically rerun a process that fails. So if your web server goes down, it gets started up again whether your admin is in the server room or on a beach in Tahiti. This isn't particularly important, because often that's not the behavior you want on a broken service, and also because if your init can't respawn, you can run something like daemontools that can.

A more important benefit of respawning is the ability to keep trying until you succeed, as in the instantiation method of the daemontools-inspired inits. These inits can be scripted to check for dependencies, start them, exit with failure if they aren't up, and try again in a few seconds. This is a very effective and efficient polling method that substitutes for event based instantiation. However, even this benefit can be simulated (kludgily) with shellscripts in other inits, and of course in event based inits it may not even be necessary.

Personally, I don't think an init's inability to respawn, or its inability to natively and simply run once, is a showstopper.

Parallel Process Startup

Parallel process startup has exactly one potential benefit: faster boots. If boot speed weren't a priority for you, you could boot a sequential-startup init such as Epoch or sysvinit, with appropriate sleeps and numeric ordering. The cost of parallel startup is increased complexity. Your job is to decide whether it's worth it.

This cost/benefit analysis requires you to first know the boot times of sequential-startup inits. For instance, Epoch, which starts services sequentially, boots my experimental CentOS system to full CLI readiness in 8 seconds. Systemd boots the same system to X (without a window manager) in 4 seconds. Personally, I wouldn't start worrying about boot speed until boot time exceeded 30 seconds, so in my use case, faster boot isn't a benefit, and therefore parallel startup isn't a feature I need. Obviously, it would be very different if I had a contract specifying no more than six minutes per year of downtime.

There's yet another way to gain the benefit of a fast boot. You can take a machete to all the edge case CYA processes that get run by your distro's standard init. The standard list of processes are meant to cover use cases you don't have: You can safely remove a lot of them.

Another way to gain this benefit is to profile your initialization. Which processes take several seconds to start, and why? Are they performing complex handshaking, like dhclient? If so, what can you do to make that faster? Are they timing out waiting for reverse DNS before finally starting? If so, make sure that your reverse DNS works, and it works before such a process gets run. Is it taking 20 seconds to start Gnome once you log into your Display Manager? There's an app for that: It's called Openbox or LXDE.

Daemontools Inspired

Depending on your viewpoint, this is either overhyped marketing or the most DIY-friendly feature in the world.

Bias Disclaimer

I have always had very, very good feelings toward the daemontools way of administering processes. I'm not objective about daemontools-inspired inits.

The cool thing about the daemontools-inspired inits is that, in many cases, you do nothing but write a simple ./run script for each process. This is welcome news for refugees from the gargantuan script files of sysvinit and OpenRC. It might even be a welcome change from the unit and install section options in systemd and uselessd, although I'd imagine most of that work could be done with requires and after.

Another outstanding benefit of daemontools-inspired inits is that, with two minutes work, you can turn every output of the process being run, both stdout and stderr, into a timestamped log file.
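
Under runit, for example, those two minutes of work amount to a ./log/run script like this sketch (the log directory name is made up); the supervisor pipes the service's stdout into it, and stderr too if ./run redirects it with exec 2>&1:

```shell
#!/bin/sh
# Hypothetical ./log/run for a runit service: svlogd adds readable
# UTC timestamps (-tt) to every line and rotates the logs
# automatically in the given directory.
exec svlogd -tt /var/log/myservice
```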

But I think the reason I and other people love daemontools-inspired inits is that, from a Unix viewpoint, they just make sense. Their "database" is nothing more than a couple directory trees and a few short shellscripts (or scripts in special shell-like languages). You see them, you immediately understand what they do, and they just make sense. Oh, and they perform well too.

And don't forget, if Avery Payne's supervision-scripts project succeeds, at least the runit and s6 members of the daemontools brigade will also become almost trivial to administer.

OS Toolkit

Lennart Poettering enthusiastically declares systemd an "OS Toolkit" rather than an init. My reading of and listening to Poettering indicates this means their project is making all sorts of OS building blocks to replace formerly independent things like udev and consolekit, the idea being that you can build an operating system by bolting a few of those things together.

I believe that, when the echoes of rhetoric finally fade, it will have been this one feature that caused, and will continue to cause, extreme hostility toward systemd. Not the hundreds of thousands of lines of code. Not binary log files that can be easily rewritten as text. Not even the "wontfix" bugs. Those things were talking points. The real cause of hostility, in my opinion, was the OS Toolkit issue.

A great many Linux users liked the DIY opportunities bestowed by independent, relatively narrow interface parts like udev, consolekit, and several others. Systemd even goes so far as to offer hooks for desktop environments. This is why few hold any animosity toward uselessd, a knockoff of the init part of systemd, but without the "OS Toolkit" ambitions.

So, if you want to build an entirely new Linux, with minimal effort, by bolting together a few systemd offerings, then systemd's "OS Toolkit" feature offers a spectacular benefit. Oppositely, if you want to be able to change the functionality of your computer by inserting parts, replacing parts, even removing parts, then systemd's "OS Toolkit" feature bestows a harsh disbenefit. If you just use your computer and don't care about building from scratch or DIY modifications, and just want to use your computer the way your distribution gave it to you, then the OS Toolkit feature probably doesn't matter one way or the other.

Declarative Syntax

Declarative syntax means configuring your processes primarily with key-value pairs rather than with scripts. If you've been slogging around with the megascripts used by sysvinit and OpenRC, this probably sounds like an excellent idea right about now. If you've been using a daemontools-inspired init, you're probably saying "you'll pry my short and versatile scripts out of my cold, dead fingers."

I've used declarative Epoch, and I've used scripted runit, and to tell you the truth, if either declarative syntax or scripts are interfaced reasonably by the init, they're both fine. You can get to the same benefits, both ease and versatility, either way.

Primary Install Method

Of course, you can install any software by compiling, and you can install any software with a package, although in some cases you might need to make that package yourself. Oddly, or perhaps not so oddly, in the init world of early 2015, it turns out to be an either/or proposition. For any init, it's either overwhelmingly compiled, or overwhelmingly installed via the package manager. Each of these two opposite features has its own benefit.

The benefit of installing via package is pretty obvious. It's (theoretically) a five-minute deal with apt-get install or pacman -S or yum install. And the installed init works in harmony with the rest of your software. No muss, no fuss, no bother.

And there's a disbenefit: What if your distro's packagers stop offering your favorite init, or offer you a broken one? You need to start a search for a new distro. Or you need to compile your own init. This is not academic. Debian's original idea to offer only systemd instead of sysvinit caused the Debian init wars, and the init wars caused Debian to (perhaps temporarily, we'll see) continue to offer a sysvinit package.

The do it yourself compile install method offers two benefits:

  1. Install it on any distro you want
  2. Able to simultaneously have multiple inits

A word about the simultaneous multiple inits. This is more useful than you might first imagine. Just like many people have a fallback kernel for when things go bad, multiple inits give you one or more fallback inits when things go bad. With multiple inits, if you ever wonder if a problem is being directly or indirectly caused by your init, you can just change your kernel line in Grub, reboot, and compare your system using the two different inits. If you accidentally bork an init (I've done that several times), instead of whipping out the time consuming System Rescue CD, you just boot to the other init, you fix your problem, and boot back into your normal init.

By the way, the reason I list simultaneous functional inits as a benefit of Compile It Yourself is because, generally speaking, init packages disable each other.

A word about why OpenRC, systemd, sysvinit and Upstart have packages and the rest don't. The four in the preceding sentence are probably four of the five most complicated init programs there are, the fifth being uselessd. The likelihood of even tech-savvy users being able to compile those four by themselves is fairly low. Meanwhile, for some reason, those four are, or have been, the most mainstream.

On the other hand, Epoch, runit, s6, perp and nosh tend to be simpler and probably easier to compile (though I was unable to compile nosh). As a matter of fact, Epoch's #1 design priority was minimal dependencies, making it an easy compile candidate on any computer with a Linux kernel. And for some reason, these five inits have traditionally been ignored by distros. So you have no choice but to compile them, and they're easy to compile.

One more thing: There's no law saying you can't have a package-bestowed init coexist with one or more Compile It Yourself inits. Personally, I think that's the best of both worlds.


Cgroups

Cgroups, or "control groups", are a Linux kernel feature taken advantage of by (at least) OpenRC, systemd and uselessd. They're a better way of managing running processes, and are used extensively in container software like Docker.

From what I hear, using cgroups is better for killing zombies and eliminating the need for double-forking. The potential benefit is more control. Personally, I don't think it's worth the cost of a complex init system: your mileage may vary. Also, I'd imagine it's pretty easy to use cgroups with non-cgroup-aware inits using the cgroup binaries. Debian Wheezy has a package called cgroup-bin for all these binaries. CentOS and Manjaro don't, but I'm thinking it might be possible to code them yourself, or grab the source from somewhere and compile it for your own distro.
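
As a sketch of that approach (an untested assumption on my part; the group and daemon names are made up), the libcgroup tools shipped in packages like cgroup-bin or cgroup-tools provide cgcreate and cgexec:

```shell
# Create a cgroup and launch a daemon inside it, independent of the
# init system (requires root). Controller list and group path are
# illustrative.
cgcreate -g cpu,memory:/mydaemon
cgexec -g cpu,memory:/mydaemon /usr/sbin/mydaemon -f
```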


Supervision-scripts Compatibility

Two of the listed inits, runit and s6, are compatible with supervision-scripts, a pre-made bundle of run scripts. Potential benefits include:

  • Makes it almost trivial for a user or admin to daemonize a process.
  • Relieves the "upstreams" of the need to develop init scripts. This should never have been their responsibility in the first place.
  • Relieves the application packager from the need to develop init scripts. He or she has enough to do.
  • Makes it dead-bang easy for one init-ingenious distro contributor to write and test all the init scripts.
  • If nobody from the distro can do the init scripts right, makes it easy for the user or admin to cut and paste them, possibly having to modify them slightly.

Before you take to the streets cheering, keep in mind that supervision-scripts is in its infancy, still under very heavy design and development, and it's a moving target. Today, supervision-scripts is very much a "use at your own risk" type thing.

But if and when supervision-scripts fulfills its mission of providing startup scripts for all common daemons, then not only will runit and s6 be very high quality, but they'll also be dead-bang easy, whether via a package manager or self-installed.


Conclusion

The past several months have seen heated discussions of init systems. Many init debaters have based their arguments on the presence or absence of a particular feature. Such arguments failed to persuade because, in their hearts, everyone knows the only purpose of features is to provide benefits, that different use cases require different benefits, and that a benefit can be provided by several different features.

This weekend was clothes swapping day here

A few days ago I was at something called “klädbytardagen” here in Sweden where I live. I don’t know if there’s a generally used English name for it, but it translates to something like “clothes swapping day”.

The concept is pretty simple, instead of buying new clothes you swap clothes with each other.

For every piece of clothing you give away you get a ticket, which can then be traded for another item there. There's usually a maximum number of items you can give away, and at today's event it was 10 items each. Everything that doesn't get a new home usually goes to either charity or a secondhand store.

Clothes swapping day with COVID-19-safe distances.

I had a lot of fun and ended up with a bunch of good kids' clothes, which was nice. I do wish more guys would attend these kinds of events. It's uncommon to see guys there, which makes it rather difficult to find anything for myself and other guys. It's a lot easier to find clothes for women and kids.

My findings of the days was a bunch of kids clothes.

I think this is a great way of updating and refreshing your wardrobe without it costing you any money and, more importantly, without burdening the planet with any additional carbon dioxide.

If you didn’t know, the fashion industry’s carbon impact is bigger than the airline industry’s carbon impact. According to the same source Quantis the total greenhouse gas emissions related to textiles production creates around 1.2 billion tons of carbon dioxide annually.

In a report from Naturvårdsverket (the Swedish Environmental Protection Agency), the average Swede buys about 14 kilos of clothes and textiles per year, and only two thirds of those clothes are ever used. And it gets worse: we also throw away about 8 kilos of clothes and textiles per person per year here in Sweden.

Most of those clothes are perfectly fine and could have come to good use for a lot of people, especially those who aren't fortunate enough to afford new clothes.

The bad thing with wealthy countries is that they have too much money to spend on things they don't really need to begin with. Swedes are especially good at overconsuming junk we don't really need. And for some reason, a lot of us seem to think that it's okay to do a lot of shopping as long as you give away your old items to second hand.

This has resulted in the second hand stores here being crammed full of perfectly fine clothes (and other items), but most people don't even visit the second hand stores to begin with. Perhaps it's still taboo for some to buy used clothes? Who knows.

What we need to do is to buy way less new stuff and way more of the used and perfectly fine things out there. We need to start thinking about tomorrow, and about what future lies ahead of us if we continue down the unsustainable path we're currently on.

I would love to add comments to my website but there are no good options

As the title says, I would love to add comments to my website, but there are no good options out there. So far I have looked at the following alternatives:

  • Staticman
  • Stapsher
  • ISSO
  • Remarkbox
  • JustComments
  • CommentBox
  • Hyvor Talk
  • Discourse
  • Talkyard
  • Coral
  • Commento
  • Schnack
  • Remark42
  • Comntr
  • Glosa
  • Lambda
  • HashOver

Some aren’t even open source, so they’re not an option to begin with, some require a GitHub account, some are too expensive (paid hosting) and some self-hosted alternatives are just a pure nightmare to set up.

If I have somehow missed some hidden gem out there, let me know via e-mail or IRC.

The web browser add-on uMatrix is now abandoned

The popular and somewhat essential privacy add-on uMatrix for the web browsers Firefox, Chrom{e,ium} and Opera is now an abandoned project.

uMatrix is an add-on for advanced users that blocks any class of request made by the web browser, like scripts, iframes and ads. The add-on was created by the same person who also created (and later abandoned) the popular web browser add-on uBlock, an add-on which was transferred to another person who made himself rather unpopular with some controversial choices.

The original author of uBlock later forked his previous project into a new project called uBlock Origin, a project that's still maintained today.

Thankfully, he didn't make the same mistake twice: he's only abandoning the uMatrix project, with the possibility of resuming it later, according to a comment he made on GitHub.

Considering how popular this add-on is, I’m certain that someone will fork this project under a new name and continue the development of it. If you hear anything about a fork or an alternative to uMatrix, feel free to contact me about it and I will update this post with the information.