Opinions about Linux distributions

First of all, I find it quite pointless that Linux users argue endlessly about the “best” Linux distribution. Every Linux distribution has different aims, and therefore some significant and some subtle differences. The right question is “Which one is best for my purposes?” rather than asking for a “best” distribution in general.

Here I want to share my musings on and experience with the various distributions that I have used or just tried over the years.

I have highlighted the main point in each bullet point with bold type. That way you can skip any of the bullet points if you know what I am going to say anyway. It looks a bit ugly but is pragmatic.


Ubuntu

I have used Ubuntu since 2009. At the time I had a Mac with Mac OS X 10.6 and started to use more and more programs from the GNU/Linux ecosystem. That meant that I used Vim (instead of, say, TextMate), LaTeX (instead of Pages), GIMP (instead of Photoshop or Pixelmator) and Firefox (instead of Safari) as well as Thunderbird (instead of Mail). The installation of GIMP was rather cumbersome. All in all I decided to jump ship and put Ubuntu on my iMac. At the time I felt a bit bad about it. When IBM decided to buy Macs for their employees (and put Windows on them?), it removed my bad conscience in retrospect.

The good

  • Ubuntu was (and still is) a nice start into the desktop Linux world. You have a wide array of software available (thanks Debian!).

  • The hardware support is just great. On all the computers I have owned, I had fewer and fewer problems over time.

    When I used SuSE 9.1 around 2005 it was horrible. Proprietary graphics drivers were over my head, the TV card did not work and WLAN was the worst of all. Now I install Ubuntu on my ThinkPad X220 and everything just works. The fingerprint reader needs to be configured, that’s it!

  • Proprietary software is installed quite easily: you can just use the proprietary graphics driver, MP3 codecs and, if you want, even Adobe Flash.

The bad

There are a couple of things that do not resonate with me:

  • The X server is rather old and on its way to being replaced. The industry (where I hope that it is more than just Red Hat) came up with Wayland, whereas Canonical conceived Mir.

    A display server needs support from the graphics drivers. That means that Intel, AMD and NVIDIA have to add Mir support to their drivers. Canonical extended the Intel driver and submitted the patch, but Intel declined to maintain the code. Therefore Canonical has to rebase their patches regularly on top of the current Intel driver. Additionally, the programs need to support the display server through some library. For most programs this is no direct work, as the toolkit (Qt, GTK, …) takes care of that. The toolkits now need to support Mir as well if the programs are going to run there.

    One of the arguments for Mir was that Wayland would not be supported on smartphones. Canonical wants to build Ubuntu Phone and therefore needs a display server that runs on smartphones. I have a Jolla Phone and that runs with Wayland on the Android drivers as far as I understood. So the argument seems to be invalid and Mir is founded on disproved arguments.

    All in all I think that Canonical now has a lot of work on their hands to maintain Mir and the drivers and the toolkits to work with it. It is a display server that is going to work with one distribution only and with one desktop environment, Unity. This seems like a waste of manpower to me.

    Also the license is GPLv3 (and LGPLv3) so that might make it less attractive for driver and toolkit vendors.

  • This brings me to Unity. Around 2009 and 2010 there was an Ubuntu Netbook Remix, which I had on my netbook. It was great as it did not waste much screen real estate on toolbars and window decoration. Just what you want on a 1024×600 pixel screen. This desktop environment was then gradually absorbed into Unity. The story back then was that GNOME 3 Shell was taking very long and Ubuntu should have something like it, and perhaps cooler. So Canonical started to work on Unity. Of course Unity got delayed, as software usually does, and by the time it was usable, so was GNOME 3 Shell.

    Once Unity was usable it differentiated itself from other desktop environments. Some things were really nice, for instance how it saves vertical screen real estate by merging window decoration and menu bar for maximized windows. You can switch programs with Mod4 and the number keys. I was a bit surprised to see that you can do that on Windows since 7 as well. And it is the basis for Ubuntu Phone, so it will not go away. Unity is thus now a desktop environment that only runs on Ubuntu.

    What I do not get is that it started as a Compiz plugin and is now being ported to Qt. The programs used are all the GTK programs from GNOME. Also, the programs were forked and adapted a little and now clash with vanilla GNOME. This had to be done in part because of systemd, which Ubuntu did not use for a long time.

  • Canonical built Upstart to replace the SysV init system. This led to some boot time improvements and seemed like a nice thing. Other distributions also started to use it. Then systemd was the new kid on the block and distributions like Arch Linux and Fedora started to use it. openSUSE was also quick about it, I think. Debian, the upstream of Ubuntu, had used SysV the whole time. So a while ago Debian and Ubuntu were the only major distributions without systemd: Debian had SysV and Ubuntu used Upstart. Then Debian voted to use systemd, and now Ubuntu has it too.

    I am in no position to judge either init system. I get some of the arguments against systemd, that it moves too much responsibility into PID 1 and so on. But as an end-user of the distribution I like that there is now just one init system to care about on all major distributions.

    What I do not like about this story is that Ubuntu clung to Upstart just because they wrote it. The choice of init system thus seemed more personal than technical or political. Since I do not know all the details, I could be wrong about that, of course.

  • Regarding the Amazon search, I am rather sure that I do not like it. The “we have root anyway” comment did not make that better at all.

  • The package management APT, inherited from Debian, asks about changed configuration files during updates. So if you install updates for something whose configuration you changed manually, the update stops and you are prompted for a decision: keep or replace.

    This means that unattended upgrades occasionally just fail when such a decision is pending. My package management was sometimes left in a broken state by unconfigured updates. I would only notice when I installed something manually and got error messages. This is not cool, as I did not receive security updates during that period.

    On a distribution upgrade (or release upgrade) you have to sit next to the machine while it does 3000 updates, as it randomly asks about changed configuration files.

    There are some options you can give APT. You can add the following to /etc/apt/apt.conf.d/local:

    Dpkg::Options {
       "--force-confdef";
       "--force-confold";
    }

    With --force-confdef, dpkg automatically takes the default action where one is defined, and --force-confold keeps your old configuration file otherwise. For more background see the blog post where this snippet is from.

    I am not sure whether this will fix the issue completely, I have not tried it yet.
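    These Dpkg options can also be given on the command line for a single run, which lets you try them out before making them permanent (a command sketch, not something I run regularly):

    ```shell
    # Run one upgrade non-interactively: take the package default where one
    # is defined, otherwise keep the locally modified configuration file.
    sudo apt-get -o Dpkg::Options::="--force-confdef" \
                 -o Dpkg::Options::="--force-confold" \
                 dist-upgrade
    ```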

  • Random programs are only available in old versions, with no clear pattern to it. The window manager Awesome WM was only available in version 3.4 for a long time. LaTeX also used to be in ancient versions; that is better now. The version of Vagrant shipped with a current Ubuntu was already so old that the service had changed its API and it did not work with Atlas any more.

  • The support period of the LTS versions is way over two years, so you have plenty of time to move to the next LTS version. The normal releases, which come out every six months, have only three months of additional support, which means that the upgrade window is just three months. Those windows fall exactly within my semester, so I have to upgrade during the lecture period. I could use the LTS, but its software is not current enough for my taste.

  • For some reason, APT has two frontends: apt-get (and apt-cache) and the separate aptitude. I am still not sure which one is better; Debian used to recommend aptitude whereas Ubuntu recommended apt-get. Now both recommend apt-get and a new apt command has come around. Both systems have different databases of manually installed packages and only talk to each other on a limited basis.

  • Having automatic updates means that new kernels are also installed automatically. My laptop has full disk encryption, which needs a separate /boot partition. The automatic updates fill the boot partition until no more kernels can be installed. Even worse, the installation of a new kernel package fails midway when there is not enough space left on /boot. The installation fails and the package management is in a state of limbo. Nothing else works until you clean up space on /boot and finish the updates.

    If that happens in the background you do not get any notification, it just stops installing updates. You only notice when you install a package manually and get errors. You then run apt-get autoremove to get rid of the old kernels and apt-get install -f to fix things again. It puzzles me, as I have added the autoremove step to the unattended-upgrades configuration. Might be a bug, I do not know.
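    The recovery boils down to two commands (a command sketch, to be run once the system is in that state):

    ```shell
    # Remove old, automatically installed kernels to free space on /boot:
    sudo apt-get autoremove --purge
    # Let APT finish the half-installed kernel package:
    sudo apt-get install -f
    ```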

  • My knowledge of the Linux kernel is quasi non-existent. I know that it contains drivers and that newer versions mean improved drivers. The naming scheme is X.Y.Z, where I think Z stands for bugfixes. In Ubuntu the kernel versions are always X.Y.0 and then carry a very high patch number in the Debian versioning. To me this looks strange and I have no idea what the point is.

    I found a list with mappings between the versions. Perhaps this is done that way as Ubuntu has quite a number of patches they carry on top of the vanilla kernel.

    Also there are designated long term support kernels and somebody told me that Ubuntu is not using one such kernel but just a regular release for its LTS versions. If that is true, it is indeed a strange thing to do.

  • Contributing to the Ubuntu-specific projects very often requires signing a contributor license agreement (CLA) with Canonical. You assign all the rights to them in exchange for the promise that the code will be published under an open source license. This allows them to turn around, add some stuff themselves and sell it as proprietary code to some manufacturer. In exchange they take care of Ubuntu and do not charge for it. This point is debatable, but I do not quite like it.

  • All the Ubuntu projects are versioned in Bazaar. It was the first version control system I ever used and I find it quite usable. The fact that it is written in Python is neat, and its Windows support is also cool. However, git seems to have won the version control competition and most open source projects that I see use git. A lot use Mercurial, but I think only a few actually use Bazaar. In order to contribute to Ubuntu projects you have to pick up yet another tool.

    Learning the basics of another version control system does not take long for a developer. But after years of using git I see what power this tool has and you cannot use that on those projects.

  • Debian has both a useradd and an adduser command. I always forget which one to use when I need it, so I have to look at the manual pages. It seems that one is a wrapper script for the other.

    On Fedora, both exist, but adduser is just a symlink to useradd. Therefore there is only one tool and no wrapper around another.
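    A quick way to see which variant a given system has (a sketch; the path is the usual one but may differ):

    ```shell
    # On Fedora this prints a symlink pointing to useradd; on Debian,
    # adduser is a separate wrapper script with friendlier defaults.
    ls -l /usr/sbin/adduser
    ```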

Arch Linux

In 2012 I had some experience with installing Arch Linux on a ThinkPad. We spent some 15 hours getting it to work with all the bells and whistles:

  1. btrfs as main file system
  2. full disk encryption
  3. TRIM through the full disk encryption
  4. UEFI booting
  5. booting into console, then using startx

This is nothing that the Ubuntu installer would give you in any way. And I do see the purity of having just plain Linux and no cumbersome configuration GUIs on top that make the whole thing harder, rather than simpler, to maintain. However, as I regard my computer mostly as a tool and only partly as an end in itself, I like to use a distribution that is friendly to the ignorant. Arch Linux definitely is user friendly when the users know exactly what they are doing. I do not, and honestly I do not want to find out every detail of every service in a Linux system.

So I know that Arch Linux is not for me, but I think it is a very nice distribution if you know what you want.


openSUSE

Around 2013 I thought about switching to openSUSE. At the time it was openSUSE 12.3 and I wanted to see whether it was better than Ubuntu for me.

The bad

Perhaps I was not open-minded enough at the time but a couple of things did not resonate:

  • The whole KDE theme was tuned to green, the color of openSUSE. It is nice that it is consistent, but I just do not like green. For me KDE should be blue, and it looked like I would have to change things in a couple of places to get rid of the green completely.

  • The font anti-aliasing looked weird and less polished than on Ubuntu. I have no idea what it was in the end, but I did not like it. I also missed the Ubuntu font, which I think is a nice font.

  • Some packages are not available. It is okay that a particular piece of software is not packaged in a distribution, I get that.

    The weirdest example was in 2015 on openSUSE 13.1: I could not find any header files for Qt 5. There were binary packages for Qt 5 as well as a lot of programs linked against Qt 5, but no headers. I have no idea how this worked, but it meant that I could not develop my program with Qt 5 and had to use Qt 4.

  • The configuration of automatic updates required YaST. One had to install a plugin for YaST, using YaST, to obtain settings for automatic updates. After that it would just do something in the background. Usually I am fine with things just working, but by now I dislike things I cannot automate with Ansible.

    Therefore I checked the whole /etc folder into git to see what changes YaST actually made. Then I enabled automatic updates in the GUI and checked with git again. One huge Bash script and a cron job had been created. It was so much code that I did not bother to read the whole thing; I disliked it right there.

    Ubuntu and Fedora have much easier ways to enable automatic updates. On Ubuntu you install unattended-upgrades and adapt two config files slightly. On Fedora you install dnf-periodic or something in that neighborhood. Then you just use systemctl to enable it and you are good to go.
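    For reference, this is roughly what the setup looks like on both distributions (the file name and the dnf-automatic package name are from my memory, so treat them as assumptions):

    ```shell
    # Ubuntu: install unattended-upgrades; /etc/apt/apt.conf.d/20auto-upgrades
    # should then contain the two APT::Periodic lines shown below.
    sudo apt-get install unattended-upgrades
    #   APT::Periodic::Update-Package-Lists "1";
    #   APT::Periodic::Unattended-Upgrade "1";

    # Fedora: install dnf-automatic and enable its systemd timer.
    sudo dnf install dnf-automatic
    sudo systemctl enable dnf-automatic.timer
    ```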

  • For some reason the categories in desktop files are not the standard ones. This means that every RPM package I make of my own software has to be adapted if there are XDG launcher files in it. I have to add lines like the following to my SPEC files:

    %if 0%{?suse_version}
    %suse_update_desktop_file -r thinkpad-dock-off System HardwareSettings
    %suse_update_desktop_file -r thinkpad-rotate System Monitor
    %endif

    It is not a big deal but just a little cumbersome.

The good

  • Their package manager, zypper, is a single command which can do everything you need. It has an exact SAT solver instead of something heuristic.

    The command line output is a little cluttered for my taste, but very informative. It has neat touches, like coloring the first letter of each package to be installed. That way you can easily skim through the dense list of packages.

  • Zypper also explicitly notifies you of “vendor changes”, that is, when a package receives an update from a different repository than it was installed from. This is an awesome feature as it allows more secure usage of third-party repositories.

    On other distributions you had the problem that a third-party repository could simply ship a newer version of a program you already had installed (say coreutils). You would just get that with the next update and not think twice about it, and the attacker would have replaced software with his own. The approach taken here protects you from exactly this: you can add a repository and install just the software you want from it, without it also updating core packages.

    This also protects against accidents where somebody runs a repository and uploads a newer version of some library to make it work with his software. Without the vendor check, your libraries would be updated without question even if you never intended to use that software.
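    As far as I know, this behaviour corresponds to a solver setting in /etc/zypp/zypp.conf (a config sketch; the default already disallows vendor changes):

    ```ini
    ## Whether the solver may switch the vendor of an installed package
    ## during an update; false gives the protection described above.
    solver.allowVendorChange = false
    ```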


Fedora

For various reasons I switched to Fedora in October 2015. I was not sure whether it would be better than Ubuntu. Now I know that it again is a mixture of good and bad things.

The bad

  • On Ubuntu I was used to having all sorts of proprietary and non-free software available, like codecs and VirtualBox. This is not the case with Fedora. Since I do need that software, I resorted to RPM Fusion. It does not feel too good to add a third-party repository directly after the installation, but having no music playback for streaming services is not cool either.

    At least the software in RPM Fusion is provided as nice RPM packages. The really bad thing is software that is not packaged and has to be installed manually. Non-free software is hard to package due to license issues, of course.

    There is Fedy, which allows you to one-click install various proprietary software. It looks like a nice tool and is very handy to get stuff working. However, it just installs the software from the manufacturer’s website and you do not get updates. You have to install updates yourself, and I am not sure how much Fedy helps with that. I think it would be way cleaner to collect all that software into one or more repositories and add those to the package manager.

  • Fedora lacks a few software packages that Debian has, for instance xss-lock and “Mediathek View”, which you can use to download TV shows from the official German stations.

  • For some reason the touchscreen does not work as well as on Ubuntu. I can perfectly draw with it in Xournal and use touch to click on Xournal’s toolbar. But I cannot use every app with it, nor the KDE menus. I have no idea what is going on there, but it is probably just some configuration issue.

The good

  • I am not asked anything during package installations; all questions are asked beforehand. Changed configuration files are just put next to the old ones with .rpmnew appended to the name. If the package maintainer deemed it necessary to use the new configuration file, the new one is used and the old one is moved to .rpmsave.

    In either case you can just call rpmconf -a whenever you want to go through those changes. This is very nice, as all the updates succeed; I have never caught the package management in some undefined state.
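    Finding and resolving such leftover files is short work (a command sketch; rpmconf is packaged in Fedora):

    ```shell
    # List configuration files left behind by package updates:
    find /etc -name '*.rpmnew' -o -name '*.rpmsave'
    # Walk through each pending change interactively (keep, replace, diff):
    sudo rpmconf -a
    ```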

  • The dependency solver in dnf is the same exact SAT solver used in openSUSE. This library, libsolv, comes from openSUSE and is now incorporated into Fedora as well. Thanks, openSUSE project!

  • Automatic updates are easy to set up. After I set up a local mail delivery agent, I now receive system emails; the periodic updates send an email with a summary of the transaction. One time I got an email with over 2000 updates when TeXLive got updated. I noticed something from the CPU and SSD usage, but that was about it. Just how I think updates should go.

  • The download time of all the packages is reduced significantly by binary deltas. When updates are downloaded, only the difference is fetched and applied locally to the cached package from the last update.

    When rebuilding the complete packages from the deltas, it uses all CPU cores and is done rather quickly.

    I do not really care too much about the bandwidth and download time used by my laptop. But on computers which I treat as a cold standby and only boot every couple of weeks to sync data, I am happy that the overall time to install updates is reduced that way.

  • The boot partition is not mindlessly filled with kernels. There are always exactly three kernels there, which is a neat compromise between disk space (very constrained due to full disk encryption) and having a fallback. You can even change the number of kernels you want to keep.
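    If I remember correctly, this count lives in /etc/dnf/dnf.conf, the same file where the binary deltas can be toggled (a config sketch):

    ```ini
    [main]
    # Keep at most three kernels (and other install-only packages) in parallel:
    installonly_limit=3
    # Fetch binary deltas instead of full packages where available:
    deltarpm=true
    ```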

  • The UEFI installation just worked without any problems. So did Ubuntu 15.10’s, but I wanted to mention it.

  • The versions of the programs are nicely up to date. I have not found a program that was too old for my purposes.

  • I have packaged my own software again as RPM packages and can compile them remotely on the Open Build Service kindly hosted by the openSUSE project. They provide a repository that I have added to my Fedora installations. Since Fedora and openSUSE share the RPM package format I can package my software for both distributions with a bit of additional work.

  • Although there is no long term support edition (well, there are Red Hat Enterprise Linux, CentOS and Scientific Linux), each release is supported until one month after the release after next comes out. This means that with a release cycle of six months you can safely keep one version running for about 13 months. This way I can choose to adopt the next version of Fedora in the semester break.