An Update a Week Keeps the Hackers Away

Update your system. Regularly. Seriously. Do it!

It sounds obvious when I say it like that, but a surprising number of people forget about it. Beyond the obvious problem that old software is a security risk, neglect compounds: the longer you put off updating the system, the more updates pile up, and the more likely something is to break when you finally apply them. After a while, you put off updating simply out of fear of breaking something.

Pinning + Frequent Updates

"But if I upgrade often I will introduce irregular downtimes my users don't want." No, updating doesn't have to mean you run the risk of downtime. The basic idea is that you specifically do not upgrade the packages that might cause a downtime when upgraded: many package managers have support for "pinning" versions of packages. In other words, you can for example pin Nginx at the current version, in which case you can update the system to your hearts content without the package manager touching your web server.

So what you do in practice is take some time once a week to run an update. You don't blindly update everything it asks for – if it asks to update some critical part of your system, you tell it "No" and pin that part at its current version. (This, by the way, is why I don't suggest automatic updates. You should always vet the list of software to be updated so that you can deny anything that might inconvenience your users.)
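
On an apt-based system, the weekly routine might look roughly like this (a sketch, not a script to run unattended; postgresql-15 is only a stand-in for whatever you consider critical on your server):

    # refresh the package index and see what wants to be upgraded
    sudo apt update
    apt list --upgradable

    # anything risky on the list? hold it before upgrading
    sudo apt-mark hold postgresql-15

    # then upgrade everything else
    sudo apt upgrade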

After a while you will have a system that's mostly up to date, except for a few critical components – perhaps the kernel, web server, database system and so on. Make your users aware that once a month is "critical system update day." This is when they have to accept the risk of downtime. This time, you fully update everything, including what's pinned. It will hopefully be a cheap operation, since you have regularly updated most of the system already.
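
On critical system update day, the same apt-based sketch continues: check what's held, release the holds, and upgrade everything.

    # list everything currently held back
    apt-mark showhold

    # release all holds, then do a full upgrade
    apt-mark showhold | xargs -r sudo apt-mark unhold
    sudo apt full-upgrade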

There are some cases in which it won't be a cheap operation, and it's good to be aware of them. If anything vital to the start-up process of your server has been upgraded (in other words, something like the kernel, bootloader, system init scripts and so on), you should reboot your server. The reason you reboot is simply to check that the start-up process works correctly. You don't want to discover that it doesn't when faced with an emergency reboot. (For that reason, it might even be clever to force a power cycle on the server instead of an ordinary reboot – to make sure the start-up procedure works even when something unexpected caused the server to go down.)
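
If you're not sure whether the upgrade touched anything that warrants a reboot, the system can usually tell you. As far as I know, on Debian/Ubuntu a marker file shows up after such upgrades, and on RHEL/Fedora the needs-restarting tool from the dnf/yum utilities reports it:

    # Debian/Ubuntu: this file exists when a reboot is recommended
    [ -f /var/run/reboot-required ] && echo "reboot needed"

    # RHEL/Fedora: ask whether a full reboot is required
    sudo needs-restarting -r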

Emergency Updates

What I've written above is the ideal workflow. Sometimes shit really hits the fan though, in which case you should of course immediately schedule an emergency update. Recent events that come to mind include the Heartbleed bug in OpenSSL and the Shellshock Bash bug.
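
When something like that happens, the emergency update itself can usually stay narrow: upgrade only the affected package instead of the whole system. On an apt-based system that might look like the following (openssl here is just an example; check the advisory for the actual package names on your distribution):

    # upgrade only the vulnerable package, leaving everything else alone
    sudo apt update
    sudo apt-get install --only-upgrade openssl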

What's funny about those bugs, though, is that they often existed for years, or at least months, before they got public attention, and some people probably knew about them well before that. Some parties (both "good" and bad) have an economic interest in keeping such bugs under wraps. What I'm trying to say is that you don't have to drop everything this very second to update your system. If the vulnerability has been known for six months, you can probably afford to wait a few hours more. Rushing an update risks breaking more than it fixes.