Safe updates, and knowing which data to back up

Q I am the IT manager for a small company that provides web services to international branches, VPN solutions and other services, all on CentOS, as well as internal services such as Samba and CUPS. Patching Linux servers is a relative unknown to me, but I have to do it now. The paralysis brought on by fear of breakages can't continue - it will result in a less secure system. I've read book after book and article after article, and they all seem to gloss over this topic with a catch-all "back up your data". Which data? It's not as simple as tarring up a home directory when it comes to enterprise services - they're spread all over the OS, with libraries that other services depend upon.

What if an update breaks something? How do I roll back? I understand that the major server distributions spend a great deal of time making sure that their repositories are self-consistent; however, there are things that never make it into the distros - certain CRMs, third-party webmail solutions and so on. Anything involving more than one package with similar functionality could feasibly mean that I end up chasing dependencies by hand if something goes wrong.

The ideal solution is, of course, to apply the patch to a test environment first. In truth, though, how many people have a mirror of every live service available all the time? A failover box may be available, but I'd rather not change the one thing I know should work if everything else fails. Virtualisation seems to be the way to go: virtualise your environments, take a snapshot, apply the patch and roll back the entire operating system if something goes wrong. This seems a little inelegant though - like changing your car when you run out of petrol.

A The car analogy seems a little strange - rolling back to a snapshot only undoes the changes made since the snapshot was taken; it is like an undo function, but one that returns to a fixed point in time rather than reversing a single operation. With critical production servers, you really do need to test everything on a separate system before applying it to the live machines. You are thinking along the right lines with virtualisation, but use it for the test environments instead. That way you could effectively have test versions of all of your systems on one or two machines.
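
As a rough sketch of that workflow, assuming the test copies run as KVM/libvirt guests (the guest name webtest and snapshot name pre-update are only placeholders), a snapshot can be taken before patching and reverted if anything breaks:

  # take a snapshot of the test guest before patching
  virsh snapshot-create-as webtest pre-update "state before applying updates"
  # ... apply and test the updates inside the guest ...
  # if something breaks, return the whole guest to its pre-update state
  virsh snapshot-revert webtest pre-update
  # once the updates have proven themselves, the snapshot can be discarded
  virsh snapshot-delete webtest pre-update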

This has a number of distinct advantages. First, you can use a single box with a number of virtual machines on it, which requires no more resources than a single box running any one of those servers, with the obvious exception of disk space. When you want to update a particular system, load its virtual machine, apply and test the updates, and replicate them on the production server only when you're completely satisfied that they work reliably. If there's a problem, revert to the snapshot and try again; all the while your production server carries on doing its job. Another advantage of testing on a separate system first applies when you're installing from source.
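
On CentOS, that update-and-replicate cycle might look something like the following - the package names are only illustrative:

  # on the test guest: see what would change, then apply everything
  yum check-update
  yum update
  # exercise the services that depend on the updated packages, then,
  # once satisfied, repeat just the proven updates on the production box
  yum update httpd php openssl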

You don't need to compile on the production system, so you don't need a full compiler toolchain on that box. This reduces the number of packages installed on the production server and so improves its security. You can use checkinstall (http://checkinstall.izto.org) to build RPM packages of the program for installation on the production systems.
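
A rough example of that approach, with the source tree, package name and version standing in as placeholders:

  # on the test machine, build the software as usual
  ./configure && make
  # instead of 'make install', have checkinstall produce an RPM
  # (-R selects RPM output, --install=no leaves the test box untouched)
  checkinstall -R --install=no --pkgname=example-crm --pkgversion=1.0
  # copy the resulting RPM (its exact path and name will vary) to the
  # production server and install it there
  rpm -Uvh example-crm-1.0-1.x86_64.rpm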
