Regular update of Docker images/containers

After converting my servers to Docker setups, I needed to update the images and containers regularly for security reasons. I was baffled to find that there is no standard update method to make sure that everything is up-to-date.

The ephemeral setup allows you to throw away your containers and images and recreate them from the latest versions. As easy as this sounds, there are some pitfalls in practice.

First we need to understand that there are three types of images we need to keep up-to-date:

  1. Images from the Docker Hub that are just pulled and used as they are, with some configuration
  2. Images from the Docker Hub that are pulled and then only used as a base for your own Dockerfiles
  3. The images built from your own Dockerfiles

Ridiculously, all three of them need to be updated to make sure everything is up-to-date (e.g., a new build won’t automatically pick up the latest base image update), and additionally we have to take care of cleanup.

I found some solutions on the net to automatically update Docker, the best one so far being (Automatically update Docker images). But it leaves out my own Dockerfiles, which need a build and some minor steps, pruning, etc. So I am only using the dupdate script out of the Handy Docker Tools and incorporate it into a little script.

So what is the solution?

WARNING: This only updates images. If your setup needs additional update steps, you need to plan those in as well. Otherwise you risk breaking your setup.

A complete update takes multiple steps.

First, I use dupdate to update all Docker images coming from a hub, covering both the ones I use directly and the ones serving as a base for builds.

/usr/local/sbin/dupdate -v

This will give an error for the images you created out of your own Dockerfiles, but it updates all the others via a pull from the Docker Hub.

Second, I update the images of my own Dockerfiles by rebuilding them all via docker-compose (if you use docker without docker-compose, you have to do this for each Dockerfile):

/usr/local/bin/docker-compose -f docker-compose.yml build


This will use the freshly pulled base images in the build, hence creating the latest version of each image from your Dockerfiles.

Now all the images are updated and we only need to restart the containers.
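Assuming docker-compose manages the containers (same compose file as in the build step), the restart can be done by recreating them; `up -d` only recreates containers whose image has changed:

```shell
# Recreate containers from the freshly built/pulled images.
/usr/local/bin/docker-compose -f docker-compose.yml up -d
```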

Addition for cleaning up

However, you end up with a lot of images tagged or named <none>. These are your old images, which are now cluttering the hard drive.

The ones where only the tag is <none> are the images you updated from the Docker Hub; the ones where both name and tag are <none> are the ones you built yourself.

You will need to do an image prune to get rid of them and free up space.

/usr/bin/docker image prune -a --force

Warning: This will erase all older images. If you need them as a safety precaution, skip this step.



As a long-time PGP user I wanted to improve the key landscape and offer my public key via DANE. This is quite simple if you have a DNS host that supports DNSSEC and the different DANE record types needed. (Note: I use

The principle behind this: you publish your PGP key in your signed DNS zone, and thereby a correctly configured mail system can opt into sending you encrypted E-Mails even without having exchanged keys before.

I have tried to use the gpg2 option “--print-dane-records” (available from 2.1.9 upwards), however I could not generate a usable data part for the DNS record with it.

So what is the solution?

As a prerequisite you need to have created your own PGP key with the E-Mail address you want to use as a user ID. Note: I am only doing a per-E-Mail setup.

Use the following website:
Its “Generate an OPENPGPKEY output” function pretty much does the trick.

The generated output maps to the record as follows:

  • Owner Name goes into the record. Be careful if your DNS provider automatically adds the domain. If you used “--print-dane-records”, you need to concatenate the ID of the key (before TYPE61) and the string after $ORIGIN.
    The syntax is “<SHA256 hash of your e-mail name before the @>._openpgpkey.<your domain>”
  • Out of the generated DNS OPENPGPKEY record, the part within the () goes into the data part of your record. Here you need to transform the key into a one-line string.
    If you used “--print-dane-records”: I discarded its data part, as I could not get it to work. Simply use your key data exported with ASCII armor (-a parameter) without the last line, of course leaving headers and footers out as well.
  • The type is OPENPGPKEY
  • Class is IN
  • TTL you can set to a decent value. For testing I used an hour.
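If you want to double-check the Owner Name locally: per RFC 7929 the hash is the SHA2-256 of the local-part of the address, truncated to 28 octets (56 hex characters). A small sketch, using alice@example.org as a hypothetical address:

```shell
# Build the OPENPGPKEY owner name for alice@example.org.
# RFC 7929: SHA2-256 of the local-part, truncated to 28 octets = 56 hex chars.
localpart="alice"
domain="example.org"
hash=$(printf '%s' "$localpart" | sha256sum | cut -c1-56)
echo "${hash}._openpgpkey.${domain}"
```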

To test the setup use

dig OPENPGPKEY <Owner Name goes here>

This will simply send back the data block and test the DNS setup.

To test the DANE lookup do the following

gpg2 --auto-key-locate clear,dane,local -v --locate-key <your e-mail goes here>

This gives quite good feedback on the setup and tells you whether the key was fetched via DANE or not.


SendXMPP mail forward on Debian Jessie

To have a more comfortable way of receiving messages from my servers, I wanted all my root E-Mails to be forwarded to my mobile via XMPP. I only have a limited exim4 running on my machine, configured for local mail delivery only.

So what is the solution?

1. Install sendxmpp

apt-get install sendxmpp

2. Create config file as “/etc/sendxmpp.conf”


As an example password: supersecretpassword!
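The file's content did not fully survive in this post; for the sendxmpp version shipped with Jessie, the config is a single line containing the sending account and its password. The JID below is a hypothetical stand-in — only the password example is from the original:

```
sender@jabberserver.example supersecretpassword!
```

sendxmpp logs in with this account to deliver the messages, so it should be a dedicated sender account, not your personal one.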

3. Set the right permissions and owner

chmod 600 /etc/sendxmpp.conf
chown Debian-exim:Debian-exim /etc/sendxmpp.conf

4. Create a script calling sendxmpp as “/usr/sbin/mail2xmpp”. It might be possible to put this completely into the alias, however I decided to use a script.
Exchange <receiving ID goes here> with your receiving XMPP ID. “-t” enables the TLS connection for sending the message.

#!/bin/sh
echo "$(cat)" | sendxmpp -t <receiving ID goes here> -f /etc/sendxmpp.conf

5. Make script executable

chmod 755 /usr/sbin/mail2xmpp

6. Create the alias for the user whose E-Mails you want to forward in “/etc/aliases”

root: <local user goes here>, |/usr/sbin/mail2xmpp

7. To activate pipe forwarding we have to create “/etc/exim4/exim4.conf.localmacros”
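The content of this file is missing above. On Debian the system aliases are handled by exim's system_aliases router, and the macros commonly set to allow pipe delivery from /etc/aliases look like this (an assumption based on the standard Debian exim4 configuration — verify against your setup):

```
SYSTEM_ALIASES_PIPE_TRANSPORT = address_pipe
SYSTEM_ALIASES_USER = Debian-exim
```

SYSTEM_ALIASES_USER makes exim run the pipe as the same user that owns /etc/sendxmpp.conf, matching the permissions set in step 3.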


8. Run newaliases and restart exim4 for the config to take effect

newaliases
service exim4 restart

Now you should be able to test whether it works by simply sending a local test E-Mail to the user “root”.