Maslow’s Hammer

by actionjack on August 2, 2013

“I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.”

I haven’t blogged in an absolute age, so I thought I’d commit to writing some of my ideas and thoughts as a DevOps journeyman.

Over the past year or so I’ve been more contemplative about the Art of DevOps in both its cultural and technical aspects. As part of this contemplation I’ve reviewed how I’ve done things as a DevOps practitioner and questioned why, all the while attempting to sharpen my saw. Sometimes it’s been difficult and sometimes it’s been fun, but either way it’s been rewarding.

Back to the point: over the past 18 months I’ve had the good fortune to work at multiple organisations with differing methods in their Practice of DevOps, so I wanted to compare and contrast them and put my own particular spin on what I think worked, what didn’t, and why.

Orchestration vs Automation

We now have some absolutely fantastic tools for automation: Puppet, Chef, SaltStack, Ansible, CFEngine, and the list goes on. They are absolutely wonderful and I can no longer imagine a world without them. However, with them has come a culture of favouring the “tool” above all else.

What I mean here is that the toolsmith wants to use the tool to do EVERYTHING, from configuring infrastructure to deploying and configuring applications. That is generally fine, but sometimes there is the temptation to go a bit too far, IMHO: for example, using Puppet to set up one-time destructive operations that handle persistent state, such as the creation of a MongoDB replica set in an ephemeral environment like AWS.

This can be done quite easily, but it could just as easily run amok, so the question is whether it should be done without any manual gates. If you accidentally generate a new MongoDB replica you could leave your replica set with an even number of members (4, 6, etc.), where a partition can leave it unable to reach quorum.

Personally I think this is where Orchestration should step in. I define orchestration as the codifying of a process which is triggered by something like Jenkins, Rundeck, Capistrano, Fabric or even a bash script, on a one-time or infrequent basis. I believe there should be some sort of manual gate for specific types of operations, such as creating MongoDB replica sets or deploying software in a structured manner, rather than just letting your configuration management tool of choice do its thing on an ad-hoc basis.
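To make that concrete, here is a minimal sketch (the script and the replica-set command are hypothetical) of a gated, one-time operation that could be wired into Jenkins or Rundeck:

```shell
#!/bin/sh
# Hypothetical orchestration gate for a one-time destructive operation.
# The actual replica-set initialisation command is only a placeholder.

init_replica_set() {
  members=$1
  confirm=$2

  # An even number of members (4, 6, ...) puts quorum at risk.
  if [ $(( members % 2 )) -eq 0 ]; then
    echo "refusing: $members members cannot guarantee quorum" >&2
    return 1
  fi

  # The manual gate: nothing destructive happens without --confirm.
  if [ "$confirm" != "--confirm" ]; then
    echo "dry run: re-run with --confirm to initialise" >&2
    return 1
  fi

  echo "initialising replica set with $members members"
  # e.g. mongo --eval 'rs.initiate(...)'  # the destructive, one-time step
}
```

Triggered from a job runner, the --confirm parameter becomes the explicit approval step rather than an interactive prompt.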

The long and short of this is just to use the right tool for the job and make sure you keep both in your toolbox.

Relevant links

Though I’m not an advocate of everything in this article, it does have a section on being overly zealous with automation tools like Puppet.

Versions vs repository management

Also known as:

package {"foo": ensure => 'latest'} vs package {"foo": ensure => '0.0.1'}

I’m a great believer in keeping local package repositories (or mirrors) for your server software, ideally one for each environment you maintain (Development, Staging, Production).

What I’m not a great believer in is using versioning in your configuration management tool to control which specific version is deployed. I think repository management is a more elegant, long-term solution to the problem: just set your configuration to latest (or installed) and ensure that the right packages are in your local repository.
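On the client side this amounts to no more than a per-environment repo definition; a sketch, with hypothetical names and URL:

```ini
# /etc/yum.repos.d/internal.repo -- illustrative; names and URL are made up
[internal-staging]
name=Internal staging mirror
baseurl=http://repo.example.local/staging/el6/x86_64/
enabled=1
gpgcheck=1
```

Promoting a release then means moving a package between mirrors, not editing manifests.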

The reasons why I think it’s the more viable and elegant approach in the long term are that:

1. It stops constant configuration churn since you don’t have to continually update your configuration to handle all your software versions.
2. I believe the rolling out of updated software is an orchestration task which should be validated by your CI system through the spin-up of throwaway virtual machines or containers (using something like Vagrant), tested using tools like serverspec, selenium, capybara, cucumber or rspec to validate that your application still works after the software has been upgraded. Then an automated job should move the validated packages from repo to repo.
3. Nothing kills continuous delivery faster than having to hack around in your {insert configuration management tool of choice here} in order to do a release.
4. If you want to go down the road of using Immutable infrastructures, you need to start thinking of service availability not discrete versions of software.
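Point 2 above can be sketched as a small promotion step (paths are hypothetical, and the createrepo call only runs where the tool exists):

```shell
#!/bin/sh
# Hypothetical CI job step: once tests pass against the staging repo,
# copy the validated package into production and rebuild the metadata.

promote() {
  pkg=$1
  staging=$2
  production=$3

  [ -f "$staging/$pkg" ] || { echo "no such package: $pkg" >&2; return 1; }
  cp "$staging/$pkg" "$production/" || return 1

  # Rebuild the yum metadata if createrepo is available on this host.
  if command -v createrepo >/dev/null 2>&1; then
    createrepo --update "$production"
  fi
}
```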

Relevant links

Staging Package Deployment via Repository Management

Pulp Project

Definition of an immutable server

Packages vs Tar-balls

Use your distro’s package format to deploy your software, nuff said!

Packages give you versioning, querying, upgrades, rollback, dependency resolution and ease of distribution.
Let your configuration management handle the configuration (see, it’s not just a clever name!)

Once I would have let you off for saying creating OS packages is hard, but now we have fpm, so you have no excuse.

As an example, if something isn’t quite right somewhere, this is really hard to verify:

$ tar -xzvf foo-bar.tar.gz

While this will tell you something is corrupted:

$ rpm -K foo-bar.rpm
 foo-bar.rpm: digests NOT OK
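To get comparable integrity checking for a tarball you have to carry the checksums yourself; a sketch using sha256sum (filenames illustrative):

```shell
#!/bin/sh
# Carrying tarball checksums by hand: record at build time,
# verify at deploy time before unpacking anything.

record_checksum() {
  sha256sum "$1" > "$1.sha256"
}

verify_checksum() {
  # Returns non-zero if the file no longer matches its recorded checksum.
  sha256sum -c "$1.sha256"
}
```

The RPM, by contrast, carries its digests inside the package, which rpm checks for you.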

Test Driven vs Just Hacking with NOOP

I work in an agile team who live and breathe TDD. They sell it as a feature, and it quickly made sense for me to adopt it as well for Infrastructure as Code, to add more credibility to the group. However, I have to admit it was hard in the beginning; I just couldn’t see the value until I tried to refactor a module I was working on and, like a bull in a china shop, broke it by trying to change too many things at once. It was the tests that did two important things for me that day:

1. It helped me fix the problems in record time and;
2. It taught me to change one thing at a time, and only that thing, until the test for it passed.

While those may not seem important, I began to realise other benefits: my code became more reusable and readable because I focused on two things, writing code that was easier to test, and writing the tests before I committed a single line of implementation.

Writing tests in a multi-individual environment helps because your tests become a compliance guarantee: if someone on your team extends the functionality, you can verify that it doesn’t overtly break the existing infrastructure.

Taking vs Giving

aka: Not Invented Here vs Forking & Merging

Pause for a moment, take a look in your configuration management source tree and see if you can find an ntp module or recipe of some kind. Do you know who wrote it? You? Someone in your organisation? Someone who left your organisation? A consultant? A vendor?

Opscode Chef has over 25,000 registered users, but their ntp cookbook has only ever been downloaded 2,918 times (as of the time I wrote this post). Now, you could read into this a number of ways, but I’d like to believe that:

a) People usually care about their servers having the correct time.
b) Huge numbers of them aren’t all being employed by a small number of companies.

I bet the majority of you who actually looked will find you have written your own ntp modules, and that the greater part of those just use puppet to install ntp, copy in a file and ensure that the service is always running.

Now, that’s a great first step to learning the tool, but by just leaving it there it becomes inventory. And not just anybody’s inventory: it becomes yours. You made it, you own it, you maintain it.

If instead you used a shared ntp module you could reap certain benefits:

1. It’s available Just In Time: when you want it, it’s just there, no muss no fuss;
2. As a general rule it’s usually well tested;
3. If it doesn’t do exactly what you need, you can extend it and submit a merge request. You learn from working with other people’s code, it improves your own, and if your intention is to merge the code back at some point you start to think more about how it’s going to affect both you and other people.

Relevant links

Stop the fork

One of my pull requests


Going it alone vs pairing

Recently I started doing something I’ve always thought about and tried to do before, and that’s DevOps pairing. I pair with Developers, QAs and other Sysadmins, locally and remotely, and I have to admit that I really enjoy it. I get to work with people smarter than I am, from different fields, with very different viewpoints, and I feel I’ve really raised my game because of it.

Pairing among sysadmins is almost unheard of; we are generally solitary creatures when performing our art, and secretive in our methods. Often the only time we pair is when we are put on a training course and forced to share a single computer with the person next to us, and the goal then is to get the exercise over and done with quickly; if the person we are working with doesn’t understand, well, too bad for them.

How many times have you worked within a team and heard the person next to you wrangle, curse and complain about an issue until they finally break down and ask you to take a look, whereupon you magically identify the problem within milliseconds? Is this a magical talent? Prescient skill, even? No, it’s just a fresh pair of eyes from another perspective: a review from a peer, not a judgement of your skill.

Pairing is not just for Developers, it’s for sysadmins (everyone?) too. It enables you to:

  • Share knowledge
  • Reduce errors
  • Produce simpler solutions
  • Be more productive
  • Enforce understanding – does it really work the way you think it does?

But here’s the kicker: it

  • Requires social skills, and it doesn’t just happen: even when you’re not driving, you need to navigate.

Relevant Links

Pair Program with me
Google Hangouts
Pair Programming Agile 2012 presentation
Extreme Programming Pairing

Favouring the Status Quo vs Challenging and Changing things for the better

I was going to expand on this but I’ll leave it to @lusis; he says it much better and more eloquently than I ever will in his post.


Using FPM to package unwieldy 3rd-party vendor packages

by actionjack on July 20, 2012

Sometimes you have a requirement to package a third-party vendor’s software which hasn’t been delivered in your package format of choice (let us say RPM in this case). No problem, you think; you proceed to install it and create an RPM package from it.

9 times out of 10 this just works, and works well, but sometimes you will run into problems. For example:

    • The size of the package is greater than the RPM cpio limit of 2GB;
    • The application has a manifest that holds a checksum of its files, which is stripped by a standard build;
    • There are multiple CPU architectures, OS versions and test versions supported within the package.

Funnily enough I hit the jackpot and had all 3 to contend with:

    • The package was 10GB compressed and 22GB uncompressed;
    • It had a manifest file with a checksum of every file listed (181,347 of them) and refused to run unless they were all valid; and
    • It had to be packaged to handle 10 different situation types, which were scattered across different directories (there were 5,266 directories).

Now, I’m quite confident in my rpm-fu, but this looked like it could take a while and be more than a little painful, so I asked myself: what would be the fastest, most convenient way to package this (insert curse word here) application with the least cost and most benefit?

Enter Effing Package Management!

If you have never heard of or used Jordan Sissel’s Effing Package Management tool before, you are in for a real treat! It slices, it dices, but wait! There’s more: it can create packages from multiple source types, e.g. dir, gem, python, deb, and spit them out as rpms, debs, Solaris pkgs and even, wait for it, Puppet modules!

Installing FPM

Hint: you will need the EPEL repository installed.

$ yum -y install zlib zlib-devel rpmdevtools
$ yum groupinstall "Development Tools"
$ curl -L | bash -s stable --ruby
$ source /usr/local/rvm/scripts/rvm
$ gem install fpm

Dealing with the B*tard package from Hell

What you are doing here is creating a clean copy of the application, plus a base application tree without the architecture-specific stuff (this will depend on what format your package takes, or it may not be needed at all):

$ mkdir bitbucket
$ cd bitbucket
$ mkdir -p application/opt application-base/opt
$ rsync -av --progress /path/to/application/root/* application/opt/application
$ rsync -av --progress --exclude='*Option*' application/opt/* application-base/opt

Now the fun part

To create an unstripped noarch rpm using the stripped application-base with a dependency on libstdc++ using fpm:

$ fpm -s dir -t rpm -n application -v 1.0 -d libstdc++ -a noarch -C application-base/ .

To create two unstripped x86_64 RPMs using the full application tree, with all files/directories that contain OptionLowMem|OptionHighMem and OptionLowMem-Support|OptionHighMem-Support:

$ fpm -s dir -t rpm -n application-optionlowmem -v 1.0 -d application -d libstdc++ -a x86_64 -C application/ `find opt/ -name "*OptionLowMem" -print && find opt/ -name "*OptionLowMem-Support" -print`
$ fpm -s dir -t rpm -n application-optionhighmem -v 1.0 -d application -d libstdc++ -a x86_64 -C application/ `find opt/ -name "*OptionHighMem" -print && find opt/ -name "*OptionHighMem-Support" -print`

Together with the noarch command above, these will produce 3 RPMs.


Now, this simple example could save hours of time and effort; in the real-life scenario this was based on, it saved me days, even weeks, of RPM spec file writing, tuning and testing.

Installing Foreman on CentOS 6

by actionjack on June 6, 2012

Currently the easiest way to install Foreman is by using the foreman-installer, which is basically a set of pre-packaged puppet modules:

yum install git mysql-server
cd /tmp
git clone --recursive git://
echo include foreman, foreman_proxy | puppet apply --modulepath /tmp/foreman-installer
chkconfig mysqld on
service mysqld start

Create a .my.cnf for automatic mysql login:

vi ~/.my.cnf

And enter the following content:

[client]
user = root
password = password

Lock down the access to the file:

chmod 0600 ~/.my.cnf

Create the foreman development and live databases:

mysql -e "create database foreman character set utf8;"
mysql -e "create user 'superforeman'@'localhost' identified by 'mysupersecretpassword';"
mysql -e "grant all privileges on foreman.* to 'superforeman'@'localhost';"
mysql -e "create database foreman_development character set utf8;"
mysql -e "grant all privileges on foreman_development.* to 'superforeman'@'localhost';"

Configure foreman to use the mysql database:

vi /etc/foreman/database.yml

development:
  adapter: mysql
  database: foreman_development
  host: localhost
  username: superforeman
  password: mysupersecretpassword
  encoding: utf8

test:
  adapter: mysql
  database: foreman_test
  host: localhost
  username: superforeman
  password: mysupersecretpassword
  encoding: utf8

production:
  adapter: mysql
  database: foreman
  host: localhost
  username: superforeman
  password: mysupersecretpassword
  encoding: utf8

Replace the existing yum repo with an updated one to get the latest RPMs:

wget -O foreman.repo

Run yum update to update foreman:

yum update

Install mysql and foreman support components:

yum install rubygem-mysql foreman-mysql foreman-console

And voilà!

Pimping my vim or giving up on heavy IDEs

by actionjack on April 17, 2012

I’ve given up on using heavyweight IDEs to write my puppet code; they are memory-hungry, slow, and make you forget how the command-line tools actually work.

So, in that vein, I pimped up my vim with some additional plugins so I can hack around without the overhead.

mkdir -p ~/.vim/autoload ~/.vim/bundle
curl -so ~/.vim/autoload/pathogen.vim \
cd ~/.vim/bundle
git clone git:// fugitive
git clone puppet
vi ~/.vimrc

Contents of my ~/.vimrc:

set tabstop=4
set shiftwidth=4
set expandtab
call pathogen#infect()
syntax on
filetype plugin indent on

Relevant links:


Thanks to Lowe Schmidt for recommending 2 additional plugins that are exceptionally sweet!


Using Java Service Wrappers and RPM to enable Continuous Delivery

by actionjack on March 31, 2012

Lessons in Web Access Management – Josso – Part 1

by actionjack on February 20, 2012

I’m currently looking for a Web Access Management tool to manage and control access to a number of hosted applications.

Wikipedia defines Web Access Management as providing:

    • Authentication Management
    • Policy-based Authorizations
    • Audit & Reporting Services
    • Single sign-on Convenience

I’ve had previous experience in this space using CA’s SiteMinder product, but this time I want to experience some “open” alternatives, where the information on the use and configuration of the product isn’t as limited and the costs aren’t as great.

Initially I thought I’d have a look at JOSSO (Java Open Single Sign On)

Installing Josso

useradd josso
Download josso-ce-2.2.x.tar.gz from
cd /opt
tar zxvf /path/to/josso-ce-2.2.x.tar.gz
chown -R josso: /opt/josso-ce-2.2.x
su - josso
cd /opt/josso-ce-2.2.x/bin

atricore: JAVA_HOME not set; results may vary
__ _____ _____ _____ _____ ___ _____ _____
__| | | __| __| | |_ | | | __|
| | | | |__ |__ | | | | _| | --| __|
|_____|_____|_____|_____|_____| |___| |_____|_____|

JOSSO 2 Community Edition (2.2.1)
Atricore Console (1.1.1) http://localhost:8081/atricore-console/
Atricore Identity Bus (1.2.1)

Apache Felix Karaf (2.2.1)
Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.

Hit '<ctrl-d>' or type 'osgi:shutdown' to shutdown JOSSO 2 CE.


Enter “osgi:list | grep Atricore” to confirm the services are started and running:

[ 41] [Resolved ] [ ] [ ] [ 60] Atricore IDBus :: Kernel : Web Console Branding (1.2.1)
[ 148] [Active ] [ ] [ ] [ 60] Atricore IDBus :: Kernel : Support (1.2.1)
[ 149] [Active ] [ ] [ ] [ 60] Atricore IDBus :: Kernel : XML Digital Signature Binding (1.2.1)
[ 150] [Active ] [ ] [ ] [ 60] Atricore IDBus :: Kernel : SPML 2 w/DSML Profile Binding (1.2.1)
[ 151] [Active ] [ ] [ ] [ 60] Atricore IDBus :: Kernel : SAML R 2.0 Protocol Binding (1.2.1)
[ 152] [Active ] [ ] [ ] [ 60] Atricore IDBus :: Kernel : SAML R1.1 Protocol Binding (1.2.1)
[ 153] [Active ] [ ] [ ] [ 60] Atricore IDBus :: Kernel : Atricore SSO 1.0 Protocol Binding (1.2.1)

Browse to http://jossohost:8081/atricore-console/; the default login is admin:admin.

Java Service Management with YAJSW

by actionjack on February 10, 2012

OK, so what actually is Java Service Management? Well, it means easily enabling your Java application to run as a daemon with all the bells and whistles, e.g. clean service stopping and starting, logging and JMX profiling.

I was working on a project that ran a Java jar file (e.g. java -jar application.jar) and I initially wrote a simple initscript to stop and start it. Unfortunately it would not daemonize cleanly and dumped tons of “useful” info and debug data to the console; eventually it got to the point where I only felt comfortable starting it inside screen.

I finally got time to revisit the situation and look at Tanuki’s Java Service Wrapper software; unfortunately it was incompatible with the project I was working on (and by incompatible I mean it was going to cost me lots and lots in license fees).

Enter YAJSW – Yet Another Java Service Wrapper.

“YAJSW is a java centric implementation of the java service wrapper by tanuki (JSW).
It aims at being mostly configuration compliant with the original. It should therefore be easy to switch from JSW to YAJSW.”

And it only took me minutes to get it working:

    • Download yajsw from Sourceforge
    • unzip to your application’s working directory
    • Run the java application and go to $applicationdir/yajsw/bat
    • Make all the shell scripts executable, e.g. chmod a+x *.sh
    • Run ./ $java_app_pid
    • Edit $applicationdir/yajsw/conf/wrapper.conf and customise it to your needs
    • Run ./ to check if the application is running correctly.
    • If you want to install the service, run ./ ; this will create the required initscripts and place them in /etc/init.d/ and the /etc/rc.d/ runlevel directories.
    • You can then choose to either run the startService or stopService shell scripts, or use the newly created initscript in /etc/init.d.
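The customisation step usually amounts to a handful of properties in wrapper.conf. A minimal sketch, in which the jar path, JVM flag and log location are assumptions (the property names follow the JSW-compatible scheme YAJSW aims for):

```ini
wrapper.java.command = java
wrapper.java.app.jar = /opt/application/application.jar
wrapper.java.additional.1 = -Xmx512m
wrapper.logfile = /var/log/application/wrapper.log
```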

More detailed configuration can be found on their main site; this one is definitely a keeper for my Java deployment toolkit.

Moving from puppet-iptables to puppet-firewall

by actionjack on February 3, 2012

After months of procrastination I finally migrated from puppet-iptables to puppet-firewall and I’m so glad that I did!

I’m beginning to see what Ken Barber (@ken_barber) was hinting at when he told me at the last EU Devops Days conference that configuring Linux firewalls was just the tip of the iceberg; I can now see the path to it eventually being used to configure Cisco- and Juniper-based firewalls.

To get the puppet-firewall module working for me out of the box I had to add the following to my site.pp:

exec { 'clear-firewall':
  command     => '/sbin/iptables -F',
  refreshonly => true,
}

exec { 'persist-firewall':
  command     => '/sbin/iptables-save >/etc/sysconfig/iptables',
  refreshonly => true,
}

Firewall {
  subscribe => Exec['clear-firewall'],
  notify    => Exec['persist-firewall'],
}
After that I configured a base firewall module, e.g.:

class basefirewall {

  resources { 'firewall':
    purge => true,
  }

  firewall { '001 accept all icmp requests':
    proto  => 'icmp',
    action => accept,
  }

  firewall { '002 INPUT allow loopback':
    iniface => 'lo',
    chain   => 'INPUT',
    action  => accept,
  }

  firewall { '000 INPUT allow related and established':
    state  => ['RELATED', 'ESTABLISHED'],
    action => accept,
    proto  => 'all',
  }
}

Top Marks for this module!


HAProxy and MySQL lockouts

by actionjack on February 1, 2012

For the last couple of days I’ve been playing around with HAProxy and MySQL/Galera to create a fault-tolerant, load-balancing database backend that is invisible to the application.

Unfortunately, I was having an issue: after a set amount of time the MySQL nodes would stop accepting connections.

After a bit of failed googling I visited the #haproxy IRC channel on Freenode, and a couple of helpful fellows called @vr and @meineerde pointed me to this section of the HAProxy configuration guide:

  …It was reported that it can generate lockout if check is too frequent and/or if there is not enough traffic. In fact, you need in this case to check MySQL “max_connect_errors” value as if a connection is established successfully within fewer than MySQL “max_connect_errors” attempts after a previous connection was interrupted, the error count for the host is cleared to zero. If HAProxy’s server get blocked, the “FLUSH HOSTS” statement is the only way to unblock it.

I had already set max_connect_errors in my my.cnf file to 1000, but I guess that was nowhere near enough. After looking around the net a bit for inspiration, I’ve decided to set it to:
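Purely as an illustration of the shape of the change (the number below is hypothetical, not a recommendation), it’s a one-line my.cnf tweak, and a backend that is already locked out still needs FLUSH HOSTS to be readmitted:

```ini
[mysqld]
# illustrative: set this far above the rate at which health checks can fail
max_connect_errors = 100000
```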



WordPress 0 to 60 in 5 minutes flat

by lesliebuna on January 1, 2012

So Steve and I agreed that our first sprint was to build a WordPress server. That’ll be a snap to do (or so I thought!).

In order to create a WordPress server I need 4 main components:

    • A server with an operating system installed
    • A MySQL database
    • An HTTP web daemon and
    • The WordPress application itself.

So like a good boy scout I set off about installing WordPress on a CentOS box by hand, and I found some most excellent instructions here by Andrew at Adlibre.

I hit my first problem: the WordPress package from EPEL for RHEL/CentOS 5 is quite old; it’s 2.8, while the version of WordPress with all the new hotness is 3.0.

Hmm, “old busted” or “new hotness” (not to say that 2.8 is either old or busted, but you get what I mean). I decided I’d roll with the new hotness, and then I hit another dilemma: I couldn’t find an RPM for RHEL/CentOS 5, so should I just roll out the tarball or create a package for it?

This is currently a hot topic in the Devops community; when I went to Devops Hamburg back in October, one of the more popular open space sessions was “packaging vs. non-packaging, when and what?”, and the main output was “it depends”.

However, I wanted a zero-touch, fully automated deployment, so I investigated both methods and weighed them up:

Scenario 1 – Installation of Tar ball

    • Download the WordPress tarball to the server file system
    • Untar the package and install it to the correct location
    • Track the installation, ensure that it is valid (correctly installed, all checksums in place (thanks AndrewH!)) and be able to cleanly handle upgrades.

Scenario 2 – Installation RPM

    • Package WordPress as an RPM
    • yum install wordpress

I quickly came to the conclusion that the packaging case won when I imagined the amount of puppet code I’d have to churn out to make scenario #1 robust, versus just using what I believed was the right tool for the job, i.e. RPM, and keeping it as simple as possible.

Name: wordpress
Version: 3.0.1
Release: 1
Group: System/Servers
Summary: Personal publishing platform.
Vendor: WordPress
License: GPLv2+
Packager: Martin Jackson
BuildArch: noarch
BuildRoot: %{_tmppath}/%{name}-%{version}-root
Requires: php
Requires: php-mysql

%description
WordPress is an open source Content Management System (CMS), often used as a
blog publishing application, powered by PHP and MySQL. It has many features
including a plug-in architecture and a template system.

%prep
%setup -q -n %{name}

# fix dir perms
find . -type d | xargs chmod 755

# fix file perms
find . -type f | xargs chmod 644

# disable wordpress update option
sed -i -e "s/add_action/#add_action/g" wp-includes/update.php

%install
rm -rf %{buildroot}

install -d %{buildroot}%{_sysconfdir}/httpd/conf.d
install -d %{buildroot}%{_sysconfdir}/%{name}
install -d %{buildroot}/var/www/%{name}

cp -aRf * %{buildroot}/var/www/%{name}/

cat > %{buildroot}%{_sysconfdir}/httpd/conf.d/%{name}.conf << EOF
Alias /%{name} /var/www/%{name}
<Directory /var/www/%{name}>
    AllowOverride None
    Allow from All
</Directory>
EOF

# cleanup
rm -f %{buildroot}/var/www/%{name}/license.txt

%clean
rm -rf %{buildroot}

%files
%attr(0644,root,root) %config(noreplace) %{_sysconfdir}/httpd/conf.d/%{name}.conf

%changelog
* Wed Nov 17 2010 Martin Jackson
- initial Red Hat Enterprise package

OK, now I have my newly packaged WordPress 3.0 RPM and life is good; everything else I need is already packaged and can easily be installed using yum. A few minutes later I have a fully functioning WordPress server.

Now for the fun part: how do I fully automate the creation of a WordPress server end to end?

Firstly, I created a custom kickstart profile on my cobbler server to install a minimal Just Enough Operating System (JeOS) base build (faster to install and patch, with the minimum security attack surface), with some additional post-install steps to bootstrap puppet.

url --url http://rasengan/cblr/links/Centos-5.5-x86_64
key --skip
lang en_US.UTF-8
keyboard uk
network --device eth0 --bootproto dhcp --hostname testvm-wordpress.uncommonsense.local

rootpw --iscrypted $MD5PASSWORD
firewall --enabled --http --port=22:tcp
authconfig --useshadow --enablemd5
selinux --permissive
timezone Europe/London
bootloader --location=mbr --driveorder=vda
clearpart --all --initlabel
part /boot --fstype ext3 --size=100
part pv.2 --size=0 --grow --ondisk=vda
volgroup VolGroup00 --pesize=32768 pv.2
logvol swap --fstype swap --name=LogVol01 --vgname=VolGroup00 --size=512 --grow --maxsize=1024
logvol / --fstype ext3 --name=LogVol00 --vgname=VolGroup00 --size=1024 --grow
repo --name=source-1 --baseurl=http://rasengan/cobbler/ks_mirror/Centos-5.5-x86_64/

%packages --nobase --excludedocs

%post
exec < /dev/tty3 > /dev/tty3
chvt 3
echo "##############################"
echo "# Running Post Configuration #"
echo "##############################"
rpm -Uvh
echo " puppet rasengan rasengan.uncommonsense.local" >> /etc/hosts
yum -y install puppet rdoc
chkconfig puppet on
cat > /etc/sysconfig/puppet << EOF
# The puppetmaster server
PUPPET_SERVER=rasengan
# If you wish to specify the port to connect to do so here
#PUPPET_PORT=8140
# Where to log to. Specify syslog to send log messages to the system log.
#PUPPET_LOG=/var/log/puppet/puppet.log
# You may specify other parameters to the puppet client here
#PUPPET_EXTRA_OPTS=--waitforcert=500
EOF
cat > /etc/puppet/puppet.conf << EOF
[main]
# The Puppet log directory.
# The default value is '\$vardir/log'.
logdir = /var/log/puppet
# Where Puppet PID files are kept.
# The default value is '\$vardir/run'.
rundir = /var/run/puppet
# Where SSL certificates are kept.
# The default value is '\$confdir/ssl'.
ssldir = \$vardir/ssl
libdir = /var/lib/puppet/lib
[puppetd]
# The file in which puppetd stores a list of the classes
# associated with the retrieved configuration. Can be loaded in
# the separate ``puppet`` executable using the ``--loadclasses``
# option.
# The default value is '\$confdir/classes.txt'.
classfile = \$vardir/classes.txt
# Where puppetd caches the local configuration. An
# extension indicating the cache format is added automatically.
# The default value is '\$confdir/localconfig'.
localconfig = \$vardir/localconfig
pluginsync = true
plugindest = /var/lib/puppet/lib
EOF
cat >> /etc/rc.local < installed

class apache::config {
  # NB: the resource wrapping these attributes was truncated in the original
  # post; the file path here is an assumption.
  file { "/etc/httpd/conf/httpd.conf":
    require => Class["apache::install"],
    notify  => Class["apache::service"],
    owner   => "root",
    group   => "root",
    mode    => 644,
  }
}

class apache::service {
  # The service resource was truncated in the original; "httpd" is an assumption.
  service { "httpd":
    ensure  => running,
    enable  => true,
    require => Class["apache::config"],
  }
}

class apache {
  include apache::install, apache::config, apache::service
}

class apache::disable {
  include apache::install
}
For the MySQL installation and configuration I wanted something a little more robust, since I believed I could get a lot of use out of a good MySQL puppet module. I found it quite ironic that I stumbled across a problem that I had previously answered on Server Fault: how do you find a top-notch MySQL module?

There were quite a few out there, and after a fair bit of back and forth I settled on the Camp to Camp boys’ MySQL puppet module. It is very functional and quite compact in the number of supporting modules needed to make it function:


You’ll need to have pluginsync enabled in your puppet.conf for this to work, because it copies some custom facts and plugins to the client in order to create the databases:

[main]
libdir = /var/lib/puppet/lib

[puppetd]
pluginsync = true
plugindest = /var/lib/puppet/lib

On to the final stretch: I need to install WordPress, so I whipped up the following puppet module.

# Class: wordpress
# This class manages the wordpress blogging application
# Parameters:
# None
# Actions:
# Install the wordpress blogging application
# Requires:
# - Package["apache","mysqlclient"]
# Sample Usage:

class wordpress::install {
  $packagelist = ["php", "php-mysql", "wordpress"]
  package { $packagelist:
    ensure  => latest,
    require => Class["repository::uncommonsense"],
  }
}

class wordpress::config {
  # NB: the resource wrapping these attributes was truncated in the original
  # post; the file path here is an assumption.
  file { "/etc/wordpress/wp-config.php":
    require => Class["wordpress::install"],
    notify  => Class["apache::service"],
    owner   => "root",
    group   => "root",
    mode    => 644,
  }
}

class wordpress {
  include wordpress::install, wordpress::config
}

class wordpress::disable {
  include wordpress::install
}

I used the following entry in my nodes.pp file to pull it all together:

node "testvm-wordpress.uncommonsense.local" {
  include repository::uncommonsense
  include apache
  include augeas
  include wordpress

  $mysql_password = "foo"
  include mysql::server::small

  mysql::rights { "Set rights for wordpress database":
    user     => "username",
    password => "password",
    database => "wordpress",
  }

  mysql::database { "wordpress":
    ensure => present,
  }
}
OK, so I now have a fully functional WordPress server in about 5 minutes. This is just a basic setup, but it demonstrates what can be done without too much effort. If I wanted to take this further in the future, I would create an ERB template for wp-config.php so that the database settings are populated by puppet using the existing username, password and database name variables.