Maslow’s Hammer
by actionjack on August 2, 2013
“I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.” – Abraham Maslow
I haven’t blogged in an absolute age, so I thought I’d commit some of my ideas and thoughts to writing as a DevOps journeyman.
Over the past year or so I’ve been more contemplative about the Art of DevOps in both its cultural and technical aspects. As part of this contemplation I’ve reviewed how I’ve done things as a DevOps practitioner and questioned why, all the while attempting to sharpen my saw. Sometimes it’s been difficult and sometimes it’s been fun, but either way it’s been rewarding.
But back to the point: over the past 18 months I’ve had the good fortune to work at multiple organisations with differing methods in their practice of DevOps, so I wanted to compare and contrast them and put my own particular spin on what I think worked, what didn’t, and why.
Orchestration vs Automation
We now have some absolutely fantastic tools for automation: Puppet, Chef, SaltStack, Ansible, CFEngine, and the list goes on. These are absolutely wonderful and I can no longer imagine a world without them. However, with them has come a culture of favouring the “tool” above all else.
What I mean here is that the toolsmith wants to use the tool to do EVERYTHING, from configuring infrastructure to deploying and configuring applications. That’s generally fine, but sometimes there’s the temptation to go a bit too far with them (IMHO), e.g. using Puppet to set up one-time destructive operations which handle persistent state, such as the creation of a MongoDB replica set in an ephemeral environment like AWS.
This can be done quite easily, but it can just as easily run completely amok, so the question is whether it should be done without any manual gates. If you accidentally generate a new MongoDB replica, you could put your replica set into a situation where quorum can never be reached, e.g. with an even number of members (4, 6, etc.).
Personally I think this is where orchestration should step in. I define orchestration as the codifying of a process which is triggered by something like Jenkins, Rundeck, Capistrano, Fabric or even a bash script on a one-time or infrequent basis. I believe there should be some sort of manual gate for handling specific types of operations, e.g. creating MongoDB replica sets or deploying software in a structured manner, rather than just letting your configuration management tool of choice do its thing on an ad-hoc basis.
The long and short of this is just to use the right tool for the job and make sure you keep both in your toolbox.
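As a minimal sketch of what such a manual gate might look like, here’s a hypothetical bash wrapper (the script name, hosts and replica set name are all made up) that a human, or a Jenkins/Rundeck job with an approval step, would run once, rather than letting your configuration management tool initiate the replica set on every run:

#!/usr/bin/env bash
# init-replica-set.sh - hypothetical one-shot orchestration script.
# Configuration management keeps mongod installed and running; this
# destructive, stateful step stays behind a human confirmation.
set -euo pipefail

REPLSET="rs0"   # assumed replica set name
echo "About to initiate replica set '${REPLSET}' on mongo1/mongo2/mongo3 (3 members, so quorum stays reachable)"
read -r -p "Type 'yes' to continue: " answer   # the manual gate
[ "${answer}" = "yes" ] || { echo "Aborted."; exit 1; }

# Run rs.initiate() exactly once, against the first member only.
mongo --host mongo1 --eval "rs.initiate({_id: '${REPLSET}', members: [
  {_id: 0, host: 'mongo1:27017'},
  {_id: 1, host: 'mongo2:27017'},
  {_id: 2, host: 'mongo3:27017'}]})"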
Versions vs repository management
Also known as:
package {"foo": ensure => 'latest'} vs package {"foo": ensure => '0.0.1'}
I’m a great believer in keeping local package repositories (or mirrors) for your server software, ideally one for each environment you maintain (Development, Staging, Production).
What I’m not a great believer in is using versioning in your configuration management tool to control which specific version is deployed. I think repository management is a more elegant, long-term solution to the problem: just set your configuration to ‘latest’ (or ‘installed’) and ensure that the right packages are in your local repository.
The reasons I think it’s the more viable and elegant approach in the long term are:
1. It stops constant configuration churn since you don’t have to continually update your configuration to handle all your software versions.
2. I believe rolling out updated software is an orchestration task which should be validated by your CI system through the spin-up of throwaway virtual machines or containers (e.g. Vagrant or Docker) and tested using tools like serverspec, Selenium, Capybara, Cucumber or RSpec, to validate that your application still works after the software has been upgraded. An automated job should then promote the validated packages from repo to repo (see the sketch after this list).
3. Nothing kills continuous delivery faster than having to hack around in your {insert configuration management tool of choice here} in order to do a release.
4. If you want to go down the road of immutable infrastructure, you need to start thinking in terms of service availability, not discrete versions of software.
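To make point 2 concrete, here’s a hypothetical sketch of such a promotion job, assuming yum-style repos and the createrepo tool; the paths, package name and test location are all made up:

#!/usr/bin/env bash
# promote-package.sh - hypothetical CI job: validate a candidate package on a
# throwaway machine, then promote it from the staging repo to production.
set -euo pipefail

PKG="$1"                                   # e.g. foo-bar-0.0.2-1.x86_64.rpm
STAGING_REPO="/srv/repos/staging/x86_64"   # made-up paths
PROD_REPO="/srv/repos/production/x86_64"

# Run the acceptance suite (serverspec etc.) against a throwaway VM or
# container that has the candidate installed; set -e aborts on any failure.
bundle exec rspec spec/acceptance

# Tests passed: copy the package across and regenerate the repo metadata.
cp "${STAGING_REPO}/${PKG}" "${PROD_REPO}/"
createrepo --update "${PROD_REPO}"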
Relevant links
Staging Package Deployment via Repository Management
Definition of an immutable server
Packages vs Tar-balls
Use your distro’s package format to deploy your software, ’nuff said!
Packages give you versioning, querying, upgrade, rollback, dependency resolution and ease of distribution.
Let your configuration management handle the configuration (see, it’s not just a clever name!).
Once I would have let you off for saying that creating OS packages is hard, but now we have fpm, so you have no excuse.
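For example, here’s a hypothetical one-liner (the name, version and build directory are made up) that rolls a directory straight into an RPM:

$ fpm -s dir -t rpm -n foo-bar -v 0.0.1 -C ./build .

Swap -t rpm for -t deb and the same tree becomes a Debian package.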
For example, it’s really hard to tell from this whether something isn’t quite right somewhere:
$ tar -xzvf foo-bar.tar.gz
While this will tell you something is corrupted:
$ rpm -K foo-bar.rpm
foo-bar.rpm: MD5 NOT OK
Test Driven vs Just Hacking with NOOP
I work in an agile team who live and breathe TDD; they sell it as a feature, and it quickly made sense for me to adopt it for Infrastructure as Code as well, to add more credibility to the group. I have to admit it was hard in the beginning. I just couldn’t see the value until I tried to refactor a module I was working on and, like a bull in a china shop, broke it by trying to change too many things at once. It was the tests that did two important things for me that day:
1. They helped me fix the problems in record time, and
2. They taught me to change one thing at a time, and only that thing, until the test for it passed.
While those may not seem important, I began to realise other benefits. My code became more reusable and readable because I focused on two things: how to write code that was easier to test, and writing the tests before I committed a single line of the implementation.
Writing tests also helps in a multi-person environment, because your tests become a compliance guarantee: if someone on your team extends the functionality, you can verify that it doesn’t break the existing infrastructure.
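As a sketch of what that loop looks like day to day for a Puppet module, assuming rspec-puppet wired up through the usual spec-helper Rakefile (the file names here are made up):

$ $EDITOR spec/classes/foo_spec.rb   # write the failing test first
$ rake spec                          # red: the new expectation fails
$ $EDITOR manifests/init.pp          # change one thing, just enough to pass
$ rake spec                          # green: re-run before touching anything else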
Taking vs Giving
aka: Not Invented Here vs Forking & Merging
Pause for a moment, take a look in your configuration management source tree and see if you can find an ntp module or recipe of some kind. Do you know who wrote it? You? Someone in your organisation? Someone who left your organisation? A consultant? A vendor?
Opscode Chef has over 25,000 registered users, but their ntp cookbook has only ever been downloaded 2918 times (as of the time I wrote this post). Now, you could read into this a number of ways, but I’d like to believe that:
a) People usually care about their servers having the correct time.
b) The bulk of those users aren’t all employed by a small number of companies.
I bet the majority of you who actually looked will find you have written your own ntp module, and that the greater part of those modules just use Puppet to install ntp, copy in a file and ensure that the service is always running.
Now, that’s a great first step towards learning the tool, but by just leaving it there it becomes inventory. And not just anybody’s inventory: it becomes yours. You made it, you own it, you maintain it.
If instead you use a shared ntp module you can reap certain benefits (pulling one in is a one-liner, as shown after this list):
1. It’s available just in time: when you want it, it’s just there, no muss, no fuss,
2. As a general rule it’s usually well tested,
3. If it doesn’t do exactly what you need, you can extend it and submit a merge request. You learn from working with other people’s code, it improves your own code, and if your intention is to merge the code back at some point you start to think more about how it’s going to affect both you and other people.
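With Puppet, for instance, pulling in the shared module really is a one-liner (the puppetlabs-ntp module lives on the Forge):

$ puppet module install puppetlabs-ntp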
Going it alone vs pairing
Recently I started doing something I’ve always thought about and tried to do before, and that’s DevOps pairing. I pair with developers, QAs and other sysadmins, locally and remotely, and I have to admit that I really enjoy it. I get to work with people smarter than I am, from different fields, with very different viewpoints, and I feel I’ve really raised my game because of it.
Pairing among sysadmins is almost unheard of; we are generally solitary creatures when performing our art and secretive in our methods. Often the only time we pair is when we are put on a training course and forced to share a single computer with the person next to us, and the goal then is to get the exercise over and done with quickly; if the person we are working with doesn’t understand, well, too bad for them.
How many times have you worked within a team and heard the person next to you wrangle, curse and complain at an issue until they finally break down and ask you to take a look, and of course you magically identify the problem within milliseconds? Is this a magical talent? Prescient skill, even? No, it’s just a fresh pair of eyes from another perspective; it’s a review from a peer, not a judgement of your skill.
Pairing is not just for developers; it’s for sysadmins (everyone?) too. It enables you to:
- Share knowledge
- Reduce errors
- Produce simpler solutions
- Be more productive
- Reinforce understanding – does it really work the way you think it does?
But here’s the kicker, it:
- Requires social skills, and it doesn’t just happen; even when you’re not driving, you need to navigate.
Relevant Links
Pair Program with me
Google Hangouts
Screenhero
Pair Programming Agile 2012 presentation
Extreme Programming Pairing
Favouring the Status Quo vs Challenging and Changing things for the better
I was going to expand on this, but I’ll leave it to @lusis; he says it much better and more eloquently than I ever will in his post.