My friend Sandy shared a link to a fascinating natural experiment comparing the productivity of two similarly tasked developer teams. If you haven’t read it already, take a minute to check it out. I’ve seen this need for visibility throughout my career.
The cable company was a rare laboratory: you could observe a direct comparison between the effects of good and bad software design and team behaviour. Most organisations don’t provide such a comparison. It’s very hard to tell whether that guy sweating away, working late nights and weekends, constantly fire-fighting, is showing great commitment to making a really, really complex system work, or is just failing. Unless you can afford to have two or more competing teams solving the same problem (and c’mon, who would do that?), you will never know. Conversely, what about the guy sitting in the corner who works 9 to 5 and seems to spend a lot of time reading the internet? Is he just very proficient at writing stable, reliable code, or is his job just easier than everyone else’s? To the casual observer, the first chap is working really hard, the second one isn’t. Hard work is good, laziness is bad, surely?
In agency work, you tend to track hours worked on a project. I’d bristle each quarter when the list of “most billable” employees went around. Great! If those folks are junior developers, chances are they’re also creating a lot of “billable” work that pulls in other people. It’s a list of who was busiest in the last 3 months, when you should be encouraging people to get the most done in the least amount of time. A better metric, though harder to calculate and report, would be revenue per hour. That’s not so difficult to do per project; it gets hairy when you try to tie it directly to people, but it can be done.
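To make that concrete, here’s a minimal sketch of the arithmetic. Everything in it is hypothetical: the project names, the numbers, and the attribution rule (splitting a project’s revenue in proportion to hours billed) are assumptions, not a prescription.

```python
# Hypothetical sketch of revenue per hour, per project and per person.
# The proportional-attribution rule is an assumption -- it's exactly
# the part that gets hairy in practice.

projects = {
    "redesign": {"revenue": 120_000, "hours": {"alice": 300, "bob": 500}},
    "api":      {"revenue": 80_000,  "hours": {"alice": 100, "carol": 300}},
}

# Per project: straightforward division.
per_project = {
    name: p["revenue"] / sum(p["hours"].values())
    for name, p in projects.items()
}

# Per person: attribute each project's revenue in proportion to hours
# billed, then divide each person's share by the hours they billed.
revenue, hours = {}, {}
for p in projects.values():
    total_hours = sum(p["hours"].values())
    for person, h in p["hours"].items():
        revenue[person] = revenue.get(person, 0) + p["revenue"] * h / total_hours
        hours[person] = hours.get(person, 0) + h
per_person = {person: revenue[person] / hours[person] for person in revenue}

print(per_project)  # {'redesign': 150.0, 'api': 200.0}
print(per_person)   # {'alice': 162.5, 'bob': 150.0, 'carol': 200.0}
```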
If you’re part of an internal development team, upper management may use “butts in seats” as a proxy for people getting work done. This encourages people to hang around just to look busy, and discourages using remote workers. In this case, the metrics you’d like to look at are tied more to business outcomes: site uptime, conversion rates, sales, and so on.
Still, if you want to measure actual productivity, in terms of the tasks your development team actually completes, what can you do? This is where I think a good habit of issue+SCM tracking, rigorous testing, and continuous integration can really shine.
- Issue tracking can let you report the number of issues you’ve addressed.
- Unit testing can report on the health of your code base by looking at test coverage, number of tests added/created, etc.
- Continuous integration can then give you ongoing performance metrics. How often are we producing successful builds? How often are we deploying code to production? (A rough sketch of this kind of report follows the list.)
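Here’s a rough sketch of that last report, assuming you can export build records (day, status, deployed or not) from your CI server. The field names and data are made up for illustration.

```python
# Hypothetical build records exported from a CI server.
from datetime import date

builds = [
    {"day": date(2011, 5, 2), "success": True,  "deployed": True},
    {"day": date(2011, 5, 2), "success": False, "deployed": False},
    {"day": date(2011, 5, 3), "success": True,  "deployed": False},
    {"day": date(2011, 5, 4), "success": True,  "deployed": True},
]

# Build success rate: what fraction of builds were green?
success_rate = sum(b["success"] for b in builds) / len(builds)

# Deployment frequency: deploys per day over the observed span.
days = (max(b["day"] for b in builds) - min(b["day"] for b in builds)).days + 1
deploys_per_day = sum(b["deployed"] for b in builds) / days

print(f"Successful builds: {success_rate:.0%}")    # 75%
print(f"Deploys per day:   {deploys_per_day:.2f}") # 0.67
```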
I’m sure that just scratches the surface of what you can do. How do you measure developer productivity?
Peer review.
I think most measurable metrics can be gamed, subconsciously or not, until they’re no longer informative:
– I am productive if there are many issues, so I open one for any silly case I could fix in 3 seconds. Also, I don’t fix bugs immediately, so that people can find them and open issues.
– I am productive if test coverage is high/there are lots of tests, so I write many duplicated tests by copying and pasting, and I leave out assertions since it’s coverage that counts (a hypothetical example follows this list).
– We need to deploy every day to be considered productive, so I make fewer commits but I deploy every single one of them…
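To illustrate that second point, a test like the one below bumps the coverage numbers while asserting nothing, so it can never fail. (`parse_order` is a made-up stand-in for real application code.)

```python
import json

def parse_order(raw):
    # Stand-in for real application code (made up for this example).
    return json.loads(raw)

def test_parse_order_runs():
    # Executes every line of parse_order, so coverage goes up -- but
    # there are no assertions, so this "test" can never fail.
    parse_order('{"id": 1, "items": []}')
    parse_order('{"id": 2, "items": [3, 4]}')
```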
Those are reasonable points. Of course, any system can be gamed, so you’ll need non-technical safeguards to make sure everyone plays fair.
Then again, I’d also argue that if you have someone who is spending more time gaming the system than being productive (because people, particularly their peers, will notice), then you’re better off getting rid of that person in the long run.
Yes, the important thing is not to dismiss the ones who seem less productive just because they don’t game 🙂
Some things cannot reliably be measured, and therefore should not be measured – any measure of productivity is going to be biased towards a certain kind of “productivity”, and can usually be gamed.
For example: I could close lots of tickets really quickly by just hacking the shit out of everyone’s code. I could write lots of unit-tests that pass, but don’t really demonstrate or assert anything, or worse, unit tests that manufacture results. My changes could build perfectly every time, because I don’t ever change anything substantial or make any real, deep improvements to the code.
Should I get a bonus at the end of the year for laying the codebase to waste as quickly as possible?
If you don’t trust that the members of your team are actually being productive, that’s your first problem – not demonstrating, measuring, or proving that fact.
Usually, if someone is having problems and isn’t as productive as the rest of the team, the rest of the team will be well aware of it – and usually, if you sit down for a discussion with these team members to find out why, there is a good explanation: maybe they don’t understand the problem domain, maybe they’re going through a personal crisis. In my experience, the explanation is very rarely along the lines of “you’re lazy and stupid” – which, in my opinion, is all you can hope to demonstrate or prove with a metric.
Software development is a human effort, not a machine effort – don’t treat it as some mathematical or algorithmic problem that you can solve with a system.
Systems are what we do – not what we are.