The only developer productivity metrics that matter
13 points by fcbsd
The metrics that count for me are:

- How often does the team routinely ship new versions of the software they build?
- How often do things break when the team ships a new version?
- (optional) How often does a new version of the software the team builds spark actual joy in the people who have to use it?

Honestly, only #3 matters. Directly measuring ship rates or anything else is no more useful than counting LOC, assuming you're building a software product.
Of course, collecting this information requires Actual Work on the part of management, rather than just putting queries into Jira, so of course the manager types will tell me I'm wrong. But the management types can bite me; I've been doing this far longer than they have.
Ultimately, the engineers are the people building the thing that makes the money. The only thing management should care about is whether they are doing that effectively.
Yeah, I'm having a hard time being charitable to this piece. Back when I did manual software updates on iOS, certain apps were notorious for shipping weekly updates that required 200MB to download and came with a changelog that read something vapid like "bug fixes and improvements".
Being able to ship quickly is very important, but still not as important as building something actually worth shipping.
I find it annoying that almost all such posts assume all software is SaaS. For on-premises software and especially for embedded, the ability to produce a new, working release quickly is important. But churning out new versions all the time is hardly a virtue, usually the opposite.
"Every piece of software is SaaS" is just a constant brainworm in modern computing discussions. My own experience is primarily SaaS (though I've worked on something that was deployed as a desktop app that talks to a SaaS), but I recognize that there's a lot of variety in the world.
How often does the team routinely ship new versions of the software they build?
How often do things break when the team ships a new version?
Is this not basically the DORA metrics?
I tend to agree, and, well, DORA has data to back it up.
If you haven't seen it, the DORA website has a great quick check that will compare you to others in your industry. Even better, another part of the questionnaire helps you identify the capabilities they've found to predict improvement in those metrics, with great write-ups on how to get better at each one.
That's what I wanted to ask: did the author just come up with his own version of the DORA metrics?
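For what it's worth, the article's two main questions do line up with two of the four DORA metrics: deployment frequency and change failure rate. Here's a minimal sketch of how you might compute them from your own deploy history, assuming a hypothetical Deploy record with a timestamp and an "it broke something" flag, rather than any particular CI/CD tool's API:

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Deploy:
        at: datetime           # when the release went out (hypothetical record shape)
        caused_incident: bool  # did this release break something for users?

    def deploy_frequency(deploys: list[Deploy], window_days: int = 30) -> float:
        """Average deploys per week over the last window_days (DORA: deployment frequency)."""
        cutoff = datetime.now() - timedelta(days=window_days)
        recent = [d for d in deploys if d.at >= cutoff]
        return len(recent) / (window_days / 7)

    def change_failure_rate(deploys: list[Deploy]) -> float:
        """Fraction of deploys that broke something (DORA: change failure rate)."""
        if not deploys:
            return 0.0
        return sum(d.caused_incident for d in deploys) / len(deploys)

The awkward part is the caused_incident flag: deciding which releases actually broke something is exactly the judgment call that, as the top comment says, takes actual work rather than a Jira query.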
I really think they should make every junior run a production system for a month in their first year of work, just to have a vague idea of what it means to run and operate software. Even a lot of experienced devs I know don't have a clue about any of that.
Software productivity cannot be measured, only vaguely perceived by participating in it.
Keep your teams small with a clear focus and listen to their tech leads.
Big companies are doomed to never have a grasp of their own productivity, and will only invent ways to draw a line and select scapegoats when the headcount needs to be reduced.
These metrics are useful, but they are only backward-looking consequences of quality. They can look green and healthy for months or years, then suddenly drop because of architectural, design, or implementation mistakes made long ago whose consequences have been accumulating over time. And once the metrics go downhill, fixing the problem might require months or years of extra work.