Once you start using agile practices and frameworks, you are going to learn more about your team than you thought possible. You will have a wonderful new array of metrics to choose from (burn-downs, burn-ups, work in progress, velocity, story points, ideal hours, and on and on). They are magically confusing yet revealing all at the same time.
Further, you may think you are learning that some people skate by and consistently underperform. And you may learn who the backbone of your team is. Or so you think.
“Sunlight is said to be the best of disinfectants” – Count Dracula and Supreme Court Justice Louis Brandeis
Agile practices provide ample sunlight into what your team is doing. But that sunlight can cast long shadows. These shadows come in the form of exposing the performance of individuals. This exposure can be frightening to many people, and for good reason. Some people struggle with assigned responsibilities. Others just don’t like the exposure because it provides a means of comparison. And these shadows, if misinterpreted, can give you the wrong impression of your team.
Question your metrics.
“Everybody is a genius but if you judge a fish by its ability to climb a tree it’ll live its whole life believing that it’s stupid.” –not Albert Einstein.
First of all, you need to understand the limitations of measurement.
You get what you measure
Developers are talented individuals. They know how to game a system. In fact, many developers find joy in doing just that. Using traditional metrics, like lines of code produced, can backfire. For example, I know one management team that started tracking semicolons in the code produced by developers. (Generally, a semicolon marks the end of a statement.) This worked pretty well until one day a developer learned how management was counting lines of code. Knowing this, he altered his coding practice to generate an exorbitant amount of code. It wasn’t that he was more productive; rather, he just got good at giving management what they wanted.
For that reason, and the mere fact that counting lines of code rewards verbosity and discourages reuse, I can’t recommend that metric.
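To make the pitfall concrete, here is a minimal sketch (hypothetical code, not the management team's actual tooling) of how a semicolon count rewards padding rather than productivity:

```python
# Hypothetical sketch: counting semicolons as a "lines of code" metric.
# Both snippets below do the same work; only the metric differs.

def count_semicolons(source: str) -> int:
    """Naive productivity proxy: one semicolon per statement."""
    return source.count(";")

concise = "total = sum(values);"
# Functionally identical, padded purely to inflate the metric.
verbose = "total = 0; " + "".join(
    f"total = total + values[{i}]; " for i in range(10)
)

print(count_semicolons(concise))  # 1
print(count_semicolons(verbose))  # 11
```

The developer in the story was effectively performing the second transformation by hand: the metric goes up elevenfold while the delivered functionality stays the same.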
Consider instead metrics that measure quality rather than quantity.
- Defect rate – if you run a net positive on defects, i.e., generating 1.6 new defects for every defect you fix, you have a problemo.
- Refactoring rate – how much effort is spent going back and rewriting code. Some is good and expected, but an excessive amount means you are generating an unacceptable amount of technical debt. Make sure the team does not know you are measuring it, or you might kill this usually good habit.
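The defect-rate idea above can be expressed as a simple ratio, assuming you track defects opened and fixed per sprint (the function name and the break-even threshold of 1.0 are my own illustration):

```python
def defect_ratio(opened: int, fixed: int) -> float:
    """New defects introduced per defect fixed; above 1.0 the backlog grows."""
    if fixed == 0:
        # No fixes at all: any new defect means unbounded growth.
        return float("inf") if opened > 0 else 0.0
    return opened / fixed

# A team introducing 16 defects for every 10 it fixes is losing ground.
print(defect_ratio(16, 10))  # 1.6
```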
Question your bias
Confirmation bias is the “tendency to interpret new evidence as confirmation of one’s existing beliefs or theories.” Think about the echo chambers that social media and fake news have become. What you don’t want to do is misinterpret what is happening simply because you believe it is happening. Seek a few different opinions from more than one peer, ideally peers who aren’t familiar with your viewpoint on the issue you are trying to understand.
Think how misguided you can become if you only measure the things that you want to believe are happening.
Using agile metrics to compare individuals
In general, you need to make sure you are <Cliche>comparing apples to apples</Cliche>. Comparing a senior developer’s performance to a junior developer’s can be tricky. After all, you would obviously expect the junior developer to be less productive.
However, there are situations where the team may have assigned the same amount of effort to one task as to another. On the surface, your two developers may appear to be contributing equally, but the real difference is the technical skill required to complete each task. For example, updating a web page or fixing a bug in the code may take the same amount of time if each developer selects the task more appropriate to their skill level.
What to measure in individuals:
- Growth of the individual over time.
- Are they challenging themselves without hindering the performance of the team?
- How many tasks do they have to give up because they are blocking the team?
Gather additional data points when determining poor individual performance
I look at the types of tasks the developer is selecting and compare their effort against the average effort spent by peers of a similar skill level. But this is only one data point. I also look for other telltale signs of performance. If they are always selecting simple tasks or tasks below their skill level, I may need to help them reach a little higher. I may need to consider whether they are being very short or vague on a stand-up call, or whether they provide the same status day in and day out.
You can also listen for how many times a person says they are waiting for a response to an email. Once in a while is OK, but this tends to become a habit that masks a deeper problem: a lack of communication, or not knowing what they need to know to do their job.
At the end of the sprint, look to see if they can demo the work they completed.
Granted, these are subjective and prone to confirmation bias, but enough varied data points should give you a trend you can use to gauge performance.
Move towards team metrics
Agile is about better team performance. Sure, the team benefits if an individual is performing at a high rate, but a team becomes synergistic when its members complement each other well. With so much effort spent on communication, understanding, and joint planning, it is better to grade the whole team.
If you can’t tell, I prefer team metrics to individual metrics. They help the team bring underperformers into line. The customer / product owner cares about the performance of the team, not individual performance (that’s your problem).
And it is harder to game a team metric, because you need collusion from the entire team. And if they are colluding, at least they are doing it as a team.
Team metrics to consider –
- Accuracy of your sprint plan – if the team said they could deliver x, did they deliver x?
- Unit test code coverage – be careful, this has potential for gaming the system, but there is high value in making sure you have great unit tests.
- Defects found – seek to understand the root cause of your defect rate.
- Overtime spent as a team – we want performance that is sustainable, not death marches.
- Value delivered – as defined by the customer (product owner).
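As one illustration of the first bullet, sprint-plan accuracy can be tracked as delivered story points over committed story points (the data layout and numbers here are assumptions for the sketch, not a prescribed format):

```python
def plan_accuracy(committed: int, delivered: int) -> float:
    """1.0 means the team delivered exactly what it planned."""
    if committed == 0:
        return 0.0
    return delivered / committed

# (committed, delivered) points for three hypothetical sprints.
sprints = [(30, 27), (32, 32), (28, 21)]
for committed, delivered in sprints:
    print(f"{plan_accuracy(committed, delivered):.2f}")
```

A ratio consistently well below 1.0 suggests the team is over-committing; a ratio consistently above 1.0 suggests the plan is being sandbagged.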