How Agile are we really? – Part 1

Do you have agility? And could I see it?

And how agile are you/we?

Is that agility of yours helping us?


Such questions are quite common during an Agile Transformation. The company has decided to transform, and now management, at least, would like to know where it stands with Agile.

We have to admit that here too, as Agile coaches, we had to go through an evolution. Just a few years ago, the common response was, “You can’t measure Agile. That’s like measuring how lucky you are or how good your marriage is. You just know. You just have to believe it. After all, agility helps us to execute our strategy/vision and that’s either happening or it’s not.”

Change for which we want to track the return

But when a company invests significant resources in changing its management style, organizational structure, and mindset, it is a fairly legitimate requirement to see what it is getting back and how it is actually doing.

Isn’t it a basic tenet of the Agile approach to constantly ask yourself whether what you’re doing is of value, or how to increase it? So how do we do that if we are unable to identify a metric for something like Agile maturity and even say there is no point in looking for it?

Direct vs. indirect indicators

Agile, on the other hand, is all about changing mindset, approach, proactivity, teamwork, and customer focus. That can’t be measured directly and precisely the way cash flow can, so we have to measure it indirectly through other indicators. And here lies a risk we have unfortunately seen play out in practice many times. The famous “what you measure is what you get” applies: once I start measuring something, or even rewarding based on those measurements, people in the organization will focus on the easiest way to produce those numbers – not necessarily with any understanding of the purpose and goal, which gets lost behind such an indirect indicator.

For example, we once saw a decision to evaluate a development team by the number of bugs in Jira. The result was reduced test coverage, and the bugs that did occur never made it into Jira at all, so important statistics went missing. Bugs then surfaced during deployment and in production, where they were much harder to detect and hotfix.

Another example is measuring the duration of stand-ups. The recommendation is that a stand-up lasts about 10 minutes, so why not use that to measure how far along the team is on its Agile journey? Because what we actually care about is the stand-up’s contribution to team alignment and review, not its length. In practice, we have quite often seen people praise short stand-ups, and when we asked about their value, the answers were embarrassed and vague, or met with silence, because no one had ever asked that question.

It is also common to try to evaluate team performance. We have seen attempts to use Velocity, or Story points in general, for this. Understandably, this led to estimates being subconsciously inflated during Grooming so that the team could comfortably hit its predetermined goal – which in turn destroyed any ability to use Velocity for prediction.
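To see why inflated estimates are so damaging here: velocity-based forecasting is simple arithmetic – remaining backlog points divided by average velocity – and that arithmetic only holds while estimates stay honest and on a consistent scale. A minimal sketch with made-up numbers:

```python
# Minimal sketch of velocity-based sprint forecasting.
# All numbers are hypothetical, purely for illustration.
import math

completed_per_sprint = [21, 18, 24, 20]   # story points finished in past sprints
avg_velocity = sum(completed_per_sprint) / len(completed_per_sprint)

backlog_points = 160                       # remaining backlog, same point scale
sprints_needed = math.ceil(backlog_points / avg_velocity)
print(f"avg velocity: {avg_velocity:.2f}, forecast: {sprints_needed} sprints")

# Once velocity becomes a target, estimates inflate: the same work is
# suddenly "worth" more points, the backlog's older estimates no longer
# share a scale with recent velocity, and the forecast becomes meaningless.
```

The forecast is only as good as the assumption that a story point means the same thing in the backlog as it did in past sprints – exactly the assumption that turning velocity into a target breaks.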

This behavior is quite similar to what happens when work standards are measured in a factory: everyone on the shop floor runs at 50% of their capacity so that they comfortably meet, or slightly exceed, the standard time, and everyone reaches their bonus targets.

A how-to guide

Despite the risks and cautionary examples above, we will offer three ways to approach the measurement of Agile maturity. Each has its pros and cons.

1. Measuring Agile “mechanics” – by this, we mean tracking the fulfillment of the process side of an Agile method. For example, if a team chooses the Scrum framework as their way of working, we track whether and how they prepare Sprints, whether Stand-ups are taking place, and whether and how Retrospectives are taking place.

  • Advantage: simplicity, clarity, and immediate applicability – the team focuses on executing specific steps (“fake it till you make it”).
  • Disadvantage: as we described above, there is a high risk that the team will focus on mechanically executing the activities, even if they do not fulfill their purpose/benefit (see example – length of Stand-up); another disadvantage of this approach is that it cannot be applied to the company across the board, as it is tied to a specific method that is only suitable for a part of the company.

2. Feedback on the implementation of the principles – this approach removes the risk of the previous point by focusing not on monitoring a specific methodology, but on the implementation of Agile principles as described, for example, in the Agile Manifesto. We monitor and evaluate, e.g., whether the team is in regular contact with a customer representative (typically a Product Owner, business stakeholder, etc.).

  • Advantage: this technique leads to an understanding of why a given methodology (e.g. SCRUM) is being deployed and what we want to gain from it; it can also be applied to the company across the board.
  • Disadvantage: it is quite a sophisticated technique that requires a deeper understanding of the principles and its generality can lead to different interpretations.
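One way such principle-based feedback can be collected is a short team self-assessment whose answers are aggregated per principle. The sketch below is purely illustrative – the principle names, the 1–5 scale, and the numbers are our assumptions, not a standard instrument:

```python
# Hypothetical sketch: aggregating team self-assessment answers on
# Agile principles (1 = not at all, 5 = fully lived) into per-principle scores.
from statistics import mean

responses = {
    "regular customer contact": [4, 5, 3, 4],
    "working software over documentation": [3, 3, 4, 2],
    "responding to change over following a plan": [5, 4, 4, 5],
}

# Average each principle's answers across the team.
scores = {principle: round(mean(answers), 2)
          for principle, answers in responses.items()}

# Print weakest principles first - that is where coaching attention goes.
for principle, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{score:.2f}  {principle}")
```

Sorting ascending surfaces the weakest principles first; the interpretation risk mentioned above still applies, since two teams may read the same principle quite differently.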

3. (Business) Transformation indicators – or, why are we doing all this? Do we want to deliver to the market faster? Do we want to respond better to customer needs and get better customer reviews? What is the real reason? That reason is something we can track and evaluate – and that is the third way to assess the success of an Agile Transformation.

  • Advantage: we focus on the ultimate benefit/value of why we are investing in the transformation, which leads to the entire organization focusing on that goal (including finding effective ways to meet that goal, regardless of Agile or the specific method).
  • Disadvantage: these are “lagging indicators” – a look in the rear-view mirror that only tells us in retrospect, months to years after the transformation starts, whether we have achieved our goal; this leaves room for ambiguity and questioning – e.g. customer satisfaction may be influenced just as much by a new product version with better UX as by Agile; the previous two methods are, by contrast, examples of “leading indicators”, i.e. indicators that help us predict, from activities here and now, that we will reach the target state.

In practice, we have found it best to combine these approaches appropriately. The low-level ones (see the first approach, “Mechanics”) provide quick feedback that we can react to promptly, especially at the beginning of the transformation. The high-level ones then show, over the longer term, whether we are fulfilling our purpose.

In the next part, we will look in detail at the first way – Measuring Agile “Mechanics”.


Authors: Marek Hersan and Roman Šmiřák