Accountability vs Complexity

Ever been “held accountable” for something that you had no control or power over?

I’ve previously spoken about how corrosive the words we use in organisations can be. Words matter. And some words tell you quite a lot about an organisation. One thing I’ve observed is that the use of the phrase “hold to account” tends to correlate with some of the lowest-trust environments I’ve encountered.

Christopher Avery has talked about this for years. He makes the point that “Accountability” is a setup for blaming down the road. That it is only ever used in a negative context (when was the last time you were held accountable for something good happening?). Accountability, when said by others, is often a thinly veiled threat. It is extrinsic, rather than intrinsic. Accountability is very different to people taking responsibility for something.

I’m not saying that accountability is 100% bad. It’s more about the use and abuse of it. If someone volunteers “I want you to hold me accountable for this” that’s perfectly fine. All too often though, the holding to account is not an invitation, or voluntary. And in some contexts it’s just the wrong way to view the problem space.

The reason it doesn’t work is the same one demonstrated in W. Edwards Deming’s famous Red Bead Experiment. We tend to misunderstand the complexity and randomness in the outputs, and especially in the outcomes. It’s the corporate equivalent of getting upset with the local Lotto shop for not selling you the winning ticket.

From the link above:

“They wound up frustrated by a system that would not let them do their work and in which they were powerless to change.” This is a very common experience. The Red Bead Experiment takes it to an extreme, but the point stands: people are forced to work in a broken system that they are powerless to change. Sadly, that is the fate of many workers. Blaming workers in such situations is obviously pointless.

In Dr. Deming’s Red Bead Experiment, the variability in the process is the source of the variability in the result. It’s obvious to anyone playing it that the result is stochastic – it involves some degree of randomness. In the case of software development, the probabilities are often hidden: in the realm of “unknown unknowns”. It’s impossible to know in advance what the result is going to be.
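The stochastic nature of the result can be sketched in a short simulation (not from the original post; the worker names and parameters are illustrative, though the real experiment does use a paddle of roughly 50 beads from a bowl that is about 20% red):

```python
import random

RED_FRACTION = 0.20  # proportion of red ("defective") beads in the bowl
PADDLE_SIZE = 50     # beads drawn per worker per day

def draw_paddle(rng):
    """Count the red beads in one paddle draw from the bowl."""
    return sum(1 for _ in range(PADDLE_SIZE) if rng.random() < RED_FRACTION)

def run_experiment(workers, days, seed=42):
    """Each worker's daily 'defect count' comes purely from the system,
    not from anything the worker does differently."""
    rng = random.Random(seed)
    return {w: [draw_paddle(rng) for _ in range(days)] for w in workers}

if __name__ == "__main__":
    results = run_experiment(["Alice", "Bob", "Carol", "Dave"], days=5)
    for worker, scores in results.items():
        print(f"{worker:6s} {scores}  mean={sum(scores)/len(scores):.1f}")
```

Every worker follows an identical process, yet their daily scores differ — so praising the “best” worker or holding the “worst” one to account is rewarding and punishing pure chance.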

It’s as though the managers in the system cannot see the randomness that is so plainly visible in the Red Bead Experiment. All too often, senior managers underestimate the complexity of the things we are attempting. When problems happen (a P1 or P2 incident, for instance), the assumption is that someone did something wrong, and that they should be “held to account” in order to prevent it happening again.

To add insult to injury, these calls for accountability often come from senior management and the C-suite at the top of organisations – the people who seem to be “held to account” the least when things go wrong. Instead, they are often richly rewarded for failure, paid well to leave, and soon move on to similar roles elsewhere.

Alternative? Maybe invite folks to take collective responsibility for improving the system, rather than accountability for an outcome they cannot control or predict. That way it’s not something placed on them; it’s something they willingly take on – and it focuses them on the things they are actually well placed to do, if you allow them to!
