Johanna Rothman sent through some common puzzles about Cost of Delay and CD3. Posting our thoughts here, in case others have similar questions or suggestions…
“Who estimates the Value part of Cost of Delay?”
In most organisations it tends to be the Product Owners or Product Managers who facilitate estimating the Cost of Delay. We see them more as a lightning rod for drawing in key data points from around them though – from finance people, marketing people, customer service etc.
For some of the more technical pieces of work (tech debt, architectural changes, changes to improve the delivery pipeline, etc) the Cost of Delay is often estimated by some of the more technical people. Overall though, it should be seen as a group effort. Transparency of how the Cost of Delay figure was arrived at is important, otherwise it becomes an easy thing for the system to game.
“What do they base their value estimate on? Revenue, reducing waste, or something else?”
You can use whatever economic framework you like, using the outcomes that matter to the organisation. (Those outcomes don’t have to be monetary, they can be “number of baby elephants saved” if you like. Economics != $$. Assuming you pay your people in $$ that’s typically a useful common currency.)
We suggest starting with this “Four Buckets” Value Framework. It’s worked pretty well for us across many different-sized organisations, in both the public and private sector. Product Managers, Product Owners and teams have found it helpful as a starting point (or scaffolding) for thinking about value. (More here on its genesis in the public sector.)
After a few months you tend to observe a similar set of patterns emerge that are specific to that org, the outcomes they are hoping for and their business model. You can then start to refine the input parameters, by at least making sure you’re consistent, and (where possible) improving the accuracy of key parameters that matter the most in estimating value. Quantifying Cost of Delay tends to get easier, faster and more accurate over time.
“How do you handle salespeople who strongly assert that a feature is worth “millions”?”
This is a common struggle in some organisations. Quantifying Cost of Delay helps with this:
- By estimating cost of delay, we make the assumptions about value and urgency visible
- Once we have made our assumptions visible we can design experiments to test these assumptions.
Variations in estimates are usually due to an underlying uncertainty in one or more of the parameters in their assumptions. Better to switch to System Two, surface that uncertainty and then question the inputs. Sometimes this is enough to re-assess their Cost of Delay estimates. Other times you may need to design an experiment to generate the information needed to bottom out the difference.
To counter your point, we have seen lots of cases where the estimated Cost of Delay was in the order of $200,000/week, so “worth millions” really isn’t abnormal when you are working with organisations that measure revenue in the billions per annum.
We’ve also seen lots of cases where the Cost of Delay turned out to be higher than estimated. There is no evidence (that we have seen) that people consistently bias to the high side with Cost of Delay. The benefits side of the equation is not like duration estimates, where most of the variability and uncertainty is to the downside (longer, due to optimism bias). This is the problem with positive Black Swans: the value is only obvious in hindsight, so a consensus view often underestimates the value that results.
“How do you clarify estimated value?”
Firstly, you need to be clear about the difference between Value and Cost of Delay. A common mistake is to talk about (and show) Value, not Cost of Delay. The units of Cost of Delay are Value per time period e.g. $10,000/week.
Our main advice for clarifying estimates is to make sure you show your working. What are the inputs and assumptions you’ve made to get to your Cost of Delay figure?
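As a rough illustration of “showing your working”, here is a minimal sketch of a Cost of Delay estimate with its inputs made explicit. Every figure and parameter name below is invented for the example; the point is that each assumption is visible and can be challenged or tested.

```python
# Hypothetical inputs -- each one an assumption to surface and question
orders_per_week = 1_000     # from sales data
revenue_per_order = 150     # average, from finance, in $
expected_uplift = 0.05      # assumed effect of the feature on order volume

# Cost of Delay: the value forfeited for every week the feature is delayed.
# Note the units are value PER TIME PERIOD, not just value.
cost_of_delay_per_week = orders_per_week * revenue_per_order * expected_uplift
print(f"Cost of Delay: ${cost_of_delay_per_week:,.0f}/week")  # $7,500/week
```

With the working shown like this, a disagreement about the estimate becomes a disagreement about a specific input, which is something you can design an experiment around.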
Same thing applies to CD3. People often mistakenly show this as Value over Duration (completely missing the Urgency part of Cost of Delay). This is not CD3. There are a number of common patterns of Urgency profiles that we’ve seen.
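To make the CD3 mechanics concrete, here is a minimal ranking sketch. The item names, Cost of Delay figures and durations are all made up for illustration; CD3 is simply Cost of Delay (in value per week) divided by Duration (in weeks), with the backlog ordered highest CD3 first.

```python
# A toy backlog -- all figures invented for illustration.
# cost_of_delay is in $/week; duration is in weeks.
backlog = [
    {"name": "Feature A", "cost_of_delay": 20_000, "duration": 4},   # CD3 = 5,000
    {"name": "Feature B", "cost_of_delay": 50_000, "duration": 20},  # CD3 = 2,500
    {"name": "Tech debt", "cost_of_delay": 8_000,  "duration": 1},   # CD3 = 8,000
]

for item in backlog:
    item["cd3"] = item["cost_of_delay"] / item["duration"]

# Schedule the highest CD3 first
backlog.sort(key=lambda item: item["cd3"], reverse=True)
print([item["name"] for item in backlog])
# -> ['Tech debt', 'Feature A', 'Feature B']
```

Notice that “Feature B” has the highest raw value but the lowest CD3: dividing by Duration is what surfaces the small, urgent piece of work.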
If by “clarify” you mean “reduce uncertainty” then that is effectively a portfolio question. We discussed the inherent uncertainty in the Black Swan Farming Experience Report:
Value is rare, extreme and obvious in retrospect
Nassim Nicholas Taleb describes “Black Swans” as rare events; that have an extreme impact; and that appear obvious only in retrospect. When viewed through the lens of value and urgency, there were a very small number of ideas for which the impact on the organization was extreme. Yet this was not obvious until the economic value and urgency was considered. We found that within the pool of ideas generated at Maersk Line there were what we would characterize as Black Swans.
The point about being “obvious only in retrospect” is an important one when it came to the Cost of Delay calculations themselves – but perhaps more importantly, management’s view of the numbers produced. The typical response, which we heard more than once at Maersk Line, was the desire to “hold people to account” for their value estimates. In fact, we heard various ideas about how to punish or reward the individuals or groups raising ideas, depending on whether their idea turned out to be as valuable as they had predicted.
Doing this would have had a negative impact though, driving the organization back to being risk averse and slow when evaluating benefits. Business Process Owners would be incentivized to only raise ideas where they could be sure of the result, like headcount reductions or other cost saving. Focusing only on reducing cost becomes a zero-sum game. At some point, the organization needs to also focus on creating value for its customers, and in doing so increase the size of the market.
The reality is, innovation and product development are not sure things. It is more like what venture capital firms do – a series of small bets, some of which will hopefully pay for all the others. They attempt to pick winners, but they are still hostage to the market and the probability of success is low. Much the same as with the banking system, none of the bets we make should ever become “too big to fail”. Innovation requires that we test new ground and break things, discovering the best solution. Probe, sense, respond: amplify the positive signal, dampen the negative signal. In this way, product development is not so much Black Swan hunting, but more like Black Swan Farming.
In short: Product Development is a stochastic problem space. As Allen C. Ward said:
“Instead of learning to surf, organisations try to control the waves! This almost never works.”
We recommend learning to surf, of course 🙂
“Do you ever suggest anything about estimated duration, so people realize time estimates are often wrong?”
“Wrong” suggests there is zero information generated in the process of estimating, or in the resulting estimate itself. Thankfully, it’s not as binary as that. We’re looking for accuracy, NOT precision.
This advice applies to both the Cost of Delay estimate and the Duration estimate. On the numerator, Cost of Delay across a portfolio tends to follow a power law distribution, so order-of-magnitude accuracy is better than nothing. Likewise, T-shirt sizing of Duration is better than ignoring it altogether.
It’s important to realise though that we don’t need Duration or Cost of Delay to be precise for them to still be useful. We usually advise not to worry about false precision on either side. We’ve written about this previously here. TL;DR version: don’t throw the baby out with the bathwater!
One other important thing to say is that CD3 naturally encourages the breaking down of work to find the nuggets of really high Cost of Delay. We would NOT recommend capping Duration though. Doing so would artificially boost the CD3 scores of things that are genuinely large batches of work. Let the natural ranking do its work of encouraging slicing into smaller, more valuable pieces.
Having said that, it is a useful decision rule not to allow very-long-duration pieces of work into development. We recommend slicing these into smaller pieces of value (including information value). If a very high CD3 item happens to have a very long duration, then we’d recommend reducing the uncertainty of both the numerator and the denominator.
“Do you divide CD3 by 100, so the numbers are clearer?”
It’s not clear to us why dividing by 100 (or any other constant) is necessary. (We’ve never done this before and it’s never been a problem.) The CD3 score gets the job done as long as the backlog is ordered from the highest CD3 to lowest CD3.
The problem may be that you are using some made-up numbers here. If you quantify the actual Cost of Delay across a backlog or portfolio you’ll probably find it follows a power law distribution, so dividing by any constant doesn’t really help. CD3 is also scale-independent, like an acceleration figure: you can just as easily measure metres per second squared or kilometres per hour squared, as long as you are consistent.
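A quick sketch of why the constant doesn’t matter (scores invented for the example): dividing every CD3 score by the same constant never changes the ordering, and the ordering is the only thing CD3 is used for.

```python
# Hypothetical CD3 scores for three backlog items
cd3_scores = {"A": 8_000, "B": 5_000, "C": 2_500}

# Rank by raw CD3, and by CD3 divided by a constant (100)
rank = sorted(cd3_scores, key=cd3_scores.get, reverse=True)
rank_scaled = sorted(cd3_scores, key=lambda k: cd3_scores[k] / 100, reverse=True)

# Dividing by a positive constant preserves the ordering
assert rank == rank_scaled
print(rank)  # ['A', 'B', 'C']
```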
Any other puzzles?
Let us know in the comments below…