Sunday 28 October 2012

In Support of Objectivity

Do you come across organisations running so-called "decision conferences"? These are, in the main, two-day workshops involving a number of people who are there to evaluate a range of options. The participants are "experts" who score each option against a range of criteria, and the option with the best score wins through.

This approach leads to a number of issues that develop later on - particularly for major decisions in large organisations.

The two-day workshop seems ubiquitous. But why two days? Partly because some people are travelling, of course, but surely that isn't the best reason. The workshop needs to be as long as it needs to be, and that depends on its objectives and hence its agenda. It is also worth comparing the length of the workshop with the duration of the project itself. In many cases the actual decision is made after spending just 0.1% to 0.3% of the available time on it, and sometimes even less for mega-projects. Is that sufficient for major decisions? Analysing strengths and weaknesses, and how robust the decision is to changes in data and stakeholder preferences, is well worth the extra time.
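As a rough, purely illustrative calculation (the project length below is assumed, not taken from any particular case), two days set against a multi-year project really does sit in that 0.1% to 0.3% band:

# Illustrative sketch only: share of a project's working days spent in the workshop.
workshop_days = 2
project_years = 3                      # assumed project duration
working_days_per_year = 260

project_days = project_years * working_days_per_year
share = workshop_days / project_days

print(f"{share:.2%} of the project's working days")   # ~0.26%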

Experts. Just how expert are they? And what are they expert about? We've all come across the loud, opinionated participant - the self-confessed all-knowledgeable person. But whether these people are really experts is another matter. An expert should be someone who has spent many years on a particular topic and could stand their ground with other experts across the world. Knowing a little more than the others in the room on a particular subject (perhaps because they represent a department or organisation) does not make someone an expert. Mark Twain defined an expert as "an ordinary fellow from another town".

Experts, by definition, know a lot about very little. They have an incredibly deep understanding of a topic they have studied for many years. Interestingly, in a multi-criteria workshop this means they may know a lot about how an option performs against one or two of the criteria, but clearly not against them all. Asking for their opinion on costs, for instance, may not play to their strengths.

The expert offers an opinion, and an opinion is subjective. So the experts provide subjective scores and the best-scoring option wins. Unfortunately, what this means in practice is that there is no evidence to back up those opinions, so when the decision is reviewed or audited it fails in dramatic style. You cannot recreate the result unless you have the same experts in the room. For decisions that involve multiple stakeholders and several rounds of review this approach won't get you very far - no matter how "expert" the experts. At some point the scores will be reviewed, different ones used and (mostly) a different option will be favoured.
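To make that fragility concrete, here is a minimal sketch (in Python, with invented options, weights and scores - none of them from a real case) of the workshop-style weighted scoring: the same two options, scored by two different panels, produce two different "winners".

# Minimal sketch of workshop-style scoring: each panel gives subjective 0-10
# scores per criterion and the weighted sum picks the winner.
CRITERIA_WEIGHTS = {"cost": 0.4, "safety": 0.4, "schedule": 0.2}

def best_option(scores):
    """Return the option with the highest weighted score, plus all totals."""
    totals = {
        option: sum(CRITERIA_WEIGHTS[c] * s for c, s in per_criterion.items())
        for option, per_criterion in scores.items()
    }
    return max(totals, key=totals.get), totals

# Panel 1's opinions...
panel_1 = {"Option A": {"cost": 7, "safety": 5, "schedule": 6},
           "Option B": {"cost": 5, "safety": 7, "schedule": 5}}

# ...and a second panel's opinions on exactly the same options.
panel_2 = {"Option A": {"cost": 5, "safety": 5, "schedule": 6},
           "Option B": {"cost": 6, "safety": 7, "schedule": 5}}

print(best_option(panel_1))  # Option A comes out top
print(best_option(panel_2))  # Option B comes out top - same options, different "experts"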

A far better approach is to collect evidence beforehand on how each option performs against each criterion. For many criteria you will be able to obtain decent quantitative measures, and it is well worth putting in the effort because real-world numbers are easy to substantiate.

For others you may need to use a rating scale. These capture subjective judgements with words or numbers: a 0-10 scale, say, or a High/Medium/Low scale. Such scales are not designed merely to record opinions but to estimate the magnitude of the differences between options. They are still quantitative even if you use words to describe them - the points are equally spaced and ordered from "bad" to "good" or "less" to "more". It is vital that each point is described in as much detail as possible so that everyone knows what a "High" looks like. Part of the process should also be to write down why each option scores the way it does against each criterion. This means a paragraph or two written by an "expert", ideally with references and sources of additional information to support the score. It is similar to the way a "safety case" or "safety basis" is written, but obviously much shorter!
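A minimal sketch of what such a documented, evidence-backed rating could look like in practice. The scale, criterion, rationale and references below are all invented for illustration, not taken from a real assessment.

from dataclasses import dataclass, field

# Each point on the agreed scale gets an explicit description, so everyone
# knows what a "High" looks like, and the intervals are kept equal.
MAINTAINABILITY_SCALE = {
    0: "Low: no in-house skills; specialist contractor needed for all work",
    1: "Medium: routine work in-house; specialists for annual overhaul only",
    2: "High: all planned and reactive maintenance handled in-house",
}

@dataclass
class Rating:
    option: str
    criterion: str
    score: int                                   # index into the agreed scale
    rationale: str                               # a paragraph explaining why
    sources: list = field(default_factory=list)  # references backing it up

r = Rating(
    option="Option B",
    criterion="Maintainability",
    score=1,
    rationale=("Site team has maintained similar plant since 2005; the new "
               "compressor type requires an external specialist once a year."),
    sources=["Maintenance history report MR-2011-07 (hypothetical reference)"],
)
print(MAINTAINABILITY_SCALE[r.score])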


A very good, detailed explanation of these terms can be found here.

The point of collecting the evidence is that the decision is then based on something that can be reviewed and audited by others, who would come to the same decision given the same information.

If someone questions the decision, new evidence needs to be provided. This significantly reduces the effort required when someone with a new opinion arrives (perhaps a new Head or other change in staff, or a stakeholder who wasn't intimately involved). Also, if new evidence emerges during the decision process, the original decision can be assessed against it - under change control. You don't flip from one choice to another seemingly at random; if a choice needs to change, there is a good reason for it, backed up with evidence.
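One way to keep that change control visible is to log every change of score alongside the evidence that prompted it, so any flip in the preferred option is traceable. A minimal sketch follows; all names, figures and references are invented for illustration.

from datetime import date

evidence_log = []   # audit trail of evidence changes

def update_evidence(ratings, option, criterion, new_score, reason, source):
    """Change a score only alongside a recorded reason and source."""
    old = ratings[option][criterion]
    ratings[option][criterion] = new_score
    evidence_log.append({
        "date": date.today().isoformat(),
        "option": option, "criterion": criterion,
        "old": old, "new": new_score,
        "reason": reason, "source": source,
    })

ratings = {"Option A": {"cost": 7, "safety": 5},
           "Option B": {"cost": 5, "safety": 7}}

update_evidence(ratings, "Option A", "cost", 5,
                reason="Revised supplier quotation received",
                source="Quote Q-1234 (hypothetical)")

for entry in evidence_log:
    print(entry)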

Conclusion

Resist the temptation to hold a scoring workshop where experts turn up and provide the performance evaluation for each option. It is far better to collect evidence beforehand and let the experts discuss the validity of that evidence and analyse the results. This ensures the decision is evidence-based and can realistically be audited and reviewed by others. It is a much more robust process.

Spend a little more time analysing the decision. Don't fall into the trap of holding a two-day workshop just because that's what has always been done. Spend the right amount of time with the right people for the decision at hand.
