
You’re Nobody Till Somebody Measures You


Once upon a time, Dean Martin told us that “You’re nobody till somebody loves you”. As I was reviewing yet another set of proposed metrics recently, I found myself humming that song, but substituting ‘measures’ for ‘loves’ – because of course, in business, what’s love got to do with it?

So it was that I found myself contemplating, once again, how we – as individuals, teams, organizations, etc. – can seem next to invisible until someone decides to calculate their interpretation of our performance. Note the way I phrased that, because it speaks to the heart of why ‘measurement’ so frequently misses the mark, producing ‘results’ that are not actually representative of how things really are. When someone ‘measures’ something, very few of those reviewing the results take the time to ask how those results were arrived at. That is a shame, because all too often, when we do ask, we find that the way something was measured caused the results to vary (sometimes wildly) from reality.

When I previously covered metrics as a topic (in “Lie to Me”, located here for any interested parties), I called out a number of reasons why measurement goes wrong. I avoided giving examples of ‘accuracy’ and ‘precision’ at the time, but experience since then has shown me that it would be good to do just that now.

The first graphics below might be familiar to you – stick with me, though, as I am also going to illustrate a few things beyond this basic view. To begin, the ‘targets’ are meant to represent a measurement system that is properly calibrated; put another way, a system that measures reality and reports results that align with the actual state being measured.

The terms ‘precision’ and ‘accuracy’ enter nearly any discussion on the topic of metrics. The difference between them is often shown graphically like this:

You are accurate if your data is grouped around your intended center, but not necessarily on it; you are coming close to your intended goal, but not quite hitting it. You are precise if your data is grouped tightly together, but not necessarily around your intended center; your results are consistent, but ‘off-center’ from the ‘bullseye’ you want to reach. When your results are both accurate and precise, you made it – and it’s time to consider whether you are ready to adjust your ‘bullseye’ to seek even better results.
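To make the distinction concrete, here is a minimal Python sketch using invented data (the hit coordinates and bullseye position are purely illustrative, not from any real measurement system). It scores a group of ‘hits’ the same way the targets do: the distance of the group’s centre from the bullseye indicates accuracy, while the spread of the hits around their own centre indicates precision. The sample data happens to land in the “precise, but not accurate” quadrant.

```python
import math

# Illustrative data only: each "hit" is an (x, y) measurement; the true bullseye is at the origin.
hits = [(0.9, 1.1), (1.0, 0.8), (1.2, 1.0), (0.8, 1.2)]
bullseye = (0.0, 0.0)

# Centre of the group of hits.
mean_x = sum(x for x, _ in hits) / len(hits)
mean_y = sum(y for _, y in hits) / len(hits)

# Accuracy: how far the group's centre lies from the intended bullseye (bias).
bias = math.dist((mean_x, mean_y), bullseye)

# Precision: how tightly the hits cluster around their own centre (spread),
# regardless of where that centre sits relative to the bullseye.
spread = max(math.dist(hit, (mean_x, mean_y)) for hit in hits)

print(f"bias (accuracy):    {bias:.2f}")    # ~1.41 here: well off the bullseye, so not accurate
print(f"spread (precision): {spread:.2f}")  # ~0.25 here: tightly grouped, so precise
```

The point of separating the two numbers is that neither one alone tells you whether the system is calibrated – which is exactly where the next two examples come in.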

This is about as far as the demonstration usually goes, so now I will show you why how you measure matters as much as (or more than) the results themselves, and how easy it is to be fooled by what you see.

This graphic shows all my data within the center, so it appears to be both accurate and precise. The data points, however, are exactly the same ones you saw in the “Accurate, but not precise” picture above – the only difference is the size (spread) of the center I used to measure my results. My data is therefore no more precise than it was before – it only looks that way because the bullseye I used to measure things is larger than it should be.

This next example also appears to show that both precision and accuracy are what they should be for great results; in this case, though, the points you see are exactly the same as those in the earlier “Precise, but not accurate” picture. The results are no more accurate than before, but they appear to be because the way I measured things moved the center I am measuring against out of alignment with where it actually is.
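Here is a hedged sketch of both tricks described above, again using invented data: enlarging the bullseye makes scattered hits “pass” as precise, and measuring against a centre that has drifted makes an off-target cluster “pass” as accurate. The radii and coordinates are arbitrary examples, not real tolerances.

```python
import math

def all_inside(points, centre, radius):
    """True if every point falls within `radius` of `centre` - i.e. the metric 'passes'."""
    return all(math.dist(p, centre) <= radius for p in points)

true_bullseye = (0.0, 0.0)

# Case 1: scattered hits (accurate, but not precise) around the true bullseye.
scattered = [(-0.9, 0.6), (0.8, -0.7), (0.2, 1.0), (-0.5, -0.9)]

print(all_inside(scattered, true_bullseye, 0.5))   # False - a realistic bullseye exposes the scatter
print(all_inside(scattered, true_bullseye, 1.5))   # True  - an enlarged bullseye makes the same points look precise

# Case 2: a tight cluster (precise, but not accurate) offset from the true bullseye.
clustered = [(0.9, 1.1), (1.0, 0.8), (1.2, 1.0), (0.8, 1.2)]
drifted_centre = (1.0, 1.0)  # the centre the mis-calibrated system measures against

print(all_inside(clustered, true_bullseye, 0.5))   # False - measured against reality, the cluster is off-target
print(all_inside(clustered, drifted_centre, 0.5))  # True  - measured against the drifted centre, it looks accurate
```

In both cases the data never changed; only the target used to judge it did – which is precisely how a measurement system can report “great results” that have nothing to do with reality.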


The manufacturing world recognized long ago the importance of ensuring that its measurement systems and tools are properly calibrated. The business world, including ITSM efforts, still often does not validate this aspect of measurement. Unless you are aiming at the right target, at the position you need to hit on it, and using the right ‘ammo’ to produce results (not only when designing a metric, but regularly during its use, to account for changes in conditions over time), you won’t be reporting the right results.

Beyond the system

Up to now, I have covered the ‘what you see’ aspect of metrics – the results and the system used to get them. But one of the toughest aspects of measurement is subjective: the effect of human emotion on what is going to be measured, why it will be measured, and how the results of that measurement are interpreted.

In many discussions held while reviewing proposed metrics, I have found there are always a couple of things that those proposing a given metric remain convinced should be measured exactly as they proposed, despite being provided empirical evidence to the contrary. This is usually defended by stating that they had read somewhere that it was “an industry standard” way of doing things. Even when shown this was not correct, the tendency to defend it remained – especially when the ‘metric’ had been in place for some time. Any time people are involved you can be sure emotions are involved, so you need to be aware of how those emotions might be skewing the measurements you actually need – and be prepared to mitigate.

Measurement is everywhere, and since it is very unlikely that you can avoid being measured, you can rest assured that you won’t be a “nobody”. It is in your own best interests (and that of your organization) to do all you can to validate that the measurement (and reporting!) systems being used are properly calibrated. Always remember that while reality may bite sometimes, sooner or later fantasy (in the form of inaccurate measurement) will bite harder…

Michael Keeling

Michael has been providing consulting and guidance in IT Operations, ITSM and SIAM to enterprise-level organizations in many industries for more than 20 years, and has an extensive background in data center and service desk operations, technical writing, mentoring, cause analysis and workflow improvement. He is known for bringing the view of a detective to these efforts, a perspective he credits to his education in crime scene investigation and over 10 years designing processes and performing risk management in the private security sector prior to his career in IT. A confirmed realist who believes no project can be truly successful unless all involved parties are grounded in reality, Michael is always prepared to paint ‘the elephant in the room’ bright yellow when appropriate….