Cleared for Takeoff

I did it again. In a bit of a rush, I looked at these numbers from a database table

34.1

34.8

and I said to myself: Great! 34.8 is greater than 34.1. That’s what I expected.

And unfortunately, that conclusion was also incorrect. Each number was a sum of sales figures from two different regions, but a significant percentage of the regional assignments were wrong – enough to produce an error of up to plus-or-minus one in each number, and enough to render the two numbers statistically equivalent.

It’s tempting to argue that when data quality problems create ambiguous comparisons – numbers that “should” be different but actually are not – we should go on comparing the two numbers as if they were exact. After all, once those data quality problems are fixed, the exact comparison will be valid, right?

Not exactly. If and when the data quality problems are fixed, some comparison will be valid, but at this point, before the fix, we don’t know what the final result will be. The “corrected” 34.1 might be larger, or the “corrected” 34.8 might be – it’s too soon to tell.
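To make that concrete, here is a minimal sketch in Python – my own illustration, not anything from the original analysis. It treats each reported sum as a value carrying the plus-or-minus-one uncertainty described above, and refuses to call either number larger unless the two uncertainty intervals actually pull apart. The helper function is hypothetical, written only for this example.

```python
# A minimal sketch: compare two reported sums, each carrying an
# uncertainty of up to +/- 1 (the error bound from the example above).

def provably_different(value_a, value_b, error):
    """Return True only if the +/- error intervals around the two
    values do not overlap, i.e. the difference survives the known
    uncertainty in both numbers."""
    return abs(value_a - value_b) > 2 * error

a, b, err = 34.1, 34.8, 1.0
if provably_different(a, b, err):
    print(f"{max(a, b)} is genuinely larger than {min(a, b)}.")
else:
    # The interval 33.1..35.1 overlaps 33.8..35.8, so no conclusion
    # can be drawn from these two numbers.
    print(f"No meaningful difference between {a} and {b} at +/- {err}.")
```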

The presumption of precision when we see a number can be difficult to set aside, but in reality that presumption is just prejudice – the prejudice that our numbers and our data are as perfect as we want them to be. Until we’ve tested and proved the quality of our numbers, we might be far better served by the opposite, perhaps radical presumption: that when comparing any two numbers in our database, there is no meaningful difference until we have proof to the contrary. No significant differences, no conclusions to be drawn, no decisions to be inferred, nothing of value; a net worth of zilch-point-nada until proven otherwise.

Is it radical to ask our data systems to prove their support for our conclusions? Perhaps. But that radical approach is precisely how other engineers deal with uncertainty when the outcome is critical, in systems ranging from nuclear power plants to aircraft. Engineers and operators presume these systems will fail until they have proven they can operate successfully. The presumption of failure doesn’t guarantee success, but it does ensure that when failure comes, it is safe and harmless.

In one of the most reliable events we ever encounter – the successful takeoff of an aircraft from a runway – pilots are trained to operate on the premise that the plane will not be able to fly, including procedures to abort takeoffs well after the plane has begun its roll down the runway. That might seem a little scary, but it is the safest approach: by having the plane prove it is capable of liftoff, pilots are optimally prepared for those very rare circumstances when the plane really should not leave the ground.

Why not do the same thing with our data – ask them to prove they are worthy of supporting the comparisons we plan to make? If we start with the presumption that our data cannot support any conclusion of true difference – which is what statistical thinking usually recommends – we will rightly conclude only what we can prove, rather than wrongly conclude what we merely assumed. Isn’t that how it’s supposed to be? Aren’t our data systems worth that level of care?
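As a hedged sketch of what that presumption looks like in code: the per-transaction sales figures below are invented purely for illustration (they sum to the 34.1 and 34.8 above), and SciPy’s standard two-sample t-test plays the role of the proof. The null hypothesis – no true difference between the regions – stands until the data themselves reject it.

```python
# A sketch of "no difference until proven": start from the null
# hypothesis that the two regions do not differ, and reject it only
# if the data prove otherwise. The figures below are invented for
# illustration; they sum to the 34.1 and 34.8 discussed above.
from scipy import stats

region_a = [3.2, 3.6, 3.1, 3.5, 3.4, 3.3, 3.8, 3.2, 3.5, 3.5]  # sums to 34.1
region_b = [3.5, 3.4, 3.6, 3.3, 3.7, 3.5, 3.6, 3.2, 3.4, 3.6]  # sums to 34.8

t_stat, p_value = stats.ttest_ind(region_a, region_b)
if p_value < 0.05:
    print(f"Difference proven (p = {p_value:.3f}); act on it.")
else:
    # For these samples p is well above 0.05: no proven difference.
    print(f"No proven difference (p = {p_value:.3f}); treat as equivalent.")
```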

But realistically, our practice of developing and deploying partially unproven data systems will not change overnight, for a couple of good reasons.

First, the creative minds of data architects and developers naturally rebel against any task that is tedious, or that might tarnish the capabilities of their creations – and I don’t blame them a bit. Uncertainty analysis can be both: tedious, as if we had combined the worst aspects of writing documentation with an interminable mathematics problem set, and tarnishing, because the best possible outcome is that our data are no worse than we hoped – and they probably will be worse than we hoped.

In fact, uncertainty analysis poses real creative challenges, ranging from estimation to probabilistic problem-solving. And it needn’t be as depressing as it might seem – if we work from the premise that our data are not, after all, perfect, uncertainty outcomes can become part of our requirements, delivering the positive, definite data quality targets that must be met if our questions are to be answered accurately.

Second, our biases come into play. We want to be right (confirmation bias), and we especially want to be right after investing considerable time, effort, and dollars in our creation (sunk-cost bias). Bringing uncertainty into the mix forces us to swim upstream against powerful emotional currents.


Still, if our data systems really matter, we should increasingly shift to a prove-the-capability mindset when launching them, just as we do in any endeavor involving a complex system critical to our well-being. That’s when “Cleared for takeoff” does not mean “have a nice trip,” but rather “start the procedures that will prove your system can do the job it was designed for. Then, have a nice trip.”

Unless we want to argue that our data systems really don’t matter much… No, me neither.
