Mirror, Mirror

A bias we all encounter is confirmation bias – the tendency to find and interpret information in alignment with our expectations. And then, to stop looking.  Mirror, mirror.

We tend to think of ourselves as unbiased. But any time we execute a “first ‘sensible’ answer wins” analysis, that’s confirmation bias. For most of us this will happen every day. Who has time to check out all the possibilities for every little thing?

That approach is fine until the value of getting the answer right exceeds the cost of examining the reasonable alternatives – which is a lot of the time, in most research and corporate database work.

A quick story: I once watched a team create a list of retail-store sales by store manager, and before I knew what was happening people were trying to understand why managers in low-performing stores were doing such a bad job. It took a real effort on my part to restart inquiry and get the team to look for other possible explanations. As it turned out, the data couldn’t tell us which explanation was the determining one – it could have been inventory, location, or the managers.
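To make that concrete, here is a minimal sketch in Python (pandas, numpy, and statsmodels, with made-up column names and synthetic numbers – not the team’s actual data). The synthetic sales are driven entirely by inventory and location, not by the manager, yet the naive sales-by-manager grouping still produces a ranking to “explain”:

    # A minimal sketch, not the team's analysis: synthetic data where
    # sales are driven by inventory and location, not by the manager.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "manager": rng.choice(["A", "B", "C", "D"], n),
        "location": rng.choice(["urban", "suburban"], n),
        "inventory": rng.normal(100, 20, n),
    })
    df["sales"] = (2.0 * df["inventory"]
                   + np.where(df["location"] == "urban", 50.0, 0.0)
                   + rng.normal(0, 10, n))

    # The "first sensible answer": rank managers, then start explaining
    # why the bottom of the list is "doing such a bad job."
    print(df.groupby("manager")["sales"].mean().sort_values())

    # Fuller inquiry: fit the competing explanations together. Here the
    # manager terms should wash out while inventory and location carry
    # the signal.
    model = smf.ols("sales ~ C(manager) + C(location) + inventory",
                    data=df).fit()
    print(model.params)

The regression isn’t the point – the point is that the first grouping never even poses the question of which explanation is at work.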

Scenarios like this can be discouraging – our additional work only showed the limitations of what we actually understood.  Confirmation bias already urges us to stop short of full inquiry. Beyond that, no one wants to expend energy and come away with less.

However, we really didn’t come away with less. We learned that the supposed conclusion was never supported in the first place – a potentially valuable outcome, even if not the one we hoped for.

What we want, hope for, and talk about is “insight.” It’s an overused term, and I think we are conditioned to expect insight just because we have a big data system on hand. Regrettably, our data did not sign up to tell us anything new. Real insight is relatively rare – a genuine surprise that can be sustained through the trial of full and unbiased inquiry. Rare maybe, but worthwhile too – it’s the reason we build this stuff.

One thought on “Mirror, Mirror”

  1. If I read this correctly, then it is a condemnation of “stick figures for the generals” as applied to BigData. A friend of mine once bemoaned doing sophisticated flight simulations of theoretical aircraft on theoretical mission profiles that got summarized into a bar chart with two bars – one representing the existing aircraft, the other the one they were trying to sell – because the generals couldn’t comprehend all the details of the issue and needed an exceedingly simple (and perhaps biased) visualization.

    If you summarize a piece of analysis too simply (ignoring other independent variables, or assuming they are constant, as engineers often do), you can obviously generate conclusions with no real basis in fact. If you drill down more carefully – say, comparing the performance of managers in weak stores that go into peak selling days of the week with low inventory against other managers with similarly low inventory levels – then you might reach more meaningful conclusions (a rough sketch of that kind of comparison follows the questions below). Does this imply that it is the generalization, or the level of summary, that is the issue?

    So the questions this raises for me about BigData are:
    a) is BigData a way to validate our previous assumptions/conclusions, or merely a way to make more bad ones?
    b) is there any value of BigData beyond very detailed segmentation and analysis of events?
    c) is there a methodology that can be developed when building BigData analysis (other than high levels of subject matter expertise) that can be used to assist with validating the assumptions about what needs to be excluded or included when trying to find correlations?
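
    As an illustration of the comparison described above – judging managers only against peers facing similar inventory conditions – here is a rough sketch in Python/pandas, with made-up column names and synthetic numbers purely to show the shape of the analysis:

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(1)
        df = pd.DataFrame({
            "manager": rng.choice(list("ABCD"), 400),
            "inventory": rng.normal(100, 25, 400),
            "sales": rng.normal(500, 50, 400),
        })

        # Band stores by inventory so each manager is compared only with
        # peers facing comparable conditions ("low" against "low", etc.).
        df["inv_band"] = pd.qcut(df["inventory"], 3,
                                 labels=["low", "mid", "high"])
        print(df.groupby(["inv_band", "manager"], observed=True)["sales"]
                .mean().unstack())

    Whether the bands are inventory levels, locations, or days of the week, the point is the same: the comparison only means something once the competing explanation is held roughly constant.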
