Breaking the Ice

In "The Question Your Answer Doesn’t Want You To Ask," I talked about actively assessing the fidelity of answers derived from our data.

I’ve synthesized the most effective practices I’ve witnessed in business, scientific, and engineering environments, and summarized them below as Q&A contexts and professional commitments for managing fidelity.

When I solicited comments from friends and colleagues about this, part of me expected to hear "it’s another one of his strange ideas." And I did get a little of that. But in aggregate, what I largely heard was: "Yeah – we think we kind of knew that. So how do you get things started – how do you break the ice?"

I can offer this “ice breaker.” Because the Q&A fidelity contexts reach across our areas of expertise, one commitment drives everything – all stakeholders (I really do mean “all” – especially user reps and sponsors) meet weekly for an hour. This is a highly structured discussion. Everyone contributes, is treated as a peer, and must be understood by everyone else. That might sound a little daunting, but any team can do this with a little practice. Do you want to eat lunch at the same time, or meet more than once a week? Great idea. Your factory burned down and you can’t make it this week? I’m truly sorry to hear it – send a sub. As I look back on a couple of decades of consulting, it’s amazing how many problems boiled down to someone simply not knowing what was going on. There are no panaceas, but this is a problem we can fix.

A facilitated workshop-style initial assessment may also be a good idea.  It’s a chance to talk through evaluation tools and techniques.  Outside facilitation helps: our in-team perspectives are circumscribed by our current roles – each of us could be cultivating a nice garden, but the overall landscape is a little blotchy.  The likely outcome? A lot of things are in good shape; we’re doing some things that we don’t need; we’re not doing a few things that would improve our outcomes.

For the contexts and commitments themselves, I use the acronym QUOTE AWL to identify (OK, to remember) the fidelity contexts I’ve seen most often, and what I see as the keys to their management.

QUOTE stands for:

  • Question
  • Uncertainty and Bias
  • Optimization and Action
  • Type and Context
  • Extension
  • Question.  Many fidelity issues start with the questions we ask. Our database may not understand what we really want to know (e.g. “Why is a product so great?”). On the other hand, the questions we are able to ask (e.g. “How much did our division make compared to last year?”) may answer something other than our actual question (“Which division should be expanded?”). And we often stop asking questions before considering everything that might be related to our line of inquiry. Asking a really good “Question” is hard work.
  • Uncertainty and Bias.  All data, and all people, have uncertainty and bias. Our personal bias can strongly impact outcomes – with confirmation bias we’ll often stop asking questions after obtaining the first answer that agrees with our expectations. Of course, we humans make mistakes, like assuming our data are “essentially free” of error. But some error is irrelevant – or actually useful. “Uncertainty and Bias” is here to stay, and it is a context requiring pretty much constant attention.
  • Optimization and Action.  Our questions and models often have a larger actionable context. This can force constraints on our data and models, but ironically it can also reduce or even eliminate some data and modeling work. It’s a revealing, if rarely performed, exercise to write down the objective function and constraints for any action or decision we might take based on our data (a small sketch follows this list). For example, a logistics constraint might make production data and models irrelevant, at least for now. It is often a great idea to examine the “Optimization and Action” context early in our process.
  • Type and Context.  Real-world objects (such as people) have type and context that are easily stripped away when we represent them in a database. The questions we can support are limited as a result, and perhaps distorted. Once our data is stored, it pays quite literally to consider the data contexts of error (which is never zero), value (which often becomes zero), and shelf-life (which is frequently shorter than we imagine). A crucial data context is our system size. Our approach changes if our system has hundreds of millions or billions of records. The “Type and Context” associated with our data storage has implications for everything else we do.
  • Extension.  Business rules, data transforms, exploratory analytics models, predictive analytics models, and dashboards all extend our data. Extensions are valuable, often expensive, sometimes distorting, frequently opaque – and, if we’re not careful about the Optimization and Action context, sometimes irrelevant. We want to be sure the substantial effort and occasional risk of an “Extension” is worth the cost, particularly in complex or large data sets. And to the extent possible, it’s great to be transparent in our extension efforts – because transparency drives adoption. Especially when our results are unexpected, an open and explainable extension is preferable to a fancy but opaque one (see the second sketch after this list).
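
To make the “Optimization and Action” exercise concrete, here is a minimal sketch of writing down an objective function and its constraints as a tiny linear program in Python. Everything in it – the products, prices, weights, and the trucking capacity – is hypothetical, invented purely for illustration:

    # A sketch of the "Optimization and Action" exercise: write down the
    # objective function and constraints behind a decision.
    # All names and numbers are hypothetical.
    from scipy.optimize import linprog

    # Decision: how many units of products A and B to ship this week.
    # Objective: maximize profit = 40*x_A + 30*x_B
    # (linprog minimizes, so we negate the coefficients).
    c = [-40.0, -30.0]

    # Constraint 1 (logistics): one truck with 1000 kg of capacity,
    #   5 kg/unit * x_A + 8 kg/unit * x_B <= 1000.
    # Constraint 2 (demand): at most 150 units of A sell per week.
    A_ub = [[5.0, 8.0],
            [1.0, 0.0]]
    b_ub = [1000.0, 150.0]

    result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print("ship A:", result.x[0], "ship B:", result.x[1])

With these made-up numbers, the truck’s capacity binds long before production capacity ever enters the picture – exactly the situation described above, where a logistics constraint makes the production data and models irrelevant for now.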
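
And for “Extension,” here is an equally hypothetical sketch of what a transparent extension can look like: a linear model whose coefficients can be read aloud in the weekly meeting. The feature names and data are invented for illustration:

    # A sketch of a transparent "Extension": a model whose behavior
    # can be read directly from its coefficients.
    # Feature names and data are hypothetical.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    features = ["ad_spend", "discount_pct", "store_count"]
    X = np.array([[10.0,  5.0, 3],
                  [12.0,  0.0, 3],
                  [ 9.0, 10.0, 4],
                  [15.0,  5.0, 5]])
    y = np.array([110.0, 115.0, 108.0, 130.0])  # e.g. weekly sales

    model = LinearRegression().fit(X, y)

    # When a result surprises us, we can point at the coefficient
    # that produced it -- no black box to argue with.
    for name, coef in zip(features, model.coef_):
        print(f"{name}: {coef:+.2f}")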

AWL represents three commitments for managing these contexts:

  • Acknowledge and assess
  • Write and communicate
  • Limit outcomes
  • Acknowledge and assess.  We can be so busy that we simply don’t assess our Q&A contexts. The first step in managing our contexts is to know they exist. A weekly meeting of all stakeholders to call out and discuss contexts is a great start. If we “acknowledge and assess,” we’re a long way toward home; if we don’t, we’re operating blind. In the latter case, we shouldn’t be surprised when unacknowledged contexts create worry or lack of adoption among our users.
  • Write and communicate.  We write too little, and too much. The assumptions implied in our contexts are often under-documented – it’s hard to expect our users to understand what to do when the assumptions of our system are unstated. On the other hand, the Q&A dynamic can be so rapid that writing out specifications and requirements is pointless – they’re obsolete before they are finished (and if they are longer than a page, likely to be unread anyway). We often script or code (for ourselves) rather than write software (for ourselves and others). Communication across our professional boundaries is critical, because Q&A contexts cross those boundaries too – a weekly meeting of all stakeholders is a great way to go. “Write and communicate” is actually about spending less total time communicating and writing, but more where it counts.
  • Limit outcomes.  Our data and analytics contexts will impose limits on what we can validly say. It can be painful to find that our hard work is constrained by bias, error, or limited data. But if we’re committed to fidelity we “limit outcomes” – because our data didn’t commit to giving us what we wanted.

 
