The Question Your Answer Doesn’t Want You To Ask

When we’re using data to answer questions – whether we’re Googling, interrogating a large database, working with a spreadsheet, or plotting points on graph paper – there is an always-present, if not always-considered, question of fidelity:

How good is this answer? Does the answer truly address my question, and accurately represent the data and its limitations, as well as the outside world the data reflect?

At one time or another most of us will raise our right hand (me too…) and say, “I wish I had a better handle on that.”

Completely answering any “Q&A fidelity” question is difficult – and can also be time-consuming. But even fidelity triage brings value – from the surety of saying “the answer is good, with these caveats…,” to early-stage (and therefore less expensive) adjustments, to the savings that result from throwing in the towel and moving on when data support is lacking. To assess fidelity is to know when we’ve done enough, and to regain control of a sometimes unwieldy Q&A process.

On the other hand, when we don’t assess fidelity we’re in terra incognita. Our results might be OK, but we can’t be sure. Certainly, data train wrecks are not fun – some of my least pleasant consulting experiences have been to inform teams – often capable and experienced teams – that they have been fooling themselves. Answering questions carries a lot of human, social, and technical context, and fidelity can be elusive. We could be the ones in that train wreck.

With this in mind, I started reviewing 25 years (!) of software, data, and analytics projects, thinking that my, well, diverse experiences could lend themselves to Q&A fidelity assessment.  (Thanks to a number of friends and colleagues who offered early feedback, as well as what might be called a kick in the ass to move things along.)

I ultimately looked at two questions: 1) what’s in a good Q&A fidelity assessment? 2) can a data team self-assess fidelity?

I’ll follow up (starting with the next post) with detailed frameworks, but here are my short-form answers:

1) What’s in a good Q&A fidelity assessment? Good fidelity assessment engages all stakeholders (including users and sponsors) in an ongoing discussion driven by these Q&A contexts: questions (which can be ambiguous, require iteration, or exceed our data’s scope); error and bias, involving data and people; actions taken with our outcomes; data environments – how data originates and is stored; and data extensions – rules, transforms, analytics, visualizations. Each context has human, social, and technical components cutting across professional activities. Technology matters, but Q&A is still by and for people.

Fidelity assessment also represents a definite commitment from all stakeholders – as assessing fidelity might force us to limit our outcomes, or even stop work.

2) Can a data team self-assess fidelity? Almost. No one will know the contexts and nuances like the team itself. I think it helps to have a concrete framework for identifying the contexts that impact fidelity (which I’ll post), and some structured practice to get things started – a round of context assessments, and of communicating progress and challenges to our fellow stakeholders.

If this seems a little daunting, that’s not my intent. Taking control of fidelity is less about altering process, and more about altering mindset. Is it a change? For many of us, it probably is. It’s also about taking charge of question-and-answer machinery that can control us more than we control it.

For millennia new data were at a premium – shared slowly, and developed even more slowly. As a result, the first chore of answering a question with data was to find the data – we were first and foremost data hunters.

In the last 20 years we’ve become data farmers – now our biggest chore is to format and process large amounts of data into a consumable form, so we can evaluate it. But with modern data technology, tools, and process, we can find ourselves working for the apparatus, rather than having it work for us. Personally, I don’t think that’s very satisfactory – we’re the ones asking the questions and wanting the answers, and we should be the ones in charge. The question your answer doesn’t want you to ask – “Am I any good?” – also reminds us of, and helps re-establish, our primacy in the asking-and-answering process.

One thought on “The Question Your Answer Doesn’t Want You To Ask”

  1. I think your question, “can a data team assess fidelity” is short of the mark. The qualification should be “can they and will they”. Too many times I’ve seen biased or just plain bad data that was a result of organizational dysfunction, architectures and work flow that impede quality, laziness, or just a plain unwillingness to make the effort to improve the data or admit that they don’t have the power or funds to fix the underlying problems.
