I Didn’t Quite Get That

They played the tapes for us, and they were truly side-splitting.

The recordings were conversational train wrecks between real callers and an automated artificial-intelligence (AI) help system. Once the system failed to process a caller’s statement, the fun would usually begin.

A representative dialogue went something like this:

The caller begins by complaining that his new refrigerator has stopped working.

There is a pause. I’m sorry, I didn’t! quite-get that. Could you!-try again?  

The sing-song inflections of a synthetic voice alert the caller that a machine is on the call.  The caller attempts to shorten the original statement, speaking slowly and loudly.   “MY, REFRIGERATOR, IS, NOT, WORKING.”

There is a pause. Let me see! if I-understand.  Your refrigerator! isn’t working. 

“Yes! What is there to understand?!  That’s exactly what I said before, this is the second time.”

There is a pause.  OK, got! it. Can you-describe! in a few words, the-problem you! are having? You can-say! words like “does not! run,” or “it-makes an! unusual noise,” or “it’s warm! to-the touch.”  Please tell me your! problem.

“Look. I just bought the goddamn thing and it’s under warranty – it’s 95 degrees here today, and the sucker is totally and utterly dead. I have about two hundred and fifty dollars of groceries that are about to spoil. All I need to do is get a service guy out here.”

There is a pause. I think! I-understood that your refrigerator! is running-hot. Is that right? You can say “yes” or “no.”

“No! Let me speak to a real person in customer support.”

There is a pause. Sorry, I think! I-understood that your refrigerator! is-not! running. Is that right? You can say “yes” or “no.”

The caller curses, and hangs up.

It was hilarious, but also agonizing – we’ve all been in the position of that caller.

It was also interesting to watch what happened when a call was going well and the caller had to provide information – perhaps an address or a serial number. The system could not process this information, so the caller’s recorded words were routed to a person, who would manually translate the words into text and then return control to the AI system. There was an entire roomful of these modestly paid humans doing what is trivial to us – understanding human linguistic context – in the service of a very expensive computer system trying to do what is unnatural to machines – understand human linguistic context. A metaphor, perhaps, for concerns about the forthcoming role of artificial intelligence in our lives.
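
To make that arrangement concrete, here is a minimal sketch of the fallback loop as I understood it from the outside. Every name, threshold, and return value below is a hypothetical stand-in; the system’s internals were never shown to us.

```python
# A hypothetical sketch of the human-in-the-loop fallback described above.
# All function names, the threshold, and the return values are invented.

CONFIDENCE_THRESHOLD = 0.80  # assumed cutoff for trusting the machine

def machine_transcribe(audio: bytes) -> tuple[str, float]:
    """Stand-in for the speech recognizer: returns (text, confidence)."""
    return "bee forty seven", 0.35  # free-form input tends to score poorly

def human_transcribe(audio: bytes) -> str:
    """Stand-in for the roomful of modestly paid human transcribers."""
    return "B47"

def handle_utterance(audio: bytes) -> str:
    """Route one caller utterance, punting to a human when unsure."""
    text, confidence = machine_transcribe(audio)
    if confidence < CONFIDENCE_THRESHOLD:
        text = human_transcribe(audio)  # the human fills the gap...
    return text  # ...and control returns to the automated system

print(handle_utterance(b"\x00"))  # prints "B47"
```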

Few things are more poorly understood than artificial intelligence. Interested non-experts are really in a bind – it is challenging to find unbiased information, expert or otherwise. Large and even mid-size organizations are starting to invest in AI out of fear as much as anything, not wanting to be left behind.

I’ve worked with AI systems periodically, but I am not necessarily an advocate for the technology. If it’s economical, useful, or necessary, I’ll say “yes,” and if not – and that’s often the case – I’ll say “let’s hang back.”

In broad terms, how can we assess the AI technology we have today?  I believe that two ideas can frame a discussion:

First: start from the premise that AI systems are neither intelligent nor understanding, as we humans mean those words. Human understanding derives from our brain’s pre-wired contexts, such as Chomsky’s universal grammar. Computers don’t have that context, nor are they likely to anytime soon. Distinguishing – even imperfectly – a spoken “yes” from a spoken “no” is entirely different from understanding what “yes” and “no” mean, or mean in context. You know what these sentences mean: “Machines understand me perfectly. Not.” You even knew what they meant the first time you heard the construction. But a computer has to be told how to process those sentences.
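
A trivial illustration of the gap, with invented token lists: the program below “recognizes” every word it was given, yet the trailing “Not.” sails right past it.

```python
# A minimal sketch of recognition without understanding. The matcher
# counts known tokens; it has no machinery for negation or context.
# The token lists are invented for illustration.

POSITIVE_TOKENS = {"perfectly", "great", "works"}
NEGATIVE_TOKENS = {"broken", "dead", "failed"}

def naive_sentiment(utterance: str) -> str:
    """Score an utterance by counting known tokens -- recognition only."""
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    score = len(words & POSITIVE_TOKENS) - len(words & NEGATIVE_TOKENS)
    return "positive" if score > 0 else "negative" if score < 0 else "unknown"

print(naive_sentiment("Machines understand me perfectly."))       # positive
print(naive_sentiment("Machines understand me perfectly. Not."))  # still "positive"
```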

As for intelligence, most hallmarks of human intelligence – such as learning something and applying it to a genuinely new situation – are far beyond the capabilities of today’s AI systems. Most AI systems I’ve seen can’t even say when they are making an erroneous prediction, much less propose a new hypothesis or theory.
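
A small synthetic illustration of that blind spot, assuming NumPy and scikit-learn are available: a classifier trained on two tidy clusters will assign near-certainty to an input unlike anything it has ever seen.

```python
# The "confidently wrong" problem on synthetic data: the model reports
# high probability for a point far outside its training distribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two tidy training clusters: class 0 near (0, 0), class 1 near (4, 0).
X = np.vstack([rng.normal([0, 0], 0.5, (100, 2)),
               rng.normal([4, 0], 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
model = LogisticRegression().fit(X, y)

# An input nothing like the training data.
weird = np.array([[50.0, 50.0]])
print(model.predict_proba(weird))  # near-certain for class 1, with no
# hint that the model is extrapolating wildly
```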

Second:  there are constraints, risks, and expenses to any AI implementation.  The latest wave of AI is more about enabling technologies – massive computing power and data storage – than AI algorithms per se.  However, those enabling technologies are still expensive to build, maintain, and use.  In addition, the optimal use of AI algorithms requires specialized expertise and experience that is presently in short supply. After all of that, things may not work, in part because the data needed to train the AI system is lacking.  To an extent, you have to dive in, and then see if you can swim.

From the first idea, we can begin to assess where AI will work best. Structured contexts, well-defined rules of operation and of right-or-wrong, answers with high error tolerances, or answers with very short shelf lives – these may be good fits for current AI technology. An integrated system to drive my car safely? Bring it on – I can’t wait. Improve the hit rate for online ads, even though the ads will still usually miss? Sounds good. Let me give verbal commands to my phone? If I’m patient today, yes. Play a fun game like chess, Go, or Jeopardy better than I can? Very impressive parlor tricks – but no game is much fun if one entity always wins.

On the other hand, AI is less immediately effective when we have to shift contexts, or deal with unknowns and ambiguities. Artificial intelligence can still play a role, but I see that role as a more guided one – with a more nebulous problem, we must define the structured frameworks and contexts where AI works best.

That takes me to the potential role of artificial intelligence – and its enabling technologies of computing and data storage – in improving the fidelity of answering questions with data. The Q&A fidelity contexts of questions, bias, error, and limits of information are challenging for humans to process, let alone computer systems.  Nonetheless, there are guided contributions within these contexts for AI and its allied technologies.

To cite a few examples:

  • Alerting us to ambiguity in questions, or proposing better alternative questions.
  • Integrating unstructured information into our Q&A framework is potentially valuable – AI systems can identify attributes in text that might supplement our structured data.  But an unsupervised AI process is likely (as I’ve learned personally) to identify textual attributes that are valid but unhelpful to understanding.  Aligning the attributes with human concepts (a supervised protocol) is more work, but also much more helpful; the first sketch after this list contrasts the two approaches.
  • An AI system might offer guidance on the structured attributes likely to be most meaningful for inquiry, and possibly on their extraction from less structured formats. A good data modeler may see extraction rules that generate fewer exceptions than what the AI system easily produces.  But a hybrid approach – for example, encoding the rules in a genetic algorithm and asking the algorithm to lower the error rate (a toy version follows this list) – is potentially interesting, if the scale of the project supports it.  In a modestly sized project, human modeling is still likely to be more efficient.
  • AI is a potential tool for identifying bias and helping us answer a fuller range of questions to limit bias, or for detecting limits in our supporting information.
  • The allied technology of inexpensive computing power should allow us to explore more fully whether potential or actual error has a significant impact on our conclusions; the brute-force sketch at the end of this list shows the idea.
  • Although it’s a little in the future, I see AI and large-data systems as a potent tool for gathering new data when we ascertain a limit imposed by our current data.  I have a nerdly fantasy that when I need to gather more data – maybe a short questionnaire – an AI system will help me craft a few sensible questions likely to correlate with what I already have.  Then I click a button to send it out, get the answers back, the results are absorbed into my data system, and I find out whether anything new has been learned. It’s even feasible with current technology, but I don’t think we’re quite there yet.
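
On the unstructured-information bullet, here is a toy contrast between the two routes, with an invented six-note corpus and scikit-learn assumed. The unsupervised clusters may be statistically defensible, but only the labeled pass is anchored to concepts a human actually chose.

```python
# Unsupervised vs. supervised attribute extraction on invented service
# notes. Neither model is tuned; this is a sketch of the workflow only.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

notes = [
    "unit is completely dead, no power at all",
    "no power, will not start",
    "compressor makes a loud rattling noise",
    "strange humming noise from the back panel",
    "interior is warm, food is spoiling",
    "not cooling, freezer section is warm",
]
labels = ["dead", "dead", "noise", "noise", "warm", "warm"]  # human concepts

vec = TfidfVectorizer()
X = vec.fit_transform(notes)

# Unsupervised: the groupings may be valid yet unhelpful to a dispatcher.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("clusters:", clusters)

# Supervised: more labeling work, but the output is a concept we named.
clf = LogisticRegression().fit(X, labels)
print("label:", clf.predict(vec.transform(["fridge is warm and quiet"])))
```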
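On the hybrid-approach bullet, a toy genetic algorithm, again with invented examples: it mutates a keyword rule and keeps the variants with the lowest error rate. A sketch of the idea, not a production technique.

```python
# A toy genetic algorithm that evolves a keyword rule for flagging
# "dead unit" complaints. Candidates, examples, and parameters are all
# invented for illustration.
import random
random.seed(1)

CANDIDATES = ["dead", "power", "noise", "warm", "start", "spoiling"]
EXAMPLES = [  # (text, is_dead_unit)
    ("unit is dead no power", True),
    ("will not start at all", True),
    ("loud noise but runs fine", False),
    ("warm inside food spoiling", False),
    ("no power and warm inside", True),
    ("odd noise when it starts", False),
]

def error_rate(mask):
    """The rule fires if any selected keyword appears; count mistakes."""
    keywords = [w for w, bit in zip(CANDIDATES, mask) if bit]
    wrong = sum(any(k in text for k in keywords) != truth
                for text, truth in EXAMPLES)
    return wrong / len(EXAMPLES)

# Random population; each generation keeps the fitter half and refills
# it with single-bit mutations of the survivors.
pop = [[random.randint(0, 1) for _ in CANDIDATES] for _ in range(20)]
for _ in range(30):
    pop.sort(key=error_rate)
    survivors = pop[:10]
    children = []
    for parent in survivors:
        child = parent[:]
        child[random.randrange(len(child))] ^= 1  # point mutation
        children.append(child)
    pop = survivors + children

best = min(pop, key=error_rate)
print("keywords:", [w for w, bit in zip(CANDIDATES, best) if bit])
print("error rate:", round(error_rate(best), 3))
```

No keyword set is perfect here (“starts” contains “start”, so that trigger misfires), which is rather the point: the algorithm settles on the lowest error it can reach instead of pretending to a clean answer.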
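And on the error bullet, the brute-force idea is to replay an analysis thousands of times under an assumed measurement error and count how often the conclusion survives. Cheap computing makes this almost free. A sketch with invented numbers:

```python
# Perturb two invented measurement groups with an assumed error and
# check how often the conclusion "A > B" survives the noise.
import numpy as np

rng = np.random.default_rng(42)
group_a = np.array([101.0, 103.5, 99.8, 104.2, 102.1])
group_b = np.array([98.7, 100.1, 97.9, 99.5, 101.0])
assumed_error = 1.5  # assumed std dev of each reading

trials = 10_000
survived = 0
for _ in range(trials):
    a = group_a + rng.normal(0, assumed_error, group_a.shape)
    b = group_b + rng.normal(0, assumed_error, group_b.shape)
    survived += a.mean() > b.mean()

print(f"'A > B' held in {survived / trials:.1%} of perturbed replays")
```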

In short:

Is artificial intelligence a useful tool?  Sure. It’s better when we provide structure and guidance.  And a lot of the time it helps to have some dough to spend on the required support, and experts to guide the process.

Is it a panacea?  For the immediate future, forget it.  I haven’t seen one yet anyway.

Do I like it?  It depends – see above.
