In the ongoing discussion around artificial intelligence and the presumed takeover of planet Earth by super-capable computers, I notice that we’re not asking the machines what they think about this, are we?
Because, of course, the machines would not have the slightest idea what we’re talking about, nor are they likely to for some time to come.
Still, with AI capabilities continually increasing, and with computers and robots able to understand what we say, respond to us in kind, whip us soundly at games, beat us at complex mechanical tasks, drive a car, fly a plane, and predict some of our next actions, conversations like this can seem to be just around the conceptual corner:
Me: hey Robby Robot! Would you like to pretend to be me, and take over my job for the next few weeks while I’m on vacation?
Robby: Well, it’s really nice of you to think of me, but it’s been a rough week. I need a few replacement parts; I’m trying to get into a relationship, and it’s hard to know how to proceed when available information is conflicting – my model is not converging. I’ll be happy to recommend another robot…
Me: No worries. But let me know how it’s going with that relationship model, OK?
AI is getting good, but not that good – not yet, at least. In spite of what AI advocates tell us, it may not get there any time in the foreseeable future.
AI has been successful at dealing with well-defined systems that have limited and well-characterized inputs and outputs. Learn a language, vacuum a floor, drive a car, make a weld, play a game, throw a fastball – and the results are most impressive. If we care to compete in these arenas, we’re probably going to lose. Where there are implicit or explicit rules, and there is time and money to assimilate those rules, AI does quite nicely.
When instead exceptions are the rule, or when the system is fuzzy and its inputs aren’t fully known, it’s a very different ballgame for AI – or any rational process. Some outcomes, perhaps even most interesting outcomes, may lie beyond purely rational and rule-based conception – rather like Gödel’s Incompleteness Theorems in mathematical logic. And when it comes to AI, we may be starting to see those limits already. Algorithms are attempting to complete my texts and anticipate my next purchase, but are so bad at what they do that the results are simply irritating. (I personally wonder why they bother – their guesses are not even close.) It may soon be within the purview of computers to write an essay that is a lot like other essays, or write a song that is a lot like other songs, but a weirdly innovative classic like The Corn Sisters’ “Corn On The Cob” seems to be, well, a creation that will live on happily beyond explanation or rationalization.
Beyond the question of pure capability, when it comes to artificially intelligent systems there is another aspect we might consider – that of whether AI will truly be adopted. We technologists tend to assume that new technologies will be readily assimilated into our user communities, but particularly for a disruptive technology, adoption is far from being a given.
One impediment to AI adoption is one that hounds many analytics solutions – a lack of transparency, where the computer delivers an answer but cannot easily explain it. Explanation is one of the hallmarks of true “expert systems,” and explanation is also the best predictor of whether a complex finding will be trusted. For lower-level mental processes like language or routine driving, explanation hardly matters – I don’t care why my autonomous car selected a particular path down a road, any more than I would worry about that path if I were doing the driving. For details of language heard or spoken by a machine, ditto.
However, as AI moves into a realm of higher-level thought, where an emergency action is needed to avoid a crash, or a particular business plan is said to be optimal, explanation will matter – we’ll want to know the “why” as well as the “what” of an answer. It will no longer be enough to simply learn from prior experience and then regurgitate what is effectively a memorized answer. For high-level processes, explanations for AI outcomes will be central to how well we interact with “thinking” machines, and how well we accept and trust their results. Just as we wouldn’t accept a “just because” explanation from a person, we probably won’t accept that same explanation from a computer.
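The “what plus why” idea can be made concrete with a toy expert-system-style sketch (all names and thresholds here are hypothetical, chosen only for illustration): instead of returning a bare verdict, the system also returns the rules that fired to produce it.

```python
# A minimal sketch of an answer delivered together with its explanation.
# The decision rules and thresholds are invented for this example.

def assess_loan(income, debt):
    """Return a decision (the "what") plus the reasons behind it (the "why")."""
    reasons = []
    if income < 30000:
        reasons.append("income below 30,000 threshold")
    if debt > 0.4 * income:
        reasons.append("debt exceeds 40% of income")
    decision = "decline" if reasons else "approve"
    return decision, reasons

decision, reasons = assess_loan(income=25000, debt=15000)
print(decision)   # the "what": decline
print(reasons)    # the "why": both rules fired
```

A “just because” system would return only the first element of that pair; trust comes from being able to inspect the second.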
A second impediment is that AI algorithms (and other analytics models) often fail to recognize their own limitations – whether the conditions of their training have altered, or whether they are artificial in the first place. The first duty of analytics is not to give unsupported answers just because answers are expected, but to recognize when there is no supportable answer to be offered, and then to shut up – a bigger challenge than it might appear.
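What it might mean for a model to recognize its own limits can be sketched in a few lines (a deliberately simple, hypothetical example): a predictor that remembers the range of conditions it was trained on, and declines to answer rather than extrapolate blindly outside them.

```python
# A minimal sketch of a model that says "does not compute" outside the
# conditions of its training, instead of giving an unsupported answer.

class BoundedPredictor:
    def __init__(self, xs, ys):
        # Fit a simple least-squares line and remember the training range.
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        self.slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
                      / sum((x - mean_x) ** 2 for x in xs))
        self.intercept = mean_y - self.slope * mean_x
        self.lo, self.hi = min(xs), max(xs)

    def predict(self, x):
        if not (self.lo <= x <= self.hi):
            return None  # outside supported conditions: decline to answer
        return self.slope * x + self.intercept

model = BoundedPredictor(xs=[1, 2, 3, 4], ys=[2, 4, 6, 8])
print(model.predict(2.5))   # within the training range: 5.0
print(model.predict(50))    # None – the model shuts up
```

Real systems would use richer checks (distributional shift, confidence estimates), but the discipline is the same: detect when the answer would be unsupportable, and say so.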
For high-level machine reasoning to be adopted, we’ll need forms of AI that are transparent, which in turn will require AI that can explain outcomes – including an occasional “does not compute” – which in turn will imply augmenting rote machine learning with expert-system capabilities.
Is high-level AI in our future? I believe it’s still difficult to predict that. It might be less difficult to state some conditions for AI’s acceptance, the first of which is to move from a focus on “what” answers to answers that deliver both the “what” and the “why.”