Many of our answers to why questions are essentially “It just is,” or “I just do.” When asked why one candidate is better, another candidate is unfit, one team is better, one country is superior, a certain policy is ideal, an investment should be made, or something is attractive, our response often boils down to A is better than B, with no further explanation than an appeal to experience or Tradition! It just is.
That’s OK sometimes – we need conventions and axioms just to get through the day. I will drive on the right-hand side of the road today, as I don’t care to die on my way to work. And while I’m commuting, it’s pointless (and dangerous) to ponder why everyone is driving on the right. I should just accept the fact – It just is. (Still, it is a rather intriguing historical question.)
Nonetheless, for centuries we’ve accepted that when answers to questions become important, it is also important to provide a rationale that others can understand. The ability to explain why an answer has merit has been an acknowledged attribute of Western thought since the Enlightenment generally, and in specific endeavors long before that. It’s how we vet assertions and improve our own understanding of how things work.
Personally, I wouldn’t question the value of rational explanation. But I wonder if our society, in a supposedly rational age, has devalued explanation to the point that it is irrelevant to most discourse. Look no further than the latest political attack ad, or “The Internet” being cited as an authority – just two examples in which unfiltered and unassessed assertions supplant rational explanation.
Why we don’t seem to go in for explaining why these days is, well, speculative, but I see several factors. It’s always been easier and more efficient to follow someone else’s reasoning than to figure out something for ourselves. Amplifying this, we now find it much easier to look something up than it has ever been. Finding accurate information may not be easier, but we can quickly feel that we’re making progress and, perhaps more importantly, find something reinforcing our existing notions. Some kind of answer is usually available – for if we believe it, someone else probably does also.
Related to that, through repetition and reinforcement we can come to accept that a looked-up “fact” of uncertain quality is as good as an explanation. We primarily value what we know and spend time on. So if we’re not spending much time seriously assessing what we look up, well…
In addition, getting instantaneous information is addictive. For myself, I find there is a definite hit in getting a quick answer. With an increasing “need for speed,” facts take precedence over explanations, as we hardly have time for the latter. Statements become proxies for their own unconsidered explanations.
But statements are often not as good as the explanations – even in engineering, science, and computing, this distinction can get lost. There can be great pressure to simply deliver results (i.e., answers). We regularly build and work with computational and analytics models that are much better at delivering answers than at helping us understand those answers, and that’s only OK when understanding is irrelevant. Rationalizing an answer is more work – often a lot more, especially for elaborate models. It entails either analyzing the model answers we’ve obtained, or modeling the reasoning process itself (e.g., as with true rule-based expert systems).

A quick example. I once worked on a project involving genetic algorithms, a machine-learning technique for optimization. Our goal was to determine a best-performing chemical structure, and we tried this two different ways. The first was to work with the chemical structures directly and let the computer randomly try different structures, looking for a better answer. The second was to encode the rules for how to construct the material, and let the computer tweak that recipe rather than the structures. That might sound the same, but the first method was like baking 100 cherry pies and then taste-testing for the best pie, without any insight into how each pie was actually baked. In the second method we would also test for the tastiest pie, but at the end we get to read the secret recipe that made it the best.

In our project, when the first method (working with the structures) gave a result, it was telling that no one really cared much, as we couldn’t say why that result was optimal. It wasn’t disbelieved, but it wasn’t really believed either. When we optimized the recipes themselves, on the other hand, two things happened. First, experts instantly understood why the result was good, and believed it even though they might not have guessed it a priori. Second, they could intervene in interim results and help the computer along to an even better answer, because the instructions were available. The nominal performance of the two approaches was about the same. But while the second approach was technically more difficult, it was also far preferable in terms of acceptance and utility.
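The contrast between the two encodings can be sketched with a toy genetic algorithm. Everything here is a hypothetical stand-in, not the chemistry from the project: a bitstring plays the role of the chemical structure, a list of (position, value) instructions plays the role of the build recipe, and the fitness function is an artificial score. The point is only the difference in what you get back – the direct version hands you an answer with no rationale, while the indirect version hands you a recipe an expert could read, question, or tweak mid-run.

```python
import random

random.seed(0)  # reproducible toy run

# Hypothetical "optimum" the GA must find; fitness counts matching positions.
TARGET = [1, 0, 1, 1, 0, 1, 0, 1]

def fitness(structure):
    """Score a candidate structure by similarity to the optimum."""
    return sum(1 for a, b in zip(structure, TARGET) if a == b)

def step(pop, score, mutate):
    """One elitist generation: keep the top half, add one mutant per survivor."""
    pop.sort(key=score, reverse=True)
    parents = pop[: len(pop) // 2]
    return parents + [mutate(p) for p in parents]

# --- Method 1: direct encoding – evolve the structure itself ----------------
def mutate_structure(s):
    child = s[:]
    child[random.randrange(len(child))] ^= 1  # flip one bit of the structure
    return child

def evolve_direct(pop_size=30, generations=300):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop = step(pop, fitness, mutate_structure)
    return max(pop, key=fitness)  # a good answer, but no rationale attached

# --- Method 2: indirect encoding – evolve a readable build recipe -----------
def build(recipe):
    """Construct a structure by following (position, value) instructions."""
    s = [0] * len(TARGET)
    for pos, val in recipe:
        s[pos] = val  # later instructions overwrite earlier ones
    return s

def random_instruction():
    return (random.randrange(len(TARGET)), random.randint(0, 1))

def mutate_recipe(r):
    child = r[:]
    child[random.randrange(len(child))] = random_instruction()
    return child

def evolve_recipe(pop_size=30, generations=300, recipe_len=10):
    pop = [[random_instruction() for _ in range(recipe_len)]
           for _ in range(pop_size)]
    score = lambda r: fitness(build(r))
    for _ in range(generations):
        pop = step(pop, score, mutate_recipe)
    return max(pop, key=score)  # the "secret recipe" itself, open to inspection

if __name__ == "__main__":
    print("direct best structure:", evolve_direct())
    best_recipe = evolve_recipe()
    print("best recipe:", best_recipe)
    print("structure it builds:", build(best_recipe))
```

Both methods converge to about the same fitness on this toy problem, mirroring the project’s experience: the difference is that the second returns an artifact a human can read and intervene on, not just a result.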
If we’re going to use analytics and artificial intelligence to tell a machine how to clean our rugs, play games, or even drive our cars, I’m not sure how much explanation we need – a blind answer is probably good enough. These are answers of relatively low value or short shelf life, so we can train the machine in any way that gives good answers. It just is works in those cases. On the other hand, if we’re going to use these tools to diagnose disease, develop drugs, solve business problems, or figure out who to kill on the battlefield, the underlying reasoning becomes a lot more relevant. If in those cases we ask why and hear it just is, or its modeling equivalent, in response, that won’t be any more acceptable than the same response from a doctor, or pharmacist, or business analyst, or soldier. Rational and comprehensible responses to why still matter, no matter who or what is answering. We shouldn’t let a computer-driven information age lull us into thinking otherwise.