Simplicity: Taken For Granted?

Engineers and scientists will testify with near-unanimity: it is surprisingly simple to create a complex solution, and surprisingly complex to render a simple solution.

Particularly as our systems grow larger, complexity tends to happen all on its own, without any help from us.  Still, we do sometimes give complexity help it surely doesn’t need.  We expand our systems with data of dubious value, to the point they are difficult to maintain; we twist problems to meet the tools we know; we ask algorithms to do jobs that could be better handled by an improved problem formulation or data representation.

Complex constructions all too quickly yield systems in which the connection between questions and outcomes becomes unclear to users, and even to developers.

The cost of complexity is simple:  opaque and complex systems may pass muster when outcomes are expected, or unexamined, or simply ignored. However, when people do not understand an unexpected or interesting result, they will not accept the outcome.   When we need outcomes the most, complexity defeats their useful application.

It’s simple enough to complain about complexity.  But more to the issue, we often take simplicity for granted. Simple solutions are often very unassuming, naturally connecting inquiry to answers.  And because they can be unassuming, we find ourselves assuming that what is simple now was also simple throughout its development, rather than the product of iteratively refining questions, models, and transforms, and of continually removing what is unessential and complicating.

Perhaps ironically, tools now falling under the umbrella of “data science” are some of the most potent tools available for maintaining simplicity, offering the opportunity to simplify data models and representations, and to set aside data with little probative value.  These tools are best applied at design time and used throughout development, for simplicity is much easier to sustain than to retrofit onto a large, complex, and essentially immobile system.

How often do we add low-value information to our systems – making them large and inflexible – motivated by our worry that once the system becomes large and inflexible we won’t be able to make a change later?   I understand the reasoning, but a smaller and more manageable system should never have this issue – new information with genuine value can always be added later.  That’s as it should be:  asking good questions is almost always an iterative process, rarely gotten entirely right the first time through.

I often see over-complex systems, too big to change, long after change is precisely what’s needed for better adoption or improved inquiry.  However, I don’t believe I’ve ever seen a business intelligence or analysis system that truly had to be that way.   Simplicity isn’t impossible to achieve, but it is hard work.

Those crafting simple and transparent solutions deserve our appreciation for what truly is a complex – but very worthwhile – task.

Particularly as we contemplate larger and larger data systems, simplicity is a worthy design and development goal.  It is simplicity – not complexity – that brings solutions to our most challenging problems, where questions, answers, data, and metrics will evolve to their final form.   Simplicity offers us comprehension; comprehension brings challenges to our early outcomes; those challenges bring improvement; from improvement we reach adoption; and with adoption we can assure impact.    And all of that change and iteration is only possible with solutions that begin – and stay – as simple as possible.

 

Two Out of Three

“A writer needs three things, experience, observation, and imagination, any two of which, at times any one of which, can supply the lack of the others.” ― William Faulkner

Right on, and not only writers.   Observation and imagination are underrated in information work. A writer’s imagination and observational skills can be perfect raw material for an analyst, while focused technical experience may not help with understanding another person’s questions, or interpreting their analytics outcomes.

Observation, imagination, and experience all matter in analysis, but the origin of those skills is increasingly irrelevant, as tools become mainstream and simpler to use.

The perspective of creative disciplines may be closer to real-world questions than the perspective of technical professions in many areas, including human resources, reporting, social listening, and politics.

If we consider analysis the province of a purely technical community, and focus largely on technical aspects, I believe we’re missing out where it really matters: developing the right questions and really understanding the resulting answers.    The combination of technical and creative modes of thought, working together to understand questions and interpret answers, is something that we can and should welcome.

When More Really Is Better

Analytics might be defined as people asking questions and deriving answers using data. Even as our computational and data capability has transformed in the last 25 years, that definition needs little alteration.

Analytics truly is an ancient and fundamentally human activity, now amplified and augmented by modern computing capabilities.   The essential algorithms and processes used in analytics have changed far less than our capability to execute them.   If you look at textbooks from the early 1990’s, most elements of current analytics thinking are there.

And now, almost anyone with an interest and internet access can apply analytics tools, processing, and thinking to their daily activities.   What was once the province of a relative few with access to arcane and difficult-to-use tools is now widely, and almost freely, available.

Is that good? Absolutely.  The more, the merrier.   If I could, I’d invite anyone working with data and an interest to Columbus for a one-week short course in exploratory analysis.   That short course wouldn’t make people expert at deriving any analytics answer, but it would make people aware of the questions analytics addresses, and some of the thought process in addressing those questions.  And that’s a start.

A great contribution from analytic thinking is that it makes for better discussion and problem-solving all around.  Good analytics works in the realm of verifiable facts – the invariable basis for informed discussion. The alternative is to ignore analytics tools and processes, resulting in a continuation of the trivial arguments which pass for much of discussion today.  Can people hurt themselves using complex tools when they are just starting?  Sure – trust me, I’ve been there.  But that’s OK – mistakes in analytics are part of analytics, and working within the process is ever so much better than working outside of it.

Analytics also improves dialogue through its fundamental recognition of limits, and its sometimes irritating dismissal of absolute truths. The first duty of analytics is frequently to identify the limits of analytics outcomes themselves – while we may not discuss that often,  it’s fundamental to analytics nonetheless.  Analytics tells us that we don’t really know how good our knowledge is until we break it – every model, every theory, every data set, every process has its limits. Finding those limits is frequently the topic of good and creative analytics work.

Beyond better dialogue, why is it desirable for more people to apply more analytics more-or-less all of the time?  Because analytics results depend on context –  biases, uncertainties, nuances of question, interpretation of answers, implicit metadata, and the entire universe of subject-matter knowledge – and those supplying that context, rather than data experts, are the ideal people to apply analytics in the furtherance of knowledge and ideas.  Applying analytics without the nuances of problem context is like using a chain saw to trim a tree, based only on a rough idea of what a tree should look like.  Context and problem knowledge can, should, and do rule the problem-solving process – if you like, it’s data we can’t do without.

Then do analytics experts matter?  Of course.  They matter in the same way that experts in storage, in databases, in visualization, in application development, or in a score of other data-related disciplines matter – as experts helping people to understand and solve data-related problems.  But integration, context, and collaboration are the order of the day if we’re to move forward, and I’ve minimal patience for the idea that data science, or data scientists (or any other technical discipline), somehow stands apart from or even above the general flow of problem-solving progress.  Eighty percent of analytics problems are solved by 20 percent of the techniques, and everyone everywhere should be encouraged to use those techniques whenever and wherever they can – in database design, in data assessment, in performance tuning – you name it.

There really is so much to accomplish, and analytics can help with accomplishing it – this time, more really is better.

Cleared for Takeoff

I did it again. In a bit of a rush, I looked at these numbers from a database table:

34.1

34.8

and I said to myself: Great! 34.8 is greater than 34.1. That’s what I expected.

And unfortunately, it was also incorrect.  Each of the numbers was a sum of sales figures from two different regions, but a significant percentage of the regional assignments were incorrect.  Enough to produce an error of up to plus-or-minus one in each number; enough to render the two numbers statistically equivalent.

It’s tempting to argue that when data quality problems create ambiguous comparisons – numbers that “should” be different but actually are not – we should continue to compare two numbers as if they were exact – after all, when those data quality problems are fixed then the exact comparison will be valid, right?

Not exactly…. When, and if, the data quality problems are fixed, then some comparison will be valid, but at this point, before the fix, we don’t know what the final result will be.    The “corrected” 34.1 might be bigger, or the “corrected” 34.8 might be bigger – it’s too soon to tell.

The presumption of precision when we see a number can be difficult to set aside, but in reality that presumption is just prejudice – the prejudice that our numbers and our data are as perfect as we want them to be.   Until we’ve tested and proved the quality of our numbers, we might be far better served to take the opposite, perhaps radical presumption: that when comparing any two numbers in our database there is no meaningful difference until we have proof to the contrary.   There are no significant differences, no conclusions to be drawn, no decisions to be inferred,  nothing of value; a net worth of zilch-point-nada until proven otherwise.
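
For the curious, here’s a minimal sketch of that presumption in Python. The plus-or-minus one uncertainties come from the example above; the quadrature rule for combining them is an assumption chosen for illustration, not a prescription.

    # A minimal sketch: treat two totals as different only when the gap exceeds
    # their combined uncertainty.  Combining in quadrature is one common
    # convention, assumed here for illustration.
    from math import sqrt

    def meaningfully_different(a, b, u_a, u_b):
        """True only if |a - b| exceeds the combined uncertainty of a and b."""
        combined = sqrt(u_a ** 2 + u_b ** 2)
        return abs(a - b) > combined

    # The regional sums from above, each carrying roughly plus-or-minus one:
    print(meaningfully_different(34.1, 34.8, 1.0, 1.0))   # False - no proven difference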

Is it radical to ask our data systems to prove their support for our conclusions? Perhaps. But that radical approach is precisely how other engineers deal with uncertainty when the outcome is critical, in systems ranging from nuclear power plants to aircraft.   Engineers and operators always presume these systems will fail until they have proven they can operate successfully.  The presumption of failure doesn’t always guarantee success, but it does guarantee a safe and harmless failure.

In one of the most reliable events we ever encounter – the successful takeoff of an aircraft from a runway – pilots are trained to operate on the premise that the plane will not be able to fly, including procedures to abort takeoffs well after the plane has begun its roll down the runway.   That might seem a little scary, but it’s truly the very safest approach – by having the plane prove it is capable of liftoff,  pilots are optimally prepared for those very rare circumstances when the plane really should not leave the ground.

Why not do the same thing with our data – ask it to prove that it’s worthy of supporting the comparisons we’re planning to make? If we start with the presumption that our data cannot support any conclusion of true difference  – what statistical thinking usually recommends –  we’ll rightly conclude only what we can prove, rather than wrongly conclude what we merely assumed.  Isn’t that how it’s supposed to be?  Aren’t our data systems worth that level of care?

But realistically, our practice of often developing and applying partially-unproven data systems will not change overnight, for a couple of good reasons.

First, the creative minds of data architects and developers naturally rebel against any task that is tedious, or might tarnish the capabilities of their creations – and I don’t blame them a bit. Uncertainty analysis can be both tedious – as if we had combined the worst aspects of writing documentation with an interminable mathematics problem set – and tarnishing: the best outcome is that our data are no worse than we hoped, and they probably will be worse than we hoped.

In fact, uncertainty analysis poses real creative challenges, ranging from estimation to probabilistic problem-solving.  And it needn’t be as depressing as it might seem – if we work from the premise that our data are not, after all, perfect, uncertainty outcomes can become part of our requirements, delivering the positive and definite data-quality targets that must be met if our questions are to be accurately answered.
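
To see how that works, here is one hedged illustration: if the comparisons we care about must resolve gaps as small as the 0.7 between the totals above, we can back out how accurate each total needs to be. The quadrature combination is again an assumed convention, not a rule from the original analysis.

    # A minimal sketch of turning a question ("can we resolve a 0.7 gap?") into a
    # data-quality target (how much uncertainty each total may carry).
    from math import sqrt

    def per_total_uncertainty_budget(smallest_gap, n_totals=2):
        """Largest allowable uncertainty per total, if totals combine in quadrature."""
        return smallest_gap / sqrt(n_totals)

    print(round(per_total_uncertainty_budget(0.7), 2))   # 0.49 - far tighter than the +/-1 we had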

Second, our biases come into play.   We want to be right (confirmation bias), and we especially want to be right after investing considerable time, effort, and dollars in our creation (sunk-cost bias).    Bringing uncertainty into the mix forces us to swim upstream against powerful emotional factors.

 

Still, if our data systems really matter, we should increasingly shift to a prove-the-capability mindset when launching them, just as we do in any endeavor using a complex system that is critical to our well-being.  That’s when “Cleared for takeoff” does not mean “have a nice trip,” but rather means “start the procedures that will prove your system will do the job for which it was designed.  Then, have a nice trip.”

Unless we want to argue that our data systems really don’t matter much…. No, me either.

Just Throw It Away

On several occasions, including today while talking with a longtime colleague, I’ve threatened to make my professional epitaph “‘I might need that data someday’ is not a valid use case!”

Seriously:  what we store now is unlikely to be useful ten, five, or even two years from now, and perhaps we should rethink our current approach to data storage, which so often defaults to “let’s hold on to that.”

Most data records have a shelf life on the order of months and a value not even worth speaking of – especially for big data collections.  Look:  if the 1000 billion records that we’ve so assiduously collected were worth even a penny apiece, we would be multi-billionaires.  (The last time I checked, we are not.)  In fact, much of our data probably has negative value:  it is never actively queried, but still costs time and money to store and maintain.

I know.  Organizations gather information like obsessed antique collectors, and the urge to keep a currently handy file, table, or database is almost irresistible.  But we should resist, because the context to make sense of these bits of info-junk will very likely be lost as soon as we pull them into our personal information attic.  And any data context that we haven’t completely, fully, and entirely documented will disappear like a lead weight sinking to the bottom of a muddy lake.   The data will remain but, untethered from their original context, may become worse than useless – without context, they could very well be used wrongly later.

“Data lakes” are now all the rage, but they don’t address the issue of declining data value and the natural loss of the context that makes most data meaningful. To load something into a “data lake” is the information equivalent of going to one of those old-fashioned hardware stores, asking for a random selection of nuts, bolts, washers, and screws, putting them in a box, and storing that box in our attic.  Then, five years from now, if we even remember that we went out and bought that random box of unassigned parts, we’ll not know where we put the box, nor remember how to find what we want, and quite possibly, after a frustrating search through all N pieces of junk, we’ll realize that what we need isn’t there: our new and modern equipment doesn’t use the kinds of nuts, bolts, washers, and screws that we purchased in anticipation of our future unforeseen need.

I’m no better.  I have files on my laptop from the 1990’s.  But when, as an experiment, I dug into one of those old directories recently, I found that I could not name the purpose of a single file by looking at its name, and in most cases could not give the purpose even after opening the file.

Why do we do it? Why store things we know we may never use, or even remember we stored?   In part because we believe that the cost of storage is zero, which it is not;  that the value of data is constant, which it isn’t; and that the context we have in our heads will remain there, which it won’t – our heads will be filled with other new and interesting things in the future.

Perhaps we should design our systems to auto-archive any data older than six months, unless the data can be shown to have a value exceeding the cost of storage and maintenance, a known use, and meaning to someone who would not normally use it – the latter being a test of whether good context is available.  Data that are representative, or that support exploratory and predictive analysis, can be allowed to stay; as for the rest – they can’t pay the rent. It’s hasta la vista, baby.
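
As a rough sketch of what such a rule might look like in code – the DataAsset fields, the six-month cutoff, and the value test below are hypothetical stand-ins, not any real system’s schema:

    # A minimal sketch of the auto-archive rule described above.  All field names
    # are illustrative; a real system would supply its own metadata.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    RETENTION = timedelta(days=183)      # roughly six months

    @dataclass
    class DataAsset:
        last_used: datetime              # when anyone last queried the data
        estimated_value: float           # proven value, in dollars
        storage_cost: float              # storage and maintenance cost, in dollars
        documented_use: bool             # a known, named use exists
        context_documented: bool         # meaningful to someone outside the project

    def should_archive(asset: DataAsset, now: datetime) -> bool:
        """Archive anything older than six months unless it proves its keep."""
        if now - asset.last_used < RETENTION:
            return False
        earns_its_keep = (
            asset.estimated_value > asset.storage_cost
            and asset.documented_use
            and asset.context_documented
        )
        return not earns_its_keep

    stale = DataAsset(datetime(2016, 1, 1), 0.0, 120.0, False, False)
    print(should_archive(stale, datetime(2017, 6, 1)))   # True - hasta la vista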

Holding on to valueless data entails not only a direct cost, but an indirect one:  data clutter slows or cripples the applications that explore, visualize, and predict from the information that really does have value.   Big data for a proven business or research purpose is laudable, but big data for an unproven purpose – big infojunk – means the cost of storage and maintenance now, and the cost of obscuring valuable information and insight later.

Data value, like any other kind of value, is a thing to be proven,  not something to be assumed.   There is no “maybe someday” for data value, only what can be shown to be useful now.  For if we can’t prove that value now, we’re very unlikely to have the context to prove it later, after the data have been sitting ignored in our info-attic or data warehouse for a couple of years. And what of those data without value, that we’ll unceremoniously jettison?  We can say thanks if we like, but we should throw that data away.

In Proportion

Show me a proportionality – an output increasing directly with an input – and I’ll show you one cornerstone of most good analyses.

Proportions are simple; proportions mean that the more we put in, the more we get out; proportions make apparent the impact of turning an input off or on; proportions help us accept results because they are very believable.

Even complex models are often constructed from a set of proportionalities working in combination.  We may ultimately move on from proportional thinking in our model, but proportionality is usually a good place to start.
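
As a small, purely illustrative sketch of that starting point – the inputs and coefficients below are invented, not fitted to anything – a “complex” model can be nothing more than proportional terms added together:

    # A minimal sketch: a model assembled from simple proportional terms.
    def model(x1, x2, x3, k1=2.0, k2=0.5, k3=1.5):
        """Each term is proportional to its input; the output is their sum."""
        return k1 * x1 + k2 * x2 + k3 * x3

    print(model(1, 1, 1))   # 4.0
    print(model(2, 1, 1))   # 6.0 - doubling x1 doubles only x1's contribution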

Which brings me to the topic of guns, and their now commonplace application in dispute resolution.

On any given day, the number of angry and unstable people will be about the same, and the number of such people who desire to terminate their imagined adversaries — whether they be college students, grade school kids, politicians playing baseball, or just  random people – is also about the same.

On the other hand, the number of people executing their plans will be proportional to the number of people equipped with weapons for execution.   Without weapons, no one gets shot, and one person’s bad day doesn’t become a bad day for anyone else in reach of one gun’s bullet.

And with weapons? Well, we know the results. After the latest shootings yesterday in the Washington DC area, some argued that the problem of excessive shootings is best resolved by making more weapons available.   Proportional reasoning would dispute that:  should someone start shooting there will be proportionally more people ready and able to fire back, with proportionally more wounded or dead people as a result.

Still, when it comes to weapons and shootings, is proportional reasoning somehow wrong? Is there a deterrent effect when a deranged person enters a schoolyard with intent to kill, and every teacher, administrator and student over the third grade is packing an automatic weapon?  That’s the only potentially valid argument I can see, for after-the-fact firing will never heal the wounded, nor resuscitate the dead.   But deterrence is an almost surreal reach,  supposing that a person with badly distorted emotions will respond in a rational and programmed way to a deterrent of any kind, much less to a threat of force.   Calm, seasoned diplomats and rational governments respond to deterrence. Guys with guns on playgrounds are another matter.

Any argument implying that more weaponry somehow yields fewer deaths is inherently non-proportional, when a simple and proportional one is available.   From the days of Occam’s razor to our modern age of instantly-available information and weaponry, the same rule has applied to formulations,  decisions, and policies everywhere:  keep it simple. Keep it proportional.  Fewer inputs mean fewer outputs; fewer weapons imply fewer shootings; fewer shootings imply fewer deaths.

Complex answers should only prevail when the simple answer is demonstrably wrong, and that’s far from the case here.  A complex, non-proportional argument about the use of force, that most fundamental “more is better” concept, is a waste of time and an insult to sensible citizens everywhere.

 

AIeee!

In the ongoing discussion around artificial intelligence and the presumed takeover of planet Earth by super-capable computers, I notice that we’re not asking the machines what they think about this, are we?

Because, of course, the machines would not have the slightest idea what we’re talking about, nor are they likely to for some time to come.

Still, with AI capabilities continually increasing, and with computers and robots able to understand what we say, respond to us in kind, whip us soundly at games, beat us at complex mechanical tasks, drive a car, fly a plane, and predict some of our next actions, conversations like this can seem to be just around the conceptual corner:  

Me: hey Robby Robot! Would you like to pretend to be me, and take over my job for the next few weeks while I’m on vacation? 

Robby:  Well, it’s really nice of you to think of me, but it’s been a rough week. I need a few replacement parts; I’m trying to get into a relationship, and it’s hard to know how to proceed when available information is conflicting – my model is not converging.  I’ll be happy to recommend another robot…

Me:  No worries.  But let me know how it’s going with that relationship model, OK?

AI is getting good, but not that good – not yet, at least. In spite of what AI advocates tell us,  artificial intelligence may not be that good any time in the foreseeable future.

AI has been successful at dealing with well-defined systems that have limited and well-characterized inputs and outputs.  Learn a language, vacuum a floor, drive a car,  make a weld, play a game, throw a fastball – and the results are most impressive.  If we care to compete in these arenas, we’re probably going to lose.   Where there are implicit or explicit rules, and there is time and money to assimilate those rules, AI does quite nicely.

When instead exceptions are the rule, or when the system is fuzzy and its inputs aren’t fully known, it’s a very different ballgame for AI – or any rational process.  Some outcomes, perhaps even most interesting outcomes, may lie beyond purely rational and rule-based conception – rather like Goedel’s Incompleteness Theorems in mathematical logic.  And when it comes to AI, we may be starting to see those limits already. Algorithms are attempting to complete my texts and anticipate my next purchase, but are so bad at what they do that the results are simply irritating.  (I personally wonder why they bother – their guesses are not even close. )  It may soon be within the purview of computers to write an essay that is a lot like other essays, or write a song that is a lot like other songs, but a weirdly innovative classic like The Corn Sisters’ “Corn On The Cob”  seems to be, well, a creation that will live on happily beyond explanation or rationalization.

Beyond the question of pure capability, when it comes to artificially intelligent systems there is another aspect we might consider – that of whether AI will truly be adopted. We technologists tend to assume that new technologies will be readily assimilated into our user communities, but particularly for a disruptive technology, adoption is far from being a given.

One impediment to AI adoption is that which hounds many analytics solutions – a lack of transparency, when the computer delivers an answer but cannot easily explain that answer.  Explanation is one of the hallmarks of true “expert systems,”  and explanation is also the best predictor of whether a complex finding will be trusted.  For lower-level mental processes like language or routine driving, explanation hardly matters – I don’t care why my autonomous car selected a particular path down a road, any more than I would worry about that path if I were doing the driving.   For details of language heard or spoken by a machine, ditto.

However, as AI moves into the realm of higher-level thought, where an emergency action is needed to avoid a crash, or a particular business plan is said to be optimal,  explanation will matter – we’ll want to know the “why” as well as the “what” of an answer.  It will no longer be enough to simply learn from prior experience, and then regurgitate what is effectively a memorized answer.  For high-level processes, explanations for AI outcomes will be central to how well we interact with “thinking” machines, and how well we accept and trust their results.  Just as we wouldn’t accept a “just because” explanation from a person, we probably won’t accept that same explanation from a computer.

A second impediment is that AI algorithms (and other analytics models) often fail to recognize their own limitations – whether the conditions of their training have altered, or whether they are artificial in the first place.  The first duty of analytics is not to give unsupported answers just because answers are expected, but to recognize when there is no supportable answer to be offered, and then to shut up –  a bigger challenge than it might appear.

For high-level machine reasoning to be adopted,  we’ll need forms of AI that are transparent;  which in turn will require AI that can explain outcomes – including an occasional “does not compute”;  which in turn will imply augmenting rote machine-learning with expert system capabilities.
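
To make that concrete, here is a toy, entirely hypothetical sketch of a system that reports the “why” alongside the “what,” and offers a “does not compute” when asked to reach beyond what it was built for; no real expert-system framework is implied.

    # A toy rule-based classifier that returns an answer plus its reasoning,
    # and declines to answer outside the conditions it knows.  Thresholds and
    # ranges are invented for illustration.
    def classify_with_explanation(temperature_c, known_range=(-10.0, 45.0)):
        lo, hi = known_range
        if not (lo <= temperature_c <= hi):
            return None, (f"Does not compute: {temperature_c} C is outside the "
                          f"range seen during training ({lo} to {hi}).")
        if temperature_c > 30.0:
            return "hot", f"{temperature_c} C exceeds the 30 C threshold for 'hot'."
        return "not hot", f"{temperature_c} C is at or below the 30 C threshold."

    print(classify_with_explanation(35.0))    # ('hot', plus the reason why)
    print(classify_with_explanation(120.0))   # (None, 'Does not compute: ...')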

Is high-level AI in our future?  I believe it’s still difficult to predict that.  It might be less difficult to state some conditions for AI’s acceptance, the first of which is to move from a focus on “what” answers, to those that deliver both “what,” as well as “why.”

We’re Scientists

Being young means never having to say that moving is a problem: you simply take anything you own of value, dump it in a van, and set off to your next living quarters.

So when my brother graduated from college and needed to move to graduate school, we rented a van, dumped his minimal belongings (including a handcrafted telescope) therein, and hit the road early one summer morning.

After about 10 hours of driving through the flatlands of lower Wisconsin, Illinois, Indiana, and Ohio, we decided that it was time for dinner, and dinner time had brought us to I-70 in the vicinity of Dayton, Ohio and the Wright-Patterson Air Force base.   We stopped, walked into a nearly empty diner, and were seated by our talkative waitress. I imagine she was just looking for a little light chat to alleviate a dull and uninteresting work shift, and it wasn’t her fault that after 10 hours of driving, we were two obviously tired people who could not hold up our end of any conversation, small talk or otherwise.

Still, she gave it her best shot, with this opening:  “Are you boys from the base?”

Tired or not, my brother was not a person to allow a genuine Dan Aykroyd moment to pass. He looked her in the eye.

“No ma’am.  We’re scientists.”

From that moment, she focused our conversation on what was required to see us served, which served us right, I suppose.

………….

Also sometimes puzzling is the designation data science, which, as data scientist friends of mine all point out, is a very loosely defined term.  That’s OK by me – I kind of like that it’s loosely defined.  After all, trying to separate science from engineering is a little like trying to separate art from craft, with the only probable result being that those finding themselves on the engineering or craft side of the definition will become aggrieved.   But this does raise the question of whether analytics should be considered a “science,” like physics or chemistry.

Conventional science, at least, can be defined as the intellectual and practical activity encompassing the systematic study of the structure and behavior of the physical and natural world through observation and experiment.

By those lights, a lot of analytics is science, as long as you are willing to mentally stretch what encompasses the “physical and natural world.”

Conventional scientists tend to divide into experimentalists and theoreticians.  The former create controlled conditions to give well-defined and unbiased data answering clearly-defined questions under well-understood assumptions.  The latter create conceptual systems explaining the data that experimentalists provide.  Some scientists take both roles, but the skills are different, and most specialize either in generating well-understood data or in explaining those data within a conceptual framework.   Both are essential, but rather like the economic argument that labor comes before capital, data come before theory:  with no good data there is nothing to explain. Careful experimentalists provide data that are good in actuality as well as in appearance.

When I see people talk about data science, or present their skills at meetings and conferences, the majority focus on the data analogue to conventional theory: models – explanatory,  predictive, or optimization in their varied forms. It’s often engaging work, but just as in conventional science, a model is only as good as the underlying data, and only as meaningful as the set of assumptions and conditions under which the data have been generated.   And just as in conventional science, good data come before good theory.

Ironically, with modern tools it is often not difficult to craft respectable models from a self-consistent data set.  What is often more difficult is the “experimental” aspect of data science: understanding whether our underlying data are truly accurate or precise, whether they have been validated against external reality, what questions they answer, and what assumptions went into their generation.   Our data systems store numbers with ease – there are scores, costs, and counts galore.  However, those same systems store the context for those numbers with much less ease, so we don’t always know what our numbers represent.  Challenging whether the numbers we have actually give us the answers we want might reasonably be called “experimental” data science – and as with conventional experiments, data come before theory.

Exploratory analysis certainly has a role in experimental data science, but much of data-experimental work is the old-fashioned grind of iterative data validation, requirements gathering, uncertainty analysis, and predictor development.   Is that less exciting than predictive modeling? Probably.  Is it more critical than modeling?  Yes. Without data whose meaning is understood by all stakeholders, there is little point in a model.
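
As a hedged sketch of what that grind looks like in practice – the column names, regions, and rules below are invented for illustration – “experimental” data science often begins with checks as plain as these:

    # A minimal sketch of pre-modeling data validation: confirm that the numbers
    # behave as their documentation claims before any model is built.
    def validate_sales_records(records):
        """Return a list of human-readable problems found in the raw rows."""
        problems = []
        seen_ids = set()
        for i, row in enumerate(records):
            if row["region"] not in {"EAST", "WEST"}:
                problems.append(f"row {i}: unknown region {row['region']!r}")
            if row["amount"] < 0:
                problems.append(f"row {i}: negative sales amount {row['amount']}")
            if row["order_id"] in seen_ids:
                problems.append(f"row {i}: duplicate order_id {row['order_id']}")
            seen_ids.add(row["order_id"])
        return problems

    sample = [
        {"order_id": 1, "region": "EAST", "amount": 34.10},
        {"order_id": 2, "region": "WSET", "amount": 34.80},   # mis-keyed region
        {"order_id": 2, "region": "WEST", "amount": -5.00},   # duplicate id, negative amount
    ]
    for problem in validate_sales_records(sample):
        print(problem)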

We hear a great deal about the shortage of data scientists, and I wouldn’t argue the point.  But the frequent origin of poor data-driven decisions – and they’re out there – is a poor understanding of the original numbers we have, and then taking those numbers to mean something they actually do not.  I don’t know any experienced data scientist who doesn’t have a “misunderstood data yielding a bad decision” story to tell.  More than we need better models, we need a better understanding of the data going into those models. For models can be made from irrelevant data as easily as from appropriate data – only uncertain, inconsistent, or poorly-represented data really thwart model-making, and that’s different from “irrelevant.”  What we really need are more data experimentalists – it’s the best kind of data science there is.

The Real Opportunity

Within 24 hours, thousands of comments have been written on Trump’s abandonment of the Paris climate accords, and the obviously related abandonments:  political leadership and competitiveness, business leadership and competitiveness,  and scientific leadership and competitiveness. Into this void various competitors will leap, including the Chinese and Europeans, who actually have already been leapfrogging the US in many research endeavors.  At the last international science conference I attended, the Europeans and Chinese were talking about their research results, while the Americans were mostly talking about scraping up enough funding to get through the winter.

As expected, Trump has based his argument on dubious science and economics. Even a cursory look at how electric energy is being produced in this country shows that “alternative” energy is booming, without a lot of government help. (For more on that, check out the fabulous Energy Information Administration website, before they too become victim to the ongoing federal budget hackathon.)

Interestingly, people, businesses, and even state governments are rising up and declaring their commitment to controlling climate change, whether the feds are on board or not, for the obvious reason that most of us have direct or indirect international connections, and to maintain those relationships we need to be aligned with how most of the world thinks.

OK. But none of this was really unexpected.

I have this question: What’s the difference between our current climate crisis, to which the US government is not responding, and the communist crisis of the 1950’s and 1960’s, which spurred a gigantic, and highly successful, investment in federal research laboratories and the government-business collaboration which put men on the moon?

People haven’t changed. The same kind of people – nerds, basically – who participated in the 1960’s technology boom are those who insist that we face a genuine crisis due to climate change.   C.P. Snow’s observations in The Two Cultures remain in force, then as now.   Many people harbor a suspicion and slight dislike for the analytically-minded, even though many of us are actually pretty normal – we eat, drink, get pissed, ride Harleys, and have families pretty much like everyone else.   Regardless, the Two Cultures are still with us.

Isolationism hasn’t changed. The United States has indulged an isolationist streak throughout its history. In the 20th century it took a major effort to engage the US in two brutal but necessary wars.  In the mid-20th century,  the communists were the enemy and purging them from all aspects of society was very nearly priority one.  Today we are nominally more connected to world affairs, and yet the US is once again attempting to disengage from the world.

Demagoguery hasn’t changed.  There was McCarthy then, and Trump now, giving ammunition to the idea that the most virulent and toxic political personalities are simply reincarnated precisely when we need them the least.

However, the nature of the threat is different.   In the 1950’s, skillful propaganda, augmented by the dawn of the nuclear age, made communism a real, immediate, and visceral threat for analytical and emotional people alike.  There was a direct line between nuclear bombs in the hands of unstable communist rulers and the need to take immediate and concrete action.   If we needed a lot of funding and nerds to keep ourselves safe, that was the price of freedom.   The threat was accepted, and action taken.

 

The nature of the climate threat is harder to appreciate, and even the most ardent supporters of action rarely understand the underlying science.  The threat isn’t immediate. Its impact is unclear.  Arguments on its behalf frequently take the form of “a lot of really smart people say this is going to happen, so you should believe it,” which is a no-no in analytics, and also a no-no in persuasion – ask C.P. Snow.   In short, climate change is, to borrow a phrase from the 1970’s, plausibly deniable.

Deniable, but still likely. And with potential consequences not very different from what the Cold War might have wrought – drought, famine, massive dislocation, and social upheaval.

Bringing a challenge, and an opportunity.   The challenge is to show in concrete and comprehensible terms, understandable to intelligent people without a specialized background, why climate change is nearly inevitable and what its consequences will be.  I don’t think that’s impossible, but so far, when communicating this threat to general audiences, we’ve relied far too much on arguments from authority.  Those accepting these arguments are often visceral in their belief, but without an understanding of the underlying science those beliefs don’t hold up very well in debate.

The opportunity, once the threat is understood, is to apply the same energy and ingenuity to managing climate change as we applied to travel in outer space, monitoring nuclear threats, and intercepting ballistic missiles.  Many people – including those charged with doing the job – thought John Kennedy was nuts to commit the US to placing men on the moon within a decade.  We now hear – again based on authority – that severe consequences of climate change are probably irreversible.  Perhaps, but we haven’t really tried to solve the problem, and with the current administration we’re not likely to try.   At some point, time will really run out. Those appalled at the self-serving and short-sighted action of Trump are entirely correct, but we need to examine why Trump and his supporters can get away with their game of plausible denial – it’s because we in the technical community have fallen short in illuminating the climate threat as clear, immediate, and real.  With the threat understood, concerted action is possible, and with concerted action we might be surprised at what we can do – so history tells us.