The Oldest Question

Last month we re-watched the BBC adaptation of John le Carré's Tinker Tailor Soldier Spy, featuring Alec Guinness as tired, retired spymaster George Smiley. (It's remarkable that one actor can convincingly portray Smiley, Obi-Wan Kenobi, and perhaps a dozen clownish relatives, of both sexes, standing in the way of Dennis Price's inheritance in Kind Hearts and Coronets.)

In Tinker, Smiley is offered the sort of retirement job dreams are not made of: to look into the past and discover which of his longtime associates has betrayed the principles he thought he was working for. Smiley is looking for a British double agent, but not just any ordinary spy working both sides of the espionage street – this traitor is almost certainly a longtime colleague, now at the top of British Intelligence, and evidently hiding in plain sight. The hardest search can be the one where the answer has always been there to see, but no one wants to see it.

Le Carré has remarked that Tinker is about betrayal, as indeed it is. It's also about uncovering reality when deception is the norm, information is unreliable, and people are untrustworthy. As one of Smiley's new conspirators puts it: "It's the oldest question: who spies on the spies?"

Especially this time around, that theme resonated – how can we ascertain what is real when the information we see ranges from well-intentioned but biased to disinformation with malevolent intent? Some information is valid, but the specter of disinformation has contaminated our views on all of it – every source is now at least slightly suspect. Ascertaining genuine facts can seem daunting.

This isn't a new problem, but after the events of 2016 our collective awareness may be greater than ever. That awareness might be a very good thing, once there is a plan for evaluating information sources. In the meantime, what remains is nothing less than a crisis of confidence in our news sources.

Reputable organizations have taken note. Facebook and Google have verbally committed to stop earning revenue from disinformation sources (but not, as far as I've read, to returning revenue previously earned from those sources). Papers like the New York Times have given a degree of autonomy and column space to their public editors.

That's a good start, but ultimately we're left with a version of Smiley's oldest question: who reports on the reporters? The reporters can't report on themselves much more effectively than the spies can spy on themselves – too many people don't trust them to do the job properly. So while suggestions that Facebook and Google employ editors or flag URLs as disinformation are reasonable, they do not address the fundamental issues of trust and validation. Ultimately we will need to answer the oldest reporting question for ourselves, channeling our inner George Smiley.

I've noticed that a number of people, feeling burned by a year of elect-o-meters, misplaced surety, and unctuous editorials, have committed to taking news from multiple perspectives. That's a fine idea, but it's also a lot of work, particularly if we're going to systematically evaluate all of this new information. It would be nice to have a little help in assessing what we read.

Are there metrics that might help us assess the quality and biases of a news source? Ideal metrics would be simple, so their calculations are evident, and transparent, so their meanings are clear and largely beyond manipulation. Some people will never be convinced by any metric – let's forget about them. We're looking to help reasonable people evaluate the multiple sources they consume. That might be a small minority of readers, but in a polarized community where people's opinions are often fixed, those willing to evaluate issues from multiple angles might constitute a tipping point in future elections and policy.

There are metrics that might fit the bill. Some have even been with us for a while. For decades commentators (Noam Chomsky comes to mind) have looked at total column-inches as a metric of the level of commitment a news source has to a particular story. In modern guise, variants such as total story word count or page-one story count would do just as well. There is an implicit question as to what, exactly, constitutes a particular story or topic – let me come back to that. Simply seeing a story word count or number of page-one articles can tell us a lot about what a particular news source regards as most relevant to its readers. The Wall Street Journal will report far more on business matters on page one than the New York Times. Seeing the top five or ten stories would give us a quick idea.
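As a rough illustration of how such counts could be tallied – a minimal sketch, assuming articles have already been tagged with a (hypothetical) source, topic, word count, and page-one flag:

```python
from collections import defaultdict

# Hypothetical article records; the field names are made up for illustration.
articles = [
    {"source": "WSJ", "topic": "business", "words": 1200, "page_one": True},
    {"source": "NYT", "topic": "business", "words": 450,  "page_one": False},
    {"source": "NYT", "topic": "politics", "words": 1600, "page_one": True},
]

word_counts = defaultdict(int)      # total words per (source, topic)
page_one_counts = defaultdict(int)  # page-one story count per (source, topic)

for a in articles:
    key = (a["source"], a["topic"])
    word_counts[key] += a["words"]
    if a["page_one"]:
        page_one_counts[key] += 1

# A rough modern stand-in for column-inches: (source, topic) pairs ranked by word count.
for (source, topic), words in sorted(word_counts.items(), key=lambda kv: -kv[1]):
    print(f"{source}  {topic}: {words} words, {page_one_counts[(source, topic)]} page-one stories")
```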

I have nothing against re-tweets and re-posts, but they transmit content rather than create it. Quality news sources write much of their own material. Simply reporting the ratio of original to re-posted material would help us assess whether we're reading material that someone took the time to write, or an article that was simply re-transmitted. It's more work, but technically feasible, to identify nominally original content that is effectively plagiarized, though I suspect that is less useful here.
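A minimal sketch of how that ratio might be computed, assuming we can compare each article's text against a corpus of earlier material; near-duplicate matching with a plain similarity score is enough to catch straight re-posts:

```python
from difflib import SequenceMatcher

def is_repost(text, earlier_corpus, threshold=0.9):
    """Treat an article as re-posted if it closely matches any earlier piece."""
    return any(SequenceMatcher(None, text, earlier).ratio() >= threshold
               for earlier in earlier_corpus)

def originality_ratio(source_articles, earlier_corpus):
    """Fraction of a source's articles that do not match earlier material."""
    if not source_articles:
        return 0.0
    original = sum(1 for text in source_articles
                   if not is_repost(text, earlier_corpus))
    return original / len(source_articles)
```

Real re-post detection would need to be sturdier than a character-level match, but even a crude ratio like this separates the aggregators from the writers.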

A related, if slightly snobbish, metric: syntax.   A reputable source is more likely to have correct tense agreement and other basic grammatical features.

Factually false story count. One has to be careful with something like this, and consider the degree to which it can be automated. I'm really thinking of the most egregious cases here, but stories like "Pope Endorses Trump" can be flagged. Drilling down to a pairing of the original and correcting sources in each case would be useful. I prefer something like this to the more subjective and potentially manipulated "bad URL" metric. Fact-checker sites perform a good part of this function now, but the information can be difficult to consume at a glance.
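A sketch of how the egregious cases might be flagged automatically, assuming a list of claims already debunked by fact-checkers (the records below are placeholders, not a real feed):

```python
from difflib import SequenceMatcher

# Placeholder debunked-claim records; a real list would come from fact-checker sites.
DEBUNKED = [
    {"claim": "Pope Endorses Trump", "correction": "link to the fact-check article"},
]

def flag_false_stories(headlines, threshold=0.8):
    """Pair each suspect headline with the source that corrects it."""
    flagged = []
    for headline in headlines:
        for entry in DEBUNKED:
            similarity = SequenceMatcher(None, headline.lower(),
                                         entry["claim"].lower()).ratio()
            if similarity >= threshold:
                flagged.append((headline, entry["correction"]))
    return flagged
```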

Headline and story bias. A bias measure is quite different from the previous metrics, which calculate simple counts or measure against established norms. How do we even measure bias? That's not an easy question, but I believe the key to the matter is twofold: first, ask people what bias they detect – people are better than computers at seeing bias. Second, transparently show how respondents' answers are used to report bias, so we can assess this more subjective metric ourselves.

It might work like this: people are asked to evaluate bias (positive, neutral, negative) for a headline or story, stripped of its source. But we want to do more than report how people voted – that might be interesting, but it is not transparent by itself. A vote doesn't indicate what triggered a bias in people's minds.

On the other hand, we can identify words and phrases which do the best job of replicating our respondents' evaluations – in effect giving the rules reflecting the bias. As an example, we might ask readers to rate articles on Donald Trump, and learn that headlines containing Trump+conflict best align with negative bias, and revolution or new+era with positive bias. [Disclosure: Vadim Koganov and I worked on related algorithms and visualizations, to help identify simple rules behind difficult-to-define text categorizations (e.g. is that a good or bad resume?). There is no guarantee a rule is forthcoming, but in practice they are often available.]
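This is not the algorithm from that work – just a minimal stand-in to show the shape of the idea: score each word by how lopsidedly it appears in negatively versus positively rated headlines, and surface the most one-sided words as candidate rules:

```python
from collections import Counter

def bias_markers(rated_headlines, top_n=5):
    """rated_headlines: list of (headline, rating) pairs, rating in {-1, 0, +1}."""
    neg, pos = Counter(), Counter()
    for headline, rating in rated_headlines:
        words = headline.lower().split()
        if rating < 0:
            neg.update(words)
        elif rating > 0:
            pos.update(words)
    # Words most associated with negative and positive ratings, respectively.
    negative = sorted(neg, key=lambda w: pos[w] - neg[w])[:top_n]
    positive = sorted(pos, key=lambda w: neg[w] - pos[w])[:top_n]
    return negative, positive
```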

Getting a quick glance at reporting bias, along with a few indicative words or phrases, is something that I, at least, would find very helpful. We could then see why something was considered biased, as well as the assigned bias itself. If I don't think the displayed words convey a real bias, I can ignore them or post a comment arguing for an altered metric. (We may never see the day when ordinary news readers haggle over the words being used to define a story's bias, but wouldn't that be something?)

I mentioned the question of "what is a story" above. In the same way that bias is defined, we can ask people whether an article relates to a topic area. Often keywords are enough, but sometimes topics can be a little difficult to assign. Nonetheless, we can hope that asking people to pick from a short list will give sensible results. And as with the bias metric, the aggrieved always have recourse.
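A keyword-first sketch of topic assignment; the keyword lists here are invented for illustration, and anything they cannot place would go to human raters:

```python
# Hypothetical keyword lists per topic.
TOPIC_KEYWORDS = {
    "business": {"earnings", "market", "merger", "stocks"},
    "politics": {"election", "senate", "congress", "campaign"},
}

def assign_topic(headline):
    """Return the topic whose keywords best match the headline,
    or None if nothing matches (those go to human raters)."""
    words = set(headline.lower().split())
    scores = {topic: len(words & keywords)
              for topic, keywords in TOPIC_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```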

People might very reasonably prefer different or modified evaluation metrics than the ones I've proposed. What matters most, I believe, is that we start to identify metrics – simple, transparent, easy to consume, and hard to manipulate – that allow us to evaluate features of quality and bias in the news we consume. News outlets are part of an established but not fully trusted system, and monitoring should now come from outside that system, at least in part. For now, and possibly for the indefinite future, the answer to the oldest question is "We do."
