The thuggish vandalism that passes for government in the Trump administration is largely devoid of principles or ideas, but still propelled by powerful forces. Like other forms of extremism, Trumpism is animated largely by what it loathes. I argued the other day that Trumpian hatreds can be enumerated in a few “anti-ideas” that illuminate the administration’s general behavior, and also act in lieu of conventional operating principles. These are: denigrate considered policy and expertise; denigrate people as individuals; denigrate thinking as contrasted with action. Finally, equate anti-ideas with true ideas – i.e. in the playing field of statements, quality is irrelevant. It is only the source that really matters. These anti-ideas certainly won’t predict individual outbursts – those seem to be random – but they may help in understanding general trends.
Of course none of this took place overnight. In a sense, the Trump administration is the culmination of a long conceptual descent in which facts have become increasingly irrelevant, and competence has been more a matter of declaration than one of proof.
Squarely in the sights of these anti-principles is the American system of research, and I don’t think it is an overstatement to consider that system, particularly government-funded research, to be at risk. In terms of Trumpian anti-ideas, research represents a great deal that the administration dislikes: it’s a very significant aggregate of varied expertise; it represents a well-defined intellectual elite; it involves complex thinking; and it’s expensive (as part of being against what-is, Trump selected a budget director who thinks that government-funded research is a waste of money). Finally, research involving science, engineering, history, the arts – you name it – is dedicated to creation and to finding what is new, or true, or both, versus what is none of those. If as an operating anti-principle you want to obscure the difference between new and true vs. old and untrue, that’s a problem.
US research is vulnerable, simply because research can be expensive and because it often lacks instantly-digestible deliverables. Republicans have already been chipping away at US research primacy by failing to pass budgets many times in the last decade. There is a real irony there, because while the GOP ostentatiously unfurls the flag at each viable opportunity, they have been simultaneously working against American science and engineering primacy.
There are many reasons for doing research – national primacy isn’t the only reason, or even the best reason. In fact, I’d argue that contrary to the anti-truth and anti-intellectual stances of the current administration, research is a very significant part of what makes the US really great. Any country that invests in understanding the unknown stands to make the lives of its citizens better and more enjoyable. Just a few concrete benefits of government-sponsored research include: more robust crops and inexpensive food, improved understanding and treatment of cancers, safer vehicles, better armor and weaponry to protect our fighting men and women, stronger materials, better computer security, improved understanding of economic markets, and sensitive detection of nuclear events. Every one of these has been strongly supported and improved by government-sponsored research. We needn’t confine ourselves to the purely practical: historical studies provide context for current problems, and are intrinsically interesting. And it’s fundamentally human to look into the sky and just wonder how the universe works. And now, thanks to investigators in the US and elsewhere, we know at least some of the story.
In short: if being a great nation means helping to fulfill its people’s potential, then research is part of being great. To take a nationalist viewpoint for a moment – if we don’t do it, someone else will.
We may have to forget about all of that, unless we’re careful. My purpose here goes a little beyond the theme of “Research is essential and Trumpism threatens it.” I am a product of the university research system, as are many of my friends – but I also have friends who come from different backgrounds, and I thought I might say a little about how I believe research does (and doesn’t) work in practice. There are some well-meaning misconceptions in play that I’ve never really attempted to address in writing. I’m giving it a shot now. Hopefully, making the process a little more real and less remote will help in its support.
One thing I hear, which truly is a misconception, is that the free market can handle all of our research needs. Corporations, national labs, private institutions, and universities all can and do make good research contributions. However, the role of corporations is frequently circumscribed. Corporations usually expect a well-defined return on their investments in a specific time frame – as they should. But that is just not the nature of many research programs. Pure research is often too risky for a corporation – after all, you’re exploring something new, and by definition that will sometimes fail. A great idea that didn’t work? Par for the course. Some research doesn’t fit into a corporate structure at all – that isn’t how a new work of history would be written, for example. There is also essential research that is almost guaranteed to lose money, or that requires high-security environments. Such programs are a natural fit for universities and national laboratories.
There are exceptions to this “corporation” rule – the pharmaceutical industry has systematized long-range and often risky drug research; pharmaceutical companies are not the pure-profit engines portrayed in popular media. Very large corporations like IBM will sometimes absorb the risk of a long-term technology like artificial intelligence.
While research occurs in many places, nearly all serious research – whether it takes place in a university or elsewhere – has its roots in universities. The overlap of many research programs with university research is often very strong. More crucially, the people and specialized training required to run a research program come only from universities. Bona fide Ph.D. degrees in particular are a university specialty.
Speaking of Ph.D.s, there can be confusion about what that degree really entails, and what it is good for. Let me say this: Doctors of Philosophy are not necessarily really, really smart. But a Ph.D. does imply special training tuned to the needs of research (and yeah, they’re smart…). The degree – and I’ll restrict myself to science and engineering, where I have an understanding of how the system works – trains people in a few ways. The first is to gain a good understanding of the subject area – chemistry, physics, math, or engineering. Most Ph.D. programs still include the rather hair-raising ordeal of a “qualifying exam” – sometimes written and sometimes oral – in which the examiners have the right to ask a student pretty much whatever they like. Mine was an oral exam, and I still recall many of the details as my interviewers enjoyed torturing me with questions ranging from quantum mechanics to some whacked-out diffusion problems. If all goes well, you pass, while realizing you still have a hell of a lot to learn.
A second component of the Ph.D. degree is learning how to craft a sensible proposal, with a reasonable line of inquiry. Things might have changed since my day, but we focused on the technical problem in proposals and less on the (important) matters of communication and budgeting.
Finally, you get to perform some actual research and make a “significant” contribution to your field, as judged by the faculty members who review your thesis. That’s both easier and harder than it might sound. As a student, you’re rarely charging off to do something totally different from all that has gone before, so there is a support system of existing techniques and other students to help you along the way. On the other hand, there is a certain feeling when you realize that what you are now doing hasn’t been done before – a requirement of the degree. It’s a little hard to describe, but something like the feeling you have when you Google something, expect to get back a procedure, and instead get back nothing. You have to figure out what to do – maybe invent some things to move ahead – and then prove to others that you got the job done. You do get used to this, and learn not to be intimidated by, or often even to particularly care, whether a solution is already in place. One of the reasons that funding agencies often require Ph.D. principal investigators is to ensure that the research team will have leadership used to not having a solution technique in place from the start.
It can be something of a trip – often four years or more. For all that, most people with Ph.D. degrees report that they enjoyed the experience. I certainly did, and the training has allowed me to participate periodically in some very interesting projects as part of my data consulting practice.
Although I’ve touched on it above, let me return to the idea that research should always “work.” It’s actually rather the opposite – if it’s research, it’s not working all of the time. You just can’t look at new things and always get what you expect – otherwise it wouldn’t be very new, at all. My graduate advisor John Dahler, who enjoyed a distinguished career as a theoretical physicist and chemist at the University of Minnesota, would tell me often – it seemed like at least once a month – that “if we knew exactly what to do, it wouldn’t be research.” A failure doesn’t mean a disaster, though – part of the research game is to salvage something good from those train wrecks that periodically happen. You usually can – something’s almost always learned, even if it is just a new line of inquiry.
The idea that research must sometimes fail might seem obvious, but it creates issues in practice. Researchers, as well as funding agencies, are often rated on “what was accomplished,” even for exploratory research. Getting a result other than what was expected can be regarded as unhelpful, and can even jeopardize additional funding. As a result, researchers – often strapped for funds – find themselves scaling back risk and expectations, or, in rare cases, exaggerating what was actually done. These problems are only amplified when funding is scarce. I’ve personally experienced the frustration of seeing a perfectly decent piece of research put on the shelf, not because of lack of interest, but because of DC budget haggling. Now, that really is a waste.
I believe research investment to be essential to our well-being, but I also understand that as a process research may not seem appealing. People can readily feel that the processes of academic or government research are remote, elitist, and wasteful. In my position – I’m not part of that system but I’ve periodically worked with it – I can see how these perceptions might develop. Are researchers really remote and elitist? Well yeah, it can happen, and yes, it’s a little tedious. Chalk it up to a combination of serious expertise and what can be an insulating work community. But it’s largely a stereotype, at the end of the day. And is research truly wasteful? I’d agree that US proposal processes could use a good looking-over – researchers spend an awful lot of time dealing with funding problems, and as I suggested above, I think the economics of funding tend to make research more narrowly-focused and specialized than it could be. But that’s as far as I go. Research is only a “waste” if we agree that the necessarily error-prone process of learning new things is a waste. I say no. On the contrary, our research infrastructure investment is part of what has made this a great country, offering benefits to people here and abroad. If we’re smart, it will continue to do so in the future.