Sunday, August 30, 2020

Do You Believe in Science in a Young Boy's Heart?

A friend of mine posted this meme on Facebook recently, and I'm afraid it annoyed me more than perhaps it should have.  That's partly because "believing in science" has become a mantra in our culture wars and especially in electoral politics.  Biden supporters, like Obama and Clinton supporters, say that they want a President who believes in Science, just as Trump supporters want a President who honors God.  Of course Trump, like his Republican predecessors, puts God after his donors (just as Democrats do), and Obama put his donors before the science about climate change.  Biden has made it clear he'll do the same. Obama also put his religious beliefs ahead of Constitutional principles about equality in order to block same-sex marriage during his first term; how can you honor God more than that?  But then his deeply held beliefs threatened to get in the way of his re-election, so he ditched them.

On its face I can't disagree with most of what this meme says, though it's disingenuous and ultimately dishonest.  Its account of how scientists work is incomplete: scientists do collect data and are supposed to revise their conclusions in the light of new information.  But scientists also work out hypotheses without working in the laboratory at all.  Einstein and other physicists are the most famous examples of this aspect of science; Einstein did math, published papers, and waited for others to do the experiments that would confirm or disconfirm his theory.  When the first tests failed to confirm his predictions, Einstein didn't go back to the drawing board.  As the physics-trained philosopher Paul Feyerabend tells it, "Einstein's theory of special relativity clashed with evidence produced only one year after its publication. Lorentz, Poincare, and Ehrenfest withdrew to a more classical position. Einstein persisted: his theory, he said, had a wonderful symmetry and should be retained. He gently mocked the widespread urge for a 'verification by little effects.'"  This isn't a bad thing: if scientists threw out promising theories whenever they encountered obstacles, no theory would last for long.

It's also been argued - I'm not sure how accurately - that many scientists never adopt newer, better theories such as Relativity.  They do their best to cling to what they learned in their youth, and bitterly attack the crazy new ideas that students are wild about.  This suggests that the young scientists aren't necessarily wiser, they just go with the flow, to get jobs and teaching posts and grants - and in their time, become a drag on the field, refusing to adopt whatever comes next.  But again, some conservatism is necessary.  Every theory has holes in it, anomalies it can't explain, and its adherents simply have to have faith that eventually those holes will be patched over.

And you know that joke that compares scientists to a drunk looking for his lost keys under a lamppost instead of looking in the dark where he actually lost them, because the light is better around the lamppost?  Scientists tell that joke on themselves.  There's a similar one that compares scientists to a besieging army that encircles a walled town, and if the walls hold out, the army moves on to the next town, hoping for easier prey.  It's okay for scientists to joke about such things, just as clergy and other professionals do about their respective domains, but the laity had better not.

Scientists are also, let's say, inconsistent about the self-scrutiny they brag about.  Consider this example, from Margot Lee Shetterly's Hidden Figures: The American Dream and the Untold Story of the Black Women Mathematicians Who Helped Win the Space Race (William Morrow, 2016, p. 178). It refers to engineering research, but you'll find similar descriptions of the peer review to which scientific journals subject submissions.
Building an airplane was nothing compared to shepherding research through Langley’s grueling review process. “Present your case, build it, sell it so they believe it”—that was the Langley way. The author of a NACA document—a technical report was the most comprehensive and exacting, a technical memorandum slightly less formal—faced a firing squad of four or five people, chosen for their expertise in the topic. After a presentation of findings, the committee, which had read and analyzed the report in advance, let loose a barrage of questions and comments. The committee was brusque, thorough, and relentless in rooting out inaccuracies, inconsistencies, incomprehensible statements, and illogical conclusions obscured by technical gibberish. And that was before subjecting the report to the style, clarity, grammar, and presentation standards that were Pearl Young’s legacy, before the addition of the charts and fancy graphics that reduced the data sheet to a coherent, visually persuasive point. A final report might be months, even years, in the making.
Even after publication, we're told, scientists are ruthless in tearing apart each other's work in their dedicated pursuit of truth.  And that's good.

Except when it isn't.  When the entomologist Edward O. Wilson published Sociobiology: The New Synthesis in 1975, the book inspired controversy and searching criticism from other scientists.  Scientists who were attracted by Wilson's doctrines protested that this wasn't fair, as if a scientific publication wasn't supposed to be scrutinized by colleagues in the field.  In Sense and Nonsense: Evolutionary Perspectives on Human Behavior (Oxford, 2002), evolutionary biologist Kevin N. Laland and psychologist Gillian R. Brown complained that even colleagues in Wilson's own department picked on him.
[They] vehemently attacked the book in the popular press as simple-minded and reductionist. Yet most biologists could see the potential of the sociobiological viewpoint, which had paid great dividends in understanding other animals, and many were drawn into using these new tools to interpret humanity. The debate became polarized and highly political, with the sociobiologists accused of bolstering right-wing conservative values and the critics associated with Marxist ideology [5].
Laland and Brown concede that there were a lot of scientific problems with Wilson's book, but they try to explain away the criticisms as politically motivated.  Wilson's highly speculative application of his ideas to human beings in the final chapter, with no real scientific support, was somehow exempt from suspicion of ideology.  Laland and Brown lament what they represent as emotional, "knee-jerk reactions" to Sociobiology, which not only confuses moderation of tone with moderation of substance but also erases the scientific objections that were made.

It's doubtful, though, that pronouncements about human beings and the societies we live in can ever be free of politics.  Consider BiDil, a blood-pressure drug that the FDA approved for use by "patients who identify as black".*  NitroMed, the company that owned BiDil, "funded the Congressional Black Caucus, the National Medical Association, the Association of Black Cardiologists, and the National Association for the Advancement of Colored People, all of whom encouraged the FDA to approve the drug."  Critics "described BiDil as a cynical effort to exploit race and loopholes in patent law and FDA policy to extend patent protection on an old drug ... Defenders of the drug went so far as to accuse social scientists of trying to kill black people by sowing controversy about BiDil and misrepresenting it as a racial drug, even though it was Cohn [the cardiologist who had patented the drug] and NitroMed who had pursued the race-specific approvals in the first place" (164).

BiDil did not turn out as expected. Despite projections of a billion-dollar bonanza, the drug proved to be a commercial failure. NitroMed lost $108 million in the first year after FDA approval. It is not clear which of several factors contributed most to its demise. NitroMed priced the drug high—$1.80 per pill, or $10.60 per day for a common dose. Since BiDil was simply a fixed-dose combination of two existing drugs, each of which was available as a cheaper generic, many insurers simply substituted the generics whenever physicians did prescribe BiDil. Moreover, the controversy over the “black drug” was read in many ways by patients and doctors. While some saw it as something valuable, others saw the “special treatment” as uncomfortably reminiscent of Tuskegee. Whatever the causes of its failure, NitroMed laid off most of its workforce and stopped marketing BiDil in January 2008 [165].
After discussing some other examples of "racial differences in drug response," Jones notes that "in every study, however, the amount of variation within each racial group was far larger than the differences between the groups ... As a result, 80 to 95 percent of all black and white patients will likely have indistinguishable responses to each medication.  Although racial differences might exist, they are irrelevant for the majority of patients" (167).

Several years after NitroMed stopped marketing BiDil, however, I heard it touted on an NPR science program as a casualty of 'politically correct' hostility to race as a scientific concept.  The popular complaint that the Left has politicized the science of racial difference and made it professionally dangerous to study reverses the facts.  Despite the ongoing and consistent failure to find meaningful racial differences, scientists still pursue that Holy Grail, and have no evident difficulty getting funding to do so.  Even if science weren't political, funding for science always will be.  The same goes for sexual differences, which male scientists continue to assert while claiming to be not only Scientists but Feminists.  Is this Science to believe in?

Scientists and their apologists do admit their fallibility - but usually only after they've been caught in some egregious error.  Before that happens, they demand that you respect their authoritah, else you are (gasp) Anti-Science.  The more loudly they demand it, the more skeptical I become.  (I lost a lot of respect for the philosopher John Searle when he referred to the historians Thomas Kuhn and Paul Feyerabend as "anti-science," and hinted darkly that the difference between them and the "pro-science" Karl Popper wasn't "as great as many philosophers and scientists think.") **  It's a lot like religious authorities admitting their human fallibility when they're confronted with some issue on which they have proven embarrassingly wrong - race, for example.  Or the question of wearing masks in this pandemic: early on, Scientists brushed aside their value for protection.  This was primarily because they wanted masks to be reserved for front-line healthcare workers, but if they weren't effective, why would the doctors and nurses need them?  The severe shortages of PPE made it reasonable to give priority to health care workers, but Anthony Fauci and others waffled and confused the issue.  "Waffled" is too kind, since they knew very well that masks are effective.  I think it's fair to suspect that they are so used to asserting authority, instead of treating the public as adults who can understand what's at stake if it's explained seriously rather than condescendingly, that they simply saw no reason to bother.  Is that Science to believe in?

(Some will probably reply that all the idiots and morons out there have no Faith in Science and are too stupid to understand it anyway.  They should remember that Fauci was addressing them, the wise sheep with true Faith, no less than the morons.  Maybe they do consider themselves stupid, and want to be lied to for their own good if Science is doing it.  The concept of the Holy Lie is familiar in religion, too.  But such people are in no position to cast the first stone.)

As an atheist, I insist on remembering human fallibility in science, religion, politics, and any other area.  I also insist that dissent must be informed and rational.  But then, so should assent, and I find that most people who wave around their faith in Science don't know much about the Science they believe in.  Think of the LGBT people and allies who continued to talk about the Gay Gene long after the scientists they relied on had abandoned the concept for vague talk of "epigenetic factors" and "genetic predisposition."  Trans people have taken up the claim that trans identity is genetic, totally without evidence.  Home DNA testing has become popular among African-Americans, although it doesn't really reveal the Roots they're hoping to find -- just as racially-specific drugs were pushed by prominent African-Americans who, it's safe to say, had no idea whether the Science was valid or not.

Faith in Science can't be reconciled with recognizing its limits.  If you recognize that scientists can be wrong, then you won't and shouldn't have faith in them.  You should take their claims seriously, but skeptically, and skepticism is the opposite of faith.  As the examples I've given show (and they can be multiplied), scientists don't welcome skepticism, especially from outsiders no matter how well informed.  It's my position that science is not something anyone should have faith in, and those who say we should have faith in it don't understand science.

* David Jones, "The Prospects of Personalized Medicine," in Genetic Explanations: Sense and Nonsense, ed. Sheldon Krimsky & Jeremy Gruber (Harvard, 2013), p. 163.  Future page numbers refer to this article.

** "Twenty-One Years in the Chinese Room."  In Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, ed. John Preston & Mark Bishop (Oxford UP, 2002), Kindle edition loc 914.