The Abdication of the Philosophers
If philosophy is regarded as a
legitimate and necessary discipline, then one might think that a certain
degree of philosophical training would be very useful to a scientist.
Scientists ought to be able to recognize how often philosophical issues
arise in their work — that is, issues that cannot be resolved by
arguments that make recourse solely to inference and empirical
observation. In most cases, these issues arise because practicing
scientists, like all people, are prone to philosophical errors.
To take
an obvious example, scientists can be prone to errors of elementary
logic, and these can often go undetected by the peer review process and
have a major impact on the literature — for instance, confusing
correlation and causation, or confusing implication with a
biconditional. Philosophy can provide a way of understanding and
correcting such errors. It addresses a largely distinct set of questions
that natural science alone cannot answer, but that must be answered for
natural science to be properly conducted.
These questions include how we define and understand science itself.
One group of theories of science — the set that best supports a clear
distinction between science and philosophy, and a necessary role for
each — can broadly be classified as “essentialist.” These theories
attempt to identify the essential traits that distinguish science from
other human activities, or differentiate true science from nonscientific
and pseudoscientific forms of inquiry. Among the most influential and
compelling of these is Karl Popper’s criterion of falsifiability,
outlined in his The Logic of Scientific Discovery (1959).
A falsifiable theory is one that makes a specific prediction about
what results are supposed to occur under a set of experimental
conditions, so that the theory might be falsified by performing the
experiment and comparing predicted to actual results. A theory or
explanation that cannot be falsified falls outside the domain of
science. Freudian psychoanalysis, for example, makes no specific
experimental predictions; its practitioners can revise the theory to fit
any observation rather than reject the theory altogether. By this
reckoning, Freudianism is a pseudoscience, a theory that purports
to be scientific but is in fact immune to falsification. By contrast,
Einstein’s theory of relativity made predictions (like the
bending of starlight around the sun) that were novel and specific, and
provided opportunities to disprove the theory by direct experimental
observation. Advocates of Popper’s definition would seem to place on the
same level as pseudoscience or nonscience every statement — of
metaphysics, ethics, theology, literary criticism, and indeed daily life
— that does not meet the criterion of falsifiability.
The criterion of falsifiability is appealing in that it highlights
similarities between science and the trial-and-error methods we use in
everyday problem-solving. If I have misplaced my keys, I immediately
begin to construct scenarios — hypotheses, if you will — that might
account for their whereabouts: Did I leave them in the ignition or in
the front door lock? Were they in the pocket of the jeans I put in the
laundry basket? Did I drop them while mowing the lawn? I then proceed to
evaluate these scenarios systematically, by testing predictions that I
would expect to be true under each scenario — in other words, by using a
sort of Popperian method. The everyday, commonsense nature of the
falsifiability criterion has the virtue of showing how science is
grounded in basic ideas of rationality and observation, and thereby of
stripping away from science the aura of sacred mystery with which
some would seek to surround it.
An additional strength of the falsifiability criterion is that it
makes possible a clear distinction between science properly speaking and
the opinions of scientists on nonscientific subjects. We have seen in
recent years a growing tendency to treat as “scientific” anything that
scientists say or believe. The debates over stem cell research,
for example, have often been described, both within the scientific
community and in the mass media, as clashes between science and
religion. It is true that many, but by no means all, of the most vocal
defenders of embryonic stem cell research were scientists, and that
many, but by no means all, of its most vocal opponents were religious.
But in fact, there was little science being disputed: the central
controversy was between two opposing views on a particular ethical
dilemma, neither of which was inherently more scientific than the other.
If we confine our definition of the scientific to the falsifiable, we
clearly will not conclude that a particular ethical view is dictated by
science just because it is the view of a substantial number of
scientists. The same logic applies to the judgments of scientists on
political, aesthetic, or other nonscientific issues. If a poll shows
that a large majority of scientists prefers neutral colors in bathrooms,
for example, it does not follow that this preference is “scientific.”
Popper’s falsifiability criterion and similar essentialist
definitions of science highlight the distinct but vital roles of both
science and philosophy. The definitions show the necessary role of
philosophy in undergirding and justifying science — protecting it from
its potential for excess and self-devolution by, among other things,
proposing clear distinctions between legitimate scientific theories and
pseudoscientific theories that masquerade as science.
In contrast to Popper, many thinkers have advanced understandings of
philosophy and science that blur such distinctions, resulting in an
inflated role for science and an ancillary one for philosophy. In part,
philosophers have no one but themselves to blame for the low state to
which their discipline has fallen — thanks especially to the logical
positivist and analytic strain that has been dominant for about a
century in the English-speaking world. For example, the influential
twentieth-century American philosopher W. V. O. Quine spoke modestly of a
“philosophy continuous with science” and vowed to eschew philosophy’s
traditional concern with metaphysical questions that might claim to sit
in judgment on the natural sciences. Science, Quine and many of his
contemporaries seemed to say, is where the real action is, while
philosophers ought to celebrate science from the sidelines.
This attitude has been articulated in the other main group of
theories of science, which rivals the essentialist understandings —
namely, the “institutional” theories, which identify science with the
social institution of science and its practitioners. The institutional
approach may be useful to historians of science, as it allows them to
accept the various definitions of fields used by the scientists they
study. But some philosophers go so far as to use “institutional factors”
as the criteria of good science. Ladyman, Ross, and Spurrett,
for instance, say that they “demarcate good science — around lines which
are inevitably fuzzy near the boundary — by reference to institutional
factors, not to directly epistemological ones.” By this criterion, we
would differentiate good science from bad science simply by asking which
proposals agencies like the National Science Foundation deem worthy of
funding, or which papers peer-review committees deem worthy of
publication.
The problems with this definition of science are myriad. First, it is
essentially circular: science simply is what scientists do. Second, such
confidence in funding and peer-review panels will seem misplaced to
anyone who has served on them and witnessed the extent to which
preconceived notions, personal vendettas, and the like can torpedo even
the best proposals. Third, any attempt to define science simply by its
institutions must reckon with the ample history of scientific
institutions that have proved notoriously unreliable.
Consider the decades
during which Soviet biology was dominated by the ideologically
motivated theories of the agronomist Trofim Lysenko, who rejected
Mendelian genetics as inconsistent with Marxism and insisted that
acquired characteristics could be inherited. An observer who
distinguishes good science from bad science “by reference to
institutional factors” alone would have difficulty seeing the difference
between the unproductive and corrupt genetics in the Soviet Union and
the fruitful research of Watson and Crick in 1950s Cambridge. Can we be
certain that there are not sub-disciplines of science in which even
today most scientists accept without question theories that will in the
future be shown to be as preposterous as Lysenkoism? Many working
scientists can surely think of at least one candidate — that is, a
theory widely accepted in their field that is almost certainly false,
even preposterous.
Confronted with such examples, defenders of the institutional
approach will often point to the supposedly self-correcting nature of
science. Ladyman, Ross, and Spurrett assert that “although scientific
progress is far from smooth and linear, it never simply oscillates or
goes backwards. Every scientific development influences future science,
and it never repeats itself.” Alas, in the thirty or so years I
have been watching, I have observed quite a few scientific sub-fields
(such as behavioral ecology) oscillating happily and showing every sign
of continuing to do so for the foreseeable future. The history of
science provides examples of the eventual discarding of erroneous
theories. But we should not be overly confident that such
self-correction will inevitably occur, nor that the institutional
mechanisms of science will be so robust as to preclude the occurrence of
long dark ages in which false theories hold sway.
The fundamental problem raised by the identification of “good
science” with “institutional science” is that it assumes the
practitioners of science to be inherently exempt, at least in the long
term, from the corrupting influences that affect all other human
practices and institutions. Ladyman, Ross, and Spurrett explicitly state
that most human institutions, including “governments, political
parties, churches, firms, NGOs, ethnic associations, families ... are
hardly epistemically reliable at all.” However, “our grounding
assumption is that the specific institutional processes of science have
inductively established peculiar epistemic reliability.” This assumption
is at best naïve and at worst dangerous. If any human institution is
held to be exempt from the petty, self-serving, and corrupting
motivations that plague us all, the result will almost inevitably be the
creation of a priestly caste demanding adulation and required to answer
to no one but itself.
It is something approaching this adulation that seems to underlie the
abdication of the philosophers and the rise of the scientists as the
authorities of our age on all intellectual questions. Reading the work
of Quine, Rudolf Carnap, and other philosophers of the positivist
tradition, as well as their more recent successors, one is struck by the
aura of hero-worship accorded to science and scientists. In spite of
their idealization of science, the philosophers of this school show
surprisingly little interest in science itself — that is, in the results
of scientific inquiry and their potential philosophical implications.
As a biologist, I must admit to finding Quine’s constant invocation of
“nerve-endings” as an all-purpose explanation of human behavior to be
embarrassingly simplistic. Especially given Quine’s intellectual
commitment to behaviorism, it is surprising yet characteristic that he
had little apparent interest in the actual mechanisms by which the
nervous system functions.
Ladyman, Ross, and Spurrett may be right to assume that science
possesses a “peculiar epistemic reliability” that is lacking in other
forms of inquiry. But they have taken the strange step of identifying
that reliability with the institutions and practitioners of science,
rather than with any particular rational, empirical, or methodological
criterion that scientists are bound (but often fail) to uphold. Thus a
(largely justifiable) admiration for the work of scientists has led to a
peculiar, unjustified role for scientists themselves — so that,
increasingly, what is believed by scientists and the public to be
“scientific” is simply any claim that is upheld by many scientists, or
that is based on language and ideas that sound sufficiently similar to
scientific theories.