

Network Past Issues

Issue: January - March 2016
Issue Title: Falsifiability as a criterion of demarcation
Author: Suddhachit Mitra

Falsifiability as a criterion of demarcation
The Austrian-British philosopher Karl Popper was well known for his rejection of classical inductivist views in favour of empirical falsification.

Sir Karl Raimund Popper (1902-1994), a critic of conventionalism and relativism in science and a self-proclaimed “critical rationalist”, is a seminal figure in twentieth-century philosophy of science. He was born in Vienna, regarded by many at the time as the cultural capital of the world. His mother was instrumental in instilling in him a love of music, which was pivotal in shaping his thought, including his ideas on the distinction between subjectivity and objectivity. He attended the University of Vienna, where he was exposed to the psychoanalytic theories propounded by Freud and Adler as well as to Marxist theory. He also had the opportunity to attend a lecture on the theory of relativity by Einstein in Vienna, and was greatly impressed by the ‘critical spirit’ of Einstein’s theory. The complete absence of that spirit in Marx and Freud, on the other hand, rendered their theories impervious to disconfirmation, according to Popper. This contrast, he believed, was of crucial significance.

A key difference between the two theories (Freud’s psychoanalytic theory and Einstein’s theory of relativity), Popper conjectured, was the inherent ‘risk’ in Einstein’s theory that could lead to its potential falsification, whereas the psychoanalytic theory was not falsifiable even in principle. The element of risk in Einstein’s theory came from the fact that consequences that were highly improbable, or even seemingly impossible, in the light of the Newtonian paradigm (such as light bending towards massive bodies, a prediction confirmed by Eddington in 1919) would potentially follow from the theory. If they did not, the theory would be falsified. Similarly, Popper was critical of the Marxian account of history while admitting that it had started out as a genuinely predictive theory; when it was falsified by the facts, the theory was reworked through the addition of ad hoc hypotheses to accommodate those facts. Hence Marxism, once a scientific theory, was reduced to a “pseudo-scientific dogma”. Popper concluded that “theories” such as psychoanalytic theory and revised Marxism had more in common with primitive myths than with modern science. Such experiences propelled Popper to use falsifiability as a benchmark for demarcating science from non-science. A theory, he said, would be deemed scientific if it were incompatible with at least some possible empirical observations. On the other hand, a theory compatible with all possible observations, either because it has been modified on an ad hoc basis to accommodate them (Marxism) or because it has been constructed to be compatible with all possible observations (psychoanalytic theories), is unscientific. A theory that is unscientific, being unfalsifiable, may nevertheless become scientific with the development of technology or through further refinement of the theory.

Popper wrote three major books between 1935 and 1957. The first, in German, was Logik der Forschung (1935), translated into English as The Logic of Scientific Discovery in 1959. This book provides an overview of his ideas on science and its philosophy. His other books include The Poverty of Historicism (1957), which criticizes the notion of historical laws, and The Open Society and its Enemies (1945), a treatise on the philosophy of society, history and politics.

Demarcation and Falsifiability
According to Popper, the key issue in the philosophy of science is that of demarcation, that is, distinguishing science from non-science, such as metaphysics, Freudian psychoanalysis and Adler’s individual psychology, a psychological method formulated by the Austrian psychiatrist Alfred Adler. The ‘individual’ in individual psychology refers to the patient as an “indivisible whole”; the approach nevertheless takes societal factors into consideration in determining a person’s psychology.

Popper accepts Hume’s critique of induction as valid, arguing that, generally speaking, induction is not what scientists actually use. He further argues that all observation is theory-driven and selective, and debunks the Baconian-Newtonian paradigm of “pure observation” as the initial step in theory formation. In other words, there is no observation without theory. Thus he challenges the hitherto dominant view that an inductive methodology is what distinguishes science from non-science. Popper rejects induction as a valid method of scientific investigation and substitutes falsification for it. A theory, Popper says, may be corroborated as scientific only if it survives genuinely ‘risky’ predictions that have the potential to turn out false. Logically speaking, the test of a scientific theory is an attempt to falsify it, with a single counter-instance rendering the whole theory false. As is clear, Popper’s idea of demarcation follows from the logical asymmetry between verification and falsification: as Hume argued, it is not possible to conclusively verify a universal proposition by induction, whereas one counter-example falsifies the universal law.

A true scientific theory, thus, according to Popper, is prohibitive, since it forbids certain events. Testing and falsification of the theory are therefore possible, but not its logical verification. Hence a theory, even after being subjected to very rigorous testing for years, should not be assumed to be verified. One can say, rather, that it has been highly corroborated and is a fit candidate for the best available theory until it is (if and when) falsified.

However, Popper distinguishes between the logic of falsifiability and the relevant applied methodology. For instance, if a single ferrous metal (such as iron) can be shown to be unaffected by magnetic fields, then it cannot be said that all ferrous metals are affected by magnetic fields. This is the Popperian paradigm: a scientific law is falsifiable but not conclusively verifiable. As can be seen, it goes against the grain of inductive thought. However, experimental or methodological errors introduce a dimension of uncertainty: before accepting a counter-instance, one needs to ask whether an experimental error affected the outcome of the experiment.

Popper admits that in practice a single counter-example is not sufficient for falsifying a theory; that is why scientific theories are in many cases retained in spite of anomalous evidence. A recent example follows. In September 2011, the OPERA experiment, a collaborative effort between CERN, Geneva and LNGS, Italy, for detecting neutrinos (a subatomic particle), reported that neutrinos travel faster than light. The scientific world, however, retained faith in Einstein’s theory of relativity, which specifies an upper limit to the velocity of a particle. The team later admitted two errors in their experimental set-up.

Another interesting point that Popper makes is that there is no ‘unique way’ or unique methodology, such as induction, that paves the way to a scientific theory. The exact manner in which a scientist comes to formulate a scientific theory is of no consequence in the philosophy of science. Einstein said something similar:

“There is no logical path leading to the highly universal laws of science. They can only be reached by intuition, based upon something like an intellectual love of the objects of experience.”

Based on the criterion of demarcation through falsifiability, Popper classified, inter alia, physics, chemistry and non-introspective psychology as sciences, psychoanalysis as pre-science, and astrology and phrenology as pseudosciences.

A Challenge to Falsifiability
Gillies describes a challenge to falsifiability as the demarcation criterion, known as the Duhem-Quine thesis. The following presents the gist of the thesis.

Consensus runs high regarding Newton’s first law of motion being a scientific law. It turns out, however, that it is not falsifiable. The law states that a body continues in its state of rest, or of uniform motion in a straight line, unless acted upon by an external impressed force. Suppose a body is found neither at rest nor in uniform motion in a straight line and, seemingly, is not acted upon by an external force. This observation apparently refutes Newton’s law, but in reality this is not necessarily so. Newton himself observed the elliptical orbits of the planets and concluded that they were acted on by gravitational forces from other celestial bodies.

The issue at hand here is discussed by Duhem (1954) as cited in the Stanford Encyclopedia of Philosophy: “…the physicist can never subject an isolated hypothesis to experimental test, but only a whole group of hypotheses; when the experiment is in disagreement with his predictions, what he learns is that at least one of the hypotheses constituting this group is unacceptable and ought to be modified; but the experiment does not designate which one should be changed.”

Newton’s first law cannot be tested on its own as a standalone hypothesis but only as part of a group of hypotheses. In order to yield meaningful predictions, the law must be used in conjunction with:
• further assumptions, such as Newton’s second and third laws and the law of universal gravitation; and
• auxiliary assumptions, chiefly that the mass of the sun is much greater than that of the planets.
Since the first law is used in conjunction with so many assumptions, the law itself cannot be refuted when its predictions fail to be realized, for one of the further or auxiliary assumptions could equally be at fault. Hence, by the Duhem-Quine thesis, Newton’s first law is unfalsifiable. Popper answered this challenge with a three-level model of types of statements, divided on the basis of their falsifiability and confirmability, which Gillies extended.

Gillies points out an interesting area of convergence between Kuhn and Popper. Thomas Kuhn, it may be mentioned, was one of the greatest philosophers of science. He propounded the idea of “paradigm shifts”, or periodic revolutions in which the nature of scientific inquiry in a particular discipline undergoes a drastic and sudden transformation.

Level-two theories, such as Newton’s first law, cannot be directly falsified through observation. Thomas Kuhn was of the view that the Newtonian paradigm was replaced by the Einsteinian paradigm not through a single observation but through a process of scientific revolution. This is expected to be the case for level-two theories, which are not falsifiable. The Popperian schema of falsification, however, does apply to level-one theories.

Further, a level-one hypothesis such as Kepler’s first law (which states that planets orbit a star in ellipses with the star at one focus of the ellipse) may be tested by observing the positions of a planet and checking whether these points lie on an ellipse with defined parameters. This may be called direct confirmation. An approximate form of Kepler’s law can be deduced from Newton’s laws together with the few additional assumptions mentioned earlier. Newton’s theory is confirmed by observations of planetary motion, pendulums and projectiles, among other things. Confirmation of the Newtonian theory, together with the fact that an approximate form of Kepler’s first law follows from it, amounts to an indirect confirmation of Kepler’s law.
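The direct-confirmation test described above can be sketched computationally. The following is a minimal illustration, not a piece of astronomical practice: it assumes hypothetical polar observations (distance and angle measured from one focus) and checks them against the standard ellipse equation r = a(1 − e²)/(1 + e·cos θ) for assumed orbital parameters a (semi-major axis) and e (eccentricity).

```python
import math

def on_ellipse(points, a, e, tol=1e-3):
    """Return True if every polar observation (r, theta), measured from
    one focus, lies on the ellipse r = a(1 - e^2) / (1 + e*cos(theta))
    to within a tolerance proportional to the semi-major axis a."""
    p = a * (1 - e ** 2)  # semi-latus rectum of the ellipse
    return all(abs(r - p / (1 + e * math.cos(th))) <= tol * a
               for r, th in points)

# Hypothetical observations generated from an orbit with a = 1.0, e = 0.2
a, e = 1.0, 0.2
obs = [(a * (1 - e ** 2) / (1 + e * math.cos(th)), th)
       for th in (0.0, 1.0, 2.5, 4.0)]

print(on_ellipse(obs, a, e))    # the data fit the hypothesized ellipse
print(on_ellipse(obs, a, 0.3))  # a wrong eccentricity is rejected
```

A single observation falling off the hypothesized ellipse makes the check fail, mirroring Popper's point that one counter-instance logically falsifies the universal claim, while the tolerance parameter reflects the practical caveat about experimental error discussed earlier.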

Conclusion
In the social sciences, where research may be founded on value-laden assumptions, Popper’s falsifiability remains a very strong criterion. Social scientists willing to subject their research to more demanding tests can show the way to sound research practice. However, as a student of the philosophy of science, one would contend that Kuhn’s idea of probabilistic verification is, in many cases, a superior philosophical guide. Normal science, according to Kuhn, advances by the probabilistic verification of competing theories, wherein the better theory becomes the most viable one through a process akin to natural selection. There is always an imperfect fit between data and theory, and where the inconsistency is severe, testing the theory through falsification requires specifying a degree of falsification, or level of improbability, which in effect amounts to probabilistic verification. Kuhn and Popper, though in many ways at odds with each other, have a semblance of unanimity in this regard.

By: Suddhachit Mitra
Email: f092@irma.ac.in
(The author is an FPRM participant)