A Very Short Introduction to the Philosophy of Science Book Summary – Samir Okasha



What you will learn from reading Short Intros – Philosophy of Science

– What is Science?

– Why probability is problematic.

– The difference between realism and idealism and why it matters.

A Very Short Introduction to The Philosophy of Science Book Summary

This introduction to the philosophy of science is the perfect start for anyone interested in Science in general. Of all the Very Short Introduction books I’ve read so far, this one is my favourite. It’s a great read and will leave you with many questions about things you believed to be true.


What is Science?

This has a multi-faceted answer:

Science is an attempt to understand, explain, and predict the world – but it can’t only be this as that is also what religions do.

Is Science just a particular method? Yet not all sciences share the same methods. Some sciences are not experimental and have to be content with careful observation (think astronomy).

Is Science just the construction of theories? Experiments and observations produce results, and those results are used to construct theories.

As you can see, Science is all these things and more. It’s not just one or the other: Science is a nuanced concept.


Science and Falsifiability:

If a theory can’t make definite predictions that are capable of being tested against experience, then it is unfalsifiable. Popper believed that theories which failed to satisfy this condition were pseudo-science.

However, in general scientists do not abandon their theories whenever they conflict with observational data, but usually search for a way to eliminate the conflict. Obviously, if a theory persistently conflicts with more and more data, and there’s no plausible way of reconciling this, then it should be rejected. But little progress would be made if scientists simply abandoned theories at the first sign of trouble.

Could it be that science has a loose cluster of shared features, most of which are possessed by any given science, but none of which is strictly necessary? This would be similar to Ludwig Wittgenstein’s argument that there is no fixed set of features that define a game; rather, there is a loose cluster of features possessed by most games. If this is the case, then Popper’s claim that all science has to be falsifiable can’t be the whole definition of science.


Deductive and Inductive Reasoning:

The word ‘proof’ should strictly only be used when we are dealing with deductive inferences. In this strict sense of the word, scientific hypotheses can rarely, if ever, be proved true by data.

Science frequently uses a type of inductive reasoning called inference to the best explanation (IBE).

Darwin’s theory is a good example. It can explain a diverse range of facts about the living world, not just anatomical similarities between species. Each of these facts could in principle be explained in different ways, but the theory of evolution explains all the facts in one go, which makes it the best explanation of the data.


The problem with probability:

Probability has both an objective and a subjective guise. In its objective guise, probability refers to how often things in the world happen, or tend to happen. For example, if you are told that the probability of an Englishwoman living to age 90 is one in ten, you would understand this as meaning that one-tenth of all Englishwomen attain that age. Similarly, a natural understanding of the statement ‘the probability that the coin will land heads is a half’ is that in a long sequence of coin flips, the proportion of heads would be very close to a half. Understood this way, statements about probability are objectively true or false, independently of what anyone believes.
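The frequency (objective) reading can be illustrated with a quick simulation. This is a hypothetical sketch, not from the book; the function name and fixed seed are my own choices:

```python
import random

def proportion_heads(flips: int, seed: int = 0) -> float:
    """Simulate fair coin flips and return the proportion that land heads."""
    rng = random.Random(seed)  # fixed seed for a reproducible illustration
    heads = sum(rng.random() < 0.5 for _ in range(flips))
    return heads / flips

# On the frequency interpretation, 'the probability of heads is a half'
# means that in a long run of flips the proportion of heads approaches 0.5.
print(proportion_heads(100))      # a short run can stray from 0.5
print(proportion_heads(100_000))  # a long run settles very close to 0.5
```

The point of the simulation is that the objective probability statement is made true or false by the long-run frequency itself, regardless of anyone’s beliefs.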

In its subjective guise, probability is a measure of rational degree of belief. Suppose a scientist tells you that the probability of finding life on Mars is extremely low. Does this mean that life is found on only a small proportion of all the celestial bodies? Surely not. For one thing, no one knows how many celestial bodies there are, nor how many of them contain life. So a different notion of probability is at work here.


Remember: Science can be hijacked:

In many countries, scientists are viewed much as religious leaders used to be: possessors of specialised knowledge that is inaccessible to the laity.


Hempel’s covering law model of explanation

The essence of explanation is to show that the phenomenon to be explained is ‘covered’ by some general law of nature.

The basic idea behind the covering law model is straightforward. Hempel noted that scientific explanations are usually given in response to what he called ‘explanation-seeking why-questions’.

These are questions such as ‘Why is the earth not perfectly spherical?’ or ‘Why do women live longer than men?’ They are demands for explanation. To give a scientific explanation is thus to provide a satisfactory answer to an explanation-seeking why-question.

Hempel’s answer to the problem was threefold. 

First, the premises should entail the conclusion, i.e. the argument should be a deductive one. 

Secondly, the premises should all be true. 

Thirdly, the premises should consist of at least one general law. General laws are things such as ‘all metals conduct electricity’ and ‘a body’s acceleration varies inversely with its mass’.

Schematically, Hempel’s model of explanation can be written as follows:

General laws

Particular facts

∴ Phenomenon to be explained

For example: all metals conduct electricity (general law); this rod is a metal (particular fact); therefore this rod conducts electricity (phenomenon to be explained).


Realism Vs Idealism:

There is an ancient debate in philosophy between two opposing schools of thought called realism and idealism. Realism holds that the physical world exists independently of human thought and perception. Idealism denies this: it claims that the physical world is in some way dependent on the conscious activity of humans. To most people, realism seems more plausible than idealism. For one thing, realism fits well with the common sense view that the facts about the world are ‘out there’ waiting to be discovered.

There is a contemporary debate that is specifically about science and is in some ways analogous to this traditional issue: the debate between a position known as scientific realism and its converse, known as anti-realism or instrumentalism.

Scientific Realism:

The basic idea of scientific realism is straightforward. Realists hold that science aims to provide a true description of the world, and that it often succeeds. So a good scientific theory, according to realists, is one that truly describes the way the world is. This may sound like a fairly innocuous doctrine. For surely no one thinks that science is aiming to produce a false description of the world?

But that is not what anti-realists think. Rather, anti-realists hold that the aim of science is to find theories that are empirically adequate, i.e. which correctly predict the results of experiment and observation.

Anti-realism / instrumentalism:

Anti-realists argue that empirical adequacy, not truth, is the real aim of scientific theorising. Physicists may talk about unobservable entities, but they are merely convenient fictions introduced in order to help predict observable phenomena.

The Arguments:

We can see why anti-realism is sometimes called ‘instrumentalism’: it regards scientific theories as instruments for helping us predict observable phenomena, rather than as attempts to describe the underlying nature of reality.

Realists do not regard this argument as decisive. The role of idealised models in scientific theorising does not compel us to reject outright the idea that science aims at truth. Instead we need to accept that approximate truth, rather than exact truth, is the goal of such models, realists argue.


No Miracles:

The ‘no miracles’ argument holds that it would be a miracle if our theories were empirically successful and yet radically false, so the success of science is evidence that its theories are at least approximately true. One anti-realist response to the no-miracles argument appeals to the history of science. Historically, there are many examples of scientific theories which were empirically successful in their day but later turned out to be false. In a well-known article from the 1980s, the American philosopher of science Larry Laudan listed more than thirty such theories, drawn from a range of different scientific disciplines and eras.

The phlogiston theory of combustion is one example. This theory, which was widely accepted until the end of the 18th century, held that when any object burns it releases a substance called ‘phlogiston’ into the atmosphere. Modern chemistry teaches us that this is false: there is no such substance as phlogiston. Rather, burning occurs when things react with oxygen in the air. But despite the non-existence of phlogiston, the phlogiston theory was empirically quite successful: it fitted the data available at the time reasonably well.

The first modification is to claim that a theory’s empirical success is evidence that it is approximately true, rather than precisely true. This weaker claim is less vulnerable to counterexamples from the history of science. It is also more modest: it allows the realist to admit that today’s scientific theories may not be correct down to every last detail, while still holding that they are broadly on the right lines. And as we have seen, the realist needs the notion of approximate truth anyway, to account for idealised models.

The second modification of the no-miracles argument involves refining the notion of empirical success. Some realists hold that empirical success is not just a matter of fitting the known data, but also of predicting new observations that were previously unknown.


Kuhn’s Philosophy of Science

What exactly does normal science involve? According to Kuhn, it is primarily a matter of puzzle-solving. However successful a paradigm is, it will always encounter certain problems: phenomena which it cannot easily accommodate, or mismatches between the theory’s predictions and the experimental facts. The job of the normal scientist is to try to eliminate these minor puzzles while making as few changes as possible to the paradigm. So normal science is a conservative activity: its practitioners are not trying to make any earth-shattering discoveries, but rather just to develop and extend the existing paradigm.

When anomalies are few they tend to just get ignored. But as anomalies accumulate, a burgeoning sense of crisis envelops the scientific community.

Confidence in the existing paradigm breaks down, and the process of normal science grinds to a halt. This marks the beginning of a period of ‘revolutionary science’, as Kuhn calls it. During such periods, fundamental scientific ideas are up for grabs. A variety of alternatives to the old paradigm are proposed, and eventually a new paradigm becomes established. A generation is usually required before all members of the scientific community are won over to the new paradigm, an event which marks the completion of a scientific revolution. The essence of a scientific revolution is thus the shift from an old paradigm to a new one.


Scientific Change and Objective Truth:

Kuhn also made some controversial claims about the overall direction of scientific change. According to a widely held view, science progresses towards the truth in a linear fashion, as older incorrect ideas get replaced by newer, correct ones. Later theories are thus objectively better than earlier ones, so scientific knowledge accumulates over time. This linear, cumulative conception of science is popular among laypeople and scientists alike, but Kuhn argued that it is both historically inaccurate and philosophically naive.

Moreover, Kuhn questioned whether the concept of objective truth actually makes sense at all. The idea that there is a fixed set of facts about the world, independent of any particular paradigm, was of dubious coherence, he believed. Kuhn suggested a radical alternative: the facts about the world are paradigm-relative, and thus change when paradigms change. If this suggestion is right, then it makes no sense to ask whether a given theory corresponds to the facts ‘as they really are’, nor therefore to ask whether it is objectively true.


The theory-ladenness of data:

The theory-ladenness of data had two important consequences for Kuhn. 

First, it meant that a dispute between competing paradigms could not be resolved by simply appealing to ‘the data’ or ‘the facts’, for what a scientist counts as data, or facts, will depend on which paradigm they accept. Perfectly objective choice between two paradigms is therefore impossible: there is no neutral vantage-point from which to assess the claims of each.

Secondly, the very idea of objective truth is called into question. To be objectively true, a theory must correspond to the facts, but the idea of such a correspondence makes little sense if the facts themselves are infected by our theories. This is why Kuhn was led to the radical view that truth itself is relative to a paradigm.

A scientist’s experimental and observational reports are often couched in highly theoretical language. For example, a scientist might report the outcome of an experiment by saying ‘an electric current is flowing through the copper rod’. But this data report is obviously laden with a large amount of theory. It would not be accepted by a scientist who did not hold standard beliefs about electric currents, so it is clearly not theory-neutral.


Choosing the right theory:

Kuhn’s insistence that there is no algorithm for theory-choice in science is probably correct. 

Certainly no one has ever succeeded in producing such an algorithm. Lots of philosophers and scientists have made plausible suggestions about what to look for in theories: simplicity, broadness of scope, close fit with the data, and so on. But these suggestions fall short of providing a true algorithm, as Kuhn knew well.

For one thing, there may be trade-offs: theory A may be simpler than theory B, but B may fit the data more closely. So an element of subjective judgement, or scientific common sense, will often be needed to decide between competing theories.


The Species Problem:

The situation was eloquently described by the English biologist John Maynard Smith, who wrote: ‘any attempt to divide all living organisms, past and present, into sharply defined groups between which no intermediates exist, is foredoomed to failure. The taxonomist is faced with a contradiction between the practical necessity and the theoretical impossibility of his task.’

So in practice biologists continue to treat species as if they were sharply defined kinds, in the knowledge that this is only an approximation to reality.

Our focus here will be on the first stage of the taxonomist’s task, namely how to assign organisms to species. This is less straightforward than it may seem, primarily because biologists do not agree on what a species actually is, nor therefore on what criteria should be used for identifying species. Indeed, competing definitions of a biological species, or ‘species concepts’ as these definitions are known, abound in modern biology. This lack of consensus is sometimes called ‘the species problem’.

Evolution also teaches us that variation among organisms is likely to be pervasive. For variation is the engine that drives natural selection: if the organisms in a species do not vary then natural selection cannot operate. The significance of this is that it undermines the common sense idea that the members of a biological species must all possess some essential feature, e.g. some genetic property, which sets them apart from non-members.

This idea is part of the ‘natural kind’ view of species, and is something that many non-biologists appear to believe. Empirically, however, there is extensive genetic variation among the individuals within a typical species, which sometimes exceeds the genetic variation between closely related species. This is not to deny that biologists can often tell what species an organism belongs to by sequencing its DNA. However, this is not always possible, and it does not show that membership of a species is determined by a fixed ‘genetic essence’.