
David Hume’s Problem of Induction (Simply Explained)


The problem of induction, as formulated by David Hume, addresses one of the most significant questions in epistemology: what can science truly know?

If you’ve ever delved into empirical research methods, you’ve likely encountered the terms induction and deduction.

While a Grounded Theory approach follows an inductive logic, an experiment relies on deductive logic. Is one better than the other? How are both connected, and why are scientific results never definitive?

The answers to all these questions are tied to Hume’s problem of induction. In this article, you’ll learn everything you need to know to hold your own in a discussion with a ninth-semester philosophy student.

Additionally, this knowledge will help you better understand and critique scientific methods. It’s definitely worth sticking around.

What is Inductive Reasoning?

In science, inductive reasoning involves deriving a general theory from the observation of a specific phenomenon.

For instance, consider an interview study where 30 interviews are conducted. The data collected is analyzed using Grounded Theory, leading to a new theory.

Induction isn’t limited to qualitative research. Any type of research that draws conclusions about a theory or natural law from observations employs induction.

This could be a statistical evaluation, where conclusions about the entire population are drawn from a sample, or it could be a physicist making repeated measurements from which she derives a natural law.
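To make that inductive step concrete, here is a minimal sketch in Python. The survey numbers are invented for illustration; the point is that the jump from an observed sample proportion to a claim about the whole population is exactly the inductive inference described above.

```python
import math

# Hypothetical survey: 200 randomly sampled voters, 124 support a proposal.
sample_size = 200
supporters = 124

# Sample proportion: what was actually observed.
p_hat = supporters / sample_size

# 95% confidence interval (normal approximation) for the population proportion.
# The step from "p_hat in the sample" to "p in the population" is the inductive leap.
standard_error = math.sqrt(p_hat * (1 - p_hat) / sample_size)
margin = 1.96 * standard_error

print(f"Observed in the sample: {p_hat:.1%}")
print(f"Inferred for the population: {p_hat - margin:.1%} to {p_hat + margin:.1%}")
```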

What is Hume’s Problem of Induction?

David Hume’s problem of induction is a fundamental question in epistemology that deals with whether and under what conditions inductive inferences can be considered reliable or rational.

The Scottish philosopher first raised this question in the 18th century in his work “A Treatise of Human Nature.” Although Hume initially discussed the problem only in the context of empirical science, it remains relevant to all sciences that recognize induction as a valid proof method.

And there are many.

Having a bit of knowledge about the problem of induction is certainly beneficial. It continues to be referred to as “the problem of induction” because it has yet to be solved. For over two centuries, philosophers of science have been grappling with it, including the famous Karl Popper. But more on that later.

An Example of an Inductive Inference

To better understand the problem of induction, let’s look at an example of an inductive inference.

An ornithologist conducts field observations in nature. During his research expedition, he observes 100 swans, all of which are white. That’s 100%.

Assumption 1: 100% of the observed swans are white.

From this, he concludes that all swans are white.

Conclusion 1: All swans are white.

If he reasons in this way, it doesn’t matter how many more swans he observes. He could even observe 100,000 swans. The conclusion remains what logicians call non-compelling: the 100,001st swan could be black, and his conclusion would be false.
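The same reasoning can be written down as a toy script (the list of observations is, of course, invented). It shows why the conclusion is non-compelling: no number of white swans proves the general rule, while a single black swan overturns it.

```python
# Toy model of enumerative induction: generalizing from observed swans.
observed_swans = ["white"] * 100  # 100 observations, all of them white

def inductive_conclusion(observations):
    """Generalize: if every observed swan is white, conclude 'all swans are white'."""
    if all(color == "white" for color in observations):
        return "All swans are white."
    return "Not all swans are white."

print(inductive_conclusion(observed_swans))  # -> "All swans are white."

# The very next observation can overturn the conclusion at any time.
observed_swans.append("black")
print(inductive_conclusion(observed_swans))  # -> "Not all swans are white."
```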


The Uniformity of Nature

For this conclusion to become logically valid and allow the ornithologist’s colleagues to rest easy, he must add an additional assumption.

Assumption 1: 100% of the observed swans are white.

Assumption 2: All swans are similar to those already observed.

Conclusion 1: All swans are white.

This second assumption is also known as the principle of the uniformity of nature. It means that all future observations will be similar to past observations.

Or, put simply: In the future, everything will always occur as it has in the past.

So far so good.

If the principle of the uniformity of nature is true, then there is no problem of induction. The inductive conclusion would be logically valid.

But then David Hume comes into play.

He asserts: there is no logical basis for the principle of the uniformity of nature. It cannot be justified.

Hume himself and those who followed have tried to logically justify this principle, but have failed. This is partly because these attempts at justification themselves require inductive reasoning, which is subject to the problem of induction.

Hume writes:

“It is therefore impossible that any arguments from experience can prove this resemblance of the past to the future; since all such arguments are founded on the supposition of that resemblance. Let the course of things be ever so regular hitherto, that alone, by no means, assures us of the continuance of such regularity.”

If you’ve ever invested money in the stock market, then you know what he means.

Is Deduction the Solution to the Problem of Induction?

Two hundred years after Hume, another big player in the field of epistemology enters the scene: Karl Popper.

And he believes he has found the solution to the problem of induction.

Actually, he doesn’t solve it at all; instead, he suggests simply setting it aside. He fully agreed with David Hume that general laws cannot be derived through induction.

What one can logically do, however, is falsify general laws.

Instead of generating a theory based on an inductive conclusion, one could simply concoct a theory (form a hypothesis), and then try to falsify it.

What remains are only the theories that have not been falsified (yet).

Here, we are no longer in the realm of induction but in that of deductive reasoning (from general to specific).
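Here is a minimal sketch of that deductive move, again on invented swan data. From the general hypothesis “all swans are white” we deduce that no observation may contain a non-white swan; a single counterexample refutes the hypothesis with deductive certainty, while the absence of counterexamples only yields “not falsified yet.”

```python
# Falsificationist test of a general hypothesis, sketched on toy data.

def test_hypothesis(observations):
    """Deduce a prediction from 'all swans are white' and check it against the data."""
    counterexamples = [swan for swan in observations if swan != "white"]
    if counterexamples:
        return "Hypothesis falsified."           # deductively certain
    return "Hypothesis not falsified (yet)."     # never 'proven true'

print(test_hypothesis(["white"] * 100))              # -> not falsified (yet)
print(test_hypothesis(["white"] * 100 + ["black"]))  # -> falsified
```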

For the philosophy of science, Popper’s new approach was a milestone. However, it was not the hoped-for solution to the problem of induction.


Why We Should Sometimes Trust Induction

Many philosophers later showed that even Popper’s approach to falsification relies partly on inductive reasoning.

While Popper rejected all forms of induction as irrational early in his career, he softened his stance towards the end of it.

He acknowledged that under certain circumstances, there might be a pragmatic justification for induction. Consider the context of medicine, for example.

If we were to completely reject induction, both doctors and patients would face a significant problem.

After diagnosing a disease, we choose a medication that has led to healing in thousands of past cases. In doing so, we hope that the future will behave like the past, which is exactly an inductive inference.
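As a sketch of this pragmatic use of induction (the case numbers below are made up), choosing a treatment amounts to projecting past cure rates onto the next patient:

```python
# Hypothetical treatment records: how often each medication led to healing in the past.
past_cases = {
    "medication_A": {"treated": 5000, "cured": 4600},
    "medication_B": {"treated": 4800, "cured": 3100},
    "untested_drug": {"treated": 0, "cured": 0},
}

def cure_rate(record):
    """Past success rate; zero if the medication has never been tried."""
    return record["cured"] / record["treated"] if record["treated"] else 0.0

# The inductive bet: the medication that worked best in the past will work best now.
best = max(past_cases, key=lambda name: cure_rate(past_cases[name]))
print(f"Chosen treatment: {best} (past cure rate {cure_rate(past_cases[best]):.0%})")
```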

If we rejected induction as Popper originally intended, we might just as well rely on a medication that has never been tested.

Therefore, there seems to be a difference between pragmatic and purely theoretical induction. Due to these complications, the discourse in the philosophy of science largely reached a consensus that Popper could not solve the problem of induction either.


What This Means for Today’s Science

The problem of induction remains unsolved to this day. To conclude from this that science cannot know anything with 100% certainty is theoretically correct, but not practically helpful.

To better interpret the results of scientific studies, scientists must make a series of so-called judgment calls.

These are the additional assumptions we must make for science to be pragmatically implementable. That is, everyone must define for themselves what they are willing to assume, even if there is no formal logical basis for it.

As a scientist, one must therefore accept a certain risk of being wrong. How high that risk may be is something everyone has to decide for themselves.

Lee and Baskerville (2012) define four such judgment calls.

The first one you already know:

#1 The future will behave like the past.

The risk here is that a theory or result may no longer be true once it is applied to a new context.

#2 The conditions in the new context are similar enough to apply the theory or result there.

Imagine you’ve determined a natural law on Earth. If you apply this law to understand a phenomenon on Mars, you must assume that the conditions there are similar enough to those on Earth.

This second judgment call must also be made on a smaller scale. If you want to apply the results of a management case study from Amazon to your mid-sized company, you must assume that the conditions are similar enough to do so.

#3 The theory or natural law covers all relevant variables.

When you want to apply a theory, you must assume that it is complete and hasn’t overlooked any variable.

#4 The theory is true.

This judgment call would probably not sit well with Karl Popper. But to apply a theory, you must assume it is true, even though Popper would argue this is never possible.

References

Lee, A. S., & Baskerville, R. L. (2003). Generalizing generalizability in information systems research. Information Systems Research, 14(3), 221–243. https://pubsonline.informs.org/doi/abs/10.1287/isre.14.3.221.16560

Lee, A. S., & Baskerville, R. L. (2012). Conceptualizing generalizability: New contributions and a reply. MIS Quarterly, 749–761. https://www.jstor.org/stable/41703479
