This is the second part of a series on the difficulties of probabilistic reasoning; here, we zoom in on the problem of applying probability to actual data, examining the paradigms and techniques of the field of statistics.
Our focus is, similarly to the first article, on the hidden assumptions people make, the effects these assumptions have on the validity of their inferences, and the absence of perfect solutions — in short, why statistics is difficult. Along the way, we’ll be introducing lots of statistical frameworks, techniques, and models, generally at a relatively rigorous level. You don’t necessarily have to follow the more rigorous aspects of each argument and derivation to get the gist of it, but a familiarity with calculus helps immensely. Useful properties of some common distributions are given in part A of the Appendices, while the most technical and/or tedious derivations are stowed away in part B.
Table of Contents
The Uncertainty Series
Part 1: Probability is Difficult
Part 2: Statistics is Difficult
Part 3: Causality is Difficult
Part 4: Modeling is Difficult
Part 5: Prediction is Difficult
Part 6: Experimentation is Difficult
Part 7: Science is Difficult
Epilogue: The Past, Present, and Future of Uncertainty
Appendix: Uncertain Appendices
The Questions of Statistical Inference
In the previous article, we covered the basic interpretations of probability:
- The classical interpretation, in which the probability of an event is determined by its position in a finite collection of $n$ events which, to us, seem equivalent; these events are each given equal probability, such that, if they cover all possibilities, then each must have a probability of $1/n$. Made by and for French gamblers, it is only useful in the simplest of problems, where it is best understood as a description of the influence of ignorance on belief.
- The propensity interpretation, in which the probability of an event is the propensity of the underlying physical system to generate it, either on a per-case basis (single-case propensity) or as a frequency over many runs (long-run propensity). For practical purposes, this interpretation is basically irrelevant, being little more than a philosophical curiosity.
- The Bayesian interpretation, in which the probability of an event is the degree of belief we are rationally justified in having. By the famous Dutch Book argument, these probabilities, if they are to be rational, must conform to Kolmogorov’s axioms, and must be updated in response to observation in a manner described by Bayes’ theorem. This interpretation silently smoldered under the name of “inverse probability” for about two centuries after its foundation by Bayes and Laplace, only erupting in the middle of the 20th century after intense justification and popularization by a small club of super-Bayesians (primarily de Finetti, Savage, Jeffreys, and Jaynes).
- The frequentist interpretation, in which the probability of an event is the frequency at which it’s observed in a series of trials, either after a finite number of trials (empirical frequentism) or as the limit of a hypothetically infinite number of trials (hypothetical frequentism). Developed largely as a response to the weaknesses of the classical interpretation, it is strongly preferred by scientists due to its clear empirical nature.
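To make the frequentist–Bayesian contrast concrete, here is a minimal sketch using a hypothetical coin-bias example (not from the text): the frequentist estimate is simply the observed frequency of heads, while the Bayesian starts from a uniform Beta(1, 1) prior over the bias and updates it with Bayes' theorem, summarizing belief by the posterior mean.

```python
import random

random.seed(0)

# Hypothetical example: a coin with unknown bias, flipped 1000 times.
true_p = 0.7
flips = [random.random() < true_p for _ in range(1000)]
heads = sum(flips)
tails = len(flips) - heads

# Frequentist: the probability just IS the observed frequency of heads.
freq_estimate = heads / len(flips)

# Bayesian: with a Beta(1, 1) (uniform) prior, Bayes' theorem gives a
# Beta(1 + heads, 1 + tails) posterior over the bias; its mean is a
# degree-of-belief summary of where the bias probably lies.
alpha, beta = 1 + heads, 1 + tails
posterior_mean = alpha / (alpha + beta)

print(f"frequentist estimate:     {freq_estimate:.3f}")
print(f"Bayesian posterior mean:  {posterior_mean:.3f}")
```

With this much data the two numbers nearly coincide; the interpretations diverge most visibly with few observations, where the prior still carries weight.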
Now, we’ll put probability to the test by figuring out how to use it to understand the world. That it is useful is clear, for it is used all the time — to determine the reliability of industrial processes, to predict the fluctuations of the stock market, to verify the accuracy of measurements, and, in general, to help us make informed decisions. Hence, we must understand why, how, and when it is useful, and how to use it correctly.
To paraphrase Bandyopadhyay’s Philosophy of Statistics, there are at least four different motivations for the use of statistical techniques:
- To determine what belief one should hold, and to what degree to hold it;
- To understand whether some data constitutes evidence for or against some hypothesis;
- To figure out what action to take in order to achieve some end;