The Bayesian basis
Frequentists (practitioners of the more classical version of statistics) assume that probability is the long-run frequency of events. This makes logical sense for many events, but becomes difficult to interpret for events that have no long-run frequency of occurrence.
Bayesians interpret a probability as a measure of belief, or confidence, in an event occurring. Put simply, a probability is a summary of an opinion.
The belief about event A is denoted as P(A). We call this quantity the prior probability.
Given new evidence X, we cannot ignore it. Even if the evidence runs counter to what was initially believed, it must be weighed and our belief updated accordingly.
We denote our updated belief as P(A|X), interpreted as the probability of A given the evidence X. We call the updated belief the posterior probability so as to contrast it with the prior probability.
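As a concrete sketch, Bayes' theorem gives the mechanical update from the prior P(A) to the posterior P(A|X). All of the numbers below are hypothetical, chosen only to illustrate the computation:

```python
# Updating a prior belief P(A) into a posterior P(A|X) with Bayes' theorem.
# All numbers here are made up for illustration.

prior = 0.2            # P(A): our initial belief that A occurs
p_x_given_a = 0.9      # P(X|A): how likely the evidence X is if A occurs
p_x_given_not_a = 0.3  # P(X|~A): how likely X is if A does not occur

# P(X) by the law of total probability.
p_x = p_x_given_a * prior + p_x_given_not_a * (1 - prior)

# Bayes' theorem: P(A|X) = P(X|A) * P(A) / P(X).
posterior = p_x_given_a * prior / p_x

print(round(posterior, 3))  # the evidence X raises our belief in A above 0.2
```

Note that the posterior is larger than the prior here only because the evidence was more likely under A than under its complement; evidence against A would shrink the belief instead.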
Let Z be some random variable. It may be discrete, continuous or mixed.
A continuous random variable has a probability density function. An example of a continuous random variable is one with an exponential density, f_Z(z | λ) = λe^(−λz) for z ≥ 0, where λ is the parameter of the distribution.
In the real world, λ is hidden from us. We see only Z, and must work backwards to try to determine λ. Bayesian inference is concerned with beliefs about what λ might be. Rather than trying to guess λ exactly, we can only talk about what λ is likely to be, by assigning a probability distribution to λ.
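This idea can be sketched with a simple grid approximation (the observed data and the grid of candidate λ values below are invented for illustration): given draws of Z assumed exponential, we assign each candidate λ a probability in proportion to how well it explains the data.

```python
import math

# Hypothetical observations, assumed to come from an exponential
# distribution with unknown rate lambda.
data = [0.3, 1.2, 0.8, 2.5, 0.1, 0.9]

# A grid of candidate lambda values with a flat (uniform) prior over them.
candidates = [0.25 * i for i in range(1, 17)]  # 0.25, 0.5, ..., 4.0

def exp_likelihood(lam, zs):
    """Likelihood of the data under Exp(lam): product of lam * e^(-lam * z)."""
    return math.prod(lam * math.exp(-lam * z) for z in zs)

# Posterior over lambda: (uniform) prior times likelihood, normalized to sum to 1.
weights = [exp_likelihood(lam, data) for lam in candidates]
total = sum(weights)
posterior = [w / total for w in weights]

# The candidate lambda we now believe in most strongly.
best = max(zip(candidates, posterior), key=lambda t: t[1])[0]
print(best)
```

The output is a distribution over λ, not a single guess: every candidate keeps some probability mass, reflecting our remaining uncertainty after seeing the data.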
The distribution of a discrete random variable Z is a probability mass function, or pmf. Let's say the random variable Z is Poisson-distributed; then λ is the parameter of the distribution, and it controls the distribution's shape. The probability mass function of Z is P(Z = k) = λ^k e^(−λ) / k!, for k = 0, 1, 2, …, and we denote this by writing Z ∼ Poi(λ).
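As a quick numerical check (a small script, not from the text), the Poisson pmf can be evaluated directly; its probabilities sum to 1, and its expected value equals λ:

```python
import math

def poisson_pmf(k, lam):
    """P(Z = k) for Z ~ Poi(lam): lam**k * e**(-lam) / k!"""
    return lam ** k * math.exp(-lam) / math.factorial(k)

lam = 4.0  # an arbitrary choice of the parameter, for illustration

# The pmf values over k = 0, 1, 2, ... sum to 1 (truncated here at k = 99,
# beyond which the remaining mass is negligible for lam = 4).
total = sum(poisson_pmf(k, lam) for k in range(100))
print(round(total, 6))

# A known property of the Poisson distribution: E[Z] = lambda.
mean = sum(k * poisson_pmf(k, lam) for k in range(100))
print(round(mean, 6))
```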
When a random variable Z has an exponential distribution with parameter λ, we say Z is exponential and write Z ∼ Exp(λ).
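Similarly (another small illustrative check, not from the text), the exponential density λe^(−λz) integrates to 1 over z ≥ 0, and the expected value of an exponential random variable is 1/λ:

```python
import math

def exp_pdf(z, lam):
    """Density of Z ~ Exp(lam): lam * e**(-lam * z) for z >= 0."""
    return lam * math.exp(-lam * z)

lam = 0.5  # an arbitrary parameter choice, for illustration

# Crude Riemann-sum integration over a fine grid from z = 0 to z = 100:
# the density integrates to approximately 1, and E[Z] is approximately 1/lam.
dz = 0.001
zs = [i * dz for i in range(100_000)]
area = sum(exp_pdf(z, lam) * dz for z in zs)
mean = sum(z * exp_pdf(z, lam) * dz for z in zs)

print(round(area, 3))  # close to 1
print(round(mean, 3))  # close to 1/lam
```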