Deriving probability distribution from entropy
The main motivation is to show that, from a general principle, we can recover an already known probability distribution. This is useful for understanding why a formula has to take the form it does.
Uniform distribution
Before information theory was established, statisticians had already declared that there should be no prior bias when defining the probability of an event. In other words, each possibility has to be treated fairly, unless you deliberately include a bias. Any random variable that follows this principle is called uniformly distributed.
For any $N$ possible discrete states, the probability of each state occurring is exactly $\frac{1}{N}$. An example would be a coin toss or a die roll: each side must have an equal chance of coming up.
This is usually accepted as fact, without questioning the reason.
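As a quick sanity check of my own (not part of the original argument), here is a short Python snippet comparing the Shannon entropy $-\sum_i p_i \ln p_i$ of a fair six-sided die with that of a biased one; the fair, uniform die attains the maximum value $\ln 6$.

```python
# Shannon entropy H = -sum(p_i * ln p_i) of a fair die versus a biased die.
import numpy as np

def shannon_entropy(p):
    """Entropy in nats, skipping zero-probability states."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

fair = np.full(6, 1 / 6)                                 # uniform: 1/N with N = 6
biased = np.array([0.4, 0.3, 0.1, 0.1, 0.05, 0.05])      # still sums to 1

print(shannon_entropy(fair))    # ~1.792 = ln(6), the maximum possible for 6 states
print(shannon_entropy(biased))  # ~1.488, strictly lower
```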
From the perspective of information theory, an observation can be measured with information entropy. Differential entropy can be used to measure a continuous probability distribution, and is defined as

$$H[p] = -\int p(x)\,\ln p(x)\,dx$$
with $p(x)$ being the probability density of the event at $x$.
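To make the definition concrete, here is a minimal numerical sketch (my own; the triangular density on $[0, 2]$ is an arbitrary example) that evaluates $-\int p\ln p\,dx$ with scipy and compares it with the uniform density on the same interval.

```python
# Differential entropy H[p] = -∫ p(x) ln p(x) dx, evaluated numerically.
import numpy as np
from scipy.integrate import quad

def differential_entropy(p, a, b):
    """Numerically integrate -p(x) * ln(p(x)) over [a, b]."""
    integrand = lambda x: -p(x) * np.log(p(x)) if p(x) > 0 else 0.0
    value, _ = quad(integrand, a, b)
    return value

triangular = lambda x: 1 - abs(x - 1)   # triangular density on [0, 2], peaked at x = 1
uniform = lambda x: 0.5                 # flat density on [0, 2]

print(differential_entropy(triangular, 0, 2))  # ~0.5
print(differential_entropy(uniform, 0, 2))     # ~0.693 = ln(2), larger
```

Already here the flat density has the larger entropy, which is exactly the point of the argument that follows.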
By treating this as an optimization problem, we can reason as follows: since entropy always increases as observations accumulate, the probability distribution reached once all the information has been gained must be the one at which the entropy is at its maximum.
From a physical perspective, an alternative analogy is that when two bodies are brought into thermal contact, the final temperature is the one at which the entropy is maximized.
If we treat $H$ as a functional of $p$, in the sense of variational calculus, we can then use the Lagrange multiplier method to derive the probability distribution.
The Lagrangian (function) can be constructed from the entropy with added linear constraints. If we set each constraint to 0, we find the optimum points where the partial derivatives are 0.
The most fundamental constraint of a probability distribution is that all the chances have to sum up to 1. In other words, $\int p(x)\,dx = 1$. The constraint function $g(p)$ is basically a rearrangement of the previous statement, in such a way that

$$g(p) = 1 - \int p(x)\,dx = 0$$
Our Lagrangian function then becomes:

$$\mathcal{L} = -\int p(x)\,\ln p(x)\,dx + \lambda\left(1 - \int p(x)\,dx\right)$$
Now, since the Lagrangian includes integrals, technically we should carry out the integration first to obtain the correct function. But the expression for $\mathcal{L}$ is neatly an integral along $x$. In physics, by the stationary action principle (historically known as the least-action principle), the action of a Lagrangian $L$ is defined as $S = \int L\,dx$.
By matching $\mathcal{L}$ with the action expression, we can use the derivative of our previous Lagrangian with respect to $x$ (that is, its integrand; the constant term drops out) as our new Lagrangian $L$:

$$L = -p(x)\,\ln p(x) - \lambda\,p(x)$$
Applying the Lagrange multiplier method, we take the partial derivative with respect to $p$ and set it to zero:

$$\frac{\partial L}{\partial p} = -\ln p(x) - 1 - \lambda = 0$$

$$p(x) = e^{-1-\lambda}$$
Note that since $\lambda$ is basically a constant, $e^{-1-\lambda}$ is a constant as well. We just rename it $C$. Using the normalization constraint, we can then find

$$\int_a^b C\,dx = C\,(b-a) = 1 \quad\Rightarrow\quad p(x) = C = \frac{1}{b-a}$$
The explanation is that $(b-a)$ is the range of the integral, since $p(x) = C$ is constant. If $x$ is a continuous random variable, its support is just a segment from $a$ to $b$, whose length is $b-a$. So $p(x) = \frac{1}{b-a}$ is essentially the uniform distribution we are familiar with.
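We can also check this result without any calculus. The sketch below (my own experiment; the interval $[0, 2]$ and the grid size are arbitrary) discretizes the density, maximizes the discretized entropy under the normalization constraint with scipy, and the optimizer indeed lands on the flat density $\frac{1}{b-a}$.

```python
# Maximize the discretized entropy -sum(p * ln p) * dx subject to sum(p) * dx = 1
# on [a, b] = [0, 2]; the optimum should be the flat density 1 / (b - a) = 0.5.
import numpy as np
from scipy.optimize import minimize

a, b, n = 0.0, 2.0, 50
dx = (b - a) / n

def neg_entropy(p):
    return np.sum(p * np.log(p)) * dx            # minimize -H

normalization = {"type": "eq", "fun": lambda p: np.sum(p) * dx - 1.0}

p0 = np.random.default_rng(0).uniform(0.1, 1.0, n)   # arbitrary starting density
p0 /= np.sum(p0) * dx                                 # make it a valid density

result = minimize(neg_entropy, p0, method="SLSQP",
                  bounds=[(1e-9, None)] * n, constraints=[normalization])

print(result.x.round(3))   # all entries ~0.5 = 1/(b - a)
print(-result.fun)         # ~0.693 = ln(b - a), the entropy of the uniform density
```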
Normal distribution
The article would not be complete if we didn’t derive the normal distribution. The normal distribution is the simplest continuous probability distribution with a standard mean and variance for performing statistical analysis. It also appears in most cases in nature. From the information theory perspective, it is a consequence of nature’s tendency to maximize information entropy, as in heat exchange or energy distribution.
The first constraint is the same as for the uniform distribution: the integral over all $x$ must equal 1. So, $\int_{-\infty}^{\infty} p(x)\,dx = 1$. Thus

$$g_1(p) = 1 - \int_{-\infty}^{\infty} p(x)\,dx = 0$$
The second constraint comes from an additional assumption: not every outcome is equally likely; instead, “the average has to be at the center”.
This just comes from a naive notion of “average”. For example, suppose you have a dataset of people’s heights; you imagine that more people exist with heights around the center of the height range. For a distribution function, this means that the first moment $\int_{-\infty}^{\infty} x\,p(x)\,dx$ exists. In physics (typically for a mass distribution), it just means that the first moment (the center of mass) of the distribution exists. In this case, we set the first moment to a constant $\mu$. To summarize,

$$g_2(p) = \mu - \int_{-\infty}^{\infty} x\,p(x)\,dx = 0$$
The third constraint is that we assume the distribution is spread symmetrically around the mean with a finite width. In terms of moments, this implies that the second moment $\int_{-\infty}^{\infty} x^2\,p(x)\,dx$ exists, with some constant value $m_2$. So,

$$g_3(p) = m_2 - \int_{-\infty}^{\infty} x^2\,p(x)\,dx = 0$$
Note that the interesting thing about these constraints is that they are all a constant minus an integral over $x$. We can use the same approach as before to get a new Lagrangian $L$. Because we take the derivative with respect to $x$ and then with respect to $p$, these constants don’t matter at all in the end, whatever their values were. Using the same approach, the Lagrangian is

$$L = -p(x)\,\ln p(x) - \lambda_1\,p(x) - \lambda_2\,x\,p(x) - \lambda_3\,x^2\,p(x)$$

Taking the partial derivative with respect to $p$ and setting it to zero:

$$\frac{\partial L}{\partial p} = -\ln p(x) - 1 - \lambda_1 - \lambda_2\,x - \lambda_3\,x^2 = 0$$

$$p(x) = e^{-1-\lambda_1-\lambda_2\,x-\lambda_3\,x^2} = C\,e^{-\lambda_2\,x - \lambda_3\,x^2}$$
Again, the last step happens because $e^{-1-\lambda_1}$ is a constant, so we just renamed it $C$.
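If you want to double-check the stationarity condition symbolically, a few lines of sympy reproduce the same exponential-of-a-quadratic form (my own verification; the symbol names are mine):

```python
# Differentiate L = -p*ln(p) - l1*p - l2*x*p - l3*x^2*p with respect to p,
# set it to zero, and solve for p.
import sympy as sp

p = sp.symbols("p", positive=True)
x = sp.symbols("x", real=True)
l1, l2, l3 = sp.symbols("lambda1 lambda2 lambda3", real=True)

L = -p * sp.log(p) - l1 * p - l2 * x * p - l3 * x**2 * p
solution = sp.solve(sp.Eq(sp.diff(L, p), 0), p)

print(solution)   # [exp(-lambda1 - lambda2*x - lambda3*x**2 - 1)]
```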
To find the values of these multipliers $\lambda_i$, you would “normally” integrate $p(x)$ and match the results with the constants being used ($\mu$ and $m_2$). But that kind of approach is like a circular definition if it relies on prior assumptions about the normal distribution. We won’t be using it, and will instead work from more fundamental assumptions.
We can’t solve the first constraint yet, which is essentially a normalization, since there are two other Lagrange multipliers that need to be solved first.
We are going to solve the second constraint. Notice that if the distribution has an average value, then the probability at the average has to be the highest, since it is the most common. If it is the highest value, then the derivative of the distribution at that point is 0. Let us give this point an arbitrary name $\mu$, so $\left.\frac{dp}{dx}\right|_{x=\mu} = 0$.
We now have an expression for $\lambda_2$:

$$\frac{dp}{dx}\bigg|_{x=\mu} = C\,(-\lambda_2 - 2\lambda_3\,\mu)\,e^{-\lambda_2\,\mu - \lambda_3\,\mu^2} = 0 \quad\Rightarrow\quad \lambda_2 = -2\lambda_3\,\mu$$
We are going to solve the third constraint. Using the same principle, we assume that the distribution must have two inflection points, where the second derivative is zero. This is because, as we see in the first derivative above, the slope changes sign around the peak. Because the shape of the distribution is symmetric about its center, the second derivative will have two zeroes, one to the left and one to the right of the center, at the same distance from it. We already know the center is at $\mu$, so these two positions have to be $\mu - s$ and $\mu + s$, where $s$ is just an arbitrary distance from $\mu$. Substituting $\lambda_2 = -2\lambda_3\,\mu$ and taking the second derivative,

$$\frac{d^2p}{dx^2} = C\left[4\lambda_3^2\,(x-\mu)^2 - 2\lambda_3\right]e^{2\lambda_3\,\mu\,x - \lambda_3\,x^2}$$

Setting it to zero at $x = \mu \pm s$ gives $4\lambda_3^2\,s^2 - 2\lambda_3 = 0$, so $\lambda_3 = \frac{1}{2s^2}$.
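As a quick symbolic check of this step (my own, not part of the article), we can confirm with sympy that choosing $\lambda_3 = \frac{1}{2s^2}$ really puts the two zeroes of the second derivative at $\mu - s$ and $\mu + s$:

```python
# With lambda3 = 1/(2*s^2) and lambda2 = -2*lambda3*mu, the second derivative of
# p(x) = C * exp(-lambda2*x - lambda3*x^2) should vanish exactly at x = mu ± s.
import sympy as sp

x = sp.symbols("x", real=True)
mu, s, C = sp.symbols("mu s C", positive=True)

lambda3 = 1 / (2 * s**2)
lambda2 = -2 * lambda3 * mu
p = C * sp.exp(-lambda2 * x - lambda3 * x**2)

zeros = sp.solve(sp.Eq(sp.diff(p, x, 2), 0), x)
print(zeros)   # [mu - s, mu + s] (possibly in a different order)
```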
Plugging this back into $p(x)$:

$$p(x) = C\,e^{2\lambda_3\,\mu\,x - \lambda_3\,x^2} = C\,e^{\frac{2\mu x - x^2}{2s^2}}$$

$$= C\,e^{\frac{-\left(x^2 - 2\mu x + \mu^2\right) + \mu^2}{2s^2}}$$

$$= C\,e^{\frac{\mu^2}{2s^2}}\,e^{-\frac{(x-\mu)^2}{2s^2}}$$

$$= A\,e^{-\frac{(x-\mu)^2}{2s^2}}$$
An explanation: in the second line we added and subtracted $\mu^2$ to complete the square; in the third line we rearranged the exponent into a squared expression; in the last line we renamed $C\,e^{\frac{\mu^2}{2s^2}}$ as $A$, because it is just a constant.
Finally, applying the first constraint (normalization):

$$\int_{-\infty}^{\infty} A\,e^{-\frac{(x-\mu)^2}{2s^2}}\,dx = 1$$

$$A\,\sqrt{2}\,s \int_{-\infty}^{\infty} e^{-u^2}\,du = 1$$

$$A\,\sqrt{2\pi}\,s = 1 \quad\Rightarrow\quad A = \frac{1}{s\sqrt{2\pi}}$$
The explanation: in the second line we made a change of variables $u = \frac{x-\mu}{\sqrt{2}\,s}$, so that $dx = \sqrt{2}\,s\,du$; the third line follows because the definite Gaussian integral $\int_{-\infty}^{\infty} e^{-u^2}\,du = \sqrt{\pi}$. That expression probably deserves its own article.
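In the meantime, the value of that integral is easy to verify numerically (a quick check of my own):

```python
# Numerical check that the Gaussian integral equals sqrt(pi).
import numpy as np
from scipy.integrate import quad

value, error = quad(lambda u: np.exp(-u**2), -np.inf, np.inf)
print(value, np.sqrt(np.pi))   # both ~1.7724538509
```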
Thus we recovered the normal distribution:

$$p(x) = \frac{1}{s\sqrt{2\pi}}\,e^{-\frac{(x-\mu)^2}{2s^2}}$$

where the arbitrary distance $s$ between the center and the inflection points plays exactly the role of the standard deviation $\sigma$.
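As a spot check (my own, with arbitrary parameters $\mu = 1.5$ and $s = 0.7$), the recovered formula matches scipy’s normal density to floating-point precision:

```python
# Compare A * exp(-(x - mu)^2 / (2*s^2)) with A = 1/(s*sqrt(2*pi))
# against scipy's normal pdf for arbitrary mu and s.
import numpy as np
from scipy.stats import norm

mu, s = 1.5, 0.7
x = np.linspace(-2.0, 5.0, 200)

recovered = np.exp(-(x - mu) ** 2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
reference = norm.pdf(x, loc=mu, scale=s)

print(np.max(np.abs(recovered - reference)))   # ~1e-16, identical up to rounding
```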
The interesting thing about this derivation is that it “predicts” the shape of the data, rather than deducing it from a dataset. The formula arises because, when the observation is in a saturated state, the distribution has to take that form.
No wonder the normal distribution is the most common one we see in nature: it is the next simplest distribution, built from minimal assumptions, that can fit more data.
To see what I really mean, let’s use the recovered formula to calculate its differential entropy:

$$H = -\int_{-\infty}^{\infty} p(x)\,\ln p(x)\,dx = \frac{1}{2}\,\ln\!\left(2\pi e\,\sigma^2\right)$$
As you can see above, the entropy depends only on the variance $\sigma^2$. This can only mean that different sets of data will tend to approach the same entropy and the same distribution if they have the same variance.
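As a final numerical illustration (my own, with an arbitrary $\sigma = 2$), two normal distributions with very different means but the same standard deviation report exactly the same differential entropy, matching the closed form $\frac{1}{2}\ln(2\pi e\,\sigma^2)$:

```python
# Two normals with very different means but the same sigma have the same
# differential entropy, equal to 0.5 * ln(2*pi*e*sigma^2).
import numpy as np
from scipy.stats import norm

sigma = 2.0
print(norm(loc=-10.0, scale=sigma).entropy())        # ~2.112
print(norm(loc=42.0, scale=sigma).entropy())         # ~2.112, same value
print(0.5 * np.log(2 * np.pi * np.e * sigma**2))     # ~2.112, the closed form
```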