Deriving probability distribution from entropy: part 2
Yesterday, I was planning to write a follow-up to part 1 of this series, so I googled my previous article to reread it first.
Accidentally, I found this awesome article talking about the same thing! Heck, it even uses a similar title: “Deriving probability distribution using the Principle of Maximum Entropy”. So I guess this method is quite common, since that article is dated 2017.
Poisson distribution
Originally, I only planned to write up to the Normal distribution. But at some point I saw a question in my Twitter feed asking why atoms have to “randomly” decay by half within a given interval. It seems as if they “break causality”, since they decay randomly, without a cause.
Before we derive the distribution, let’s talk about the characteristics of a Poisson process.
Atomic decay is actually a perfect example of this. Let’s say a single nucleus decays with a certain fixed probability. We now concern ourselves with the probability for a group of nuclei: what is the decay rate of the whole group? It is not simply the sum over each individual nucleus, because as time goes on the normalization factor changes (the total number of nuclei changes). So the distribution actually measures the probability of a given total number of independent events, if the probability of a single event is known.
Our first constraint is still the normalization axiom.
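In symbols, writing $p(k)$ for the probability of observing $k$ events in the interval, the normalization axiom reads:

$$\sum_{k=0}^{\infty} p(k) = 1$$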
The second constraint is taken from the context. A Poisson process is characterized by a “memoryless” property: each event doesn’t care about the history of previous events. In the case of the atomic decay above, a single decay event doesn’t care whether the current size of the group is 1000 nuclei or just 2; it happens with the same probability. However, since what humans observe is the time interval between occurrences, the distribution we record is “the probability of the number of nuclei dropping to half, given a specified interval $t$”. So the input of the distribution is time instead of a count, which makes it appear somewhat counterintuitive to apply entropy to it.
From what the process describes, we know that the distribution is a function of two variables: the number of events $k$ and the time $t$. But since the number of occurrences $k$ is a discrete variable, $p(k)$ is a discrete probability distribution. Our entropy formula is then a plain sum, instead of the integral we used in the previous article.
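For this discrete case, the entropy to maximize is:

$$S = -\sum_{k=0}^{\infty} p(k)\,\ln p(k)$$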
Next, since we want to count the probability of events happening within a fixed interval, we need to introduce another parameter. Notice that in the case of radioactive decay, what we observe is time, so we have the variable $t$. But the distribution we want to build is for a fixed interval. That means we need to express it the other way around: for a given interval $t$, there can be a different number of decay events. Let’s just suppose that the average number of events per unit time is $\lambda$. That would mean the average total number of events over the interval $t$ becomes $\lambda t$.
For practical purposes, $t$ is usually set to a specific unit of time that the measurements can be converted into. For example, $t$ can be 1 second, 1 minute, or 1 hour, and the value of $\lambda$ will match accordingly. So, as a value, we can also say that $\lambda t = \lambda$ for most usages. I am being pedantic about the unit purely out of habit from physics.
In other words, it is okay to swap the parameter $\lambda t$ with $\lambda$ as long as you understand what it means. The parameter $\lambda$ here acts as a sort of “frequency” in the probabilistic sense: it counts how many events happen on average per unit of time (kind of like the Hertz unit).
We are now ready to set up the constraints.
Now consider what happens to the entropy for a single decay event. Entropy is additive over the self-information of each outcome. We are using a discrete distribution now, so we need to count each outcome individually.
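As a sketch, the discrete entropy is the expectation of the self-information $I(k)$ over the distribution:

$$S = \sum_{k=0}^{\infty} p(k)\,I(k), \qquad I(k) = -\ln p(k)$$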
where $I(k)$ is Shannon’s self-information that we are currently considering.
When a single decay event happens, we gain information from the following constraints:
- It must satisfy the normalization axiom (the total probability over all $k$ is 1)
- It must have a fixed average frequency/rate of total events per unit of time
- For a very large number of events, the probability of that many events happening must be increasingly small
Constraints 1 and 2 are straightforward because they are similar to the derivations in the previous article, in which we derived the Uniform Distribution and the Gaussian Distribution.
But the key here is the third constraint, which is the defining property of a Poisson process. If we only measure a single decay event (that is, $k = 1$), then the constraint disappears: the information is surely 0, because measuring a single event, given that we observed something, is a certainty. This is what happened with the Uniform Distribution.
However, a Poisson process specifically tries to measure what happens when $k > 1$.
Since the condition is more specific, we can intuitively guess that the maximum entropy must be less than the entropy of the Uniform Distribution.
Let us simulate how we gain this information. If $k = 1$, we observe only 1 decay event, which is a certainty relative to the observation. That means the first event carries no information.
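In symbols, conditioning on the fact that an event was observed at all (a sketch of the idea):

$$p(1 \mid \text{an event was observed}) = 1 \quad\Longrightarrow\quad I_{1} = -\ln 1 = 0$$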
The probability of us observing the second event within the same interval should not be affected by the previous event; this is due to the memoryless property we talked about earlier. It then becomes like a coin flip, where the chance of the second event happening is just $1/2$.
Extending the analogy to a certain number of events $k$: the probability of the $k$-th event happening is just $1/k$, just like in a uniform distribution. But this information needs to add up, because in our case $k$ corresponds to the total number of events, not just the $k$-th event.
In summary, our third constraint involves adding up the information of all currently observed events, up to the total of $k$ events.
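Each $n$-th event contributes $-\ln(1/n) = \ln n$ of information, so for $k$ events in total the accumulated amount is:

$$\sum_{n=1}^{k}\ln n = \ln k!$$

The third constraint then fixes the average of this quantity, $\sum_{k} p(k)\,\ln k!$, over the distribution.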
Let’s summarize our Lagrangian constraints:
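In symbols, taking $t$ as one unit of time so that the mean is simply $\lambda$, and writing $C$ for whatever fixed value the third average takes (both choices are labels of mine), the three constraints are:

$$\sum_{k=0}^{\infty} p(k) = 1, \qquad \sum_{k=0}^{\infty} k\,p(k) = \lambda, \qquad \sum_{k=0}^{\infty} p(k)\,\ln k! = C$$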
Our full entropy function is as follows (I omit the input notation for brevity).
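As a sketch, attaching one Lagrange multiplier to each constraint (the names $\alpha$, $\beta$, $\gamma$ are arbitrary labels I use here):

$$\mathcal{L} = -\sum_{k} p(k)\ln p(k) \;+\; \alpha\Big(\sum_{k} p(k) - 1\Big) \;+\; \beta\Big(\sum_{k} k\,p(k) - \lambda\Big) \;+\; \gamma\Big(\sum_{k} p(k)\ln k! - C\Big)$$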
This is a little bit different from the previous article, where we constructed the Lagrangian by differentiating with respect to the continuous variable to eliminate the constants. We can’t do that now, since the parameter $k$ is discrete. So instead, we treat the constrained entropy function above as the Lagrangian.
Notice that each term has a sum over $k$, due to the fact that we observe $k$ total events instead of just one.
But the maximum entropy principle implies that the Lagrangian condition applies at each $k$-th event as well. So we can pick any arbitrary $k$, and the equation should still be the same. This allows us to remove all the sum signs (we observe one specific $k$-th event).
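Setting the derivative of $\mathcal{L}$ with respect to a single $p(k)$ to zero gives a sketch of the general solution (absorbing the constants into $A \equiv e^{\alpha-1}$):

$$\frac{\partial\mathcal{L}}{\partial p(k)} = -\ln p(k) - 1 + \alpha + \beta k + \gamma\ln k! = 0 \quad\Longrightarrow\quad p(k) = A\,e^{\beta k}\,(k!)^{\gamma}$$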
We have an expression, but it would be difficult to find each constant. We don’t even know yet where the average rate $\lambda$ is going to appear in it.
For now, let’s see what happens if we set $k = 0$ in the probability function.
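Since $0! = 1$ and the $e^{\beta k}$ factor is 1 at $k = 0$, only the constant survives (using the labels from the sketch above):

$$p(0) = A\,e^{\beta\cdot 0}\,(0!)^{\gamma} = A$$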
But this probability must be affected by the average rate $\lambda$: if the average rate of events is high, the probability of observing no events should be increasingly small. So we have some intuition that $p(0)$ is directly tied to $\lambda$. We will replace it with a function of $\lambda$.
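In other words, we simply relabel the unknown constant:

$$p(0) = A = f(\lambda)$$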
Now, let’s say $k = 1$.
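Plugging $k = 1$ into the general form above gives:

$$p(1) = f(\lambda)\,e^{\beta}\,(1!)^{\gamma} = f(\lambda)\,e^{\beta}$$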
But the probability of a single event happening in a Poisson process is exactly the probability of “no event happened” times the average number of events.
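Comparing this statement with the expression for $p(1)$ above pins down one of the constants (under the labeling used here):

$$p(1) = \lambda\,p(0) = \lambda\,f(\lambda) \quad\Longrightarrow\quad e^{\beta} = \lambda$$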
For general $k$, if we think about it, the probability of $k$ total events must be equal to the probability of the previous total ($k - 1$ events), times a factor that depends only on the average $\lambda$ and the value of $k$.
To illustrate this, let’s say the average number of events is 5. If we know $p(2)$, then intuitively we can say that $p(3)$ must be higher, because 3 total events is still below the average of 5. If we know $p(7)$, we can also expect that $p(8)$ will have a lower value/chance, because 8 is above the average of 5. Basically, the factor should look like $\lambda / k$.
From this hint and the general formula above, we can guess that the Poisson distribution obeys some kind of recurrence relation.
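Unrolling the guessed recurrence all the way down to $p(0) = f(\lambda)$ sketches out the whole distribution:

$$p(k) = \frac{\lambda}{k}\,p(k-1) \quad\Longrightarrow\quad p(k) = f(\lambda)\,\frac{\lambda^{k}}{k!}$$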
Since the factor in front of $p(k-1)$ should take the general form $\lambda / k$, it means that $e^{\beta} = \lambda$ and $\gamma = -1$.
Now, from this intuition alone, we can’t really decide the value of $f(\lambda)$. But it’s a pretty good intuition.
The definitive way to get the values of these constants is by applying the constraints above.
For the first constraint, the sum of all the probabilities must equal one.
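With the form guessed above, the normalization reads:

$$1 = \sum_{k=0}^{\infty} f(\lambda)\,\frac{\lambda^{k}}{k!}$$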
This is a difficult equality to solve. To sum up the right-hand side directly, we would need some kind of generating function. Instead of that approach, we know that $e^{\lambda}$ can be broken apart into an infinite sum. So we will use that, and let the sum index match $k$.
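Using the Taylor series of the exponential:

$$e^{\lambda} = \sum_{k=0}^{\infty}\frac{\lambda^{k}}{k!} \quad\Longrightarrow\quad 1 = e^{-\lambda}\,e^{\lambda} = \sum_{k=0}^{\infty} e^{-\lambda}\,\frac{\lambda^{k}}{k!}$$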
Suppose we match it term by term, the $k$-th term on one side against the $k$-th term on the other. Then the only reasonable result is that $f(\lambda) = e^{-\lambda}$ and $p(k) = e^{-\lambda}\,\lambda^{k}/k!$.
Combining this with our previous intuitive guess, we can conclude the values of all the constants. But if we want to make sure, we can use the second constraint.
The average must be equal to the first moment of the distribution (by definition).
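Writing the first moment out with the still-unknown $f(\lambda)$ and shifting the sum index (a sketch of the steps):

$$\lambda = \sum_{k=0}^{\infty} k\,f(\lambda)\,\frac{\lambda^{k}}{k!} = f(\lambda)\,\lambda\sum_{k=1}^{\infty}\frac{\lambda^{k-1}}{(k-1)!} = f(\lambda)\,\lambda\sum_{m=0}^{\infty}\frac{\lambda^{m}}{m!} = f(\lambda)\,\lambda\,e^{\lambda}$$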
The index-shifting steps are justified because, if we sum from $k = 1$ to $\infty$, the index can be replaced with an arbitrary index $m = k - 1$, which does not change the definition of the expansion as an infinite series. So we can replace the sum over $k$ with a sum over $m$.
We get the conclusion that $f(\lambda) = e^{-\lambda}$, consistent with what the first constraint gave us.
To wrap up, the Poisson probability distribution that maximizes entropy under these constraints is the following.
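With $t$ set to one unit of time (restoring the interval explicitly just replaces $\lambda$ with $\lambda t$):

$$p(k) = \frac{\lambda^{k}\,e^{-\lambda}}{k!}$$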
Some properties of Poisson Distribution
It is quite straightforward to derive the Poisson distribution using the Maximum Entropy Principle, because the third constraint is very special.
The third constraint uniquely defines the distribution, just from the memoryless property of a Poisson process. Just as we described above, since the self-information content doesn’t depend on the previous event, each new event is a simple uniform draw over the existing information. For each new event, the new information adds up, with the entropy contribution equal to the probability of the $k$-th Poisson event times the information accumulated if we treat the event as cumulative successive events up to $k$.
If we only look at the final form of the Poisson distribution, it might not be clear where the $k!$ factor comes from. But using information theory, it is clear that the $\ln k!$ term is the amount of information necessary to reduce the entropy from successive events being counted. It is the bias needed to constrain the distribution. If this term didn’t exist, we would recover the Uniform Distribution from constraints 1 and 2.
From the final formula of $p(k)$, we get some interesting properties.
Recurrence relation
We already guessed the recurrence relation; it turns out to be of the form given below.
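In the notation used throughout this post:

$$p(k) = \frac{\lambda}{k}\,p(k-1)$$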
An interesting corollary is that when $k = \lambda$, it turns out that $p(k) = p(k-1)$.
An easier way to understand this relation is to view it in terms of information.
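Taking the negative logarithm of both sides of the recurrence gives the difference in self-information between consecutive counts:

$$I(k) - I(k-1) = -\ln p(k) + \ln p(k-1) = \ln\frac{k}{\lambda}$$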
The information gained going from event $k-1$ to $k$ has to be negative if $k$ is less than the average, because such a count is more common. The information gained going from $k-1$ to $k$ has to be positive if $k$ is greater than the average, because it is rarer and more surprising.
In the special case of $k = \lambda$, the relative information is zero, meaning that $p(k)$ and $p(k-1)$ convey the same amount of information around the average total number of events.
Variance
One other interesting property of the Poisson distribution is that the process itself implies that the mean and the variance have to be the same, namely $\lambda$. This is because if the average rate were zero, it would not be possible to infer that the process is a Poisson process at all. We need to observe the smallest unit of time possible for which the average number of events is non-zero, in order to get a variance.
However, one can also calculate the variance explicitly using the second moment.
The variance is the second moment minus the square of the first moment.
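Sketching the computation with the same index-shifting trick as before:

$$\langle k^{2}\rangle = \sum_{k=0}^{\infty} k^{2}\,e^{-\lambda}\,\frac{\lambda^{k}}{k!} = \lambda^{2} + \lambda, \qquad \mathrm{Var}(k) = \langle k^{2}\rangle - \langle k\rangle^{2} = \lambda^{2} + \lambda - \lambda^{2} = \lambda$$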
So we get the variance $\lambda$ again, as expected. This means the standard deviation is $\sqrt{\lambda}$.
Characteristic function
A characteristic function completely determines how a probability distribution behaves. For this Poisson distribution, we can compute it directly.
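A sketch of the computation, using the definition $\varphi(u) = \langle e^{iuk}\rangle$ (with $u$ as the transform variable) and the exponential series once more:

$$\varphi(u) = \left\langle e^{iuk}\right\rangle = \sum_{k=0}^{\infty} e^{iuk}\,e^{-\lambda}\,\frac{\lambda^{k}}{k!} = e^{-\lambda}\sum_{k=0}^{\infty}\frac{\left(\lambda e^{iu}\right)^{k}}{k!} = e^{-\lambda}\,e^{\lambda e^{iu}} = e^{\lambda\left(e^{iu}-1\right)}$$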
The characteristic function shows an interesting property. Suppose we have multiple different Poisson distributions. The only thing that distinguishes them is the value of $\lambda$, the average rate of events per unit time. If we have multiple independent groups of Poisson events, and we observe them using the same unit of time, it turns out that the total number of events is additive, which means the rates are additive as well.
Summing the random variables (convolving their probability distributions) corresponds to multiplying their characteristic functions.
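For two independent Poisson groups with rates $\lambda_{1}$ and $\lambda_{2}$:

$$\varphi_{1}(u)\,\varphi_{2}(u) = e^{\lambda_{1}\left(e^{iu}-1\right)}\,e^{\lambda_{2}\left(e^{iu}-1\right)} = e^{\left(\lambda_{1}+\lambda_{2}\right)\left(e^{iu}-1\right)}$$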
The last result is just the characteristic function of the same Poisson probability distribution, but with the parameter $\lambda_{1} + \lambda_{2}$ as its mean.
As an example, suppose we have two clumps of radioactive elements, each with a decay event rate of $\lambda = 5$. That means on average there will be 5 decay events per unit time in each clump. It also means that if we observe the two clumps together, there will be on average 10 decay events per the same unit of time.
It seems obvious, but now we know why it is true: it is just a consequence of how the characteristic functions multiply together.
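To make this concrete, here is a small numerical sanity check (a sketch assuming numpy is available; the rates 5 and 10 are just illustrative): summing samples from two independent rate-5 Poisson processes behaves like a single rate-10 process, with mean and variance both near 10.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

a = rng.poisson(lam=5, size=n)        # decay counts from the first clump
b = rng.poisson(lam=5, size=n)        # decay counts from the second clump
combined = a + b                      # observing both clumps together
direct = rng.poisson(lam=10, size=n)  # one process with the summed rate

print(combined.mean(), combined.var())  # both close to 10 (mean equals variance)
print(direct.mean(), direct.var())      # likewise close to 10
```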
This property is usually known “backwards” as the “infinitely divisible” property. Suppose we have a group of radioactive elements with an average of 10 decay events; then we can divide the group into 5 groups where we observe an average of 2 decay events each.
In my opinion, the term infinitely divisible is somewhat misleading here, because the probability distribution in this case is a discrete one. So it can only be meaningfully divided until each group has an average of 1 decay event, which is the bare minimum for us to observe a Poisson process in this unit of time.