Entropy and Perplexity

Update: 2023-10-13

Entropy

Entropy is a measure of the uncertainty, randomness, or information content of a random variable or a probability distribution. The entropy of a random variable X = \{x_1, \ldots, x_n\} is defined as:

H(X) = \sum_{i=1}^n P(x_i) \cdot (-\log P(x_i)) = -\sum_{i=1}^n P(x_i) \cdot \log P(x_i)

P(x_i) is the probability of x_i. The self-information of x_i is defined as -\log P(x_i), which measures how much information is gained when x_i occurs. The negative sign ensures that the more probable x_i is, the less self-information its occurrence carries.
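As a concrete illustration, the sketch below computes self-information and entropy for a small distribution; the outcomes and probabilities are made up for the example, not taken from the text.

```python
import math

# A made-up distribution over four outcomes (probabilities are assumptions for illustration).
P = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

def self_information(p: float) -> float:
    """-log2(p): bits of information gained when an outcome of probability p occurs."""
    return -math.log2(p)

def entropy(dist: dict) -> float:
    """H(X) = -sum_i P(x_i) * log2 P(x_i), i.e. the expected self-information."""
    return sum(p * self_information(p) for p in dist.values())

for x, p in P.items():
    print(f"I({x}) = {self_information(p):.3f} bits")  # rarer outcomes carry more information
print(f"H(X) = {entropy(P):.3f} bits")                 # 1.750 bits for this distribution
```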

Entropy has several properties, including:

  • It is non-negative: H(X) \geq 0.

  • It is at its minimum (zero) when the outcome is entirely predictable (all probability mass on a single x_i).

  • It is at its maximum, \log n, when all outcomes x_1, \ldots, x_n are equally likely, as the sketch below illustrates.
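The minimum and maximum can be checked numerically; the four-outcome distributions below are made up for the example.

```python
import math

def entropy(probs):
    """H(X) = -sum p * log2(p); outcomes with p == 0 contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([1.0, 0.0, 0.0, 0.0]))      # 0.0   -> minimum: the outcome is fully predictable
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0   -> maximum for 4 outcomes: log2(4)
print(entropy([0.7, 0.1, 0.1, 0.1]))      # ~1.357 -> anything else falls in between
```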

Why is the self-information value expressed in a logarithmic scale?

Sequence Entropy

Sequence entropy is a measure of the unpredictability or information content of the sequence, which quantifies how uncertain or random a word sequence is.

Assume a long sequence of words, W = [w_1, \ldots, w_n], formed by concatenating the entire text of a language \mathcal{L}. Let \mathcal{S} = \{S_1, \ldots, S_m\} be the set of all possible sequences derived from W, where S_1 is the shortest sequence (a single word) and S_m = W is the longest sequence. Then, the entropy of W can be measured as follows:

H(W) = -\sum_{j=1}^m P(S_j) \cdot \log P(S_j)

The entropy rate (per-word entropy), H'(W), can be measured by dividing H(W) by the total number of words n:

H'(W) = -\frac{1}{n} \sum_{j=1}^m P(S_j) \cdot \log P(S_j)
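To make the two formulas concrete, the sketch below enumerates sequences of one fixed length from a tiny made-up vocabulary under a hypothetical unigram model (a simplification: it does not enumerate the variable-length subsequences of W described above) and computes both the summed entropy and the per-word entropy rate.

```python
import itertools
import math

# A made-up two-word vocabulary with hypothetical unigram probabilities.
unigram = {"the": 0.6, "cat": 0.4}
n = 3  # fixed sequence length used for this illustration

def seq_prob(seq):
    """P(S_j) under the independence (unigram) assumption."""
    return math.prod(unigram[w] for w in seq)

# Enumerate every length-n sequence; their probabilities sum to 1 under this model.
sequences = list(itertools.product(unigram, repeat=n))
H = -sum(seq_prob(s) * math.log2(seq_prob(s)) for s in sequences)

print(f"H over all length-{n} sequences = {H:.3f} bits")  # ≈ 2.913
print(f"entropy rate H' = {H / n:.3f} bits/word")         # ≈ 0.971 (the unigram entropy)
```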

In theory, there are infinitely many unobserved word sequences in the language \mathcal{L}. To estimate the true entropy of \mathcal{L}, we need to take the limit of H'(W) as n approaches infinity:

H(\mathcal{L}) = \lim_{n \rightarrow \infty} H'(W) = \lim_{n \rightarrow \infty} -\frac{1}{n} \sum_{j=1}^m P(S_j) \cdot \log P(S_j)

The Shannon-McMillan-Breiman theorem implies that if the language is both stationary and ergodic, a single sufficiently long sequence is as effective as summing over all possible sequences for measuring H(\mathcal{L}): a long sequence of words naturally contains many shorter sequences, and each of these shorter sequences recurs within the longer sequence according to its probability.

The bigram model in the previous section is stationary because every probability is conditioned in the same way, on the immediately preceding word: P(w_{i+1} \mid w_i). In reality, however, this assumption does not hold. The probability of a word's occurrence often depends on a range of other words in its context, and this contextual influence can vary significantly from one word to another.

By applying this theorem, H(\mathcal{L}) can be approximated:

H(\mathcal{L}) \approx \lim_{n \rightarrow \infty} -\frac{1}{n} \log P(S_m)

Consequently, H'(W) is approximated as follows, where S_m = W:

H'(W) \approx -\frac{1}{n} \log P(W)
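In practice, this approximation is what makes the entropy rate computable: score one long observed sequence with a language model and divide its negative log probability by the number of words. The sketch below does this with a hypothetical bigram model whose probabilities and sentence are made up for the example.

```python
import math

# A hypothetical bigram model P(w_i | w_{i-1}); the probabilities below are made up.
bigram = {
    ("<s>", "the"): 0.8, ("the", "cat"): 0.5,
    ("cat", "sat"): 0.6, ("sat", "</s>"): 0.9,
}

W = ["<s>", "the", "cat", "sat", "</s>"]
n = len(W) - 1  # number of predicted words (the start symbol itself is not predicted)

# log2 P(W) accumulated over the single observed sequence
log_P_W = sum(math.log2(bigram[(W[i - 1], W[i])]) for i in range(1, len(W)))

H_rate = -log_P_W / n  # H'(W) ≈ -(1/n) * log2 P(W)
print(f"H'(W) ≈ {H_rate:.3f} bits/word")  # ≈ 0.553
```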

What does it mean when the entropy of a corpus is high?

Perplexity

Perplexity measures how well a language model can predict a sequence of words based on the likelihood of those words occurring in a given text. The perplexity of a word sequence W is measured as:

PP(W) = P(W)^{-\frac{1}{n}} = \sqrt[n]{\frac{1}{P(W)}}

Hence, the higher P(W) is, the lower its perplexity becomes, implying that the language model is "less perplexed" and more confident in generating W.
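For intuition, consider a hypothetical model that assigns every word the same probability 1/V, where V is the vocabulary size. For any sequence of n words:

PP(W) = \left( \left( \frac{1}{V} \right)^{n} \right)^{-\frac{1}{n}} = V

In this sense, perplexity can be read as the average number of equally likely choices the model faces at each word.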

Perplexity, PP(W), can be directly derived from the approximated entropy rate, H'(W):

\begin{array}{rcl} H'(W) &\approx & -\frac{1}{n} \log_2 P(W) \\[5pt] 2^{H'(W)} &\approx & 2^{-\frac{1}{n} \log_2 P(W)} \\[5pt] &= & 2^{\log_2 P(W)^{-\frac{1}{n}}} \\[5pt] &= & P(W)^{-\frac{1}{n}} \\[5pt] &= & PP(W) \end{array}
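This identity can be confirmed numerically; the sequence probability and length below are arbitrary made-up values.

```python
import math

# Made-up probability assigned to a 5-word sequence by some language model.
P_W, n = 1.2e-4, 5

H_rate = -math.log2(P_W) / n       # approximated entropy rate H'(W)
pp_from_entropy = 2 ** H_rate      # 2^{H'(W)}
pp_direct = P_W ** (-1 / n)        # P(W)^{-1/n}

print(pp_from_entropy, pp_direct)  # both ≈ 6.08: the two expressions coincide
```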

How does high entropy affect the perplexity of a language model?
