Maximum Likelihood Estimation
Update: 2023-10-13
Maximum likelihood estimation (MLE) is a statistical method used to estimate the parameters of a probability distribution based on observed data. MLE aims to find the values of the model's parameters that make the observed data most probable under the assumed statistical model.
In the previous section, you used MLE to estimate unigram and bigram probabilities. In this section, we apply MLE to estimate sequence probabilities.
Sequence Probability
Let us examine a model that takes a sequence of words and generates the next word. Given the word sequence "I am a", the model aims to predict the most likely next word by estimating the probabilities of potential continuations, such as "I am a student" or "I am a teacher", and selecting the one with the highest probability.
The conditional probability of the word "student" occurring after the word sequence "I am a" can be estimated as follows:

$$P(\textit{student} \mid \textit{I am a}) = \frac{\#(\textit{I am a student})}{\#(\textit{I am a})}$$
The joint probability of the word sequence "I am a student" can be measured as follows:

$$P(\textit{I am a student}) = \frac{\#(\textit{I am a student})}{\text{total number of 4-grams in the corpus}}$$
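To make this concrete, here is a minimal sketch of how both estimates could be computed from raw n-gram counts; the toy corpus and the `ngram_counts` helper below are illustrative assumptions, not part of the original text:

```python
from collections import Counter

def ngram_counts(tokens: list[str], n: int) -> Counter:
    """Count every n-gram (as a tuple of words) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

# A tiny hypothetical corpus; in practice, the counts would come from a large corpus.
tokens = "i am a student and i am a teacher but i am a student at heart".split()
c3, c4 = ngram_counts(tokens, 3), ngram_counts(tokens, 4)

# Conditional probability: P(student | i am a) = #(i am a student) / #(i am a)
p_cond = c4[("i", "am", "a", "student")] / c3[("i", "am", "a")]

# Joint probability: P(i am a student) = #(i am a student) / total number of 4-grams
p_joint = c4[("i", "am", "a", "student")] / sum(c4.values())

print(p_cond, p_joint)  # 2/3 and 2/13 for this toy corpus
```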
Counting the occurrences of n-grams, especially when n can be arbitrarily large, is neither practical nor effective, even with a vast corpus. In practice, we address this challenge by employing two techniques: the Chain Rule and the Markov Assumption.
Chain Rule
By applying the chain rule, the above joint probability can be decomposed into:

$$P(\textit{I am a student}) = P(\textit{I}) \cdot P(\textit{am} \mid \textit{I}) \cdot P(\textit{a} \mid \textit{I am}) \cdot P(\textit{student} \mid \textit{I am a})$$
Thus, the probability of any given word sequence $w_1, \ldots, w_n$ can be measured as:

$$P(w_1, \ldots, w_n) = \prod_{i=1}^{n} P(w_i \mid w_1, \ldots, w_{i-1})$$
The chain rule effectively decomposes the original problem into subproblems; however, it does not resolve the issue because measuring $P(w_i \mid w_1, \ldots, w_{i-1})$ is as challenging as measuring $P(w_1, \ldots, w_i)$.
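To see this concretely, here is a hypothetical sketch that applies the chain rule with count-based MLE conditionals; the toy corpus and the `count` helper are assumptions for illustration:

```python
def chain_rule_probability(words: list[str], tokens: list[str]) -> float:
    """P(w1..wn) = prod_i P(wi | w1..w(i-1)), each conditional estimated by MLE from counts."""
    def count(seq: tuple) -> int:
        n = len(seq)
        return sum(1 for i in range(len(tokens) - n + 1) if tuple(tokens[i:i + n]) == seq)

    prob = count(tuple(words[:1])) / len(tokens)  # P(w1), the unigram relative frequency
    for i in range(2, len(words) + 1):
        history = count(tuple(words[:i - 1]))
        if history == 0:
            return 0.0  # unseen history: the conditional cannot be estimated
        prob *= count(tuple(words[:i])) / history  # P(wi | w1..w(i-1))
    return prob

tokens = "i am a student and i am a teacher but i am a student at heart".split()
print(chain_rule_probability(["i", "am", "a", "student"], tokens))  # 2 / 16 = 0.125
```

Note that in this count-based sketch the consecutive count ratios telescope, so the result collapses back to the relative frequency of the full sequence, which illustrates why the chain rule alone does not make the estimation any easier.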
Markov Assumption
The Markov assumption (also known as the Markov property) states that the future state of a system depends only on its present state and is independent of its past states. In the context of language modeling, it implies that the next word generated by the model depends solely on the current word. This assumption dramatically simplifies the chain rule mentioned above:

$$P(w_1, \ldots, w_n) \approx P(w_1) \cdot \prod_{i=2}^{n} P(w_i \mid w_{i-1})$$
The joint probability can now be measured by the product of the unigram probability of the first word and the bigram probabilities of all subsequent words.
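As a sketch, assuming `unigram` and `bigram` dictionaries of MLE-estimated probabilities like those from the previous section (the function name and the probability values below are hypothetical):

```python
def sequence_probability(words: list[str], unigram: dict, bigram: dict) -> float:
    """P(w1..wn) ~= P(w1) * prod_{i=2..n} P(wi | w(i-1)) under the Markov assumption."""
    prob = unigram.get(words[0], 0.0)
    for prev, curr in zip(words, words[1:]):
        prob *= bigram.get((prev, curr), 0.0)
    return prob

# Hypothetical MLE-estimated probabilities (illustrative values only).
unigram = {"i": 0.05}
bigram = {("i", "am"): 0.30, ("am", "a"): 0.40, ("a", "student"): 0.01}
print(sequence_probability(["i", "am", "a", "student"], unigram, bigram))  # 0.05 * 0.30 * 0.40 * 0.01
```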
How do the chain rule and Markov assumption simplify the estimation of sequence probability?
Initial Word Probability
Let us consider the unigram probabilities $P(\textit{the})$ and $P(\textit{The})$. In general, "the" appears more frequently than "The", such that:

$$P(\textit{the}) > P(\textit{The})$$
Let $w_0$ be an artificial token indicating the beginning of the text. We can then measure the bigram probabilities of "the" and "The" appearing as the initial word of the text, denoted as $P(\textit{the} \mid w_0)$ and $P(\textit{The} \mid w_0)$, respectively. Since the first letter of the initial word in formal English writing is conventionally capitalized, it is likely that:

$$P(\textit{The} \mid w_0) > P(\textit{the} \mid w_0)$$
This is not necessarily true if the model is trained on informal writings, such as social media data, where conventional capitalization is often neglected.
Thus, to predict a more probable initial word, it is better to consider the bigram probability $P(w_1 \mid w_0)$ rather than the unigram probability $P(w_1)$ when measuring sequence probability.
This enhancement allows us to elaborate the sequence probability as a simple product of bigram probabilities:

$$P(w_1, \ldots, w_n) \approx \prod_{i=1}^{n} P(w_i \mid w_{i-1})$$
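A minimal sketch of this product of bigram probabilities, assuming a hypothetical `<s>` symbol for the artificial beginning-of-text token $w_0$ and illustrative bigram values:

```python
BOS = "<s>"  # hypothetical symbol for the artificial beginning-of-text token w0

def sequence_probability_bos(words: list[str], bigram: dict) -> float:
    """P(w1..wn) ~= prod_{i=1..n} P(wi | w(i-1)), where w0 is the BOS token."""
    prob = 1.0
    for prev, curr in zip([BOS] + words, words):
        prob *= bigram.get((prev, curr), 0.0)
    return prob

# Hypothetical bigram probabilities; P(The | <s>) would typically exceed P(the | <s>).
bigram = {(BOS, "The"): 0.08, ("The", "dog"): 0.002, ("dog", "barks"): 0.05}
print(sequence_probability_bos(["The", "dog", "barks"], bigram))  # 0.08 * 0.002 * 0.05
```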
Is it worth considering the end of the text by introducing another artificial token, $w_{n+1}$, to improve last-word prediction by multiplying the above product with $P(w_{n+1} \mid w_n)$?
Multiplying numerous probabilities is often computationally problematic: it is slow, and the product of many values less than 1 can underflow the limits of floating-point precision. In practice, logarithmic probabilities are summed instead:

$$\log P(w_1, \ldots, w_n) \approx \sum_{i=1}^{n} \log P(w_i \mid w_{i-1})$$
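A sketch of the same computation in log space, reusing the hypothetical bigram dictionary from the previous sketch; summing log probabilities avoids the underflow that the raw product would suffer for long sequences:

```python
import math

def sequence_log_probability(words: list[str], bigram: dict) -> float:
    """log P(w1..wn) ~= sum_{i=1..n} log P(wi | w(i-1)); summing in log space avoids underflow."""
    logp = 0.0
    for prev, curr in zip(["<s>"] + words, words):
        p = bigram.get((prev, curr), 0.0)
        logp += math.log(p) if p > 0.0 else float("-inf")  # unseen bigram -> zero probability
    return logp

bigram = {("<s>", "The"): 0.08, ("The", "dog"): 0.002, ("dog", "barks"): 0.05}
print(sequence_log_probability(["The", "dog", "barks"], bigram))  # log(8e-06), roughly -11.74
```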