Tokenization
Update: 2023-10-31
Tokenization is the process of breaking down a text into smaller units, typically words or subwords, known as tokens. Tokens serve as the basic building blocks used for a specific task.
What is the difference between a word and a token?
When examining the word types from the previous section, you will notice several words that need further tokenization, many of which can be handled by splitting on punctuation:
"R1:             -> ['"', 'R1', ':']
(R&D)            -> ['(', 'R&D', ')']
15th-largest     -> ['15th', '-', 'largest']
Atlanta,         -> ['Atlanta', ',']
Department's     -> ['Department', "'s"]
activity"[26]    -> ['activity', '"', '[26]']
centers.[21][22] -> ['centers', '.', '[21]', '[22]']
Depending on the task, you may want to tokenize [26] into ['[', '26', ']'] for more generalization. In this case, however, we consider "[26]" a unique identifier for the corresponding reference rather than the number 26 surrounded by square brackets. Thus, we aim to recognize it as a single token.
delimit()
Let us write the delimit() function that takes a word and a set of delimiters and returns a list of tokens by splitting the word using the delimiters:
L2: Find the index of the first character in word that is in the delimiters set (enumerate(), next()). If no delimiter is found in word, return -1 (generator expressions).
L3: If no delimiter is found, return a list containing word as a single token.
L5: If a delimiter is found, create a list tokens to store the individual tokens.
L6: If the delimiter is not at the beginning of word, add the characters before the delimiter as a token to tokens.
L7: Add the delimiter itself as a separate token to tokens.
L9-10: If there are characters after the delimiter, recursively call the delimit() function on the remaining part of word and extend the tokens list with the result (extend()).
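Below is a minimal sketch of delimit() that is consistent with the walkthrough above; it is laid out so that its line numbers roughly match the L-references, but the exact implementation in tokenization.py may differ:

    def delimit(word, delimiters):
        i = next((i for i, c in enumerate(word) if c in delimiters), -1)  # first delimiter position, or -1
        if i < 0: return [word]                                           # no delimiter: the whole word is one token

        tokens = []
        if i > 0: tokens.append(word[:i])  # characters before the delimiter
        tokens.append(word[i])             # the delimiter itself

        if i + 1 < len(word):              # characters after the delimiter
            tokens.extend(delimit(word[i + 1:], delimiters))
        return tokens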
We now test delimit() using the following cases:
L1: Define the delimiters as a set (set types).
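A possible test script is sketched below; the exact delimiter set used in tokenization.py is not shown here, so the one below is an assumption chosen to reproduce the examples:

    delimiters = {'"', "'", '(', ')', '[', ']', ':', '-', ',', '.'}  # assumed delimiter set
    test_words = ['"R1:', '(R&D)', '15th-largest', 'Atlanta,', "Department's",
                  'activity"[26]', 'centers.[21][22]', '149,000', 'U.S.']

    for word in test_words:
        print(word, '->', delimit(word, delimiters))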
postprocess()
Reviewing the above output, the first four test cases yield accurate results, while the last five are not handled correctly; they should have been tokenized as follows:
Department's     -> ['Department', "'s"]
activity"[26]    -> ['activity', '"', '[26]']
centers.[21][22] -> ['centers', '.', '[21]', '[22]']
149,000          -> ['149,000']
U.S.             -> ['U.S.']
To handle these special cases, let us post-process the tokens generated by delimit():
L2: Initialize variables i for the current position and new_tokens for the resulting tokens.
L4: Iterate through the input tokens.
L5: Case 1: Handling apostrophes for contractions like "'s" (e.g., it's).
L6: Combine the apostrophe and "s" and append it as a single token.
L7: Move the position indicator by 1 to skip the next token.
L8-10: Case 2: Handling numbers in special formats like [##], ###,### (e.g., [42], 12,345).
L11: Combine the special number format and append it as a single token.
L12: Move the position indicator by 2 to skip the next two tokens.
L13: Case 3: Handling acronyms like "U.S.".
L14: Combine the acronym and append it as a single token.
L15: Move the position indicator by 3 to skip the next three tokens.
L16-17: Case 4: If none of the special cases above are met, append the current token.
L18: Move the position indicator by 1 to process the next token.
L20: Return the list of processed tokens.
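A sketch of postprocess() that follows the walkthrough above; its line numbers are laid out to roughly match the L-references, and the specific conditions for each case are assumptions inferred from the examples:

    def postprocess(tokens):
        i, new_tokens = 0, []

        while i < len(tokens):
            if tokens[i] == "'" and i + 1 < len(tokens) and tokens[i + 1].lower() == 's':
                new_tokens.append(''.join(tokens[i:i + 2]))    # e.g., ["'", 's'] -> "'s"
                i += 1
            elif i + 2 < len(tokens) and \
                    ((tokens[i] == '[' and tokens[i + 1].isdigit() and tokens[i + 2] == ']') or
                     (tokens[i].isdigit() and tokens[i + 1] == ',' and tokens[i + 2].isdigit())):
                new_tokens.append(''.join(tokens[i:i + 3]))    # e.g., ['[', '26', ']'] -> '[26]'
                i += 2
            elif i + 3 < len(tokens) and len(tokens[i]) == 1 and tokens[i].isalpha() and tokens[i + 1] == '.' and tokens[i + 2].isalpha() and tokens[i + 3] == '.':
                new_tokens.append(''.join(tokens[i:i + 4]))    # e.g., ['U', '.', 'S', '.'] -> 'U.S.'
                i += 3
            else:
                new_tokens.append(tokens[i])
            i += 1

        return new_tokens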
Once the post-processing is applied, all outputs are handled correctly:
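For example, re-running the five problematic cases through the two sketches above (reusing the assumed test_words and delimiters):

    for word in test_words[4:]:
        print(word, '->', postprocess(delimit(word, delimiters)))  # now matches the expected tokenization above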
tokenize()
Finally, we write the tokenize() function that takes a file path to a corpus and a set of delimiters and returns a list of tokens from the corpus:
L2: Read the contents of a file (corpus) and split it into words.
L3: Tokenize each word in the corpus using the specified delimiters. postprocess() is used to process the special cases further. The resulting tokens are collected in a list and returned (list comprehension).
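A sketch of tokenize() consistent with this description (not necessarily the exact code in tokenization.py):

    def tokenize(corpus, delimiters):
        words = open(corpus).read().split()  # whitespace-separated words
        return [token for word in words for token in postprocess(delimit(word, delimiters))]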
Given the new tokenizer, let us recount word types in the corpus, emory-wiki.txt, and save them:
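One way to do this, reusing the assumed delimiters set from earlier (the output file name below is hypothetical):

    tokens = tokenize('emory-wiki.txt', delimiters)
    word_types = sorted(set(tokens))
    print(len(tokens), len(word_types))  # the section reports 363 word tokens and 197 word types

    with open('word_types-token.txt', 'w') as fout:
        fout.write('\n'.join(word_types))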
Compared to the original tokenization, where all words are split solely on whitespace, the more advanced tokenizer increases the number of word tokens from 305 to 363 and the number of word types from 180 to 197 because all punctuation symbols, as well as reference numbers, are now introduced as individual tokens.
Despite the increase in word types, using a more advanced tokenizer effectively mitigates the issue of sparsity in language modeling. What exactly is the sparsity issue, and how can appropriate tokenization help alleviate it?
References
Source: tokenization.py
ELIT Tokenizer - A heuristic-based tokenizer.