Contextual Encoding
Contextual representations are representations of words, phrases, or sentences within the context of the surrounding text. Unlike static word embeddings, where each word is represented by a fixed vector regardless of its context, contextual representations capture the meaning of a word or sequence of words based on the context in which it appears. The representation of a word can therefore vary depending on the words surrounding it, allowing for a more nuanced treatment of meaning in natural language processing tasks.
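The difference is easy to observe empirically. The following minimal sketch (assuming the Hugging Face `transformers` library and the pretrained `bert-base-uncased` checkpoint; the helper `word_vector` is our own illustrative function) extracts the contextual vector for the same word in two different sentences. A static embedding table would assign "bank" the same vector in both; a contextual encoder does not.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed setup: a standard pretrained BERT encoder and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual vector of `word`'s first occurrence in `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, hidden_dim)
    word_id = tokenizer.convert_tokens_to_ids(word)
    position = (inputs["input_ids"][0] == word_id).nonzero()[0, 0]
    return hidden[position]

v_river = word_vector("She sat on the bank of the river.", "bank")
v_money = word_vector("She deposited cash at the bank.", "bank")

# A static embedding would give cosine similarity 1.0 here; the two
# contextual vectors for "bank" diverge because their contexts differ.
print(torch.cosine_similarity(v_river, v_money, dim=0).item())
```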
Attention Is All You Need, Vaswani et al., Advances in Neural Information Processing Systems (NeurIPS), 2017.
Q1: How can document-level vector representations be derived from word embeddings? (One common baseline is sketched after these questions.)
Q2: How did the embedding representation facilitate the adoption of Neural Networks in Natural Language Processing?
Q3: How are embedding representations for Natural Language Processing fundamentally different from those used in Computer Vision?
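As a concrete starting point for Q1, one common baseline is mean pooling: average the embeddings of a document's words to obtain a single fixed-size vector. The sketch below uses a randomly initialized stand-in for a pretrained embedding table (in practice this would be word2vec or GloVe vectors); the vocabulary and the `document_vector` helper are hypothetical, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a pretrained embedding table (e.g. word2vec or GloVe),
# here random 300-dimensional vectors over a toy vocabulary.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
embeddings = rng.normal(size=(len(vocab), 300))

def document_vector(tokens: list[str]) -> np.ndarray:
    """Mean-pool the embeddings of in-vocabulary tokens into one document vector."""
    vectors = [embeddings[vocab[t]] for t in tokens if t in vocab]
    return np.mean(vectors, axis=0)

doc = "the cat sat on the mat".split()
print(document_vector(doc).shape)  # (300,)
```

Mean pooling ignores word order, which is precisely the limitation that motivates the contextual encoders discussed above.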