NLP Essentials

Contextual Encoding

Contextual representations are representations of words, phrases, or sentences within the context of the surrounding text. Unlike word embeddings from Word2Vec, where each word is represented by a fixed vector regardless of its context, contextual representations capture the meaning of a word or sequence of words based on the context in which they appear in a particular document. As a result, the representation of a word can vary depending on the words surrounding it, allowing for a more nuanced understanding of meaning in natural language processing tasks.
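To make the contrast concrete, the sketch below compares the two behaviors on the ambiguous word "bank": a static embedding assigns it one vector everywhere, whereas a contextual encoder produces a different vector in each sentence. This is only an illustration under assumptions not made by this page: it uses the Hugging Face transformers package and the bert-base-uncased checkpoint as the contextual encoder.

```python
# Minimal sketch (assumes: pip install torch transformers).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = [
    "She deposited the check at the bank.",     # financial sense
    "They fished from the bank of the river.",  # river sense
]

bank_vectors = []
with torch.no_grad():
    for sentence in sentences:
        inputs = tokenizer(sentence, return_tensors="pt")
        hidden = model(**inputs).last_hidden_state[0]  # shape: (seq_len, hidden_size)
        tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
        bank_vectors.append(hidden[tokens.index("bank")])  # contextual vector for "bank"

# A static embedding would yield identical vectors here; the contextual ones differ.
similarity = torch.cosine_similarity(bank_vectors[0], bank_vectors[1], dim=0)
print(f"cosine similarity between the two 'bank' vectors: {similarity.item():.3f}")
```

A static model such as Word2Vec returns the same vector for both occurrences of "bank", which is exactly the limitation that the contextual encoders covered in this chapter address.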

Q1: How can document-level vector representations be derived from word embeddings?

Q2: How did embedding representations facilitate the adoption of Neural Networks in Natural Language Processing?

Q3: How are embedding representations for Natural Language Processing fundamentally different from those for Computer Vision?

Contents

  • Subword Tokenization
  • Recurrent Neural Networks
  • Transformer
  • Encoder-Decoder Framework

References

  • Attention is All You Need, Vaswani et al., Proceedings of Advances in Neural Information Processing Systems (NeurIPS), 2017.