Assignments
The EM algorithm is a classic method in unsupervised learning. What are the advantages of unsupervised learning over supervised learning, and which tasks align well with unsupervised learning?
What are the disadvantages of using BPE-based tokenization instead of rule-based tokenization? What are the potential issues with the implementation of BPE above?
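For reference when discussing the question above, here is a minimal, illustrative sketch of the BPE merge-learning loop in the spirit of Sennrich et al.; it is not the implementation referenced in the question, and the toy corpus and merge count are hypothetical:

```python
import re
from collections import Counter

def get_pair_counts(vocab):
    # Count adjacent symbol pairs across the corpus, weighted by word frequency.
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    # Merge the pair only where it appears as two whole, adjacent symbols;
    # a bare str.replace would also match inside longer symbols.
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    return {pattern.sub("".join(pair), word): freq for word, freq in vocab.items()}

# Toy corpus with words pre-split into characters (hypothetical data).
vocab = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}
for _ in range(5):  # the number of merges is a hyperparameter
    pairs = get_pair_counts(vocab)
    if not pairs:
        break
    best = max(pairs, key=pairs.get)
    vocab = merge_pair(best, vocab)
    print("merged:", best)
```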
How does self-attention operate given an embedding matrix $X \in \mathbb{R}^{n \times d}$ representing a document, where $n$ is the number of words and $d$ is the embedding dimension?
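As a point of reference, here is a minimal NumPy sketch of single-head scaled dot-product self-attention, assuming the embedding matrix $X \in \mathbb{R}^{n \times d}$ from the question; the random inputs and projection sizes are hypothetical:

```python
import numpy as np

def softmax(z, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (n, d) token embeddings; Wq, Wk, Wv: (d, d_k) projection matrices.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (n, n) scaled dot products
    A = softmax(scores, axis=-1)             # each row is a distribution over words
    return A @ V                             # (n, d_k) contextualized representations

# Hypothetical sizes and random inputs, purely for illustration.
rng = np.random.default_rng(0)
n, d, d_k = 4, 8, 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d_k)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```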
Given the same embedding matrix $X$ as in question #3, how does multi-head attention function? What advantages does multi-head attention offer over single-head self-attention?
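Likewise, a sketch of multi-head attention under the same assumptions, splitting the model dimension $d$ across $h$ heads; the weight shapes follow the convention of Vaswani et al., and the example sizes are hypothetical:

```python
import numpy as np

def multi_head_attention(X, Wq, Wk, Wv, Wo, h):
    # X: (n, d); each projection is (d, d); each head gets d // h dimensions.
    n, d = X.shape
    dk = d // h
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Reshape to (h, n, dk) so each head attends independently.
    split = lambda M: M.reshape(n, h, dk).transpose(1, 0, 2)
    Qh, Kh, Vh = split(Q), split(K), split(V)
    scores = Qh @ Kh.transpose(0, 2, 1) / np.sqrt(dk)  # (h, n, n)
    A = np.exp(scores - scores.max(-1, keepdims=True))
    A = A / A.sum(-1, keepdims=True)                   # per-head softmax
    heads = A @ Vh                                     # (h, n, dk)
    concat = heads.transpose(1, 0, 2).reshape(n, d)    # concatenate the heads
    return concat @ Wo                                 # final output projection (n, d)

# Hypothetical sizes and random inputs, purely for illustration.
rng = np.random.default_rng(1)
n, d, h = 4, 8, 2
X = rng.normal(size=(n, d))
Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) for _ in range(4))
print(multi_head_attention(X, Wq, Wk, Wv, Wo, h).shape)  # (4, 8)
```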
What are the outputs of each layer in the Transformer model? How do the embeddings learned in the upper layers of the Transformer differ from those in the lower layers?
How is the masked language modeling (MLM) objective used to train a transformer-based language model?
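For concreteness, a sketch of the BERT-style masking step (80% [MASK], 10% random token, 10% unchanged, per Devlin et al.); the `mask_id`, `vocab_size`, and the `-100` ignore-label convention are assumptions borrowed from common practice, not from a specific implementation:

```python
import random

def mask_tokens(token_ids, vocab_size, mask_id, mask_prob=0.15):
    # Select ~15% of positions as prediction targets; of those, 80% become
    # [MASK], 10% a random token, 10% stay unchanged. Real implementations
    # also skip special tokens such as [CLS] and [SEP].
    inputs, labels = list(token_ids), [-100] * len(token_ids)  # -100 = ignored
    for i, tok in enumerate(token_ids):
        if random.random() < mask_prob:
            labels[i] = tok  # the model must predict the original token here
            r = random.random()
            if r < 0.8:
                inputs[i] = mask_id
            elif r < 0.9:
                inputs[i] = random.randrange(vocab_size)
            # else: keep the original token as input
    return inputs, labels
```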
How can one train a document-level embedding using a transformer?
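One common way to obtain the document vector itself is to pool the transformer's final-layer token embeddings; the sketch below shows mean pooling and a [CLS]-style strategy, and is illustrative rather than a full training recipe:

```python
import numpy as np

def document_embedding(H, attention_mask, strategy="mean"):
    # H: (n, d) final-layer token embeddings from a transformer encoder;
    # attention_mask: (n,) with 1 for real tokens, 0 for padding.
    if strategy == "cls":
        return H[0]  # embedding of a leading [CLS]-style token
    # Mean-pool over the non-padding positions.
    mask = attention_mask[:, None].astype(H.dtype)
    return (H * mask).sum(0) / mask.sum()

# Hypothetical encoder outputs, purely for illustration.
H = np.random.default_rng(2).normal(size=(5, 8))
mask = np.array([1, 1, 1, 1, 0])
print(document_embedding(H, mask).shape)  # (8,)
```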
What are the advantages of embeddings generated by transformers compared to those generated by Word2Vec?
Neural networks have been widely used to train natural language processing models since 2013. What factors enabled this popularity, and how do these methods differ from traditional NLP approaches?
Recent large language models such as ChatGPT and Claude are trained quite differently from traditional NLP models. What are the main differences, and what factors enabled their development?
Attention Is All You Need, Vaswani et al., NIPS 2017.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, Devlin et al., NAACL 2019.