Assignments

Quiz

  1. The EM algorithm stands as a classic method in unsupervised learning. What are the advantages of unsupervised learning over supervised learning, and which tasks align well with unsupervised learning?

  2. What are the disadvantages of using BPE-based tokenization instead of rule-based tokenization? What are the potential issues with the implementation of BPE above? (A minimal BPE sketch follows the quiz.)

  3. How does self-attention operate given an embedding matrix $\mathrm{W} \in \mathbb{R}^{n \times d}$ representing a document, where $n$ is the number of words and $d$ is the embedding dimension? (A self-attention sketch follows the quiz.)

  4. Given the same embedding matrix as in question #3, how does multi-head attention function? What advantages does multi-head attention offer over self-attention? (A multi-head sketch follows the quiz.)

  5. What are the outputs of each layer in the Transformer model? How do the embeddings learned in the upper layers of the Transformer differ from those in the lower layers?

  6. How is masked language modeling (MLM) used to train a transformer-based language model? (A masking sketch follows the quiz.)

  7. How can one train a document-level embedding using a transformer?

  8. What are the advantages of embeddings generated by transformers compared to those generated by Word2Vec?

  9. Neural networks have gained widespread popularity for training natural language processing models since 2013. What factors enabled this popularity, and how do neural approaches differ from traditional NLP methods?

  10. Recent large language models like ChatGPT or Claude are trained quite differently from traditional NLP models. What are the main differences, and what factors enabled their development?
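For reference on question #2, below is a minimal sketch of BPE training, not the chapter's implementation: each word is represented as a sequence of symbols, and the most frequent adjacent symbol pair is merged repeatedly. The `train_bpe` function, the `</w>` end-of-word marker, and the toy corpus are illustrative assumptions.

```python
from collections import Counter

def train_bpe(corpus, num_merges):
    """Learn BPE merges from a whitespace-tokenized corpus."""
    # Count word frequencies; represent each word as its characters
    # plus an end-of-word marker.
    vocab = Counter(tuple(word) + ("</w>",) for word in corpus)
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent pair
        merges.append(best)
        # Merge every occurrence of the best pair into one symbol.
        new_vocab = Counter()
        for symbols, freq in vocab.items():
            merged, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges

corpus = "low lower lowest new newer".split()
print(train_bpe(corpus, 5))
```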
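For question #3, here is a minimal NumPy sketch of scaled dot-product self-attention over an embedding matrix $\mathrm{W} \in \mathbb{R}^{n \times d}$. The projection matrices `Wq`, `Wk`, `Wv`, the head dimension `d_k`, and the random inputs are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(W, Wq, Wk, Wv):
    """Scaled dot-product self-attention over one document.

    W          : (n, d) word embeddings.
    Wq, Wk, Wv : (d, d_k) learned projection matrices.
    Returns an (n, d_k) matrix of contextualized embeddings.
    """
    Q, K, V = W @ Wq, W @ Wk, W @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (n, n) attention scores
    A = softmax(scores, axis=-1)             # each row sums to 1
    return A @ V                             # weighted mix of values

n, d, d_k = 6, 16, 8
rng = np.random.default_rng(0)
W = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d_k)) for _ in range(3))
print(self_attention(W, Wq, Wk, Wv).shape)  # (6, 8)
```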
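For question #4, a sketch of multi-head attention that reuses `self_attention` and the variables from the sketch above: each head applies its own projections in parallel, and the concatenated head outputs are mixed by an assumed output projection `Wo`.

```python
def multi_head_attention(W, heads, Wo):
    """Run several attention heads in parallel and mix their outputs.

    heads : list of (Wq, Wk, Wv) triples, one per head.
    Wo    : (h * d_k, d) output projection.
    """
    outputs = [self_attention(W, Wq, Wk, Wv) for Wq, Wk, Wv in heads]
    return np.concatenate(outputs, axis=-1) @ Wo

h = 2
heads = [tuple(rng.normal(size=(d, d_k)) for _ in range(3)) for _ in range(h)]
Wo = rng.normal(size=(h * d_k, d))
print(multi_head_attention(W, heads, Wo).shape)  # (6, 16)
```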
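For question #6, a sketch of how masked-language-model training examples can be constructed, loosely following the BERT recipe: about 15% of positions are selected; of those, 80% are replaced with [MASK], 10% with a random token, and 10% are left unchanged. The `mask_tokens` helper, the `MASK_ID` constant, and the -100 ignore label are illustrative assumptions.

```python
import numpy as np

MASK_ID = 103  # assumed [MASK] token id (103 in BERT's vocabulary)

def mask_tokens(token_ids, vocab_size, mask_prob=0.15, rng=None):
    """Create one masked-language-model training example."""
    rng = rng or np.random.default_rng()
    inputs = np.array(token_ids)
    labels = np.full_like(inputs, -100)   # -100 = ignored by the loss
    selected = rng.random(len(inputs)) < mask_prob
    labels[selected] = inputs[selected]   # predict the original tokens
    roll = rng.random(len(inputs))
    inputs[selected & (roll < 0.8)] = MASK_ID              # 80% -> [MASK]
    random_idx = selected & (roll >= 0.8) & (roll < 0.9)   # 10% -> random token
    inputs[random_idx] = rng.integers(vocab_size, size=random_idx.sum())
    return inputs, labels

# High mask_prob so masking is visible on a short toy sequence.
inputs, labels = mask_tokens([5, 27, 301, 12, 78, 4011], vocab_size=30000,
                             mask_prob=0.5, rng=np.random.default_rng(1))
print(inputs, labels)
```

The transformer is then trained with cross-entropy to predict the original ids at the selected positions, ignoring every position labeled -100.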
