Given the input text $W = \{w_1, \dots, w_n\}$ where $w_i$ is the $i$'th token in $W$, a contextualized encoder (e.g., BERT) takes $W$ and generates an embedding $e_i \in \mathbb{R}^d$ for every token $w_i$ using $w_i$ as well as its context. The challenge is that this encoder can take only up to $\mu$ tokens such that it cannot handle any input where $n > \mu$.
What are the ways to handle arbitrarily large input using a contextualized encoder?
One popular method is called the "Sliding Window", which splits the input into multiple blocks of text, generates embeddings for each block separately, and merges them at the end.
Let $W = W_1 \cup \cdots \cup W_k$ where $|W_i| = \mu$ if $i < k$; otherwise, $|W_i| \leq \mu$, such that $k = \lceil n / \mu \rceil$. Then, the encoder takes each $W_i$ and generates $e_j$ for every token $w_j \in W_i$. Finally, the embedding matrix $E \in \mathbb{R}^{n \times d}$ is created by sequentially stacking all embeddings in $\{e_1, \dots, e_n\}$.
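To make the baseline concrete, here is a minimal Python sketch; the `encode` stub stands in for a real contextualized encoder such as BERT, and all names and dimensions are illustrative:

```python
import numpy as np

# Stub standing in for a contextualized encoder (e.g., BERT): maps a block
# of at most mu tokens to one d-dimensional embedding per token.
def encode(block: list[str], d: int = 768) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(tuple(block))) % 2**32)
    return rng.standard_normal((len(block), d))

def sliding_window(tokens: list[str], mu: int) -> np.ndarray:
    """Split W into k = ceil(n / mu) blocks, encode each block independently,
    and sequentially stack the per-token embeddings into E of shape (n, d)."""
    blocks = [tokens[i:i + mu] for i in range(0, len(tokens), mu)]
    return np.vstack([encode(b) for b in blocks])
```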
Modify the baseline method such that each block shares overlapping tokens with its surrounding blocks (both front and back). Once all blocks are encoded, each overlapped token has two embeddings; create the average of those two embeddings and make it the final embedding for the overlapped token.
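A minimal sketch of this overlapped variant, assuming an overlap size $o$ with $0 < o < \mu$ and reusing the `encode` stub above; the averaging bookkeeping shown here is one possible implementation:

```python
import numpy as np

def sliding_window_overlap(tokens: list[str], mu: int, o: int,
                           d: int = 768) -> np.ndarray:
    """Blocks of up to mu tokens overlap their neighbors by o tokens; every
    overlapped token receives two embeddings, which are averaged."""
    n = len(tokens)
    step = mu - o                          # stride between block starts
    sums, counts = np.zeros((n, d)), np.zeros((n, 1))
    for start in range(0, max(n - o, 1), step):
        emb = encode(tokens[start:start + mu], d)  # encode() from the sketch above
        sums[start:start + len(emb)] += emb
        counts[start:start + len(emb)] += 1
    return sums / counts                   # counts == 2 exactly on overlapped tokens
```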
In a sequence-to-sequence model (aka, an encoder-decoder model), a decoder takes an embedding matrix and predicts what token should come next. It is often the case that this embedding matrix is also bounded by a certain size $\nu$, which becomes an issue when the matrix grows larger than $\nu$ (for the case above, $E \in \mathbb{R}^{n \times d}$ where $n > \nu$). One common method to handle this issue is to use an attention matrix for dimensionality reduction as follows:
The embedding matrix $E \in \mathbb{R}^{n \times d}$ is first transposed to $E^\top \in \mathbb{R}^{d \times n}$, then multiplied by an attention matrix $A \in \mathbb{R}^{n \times \nu}$ such that $E^\top A \in \mathbb{R}^{d \times \nu}$. Finally, the transpose of $E^\top A$, that is $(E^\top A)^\top \in \mathbb{R}^{\nu \times d}$, gets fed into the decoder.
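A numeric sketch of this reduction; the attention matrix here is a random stand-in for one that would be learned in practice:

```python
import numpy as np

n, d, nu = 1500, 768, 512    # n > nu, so E is too large for the decoder
E = np.random.randn(n, d)    # embedding matrix from the encoder
A = np.random.randn(n, nu)   # attention matrix (random stand-in)

X = (E.T @ A).T              # (d, n) @ (n, nu) -> (d, nu), then transpose
assert X.shape == (nu, d)    # reduced matrix that gets fed into the decoder
```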
Would the following method be equivalent to the above method?
An attention matrix $A' \in \mathbb{R}^{\nu \times n}$ is multiplied by the embedding matrix $E \in \mathbb{R}^{n \times d}$ such that $A'E \in \mathbb{R}^{\nu \times d}$. Finally, $A'E$ gets fed into the decoder.
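One way to check the equivalence is the transpose identity $(XY)^\top = Y^\top X^\top$:

$$(E^\top A)^\top = A^\top (E^\top)^\top = A^\top E \in \mathbb{R}^{\nu \times d}$$

so the two methods feed the decoder the same matrix whenever $A' = A^\top$.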
When you create a dataset, the following needs to be clearly described:
Data collection (e.g., sources of the data).
Preprocessing if performed (e.g., scripts that you write, existing tools used).
Annotation scheme and guidelines with justification, if annotation is conducted.
People involved in this process (e.g., annotators, survey subjects).
Quality of the created data (e.g., inter-annotator agreement; see the sketch after this list).
Statistics and analysis of the original, preprocessed, and annotated data.
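As an illustration of the quality item above, here is a minimal sketch computing Cohen's kappa, one common inter-annotator agreement measure; the labels and data are made up:

```python
from collections import Counter

def cohens_kappa(a1: list[str], a2: list[str]) -> float:
    """Chance-corrected agreement between two annotators' label sequences."""
    n = len(a1)
    p_o = sum(x == y for x, y in zip(a1, a2)) / n   # observed agreement
    c1, c2 = Counter(a1), Counter(a2)
    p_e = sum(c1[l] * c2[l] for l in set(c1) | set(c2)) / (n * n)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

print(cohens_kappa(['pos', 'neg', 'pos', 'pos', 'neg'],
                   ['pos', 'neg', 'neg', 'pos', 'neg']))  # ~0.615
```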
Here are a few papers presenting new datasets:
Li et al., EMNLP 2020 (see Section 3).
FriendsQA: Open-Domain Question Answering on TV Show Transcripts, Yang and Choi, SIGDIAL 2019 (see Section 3).
This chapter guides you to write the approach section.
Typically, it is better to write the approach section as abstractly as possible so your methods become generalizable to many tasks. For example, even if you use BERT as an encoder but your approach can take any transformer as an encoder, it is better to state that your method uses a transformer as an encoder instead of BERT.
This chapter discusses how to develop new algorithms and write them in pseudocode.
Your task is to design an algorithm that takes a post with the title and its comments with replies from a discussion forum (e.g., Reddit) and converts them into a multi-turn one-to-one dialogue.
A post with the title ($P$):
How do you focus when you’re depressed?
I have so many assignments due, with exams coming up too. Life's just keeps hitting me recently and I'm finding it really hard to sit down and take information in. Writing is hard, listening and paying attention is hard. Even if I manage to listen or read none of the information stays in my head. Any help is very appreciated!!
Comments and replies:
Get up early everyday and use the library to study if you have one, idk if you're like me but as soon as I get home I'm kinda done for the day so it helps to stay somewhere where you can't really relax.
Thank you, I’ll give this a try tomorrow
What helps me is embracing when I'm feeling down and allowing myself to take a deserved break. Sometimes I confuse my depressive episodes with burnout and it's important to know your limits. The biggest pro tip to not be overwhelmed with so much to do all at once is doing something every day. Dedicating simply 30 minutes to an hour a day of intense studying goes a long way over time vs cramming at the end. If you're able to do more than 1 hour then great! But know that you don't have to do 6-7 intense studying hours a day to be successful. Be intentional with your time and work smarter vs harder. Your future self will thank you.
This is so nice to hear, and very helpful, thank you!
Hardest part is starting to study, once I have like 15 minutes into my study session that’s my only focus and just forget everything else.
Give a comparison overview of your algorithms with key features:
We introduce two algorithms for Reddit-to-dialogue generation: the baseline algorithm considers every sentence in the post an utterance of Speaker 1 and each comment an utterance of Speaker 2 (Section 3.1), whereas the advanced algorithm finds an appropriate span of sentences from the post to form an utterance for Speaker 1 and an appropriate span of any comment to form an utterance for Speaker 2 (Section 3.2).
Indicate the objective of your algorithm(s):
The main objective is to generate a multi-turn dialogue using a post, its comments, and replies that flows naturally in context.
Describe what the input and output data should be (possibly with a figure) that are commonly applied to all algorithms:
All algorithms assume that the number of sentences in the input post is less than or equal to the number of comments. The generated dialogues involve two speakers where utterances of Speakers 1 and 2 are extracted from the post and comments, respectively.
The title or each sentence in the post is considered an utterance of Speaker 1 (S1).
S1: How do you focus when you’re depressed?
S2: What helps me is embracing when I'm feeling down and allowing myself to take ...
S1: I have so many assignments due, with exams coming up too.
S2: Get up early everyday and use the library to study if you have one, ...
S1: Life's just keeps hitting me recently and I'm finding it really hard to sit down and take information in.
S2: Hardest part is starting to study, once I have like 15 minutes into my study session that’s my only focus and just forget everything else.
Illustrate the baseline algorithm in pseudocode. Create helper methods if they help the readability and/or generalizability of your algorithm.
Give a brief overview of the algorithm by explaining what each line of the code does.
Describe helper methods (if any) in detail.
Define the input:
Is the input correctly described according to the objective?
Initialize the output and auxiliary data structures:
Describe the loop:
Return the output:
How do you estimate such likelihoods?
Any span of consecutive sentences is considered an utterance of S1.
S1: How do you focus when you’re depressed? I have so many assignments due, with exams coming up too.
S2: What helps me is embracing when I'm feeling down and allowing myself to take a deserved break.
S1: Life's just keeps hitting me recently and I'm finding it really hard to sit down and take information in.
S2: Get up early everyday and use the library to study if you have one, idk if you're like me but as soon as I get home I'm kinda done for the day so it helps to stay somewhere where you can't really relax.
S1: Writing is hard, listening and paying attention is hard.
S2: Hardest part is starting to study, once I have like 15 minutes into my study session that’s my only focus and just forget everything else.
S1: Even if I manage to listen or read none of the information stays in my head. Any help is very appreciated!!
S2: Sometimes I confuse my depressive episodes with burnout and it's important to know your limits. The biggest pro tip to not be overwhelmed with so much to do all at once is doing something every day.
For each utterance $u$ of S1, find a comment that is the most relevant and make it the response to $u$ from Speaker 2 (S2).
Let $P = [p_1, \dots, p_n]$ be an input post where $p_i$ is the $i$'th sentence in $P$, and $C = \{c_1, \dots, c_m\}$ be a set of $P$'s comments such that $c_j = [c_{j1}, \dots, c_{j\ell}]$ where $c_j$ is the $j$'th comment in $C$ and $c_{jk}$ is the $k$'th sentence in $c_j$.
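The pseudocode below is a sketch consistent with the line references (L1–L8) in the walkthrough that follows; the helper names (pop, segmentize, likelihood, trim) are placeholders for the methods described afterward:

```
Input:  post P = [p_1, ..., p_n]; comments C = {c_1, ..., c_m}
Output: dialogue D
L1: D <- []
L2: S <- segmentize(C)
L3: while P is not empty do          # visits every sentence p_i
L4:     D.append(pop(P))
L5:     s* <- argmax_{s in S} likelihood(D, s)
L6:     D.append(s*)
L7:     trim(S, s*)
L8: return D
```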
Let $D$ be the list of utterances representing the output dialogue (L1) and $S$ be the set of segments created from $C$ (L2). The algorithm visits every sentence $p_i \in P$ (L3) and appends it to $D$ (L4). It then finds the most-relevant segment $s^*$ (L5) and adds $s^*$ to $D$ (L6). $S$ gets trimmed with $s^*$ (L7). Finally, it returns $D$ as the output (L8).
Describe the method:
The method removes and returns the first sentence in $P$.
Describe the method:
The method makes each comment $c_j \in C$ a segment $s_j$ s.t. $s_j = c_{j1} \oplus \cdots \oplus c_{j\ell}$, where $\oplus$ denotes text concatenation.
Describe the method:
The method takes $D$ comprising all previous utterances and a segment $s_j$, then estimates the likelihood of $s_j$ being the next utterance (sketched below).
Describe the method:
The method removes $s^*$ from $S$ such that $S \leftarrow S \setminus \{s^*\}$.
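One plausible instantiation of the likelihood method, sketched with a stub sentence encoder; in practice a pretrained sentence-embedding model would replace the stub, and every name here is illustrative:

```python
import numpy as np

# Stub standing in for a pretrained sentence encoder.
def embed(text: str, d: int = 384) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.standard_normal(d)
    return v / np.linalg.norm(v)

def likelihood(dialogue: list[str], segment: str) -> float:
    """Estimate how likely `segment` is to be the next utterance: cosine
    similarity between the segment and the most recent utterance."""
    return float(embed(dialogue[-1]) @ embed(segment))
```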
For each utterance $u$ of S1, find the most relevant span of consecutive sentences in any comment and make it the response to $u$ from S2.
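A sketch of the span search this advanced algorithm needs, reusing the likelihood scorer above; the exhaustive scan over all spans is one simple choice, not the only one:

```python
def best_span(dialogue: list[str], segments: list[list[str]]) -> list[str]:
    """Return the span of consecutive sentences, taken from any comment
    segment, that best continues the dialogue."""
    best, best_score = [], float('-inf')
    for sentences in segments:
        for i in range(len(sentences)):
            for j in range(i + 1, len(sentences) + 1):
                score = likelihood(dialogue, ' '.join(sentences[i:j]))
                if score > best_score:
                    best, best_score = sentences[i:j], score
    return best
```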