Given the input text W = {w_1, …, w_n} where w_i is the i'th token in W, a contextualized encoder (e.g., BERT) takes W and generates an embedding e_i ∈ R^(1×d) for every token w_i ∈ W using w_i as well as its context. The challenge is that this encoder can take at most m tokens, so it cannot handle any input where n > m.
What are the ways to handle arbitrarily large input using a contextualized encoder?
One popular method is called the "Sliding Window", which splits the input into multiple blocks of text, generates embeddings for each block separately, and merges them at the end.
Let W = B_1 ⊕ … ⊕ B_k (⊕: sequence concatenation), where B_i = [w_((i-1)·m+1), …, w_(i·m)] if i·m ≤ n; otherwise, B_i = [w_((i-1)·m+1), …, w_n], such that k = ⌈n/m⌉. Then, the encoder takes each B_i and generates e_j for every token w_j ∈ B_i. Finally, the embedding matrix E ∈ R^(n×d) is created by sequentially stacking all embeddings in order.
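The block decomposition above can be sketched as follows. The `encode` function stands in for any contextualized encoder that maps a block of at most m tokens to one embedding per token; the toy encoder below is purely illustrative and returns dummy vectors.

```python
from math import ceil

def sliding_window(tokens, m, encode):
    """Split tokens into k = ceil(n / m) non-overlapping blocks,
    encode each block separately, and stack the results in order."""
    n = len(tokens)
    k = ceil(n / m)
    blocks = [tokens[i * m:(i + 1) * m] for i in range(k)]
    embeddings = []
    for block in blocks:
        embeddings.extend(encode(block))  # one embedding per token in the block
    return embeddings                     # length n, original token order

# Toy stand-in encoder: "embeds" each token as (position-in-block, token length).
def toy_encode(block):
    return [(i, len(w)) for i, w in enumerate(block)]

E = sliding_window(["a", "bb", "ccc", "dddd", "ee"], 2, toy_encode)
```

Note that each block is encoded independently, which is exactly why tokens near block boundaries lose context.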
The baseline method does not have enough context to generate high-quality embeddings for tokens on the edge of each block.
Modify the baseline method such that every block shares overlapping tokens with its neighboring blocks (both front and back). Once all blocks are encoded, each overlapping token has two embeddings; average those two embeddings and make the result the final embedding for that token.
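A minimal sketch of this modification, under the assumption that blocks of size m step forward by m - o tokens so that consecutive blocks share o tokens; the toy encoder is again a hypothetical stand-in:

```python
def overlapping_window(tokens, m, o, encode):
    """Blocks of size m stepping by m - o, so consecutive blocks share o tokens.
    A token covered by two blocks gets the average of its two embeddings."""
    n, step = len(tokens), m - o
    sums = [None] * n
    counts = [0] * n
    start = 0
    while True:
        for j, vec in enumerate(encode(tokens[start:start + m])):
            i = start + j
            sums[i] = vec if sums[i] is None else [a + b for a, b in zip(sums[i], vec)]
            counts[i] += 1
        if start + m >= n:  # the last block reaches the end of the input
            break
        start += step
    return [[v / c for v in s] for s, c in zip(sums, counts)]

# Toy context-free encoder: 1-dimensional "embedding" = token length.
def toy_encode(block):
    return [[float(len(w))] for w in block]

E = overlapping_window(["a", "bb", "ccc", "dddd", "ee"], 3, 1, toy_encode)
```

With a real contextual encoder, the two embeddings of an overlapped token would differ (each block provides different context), and the average blends both views.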
In a sequence-to-sequence model (aka an encoder-decoder model), a decoder takes an embedding matrix and predicts what token should come next. It is often the case that this embedding matrix is also bounded by a certain size p, which becomes an issue when the matrix has more rows than p (for the case above, when n > p given E ∈ R^(n×d)). One common method to handle this issue is to use an attention matrix for dimensionality reduction as follows:
The embedding matrix E is first transposed to E^T ∈ R^(d×n), then multiplied by an attention matrix A ∈ R^(n×p) such that E^T·A ∈ R^(d×p). Finally, the transpose of E^T·A, that is (E^T·A)^T ∈ R^(p×d), gets fed into the decoder.
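The shape bookkeeping can be checked with a few lines of NumPy; the sizes n = 10, d = 4, p = 3 are illustrative assumptions, and the random matrices stand in for learned ones:

```python
import numpy as np

n, d, p = 10, 4, 3                 # illustrative sizes with n > p
rng = np.random.default_rng(0)
E = rng.normal(size=(n, d))        # embedding matrix E in R^(n x d)
A = rng.normal(size=(n, p))        # attention matrix A in R^(n x p)

reduced = (E.T @ A).T              # (d, n) @ (n, p) -> (d, p); transpose -> (p, d)
```

The decoder thus receives a (p × d) matrix regardless of how large n is.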
Would the following method be equivalent to the above method?
An attention matrix A' ∈ R^(p×n) is multiplied by the embedding matrix E such that A'·E ∈ R^(p×d). Finally, A'·E gets fed into the decoder.
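One way to probe this question numerically is to compare the two products when the second attention matrix is chosen as the transpose of the first; the random matrices and sizes below are illustrative assumptions:

```python
import numpy as np

n, d, p = 10, 4, 3
rng = np.random.default_rng(1)
E = rng.normal(size=(n, d))
A = rng.normal(size=(n, p))

method_1 = (E.T @ A).T   # first method: transpose, multiply, transpose back
method_2 = A.T @ E       # second method with A' chosen as A^T
```

This numerical check reflects the identity (E^T·A)^T = A^T·E, which is worth verifying on paper as well.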
This chapter discusses how to develop new algorithms and write them in pseudocode.
Typically, it is better to write the approach section as abstractly as possible so your methods become generalizable to many tasks. For example, even if you use BERT as an encoder, if your approach can take any transformer as an encoder, it is better to state that your method uses a transformer as an encoder instead of BERT.
A Reddit post (https://www.reddit.com/r/college/comments/v7h9rs) with the title:
How do you focus when you're depressed?
I have so many assignments due, with exams coming up too. Life's just keeps hitting me recently and I'm finding it really hard to sit down and take information in. Writing is hard, listening and paying attention is hard. Even if I manage to listen or read none of the information stays in my head. Any help is very appreciated!!
Comments and replies:
Get up early everyday and use the library to study if you have one, idk if you're like me but as soon as I get home I'm kinda done for the day so it helps to stay somewhere where you can't really relax.
Thank you, I'll give this a try tomorrow
What helps me is embracing when I'm feeling down and allowing myself to take a deserved break. Sometimes I confuse my depressive episodes with burnout and it's important to know your limits. The biggest pro tip to not be overwhelmed with so much to do all at once is doing something every day. Dedicating simply 30 minutes to an hour a day of intense studying goes a long way over time vs cramming at the end. If you're able to do more than 1 hour then great! But know that you don't have to do 6-7 intense studying hours a day to be successful. Be intentional with your time and work smarter vs harder. Your future self will thank you.
This is so nice to hear, and very helpful, thank you!
Hardest part is starting to study, once I have like 15 minutes into my study session that's my only focus and just forget everything else.
Give a comparison overview of your algorithms with key features:
We introduce two algorithms for the reddit-to-dialogue generation: the baseline algorithm considers every sentence in the post an utterance of Speaker 1 and each comment an utterance of Speaker 2 (Section 3.1), whereas the advanced algorithm finds an appropriate span of sentences from the post to form an utterance for Speaker 1 and an appropriate span of any comment to form an utterance for Speaker 2 (Section 3.2).
Indicate the objective of your algorithm(s):
The main objective is to generate a multi-turn dialogue that flows naturally in context, using a post, its comments, and replies.
Describe what the input and output data should be (possibly with a figure) that are commonly applied to all algorithms:
All algorithms assume that the number of sentences in the input post is less than or equal to the number of comments. The generated dialogues involve two speakers where utterances of Speakers 1 and 2 are extracted from the post and comments, respectively.
The title or each sentence in the post is considered an utterance of Speaker 1 (S1).
For each utterance u_1 of S1, find a comment that is the most relevant and make it the response to u_1 from Speaker 2 (S2).
S1: How do you focus when you're depressed?
S2: What helps me is embracing when I'm feeling down and allowing myself to take ...
S1: I have so many assignments due, with exams coming up too.
S2: Get up early everyday and use the library to study if you have one, ...
S1: Life's just keeps hitting me recently and I'm finding it really hard to sit down and take information in.
S2: Hardest part is starting to study, once I have like 15 minutes into my study session that's my only focus and just forget everything else.
Illustrate the baseline algorithm in pseudocode. Create helper methods if they help the readability and/or generalizability of your algorithm.
Give a brief overview of the algorithm by explaining what each line of the code does.
Describe helper methods (if any) in detail.
Define the input:
Let P = [p_1, …, p_n] be an input post where p_i is the i'th sentence in P, and C = {C_1, …, C_m} be a set of P's comments such that C_j = [c_j1, …, c_jℓ], where C_j is the j'th comment in C and c_jk is the k'th sentence in C_j.
Is the input correctly described according to the objective?
Initialize the output and auxiliary data structures:
Let D be the list of utterances representing the output dialogue (L1) and T be a set of segments created from C (L2).
Describe the loop:
The algorithm visits every sentence p_i ∈ P (L3) and appends it to D (L4). It then finds the most-relevant segment t̂ ∈ T (L5) and adds t̂ to D (L6). T gets trimmed with t̂ (L7).
Return the output:
Finally, it returns D as the output (L8).
Describe the first method:
The first method removes and returns the first sentence in P.
Describe the segment method:
The segment method makes each comment a segment s.t. segment(C) = {C'_1, …, C'_m}, where C'_j = c_j1 • … • c_jℓ (•: text concatenation).
Describe the ranker method:
The ranker method takes D, comprising all previous utterances up to and including p_i, and each candidate t ∈ T, then estimates the likelihood of t being the next utterance.
How do you estimate such likelihoods?
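The document leaves the estimator open. One simple instantiation, purely illustrative, scores each candidate by the cosine similarity between its bag-of-words vector and that of the dialogue so far; any sentence encoder or learned relevance model could replace this:

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def ranker(dialogue, candidates):
    """Score each candidate segment against the dialogue so far; return the best."""
    context = Counter(" ".join(dialogue).lower().split())
    return max(candidates, key=lambda t: cosine(context, Counter(t.lower().split())))
```

A bag-of-words ranker only captures lexical overlap; in practice a semantic similarity model would likely rank candidates more faithfully to the "most relevant" criterion.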
Describe the trim method:
The trim method removes t̂ = C'_j from T such that trim(T, t̂) = T − {C'_j}.
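Putting the pieces together, the baseline algorithm (L1-L8) and its helper methods could look like the following sketch. The relevance model inside ranker is a hypothetical placeholder (word overlap with the last utterance); the loop assumes, as stated earlier, that the post has no more sentences than there are comments.

```python
def segment(comments):
    """Make each comment a single segment by concatenating its sentences."""
    return [" ".join(c) for c in comments]

def first(post):
    """Remove and return the first sentence in the post."""
    return post.pop(0)

def trim(segments, t_hat):
    """Remove the chosen segment from the candidate set."""
    segments.remove(t_hat)

def ranker(dialogue, segments):
    """Placeholder relevance model: word overlap with the last utterance."""
    last = set(dialogue[-1].lower().split())
    return max(segments, key=lambda t: len(last & set(t.lower().split())))

def baseline(post, comments):
    D = []                      # L1: output dialogue
    T = segment(comments)       # L2: candidate segments from comments
    while post:                 # L3: visit every sentence in the post
        D.append(first(post))   # L4: next utterance of Speaker 1
        t_hat = ranker(D, T)    # L5: most-relevant segment
        D.append(t_hat)         # L6: utterance of Speaker 2
        trim(T, t_hat)          # L7: remove the used segment
    return D                    # L8
```

Because trim removes each chosen segment, no comment is used as a response twice.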
Any span of consecutive sentences is considered an utterance of S1.
For each utterance u1β of S1, find a span of any consecutive sentences in comments that is the most relevant and make it the response to u1β from S2.
S1: How do you focus when you're depressed? I have so many assignments due, with exams coming up too.
S2: What helps me is embracing when I'm feeling down and allowing myself to take a deserved break.
S1: Life's just keeps hitting me recently and I'm finding it really hard to sit down and take information in.
S2: Get up early everyday and use the library to study if you have one, idk if you're like me but as soon as I get home I'm kinda done for the day so it helps to stay somewhere where you can't really relax.
S1: Writing is hard, listening and paying attention is hard.
S2: Hardest part is starting to study, once I have like 15 minutes into my study session that's my only focus and just forget everything else.
S1: Even if I manage to listen or read none of the information stays in my head. Any help is very appreciated!!
S2: Sometimes I confuse my depressive episodes with burnout and it's important to know your limits. The biggest pro tip to not be overwhelmed with so much to do all at once is doing something every day.
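The span selection in the advanced algorithm could be realized by enumerating candidate spans and scoring them with the same kind of relevance function as before. The sketch below only shows the enumeration step; the cap of three sentences per span is an illustrative assumption, not part of the algorithm as stated:

```python
def spans(sentences, max_len=3):
    """Enumerate every span of up to max_len consecutive sentences."""
    out = []
    for i in range(len(sentences)):
        for j in range(i + 1, min(i + max_len, len(sentences)) + 1):
            out.append(" ".join(sentences[i:j]))
    return out
```

Enumerating spans on both the post side (for S1 utterances) and the comment side (for S2 responses) is what distinguishes the advanced algorithm from the baseline's fixed one-sentence / one-comment units.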
When you create a dataset, the following needs to be clearly described:
Data collection (e.g., sources of the data).
Preprocessing if performed (e.g., scripts that you write, existing tools used).
People involved in this process (e.g., annotators, survey subjects).
Quality of the created data (e.g., inter-annotator agreement).
Statistics and analysis of the original, preprocessed, annotated data.
Here are a few papers presenting new datasets:
Li et al., EMNLP 2020 (see Section 3).
Yang and Choi, SIGDIAL 2019 (see Section 3).