Recurrent Neural Networks
Update: 2023-10-26
A Recurrent Neural Network (RNN) [1] maintains hidden states of previous inputs and uses them to predict outputs, allowing it to model temporal dependencies in sequential data.
The hidden state is a vector representing the network's internal memory. It is updated at each time step as the RNN processes a sequence of inputs, capturing information from previous time steps and influencing the prediction made at the current time step.
RNN for Sequence Tagging
Given an input sequence $X = [x_1, \ldots, x_n]$ where $x_i \in \mathbb{R}^{d \times 1}$, an RNN for sequence tagging defines two functions, $f$ and $g$:
$f$ takes the current input $x_i \in X$ and the hidden state $h_{i-1}$ of the previous input $x_{i-1}$, and returns a hidden state $h_i \in \mathbb{R}^{e \times 1}$ such that $f(x_i, h_{i-1}) = \alpha(W_x x_i + W_h h_{i-1}) = h_i$, where $W_x \in \mathbb{R}^{e \times d}$, $W_h \in \mathbb{R}^{e \times e}$, and $\alpha$ is an activation function.
$g$ takes the hidden state $h_i$ and returns an output $y_i \in \mathbb{R}^{o \times 1}$ such that $g(h_i) = W_o h_i = y_i$, where $W_o \in \mathbb{R}^{o \times e}$.
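These two functions can be written out directly. Below is a minimal NumPy sketch of a single RNN step, assuming tanh as the activation $\alpha$ and randomly initialized weights (both are illustrative choices, not fixed by the definitions above):

```python
import numpy as np

d, e, o = 4, 3, 2                     # illustrative input, hidden, and output dimensions
rng = np.random.default_rng(0)

W_x = rng.normal(size=(e, d))         # W_x ∈ R^{e×d}
W_h = rng.normal(size=(e, e))         # W_h ∈ R^{e×e}
W_o = rng.normal(size=(o, e))         # W_o ∈ R^{o×e}

def f(x_i, h_prev):
    """f(x_i, h_{i-1}) = α(W_x x_i + W_h h_{i-1}), with α = tanh."""
    return np.tanh(W_x @ x_i + W_h @ h_prev)

def g(h_i):
    """g(h_i) = W_o h_i."""
    return W_o @ h_i

x_1 = rng.normal(size=(d, 1))         # an input vector x_1 ∈ R^{d×1}
h_1 = f(x_1, np.zeros((e, 1)))        # h_1 = f(x_1, 0): there is no previous hidden state
y_1 = g(h_1)                          # output y_1 ∈ R^{o×1}
```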
Figure 1 shows an example of an RNN for sequence tagging, such as part-of-speech tagging:

Notice that the output $y_1$ for the first input $x_1$ is predicted by considering only the input itself, such that $f(x_1, 0) = \alpha(W_x x_1) = h_1$ (e.g., the POS tag of the first word "I" is predicted solely from that word). However, the output $y_i$ for every other input $x_i$ is predicted by considering both $x_i$ and $h_{i-1}$, an intermediate representation created explicitly for the task. This enables RNNs to capture sequential information that feedforward neural networks cannot.
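For reference, the same sequence-tagging setup can be sketched at the library level. The snippet below assumes PyTorch, where nn.RNN implements the recurrence $h_i = \tanh(W_x x_i + W_h h_{i-1})$ (plus bias terms) and a linear layer plays the role of $g$; the dimensions, sequence length, and tag-set size are made up for illustration:

```python
import torch
import torch.nn as nn

d, e, o = 50, 64, 17                  # embedding size, hidden size, number of POS tags (illustrative)
n = 6                                 # length of the input sentence

rnn = nn.RNN(input_size=d, hidden_size=e, batch_first=True)   # f with α = tanh (plus bias terms)
g = nn.Linear(e, o)                                           # g: projects each h_i to tag scores y_i

x = torch.randn(1, n, d)              # X = [x_1, ..., x_n] as a (batch=1, n, d) tensor
h, _ = rnn(x)                         # h[:, i-1, :] is h_i; the initial state h_0 defaults to 0
y = g(h)                              # one output y_i per token, shape (1, n, o)
```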
Q6: How does each hidden state $h_i$ in an RNN encode information relevant to sequence tagging tasks?
RNN for Text Classification
Unlike sequence tagging, where the RNN predicts a sequence of outputs $Y = [y_1, \ldots, y_n]$ for the input $X = [x_1, \ldots, x_n]$, an RNN designed for text classification predicts only one output $y$ for the entire input sequence such that:
Sequence Tagging: $\mathrm{RNN}_{st}(X) \rightarrow Y$
Text Classification: $\mathrm{RNN}_{tc}(X) \rightarrow y$
To accomplish this, a common practice is to predict the output $y$ from the last hidden state $h_n$ using the function $g$. Figure 2 shows an example of an RNN for text classification, such as sentiment analysis:

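As a sketch of this setup (again assuming PyTorch and illustrative dimensions), only the final hidden state $h_n$ returned by the recurrent layer is passed to $g$, yielding a single prediction for the whole sequence:

```python
import torch
import torch.nn as nn

d, e, o = 50, 64, 2                   # embedding size, hidden size, number of classes (e.g., pos/neg)
n = 8                                 # number of tokens in the input document (illustrative)

rnn = nn.RNN(input_size=d, hidden_size=e, batch_first=True)
g = nn.Linear(e, o)

x = torch.randn(1, n, d)              # the entire input sequence X
h, h_n = rnn(x)                       # h_n has shape (1, batch, e): the last hidden state
y = g(h_n[-1])                        # a single output y for the whole sequence, shape (1, o)
```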
Q7: In text classification tasks, what specific information is captured by the final hidden state $h_n$ of an RNN?
Bidirectional RNN
The RNN for sequence tagging above does not consider the words that follow the current word when predicting the output. This limitation can significantly impact model performance since contextual information following the current word can be crucial.
For example, let us consider the word "early" in the following two sentences:
They are early birds → "early" is an adjective.
They are early today → "early" is an adverb.
The POS tag of "early" depends on the word that follows ("birds" vs. "today"), so making the correct prediction is challenging without the following context.
To overcome this challenge, a Bidirectional RNN [2] processes the sequence in both the forward and backward directions, creating twice as many hidden states to capture a more comprehensive context. Figure 3 illustrates a bidirectional RNN for sequence tagging:

For every $x_i$, the forward hidden state $\overrightarrow{h}_i$ and the backward hidden state $\overleftarrow{h}_i$ are created by considering $\overrightarrow{h}_{i-1}$ and $\overleftarrow{h}_{i+1}$, respectively. The function $g$ takes both $\overrightarrow{h}_i$ and $\overleftarrow{h}_i$ and returns an output $y_i \in \mathbb{R}^{o \times 1}$ such that $g(\overrightarrow{h}_i, \overleftarrow{h}_i) = W_o(\overrightarrow{h}_i \oplus \overleftarrow{h}_i) = y_i$, where $\overrightarrow{h}_i \oplus \overleftarrow{h}_i \in \mathbb{R}^{2e \times 1}$ is the concatenation of the two hidden states and $W_o \in \mathbb{R}^{o \times 2e}$.
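A sketch of this formulation is given below, assuming PyTorch and illustrative dimensions as before: setting bidirectional=True makes the recurrent layer return, at each position, the concatenation $\overrightarrow{h}_i \oplus \overleftarrow{h}_i$, to which a linear layer with weights of shape $o \times 2e$ is applied.

```python
import torch
import torch.nn as nn

d, e, o = 50, 64, 17                  # embedding size, hidden size per direction, number of tags
n = 6                                 # sentence length (illustrative)

birnn = nn.RNN(input_size=d, hidden_size=e, batch_first=True, bidirectional=True)
g = nn.Linear(2 * e, o)               # W_o ∈ R^{o×2e}, applied to the concatenated hidden states

x = torch.randn(1, n, d)              # X = [x_1, ..., x_n]
h, _ = birnn(x)                       # h[:, i-1, :] concatenates the forward and backward states for x_i
y = g(h)                              # one output y_i per token, shape (1, n, o)
```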
Q8: What are the advantages and limitations of implementing bidirectional RNNs for text classification and sequence tagging tasks?
Advanced Topics
Long Short-Term Memory (LSTM) Networks [3-5]
Gated Recurrent Units (GRUs) [6-7]
References
1. Finding Structure in Time, Elman, Cognitive Science, 14(2), 1990.
2. Bidirectional Recurrent Neural Networks, Schuster and Paliwal, IEEE Transactions on Signal Processing, 45(11), 1997.
3. Long Short-Term Memory, Hochreiter and Schmidhuber, Neural Computation, 9(8), 1997.
4. End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF, Ma and Hovy, ACL, 2016.
5. Contextual String Embeddings for Sequence Labeling, Akbik et al., COLING, 2018.
6. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation, Cho et al., EMNLP, 2014.
7. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling, Chung et al., NeurIPS Workshop on Deep Learning and Representation Learning, 2014.