Recurrent Neural Networks
Last updated: 2023-10-26
A Recurrent Neural Network (RNN) [1] maintains hidden states of previous inputs and uses them to predict outputs, allowing it to model temporal dependencies in sequential data.
The hidden state is a vector that serves as the network's internal memory. It is updated at each time step as the RNN processes the input sequence, capturing information from previous time steps and influencing the prediction made at the current time step.
Given an input sequence $X = \{x_1, \ldots, x_n\}$ where $x_i \in \mathbb{R}^d$, an RNN for sequence tagging defines two functions, $f$ and $g$:
$f$ takes the current input $x_i$ and the hidden state $h_{i-1}$ of the previous input, and returns a hidden state $h_i$ such that $h_i = \alpha(W^x x_i + W^h h_{i-1})$, where $W^x \in \mathbb{R}^{e \times d}$, $W^h \in \mathbb{R}^{e \times e}$, and $\alpha$ is an activation function.
$g$ takes the hidden state $h_i$ and returns an output $y_i$ such that $y_i = W^o h_i$, where $W^o \in \mathbb{R}^{o \times e}$.
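As a concrete illustration, the two functions can be sketched in a few lines of NumPy. The weight names, the sizes, the zero-initialized $h_0$, and the use of tanh for $\alpha$ are assumptions made for this example, not part of the definition above:

```python
import numpy as np

# A minimal sketch of the sequence-tagging RNN above; the weight names, sizes,
# and the choice of tanh for the activation alpha are illustrative assumptions.
d, e, o = 4, 3, 2                  # input, hidden, and output sizes
rng = np.random.default_rng(0)

Wx = rng.normal(size=(e, d))       # W^x: input-to-hidden weights
Wh = rng.normal(size=(e, e))       # W^h: hidden-to-hidden weights
Wo = rng.normal(size=(o, e))       # W^o: hidden-to-output weights

def f(x_i, h_prev):
    """h_i = alpha(W^x x_i + W^h h_{i-1}), with alpha = tanh."""
    return np.tanh(Wx @ x_i + Wh @ h_prev)

def g(h_i):
    """y_i = W^o h_i (unnormalized scores over the tag set)."""
    return Wo @ h_i

X = rng.normal(size=(5, d))        # an input sequence of n = 5 word vectors
h = np.zeros(e)                    # initial hidden state, assumed to be zeros
Y = []
for x_i in X:
    h = f(x_i, h)                  # update the memory of the sequence so far
    Y.append(g(h))                 # one output per input -> sequence tagging
```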
Figure 1 shows an example of an RNN for sequence tagging, such as part-of-speech tagging:
The RNN for sequence tagging above does not consider the words that follow the current word when predicting the output. This limitation can significantly impact model performance since contextual information following the current word can be crucial.
For example, let us consider the word "early" in the following two sentences:
They are early birds -> "early" is an adjective.
They are early today -> "early" is an adverb.
The POS tag of "early" depends on the following word, "birds" or "today", so making the correct prediction is challenging without the context that follows.
To overcome this challenge, a Bidirectional RNN was proposed [2] that processes the sequence in both forward and backward directions, creating twice as many hidden states to capture a more comprehensive context. Figure 3 illustrates a bidirectional RNN for sequence tagging:
Does it make sense to use a bidirectional RNN for text classification? Explain your answer.
Long Short-Term Memory (LSTM) Networks [3-5]
Gated Recurrent Units (GRUs) [6-7]
Finding Structure in Time, Elman, Cognitive Science, 14(2), 1990.
Bidirectional Recurrent Neural Networks, Schuster and Paliwal, IEEE Transactions on Signal Processing, 45(11), 1997.
Long Short-Term Memory, Hochreiter and Schmidhuber, Neural Computation, 9(8), 1997.
End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF, Ma and Hovy, ACL, 2016.*
Contextual String Embeddings for Sequence Labeling, Akbik et al., COLING, 2018.*
Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation, Cho et al., EMNLP, 2014.*
Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling, Chung et al., NeurIPS Workshop on Deep Learning and Representation Learning, 2014.*
Notice that the output $y_1$ for the first input $x_1$ is predicted by considering only the input itself such that $h_1 = \alpha(W^x x_1)$ (e.g., the POS tag of the first word "I" is predicted solely using that word). However, the output $y_i$ for every other input $x_i$ is predicted by considering both $x_i$ and $h_{i-1}$, an intermediate representation created explicitly for the task. This enables RNNs to capture sequential information that Feedforward Neural Networks cannot.
What does each hidden state $h_i$ represent in the RNN for sequence tagging?
Unlike sequence tagging, where the RNN predicts a sequence of outputs $Y = \{y_1, \ldots, y_n\}$ for the input $X = \{x_1, \ldots, x_n\}$, an RNN designed for text classification predicts only one output $y$ for the entire input sequence such that:
Sequence tagging: $\{x_1, \ldots, x_n\} \rightarrow \{y_1, \ldots, y_n\}$
Text classification: $\{x_1, \ldots, x_n\} \rightarrow y$
To accomplish this, a common practice is to predict the output $y$ from the last hidden state $h_n$ using the function $g$. Figure 2 shows an example of an RNN for text classification, such as sentiment analysis:
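A minimal sketch of this setup (with the same assumed weights and zero-initialized $h_0$ as in the earlier example) differs from the tagging RNN only in where $g$ is applied:

```python
import numpy as np

# Sketch of an RNN for text classification: the same recurrence as above, but g
# is applied only once, to the last hidden state h_n. Weights, sizes, and the
# zero-initialized h_0 are illustrative assumptions.
d, e, o = 4, 3, 2
rng = np.random.default_rng(1)
Wx, Wh, Wo = rng.normal(size=(e, d)), rng.normal(size=(e, e)), rng.normal(size=(o, e))

X = rng.normal(size=(6, d))        # input sequence of n = 6 word vectors
h = np.zeros(e)
for x_i in X:
    h = np.tanh(Wx @ x_i + Wh @ h) # f: update the hidden state at each step
y = Wo @ h                         # g applied once to h_n -> a single output
```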
What does the hidden state $h_n$ represent in the RNN for text classification?
For every $x_i$, the hidden states $\overrightarrow{h}_i$ and $\overleftarrow{h}_i$ are created by considering $\overrightarrow{h}_{i-1}$ and $\overleftarrow{h}_{i+1}$, respectively. The function $g$ takes both $\overrightarrow{h}_i$ and $\overleftarrow{h}_i$ and returns an output $y_i$ such that $y_i = W^o (\overrightarrow{h}_i \oplus \overleftarrow{h}_i)$, where $\overrightarrow{h}_i \oplus \overleftarrow{h}_i$ is the concatenation of the two hidden states $\overrightarrow{h}_i$ and $\overleftarrow{h}_i$.
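The bidirectional computation can be sketched as two independent passes followed by a concatenation; again, the weight names, sizes, and the choice of tanh are assumptions made for illustration:

```python
import numpy as np

# Sketch of a bidirectional RNN for sequence tagging: one pass runs left-to-right,
# another right-to-left, and g is applied to the concatenated hidden states.
d, e, o = 4, 3, 2
rng = np.random.default_rng(2)
Wx_f, Wh_f = rng.normal(size=(e, d)), rng.normal(size=(e, e))   # forward weights
Wx_b, Wh_b = rng.normal(size=(e, d)), rng.normal(size=(e, e))   # backward weights
Wo = rng.normal(size=(o, 2 * e))   # output weights over the concatenation

X = rng.normal(size=(5, d))        # input sequence of n = 5 word vectors
n = len(X)

fwd, h = [], np.zeros(e)
for i in range(n):                 # left-to-right pass
    h = np.tanh(Wx_f @ X[i] + Wh_f @ h)
    fwd.append(h)

bwd, h = [None] * n, np.zeros(e)
for i in reversed(range(n)):       # right-to-left pass
    h = np.tanh(Wx_b @ X[i] + Wh_b @ h)
    bwd[i] = h

# One output per word, computed from both directions' hidden states.
Y = [Wo @ np.concatenate([fwd[i], bwd[i]]) for i in range(n)]
```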