Introduction


This post introduces the concept of reinforcement learning for sequence tasks. The method has been used in many tasks involving recurrent neural networks such as LSTMs, where we would like to optimize the network on an external reward rather than provide per-time-step supervision labels.

Reinforcement Learning


Let’s imagine that you want to train an LSTM to maximize an external reward such as the ROUGE metric. There is no way to do this with standard backpropagation, because the metric is not differentiable with respect to the model’s parameters. But if we frame this as a reinforcement learning problem, we can treat the whole sequence of tokens the LSTM emits until it produces an EOS token as a single action, and the ROUGE score of the produced sentence as its corresponding reward. Reinforcement learning thus bridges the gap between an external metric such as ROUGE and a sequential model such as an LSTM.

This adaptation of reinforcement learning to sequence-based tasks can be seen as a special case of the general policy gradient method, which optimizes \pi_{\theta}(s,a). In our case, we are specifically interested in the setting where the state s is an internalization of the actions taken so far, a_{1...t-1}, a.k.a. the hidden state h_{t} of the LSTM or GRU, which gives us \pi_{\theta}(s_{t},a_{t})=\pi_{\theta}(h_{t},a_{t})=\pi_{\theta}(a_{1...t-1},a_{t})

Policy Gradient


Let’s define the components of the reinforcement learning model.

  1. First, we have the policy function \pi_{\theta}. The policy function is the behavior rule that our reinforcement learning agent will follow. The only requirement is that it defines a probability distribution over the space of possible actions. In most cases this is simply the familiar softmax function.
  2. Secondly, we will define the reward function as the reward we expect to get by behaving according to \pi_{\theta}.

Since the policy is probabilistic, our reward should be an expectation over the probability distribution defined by \pi_{\theta}:

J(\theta)= \sum\limits_{a_{1:T}\in{\mathbb{A^\dagger}}} \pi_{\theta}(a_{1:T})R(a_{1:T})                    (2)

Where a_{1:T}=[a_1,a_2...a_T] is a sequence of actions that constitute an episode that ends at time T.
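To make these two components concrete, here is a minimal sketch of what such a policy might look like in PyTorch: an LSTM cell whose hidden state is passed through a linear layer and a softmax to define a distribution over actions. The class and argument names (SequencePolicy, vocab_size, and so on) are illustrative assumptions of mine, not anything prescribed by the method.

```python
import torch
import torch.nn as nn

class SequencePolicy(nn.Module):
    """pi_theta: maps the hidden state h_t (a summary of the actions taken
    so far) to a probability distribution over the next action a_t."""

    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTMCell(embed_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def step(self, prev_action, state=None):
        """One timestep: consume the previous action, update (h, c), and
        return a softmax (Categorical) distribution over the next action."""
        h, c = self.lstm(self.embed(prev_action), state)
        logits = self.out(h)  # unnormalized scores over the action space
        return torch.distributions.Categorical(logits=logits), (h, c)
```

Sampling from this distribution at each step and feeding the sampled action back in until an EOS action appears produces one episode a_{1:T}, whose reward R(a_{1:T}) is whatever external metric (e.g. ROUGE) we care about.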

Policy Gradient Derivation


We’ve now defined our reward function. In order to learn from this reward, we need to compute its gradient, \nabla J(\theta). Unfortunately, J(\theta) itself is impossible to compute exactly in most cases, since the number of possible action sequences grows exponentially with the sequence length T. The gradient is likewise impossible to evaluate directly. However, since the reward is a sum weighted by a probability distribution, i.e. an expectation, we can approximate it with an unbiased estimator (random sampling). How does this help us? As we will show below, a mathematical trick allows us to write \nabla J(\theta) itself as an expectation, so that it too can be estimated by random sampling.

By the chain rule, we have:

\nabla log(F(x))= \dfrac{1}{F(x)}*\nabla F(x)

If we multiply both sides by  F(x), we get

F(x)\nabla log(F(x))= \nabla F(x)                    (3)
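As a quick sanity check of (3), you can verify the identity numerically with autograd; the function F(x) = x^2 + 1 below is an arbitrary choice of mine, not anything from the derivation.

```python
import torch

x = torch.tensor(1.5, requires_grad=True)
F = x ** 2 + 1                                            # any positive, differentiable F

grad_F, = torch.autograd.grad(F, x, retain_graph=True)    # nabla F(x)
grad_logF, = torch.autograd.grad(torch.log(F), x)         # nabla log F(x)

print(torch.allclose(F.detach() * grad_logF, grad_F))     # expect: True
```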

This is often referred to as the log trick. Using it, we can write:

\nabla J(\theta)=\sum\limits_{a_{1:T}\in{\mathbb{A^\dagger}}}\nabla\pi_{\theta}(a_{1:T})R(a_{1:T})=\sum\limits_{a_{1:T}\in{\mathbb{A^\dagger}}}\pi_{\theta}(a_{1:T})\nabla log(\pi_{\theta}(a_{1:T}))R(a_{1:T})                    (4)

Which is an expectation:

=\mathbb{E}_{a_{1:T}\sim\pi_{\theta}}[\nabla log(\pi_{\theta}(a_{1:T}))R(a_{1:T})]             (5)

We can use (5) to approximate the gradient we should follow, by randomly sampling the action sequence until the end of an episode, observing the reward we got, and plugging it into the expression. The learning process now boils down to executing many trials of randomly sampled action sequences (this is called Monte-Carlo sampling), observing their rewards, and updating the policy weights accordingly until we reach convergence.
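As a sketch of what one such trial looks like in code, here is a single update step, reusing the hypothetical SequencePolicy from above and assuming some reward_fn such as a ROUGE scorer; all of these names, along with bos_token, eos_token and max_len, are illustrative assumptions.

```python
import torch

def reinforce_step(policy, optimizer, reward_fn, bos_token, eos_token, max_len=50):
    """One Monte-Carlo policy-gradient trial: sample an episode a_{1:T} from
    pi_theta, observe its reward, and step along grad log pi_theta(a_{1:T}) * R."""
    state, action = None, torch.tensor([bos_token])
    log_probs, actions = [], []

    # sample actions from pi_theta until EOS (or a length cap)
    for _ in range(max_len):
        dist, state = policy.step(action, state)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        actions.append(action.item())
        if action.item() == eos_token:
            break

    reward = reward_fn(actions)                # external, non-differentiable reward
    # log pi(a_{1:T}) = sum_t log pi(a_t | a_{1:t-1}); minimizing the negative
    # of (5) moves theta along the estimated gradient
    loss = -torch.stack(log_probs).sum() * reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```

Training then amounts to something like policy = SequencePolicy(vocab_size=10000), optimizer = torch.optim.Adam(policy.parameters()), and calling reinforce_step in a loop over many episodes until the average reward converges.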

Note on the Policy Gradient with an LSTM


We have now derived the policy gradient equation, but the implementation may not yet be clear. Implementing the sequential version of the policy gradient with an LSTM is surprisingly simple, because we never manually loop over the time dimension to compute any gradients: that work is already baked into the backpropagation-through-time (BPTT) calculation for the LSTM function \pi_{\theta}(a_{1:T}), as we will see below.

Recall that the LSTM outputs at each timestep t are defined as \pi(a_{t}|h_{t-1}). This is the same as \pi(a_{t}|a_{1...t-1}) because the hidden state h_{t-1} encodes the actions taken so far, a_{1...t-1}.

Over multiple timesteps, the LSTM models the joint probability distribution, which is simply the product of the conditional probabilities at each timestep:

\pi(a_{1})*\pi(a_{2}|a_{1})*\pi(a_{3}|a_{1...2})*\pi(a_{4}|a_{1...3})*...*\pi(a_{T}|a_{1...T-1})=\pi(a_{1},a_{2},a_{3}...,a_{T-1},a_{T})=\pi(a_{1:T})

Since we take the entire action sequence chosen by the LSTM during an episode as a single ‘action’, and therefore have a single corresponding reward for that sequence, the gradients at each time-step are simply the standard BPTT gradients. All modern deep learning graph libraries support BPTT, which automatically unrolls the graph in the time dimension and uses the chain rule to calculate each \nabla log(\pi(a_{t}|a_{1...t-1})) for t=1...T. By default, these BPTT gradients are accumulated as a sum over time, which is exactly the sum over time-steps that appears when log(\pi_{\theta}(a_{1:T})) is expanded into its per-timestep terms. Therefore, the only thing we need to do to obtain the correct gradients for each time-step is to weight the standard BPTT gradients by the final episode reward.
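This claim is easy to verify: using the hypothetical SequencePolicy sketched earlier, one backward pass through the reward-weighted sum of per-step log-probabilities leaves exactly the same gradients in the parameters as accumulating reward-weighted per-step gradients by hand. The fixed action sequence and reward below are arbitrary values chosen for the check.

```python
import copy
import torch

torch.manual_seed(0)
policy_a = SequencePolicy(vocab_size=10)
policy_b = copy.deepcopy(policy_a)
episode, reward = [3, 7, 2, 9], 0.42           # a fixed a_{1:T} and its reward R

def replay_log_probs(policy, episode, bos_token=0):
    """Replay a fixed action sequence and collect log pi(a_t | a_{1:t-1})."""
    state, prev, out = None, torch.tensor([bos_token]), []
    for a in episode:
        dist, state = policy.step(prev, state)
        prev = torch.tensor([a])
        out.append(dist.log_prob(prev))
    return out

# (a) one backward over the whole episode: BPTT sums over time for us
(-torch.stack(replay_log_probs(policy_a, episode)).sum() * reward).backward()

# (b) manual accumulation of reward-weighted per-timestep gradients
for lp in replay_log_probs(policy_b, episode):
    (-lp * reward).backward(retain_graph=True)

print(all(torch.allclose(pa.grad, pb.grad, atol=1e-6)
          for pa, pb in zip(policy_a.parameters(), policy_b.parameters())))  # expect: True
```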

Policy Gradient with a Baseline


Although this estimate of the gradient is unbiased, it has high variance. A popular remedy is to subtract a baseline b from the reward, which reduces the variance but does not affect the expected value of the gradient.

We subtract b from R(a_{1:T}) in (4):

\sum\limits_{a_{1:T}\in{\mathbb{A^\dagger}}}\pi_{\theta}(a_{1:T})\nabla log(\pi_{\theta}(a_{1:T}))(R(a_{1:T})-b)=\sum\limits_{a_{1:T}\in{\mathbb{A^\dagger}}}\pi_{\theta}(a_{1:T})\nabla log(\pi_{\theta}(a_{1:T}))R(a_{1:T})-\sum\limits_{a_{1:T}\in{\mathbb{A^\dagger}}}\pi_{\theta}(a_{1:T})\nabla log(\pi_{\theta}(a_{1:T}))b

Using (3) to rewrite the second term on the RHS,

\sum\limits_{a_{1:T}\in{\mathbb{A^\dagger}}}\pi_{\theta}(a_{1:T})\nabla log(\pi_{\theta}(a_{1:T}))b=\sum\limits_{a_{1:T}\in{\mathbb{A^\dagger}}}\nabla \pi_{\theta}(a_{1:T})b

Since b is independent of a_{1:T},

=b\sum\limits_{a_{1:T}\in{\mathbb{A^\dagger}}}\nabla \pi_{\theta}(a_{1:T})

And finally, since the gradient of a sum is the sum of the gradients and the probabilities sum to 1,

=b\nabla\sum\limits_{a_{1:T}\in{\mathbb{A^\dagger}}}\pi_{\theta}(a_{1:T})=b\nabla1=0

The baseline trick can be interpreted as measuring a relative reward (w.r.t. the baseline) rather than the raw reward, which reduces the variance of the gradient estimate.
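In code, the change relative to the earlier reinforce_step sketch is essentially one line: weight the summed log-probabilities by R - b instead of R. A cheap and common choice for b (my assumption here, not something the derivation requires) is a running average of the rewards seen so far.

```python
import torch

class BaselineReinforce:
    """REINFORCE update with a running-average baseline subtracted from the
    reward. The decay value and all names here are illustrative choices."""

    def __init__(self, policy, optimizer, decay=0.9):
        self.policy, self.optimizer = policy, optimizer
        self.decay, self.baseline = decay, 0.0

    def update(self, log_probs, reward):
        """log_probs: per-step log pi(a_t | a_{1:t-1}) collected while sampling one episode."""
        advantage = reward - self.baseline                 # relative reward R - b
        loss = -torch.stack(log_probs).sum() * advantage
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
        # update b only after the gradient step, so it stays independent of a_{1:T}
        self.baseline = self.decay * self.baseline + (1 - self.decay) * reward
```

Because the baseline is computed from past episodes and updated only after the gradient step, it never depends on the current episode's actions, which is exactly the independence the derivation above relies on.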

Conclusion


We’ve reviewed the derivation of policy gradients in the sequence setting. As we can see, the method is rather straightforward, but it can be useful in scenarios where standard supervised learning can’t be employed. It should be noted that other variations and extensions have improved this method in useful and interesting ways, and further reading of the recent literature is strongly recommended.

Posted by: cerberusd
