Reward shaping#

Learning outcomes

The learning outcomes of this chapter are:

  1. Explain how reward shaping can be used to help model-free reinforcement learning methods converge.

  2. Manually apply reward shaping for a given potential function to solve small-scale MDP problems.

  3. Design and implement potential functions to solve medium-scale MDP problems automatically.

  4. Compare and contrast reward shaping with Q-value initialisation.

Overview#

In the previous chapters, we looked at fundamental temporal difference (TD) methods for reinforcement learning. As noted, these methods have some weaknesses, including that rewards are sometimes sparse. This means that there are few states and actions that lead to non-zero rewards. This is problematic because initially, reinforcement learning algorithms behave entirely randomly and will struggle to find good rewards. Recall the example of a UCT algorithm playing Freeway.

In this section, we look at two simple approaches that can improve temporal difference methods:

  1. Reward shaping: If rewards are sparse, we can modify/augment our reward function to reward behaviour that we think moves us closer to the solution.

  2. Q-value Initialisation: We can “guess” good Q-values at the start and initialise \(Q(s,a)\) to be this at the start, which will guide our learning algorithm.

Reward shaping#

Definition – Reward shaping

Reward shaping is the use of small intermediate ‘fake’ rewards given to the learning agent that help it converge more quickly.

In many applications, you will have some idea of what a good solution should look like. For example, in our simple navigation task, it is clear that moving towards the reward of +1 and away from the reward of -1 are likely to be good solutions.

Can we then speed up learning and/or improve our final solution by nudging our reinforcement learner towards this behaviour?

The answer is: Yes! We can modify our reinforcement learning algorithm slightly to give the algorithm some information to help, while also guaranteeing optimality.

This information is known as domain knowledge — that is, stuff about the domain that the human modeller knows about while constructing the model to be solved.

Exercise – Freeway

What would be a good heuristic for the Freeway game to learn how to get the chicken across the freeway?


Exercise – GridWorld

What would be a good heuristic for GridWorld?


Shaped Reward#

In TD learning methods, we update a Q-function when a reward is received; for example, in 1-step Q-learning:

\[Q(s,a) \leftarrow Q(s,a) + \alpha [r + \gamma \max_{a'} Q(s',a') - Q(s,a)]\]

The approach to reward shaping is not to modify the reward function or the received reward \(r\), but to just give some additional reward for some actions:

\[Q(s,a) \leftarrow Q(s,a) + \alpha [r + \underbrace{F(s,s')}_{\text{additional reward}} + \gamma \max_{a'} Q(s',a') - Q(s,a)]\]

The purpose of the function is to give an additional reward \(F(s,s')\) when any action transitions from state \(s\) to state \(s'\). The function \(F : S \times S \to \mathbb{R}\) provides heuristic domain knowledge to the problem that is typically manually programmed.

We say that \(r + F(s,s')\) is the shaped reward for an action.

Further, we say that \(G^{\Phi} = \sum_{i=0}^{\infty} \gamma^i (r_i + F(s_i,s_{i+1}))\) is the shaped reward for the entire episode.

If we define \(F(s,s') > 0\) for states \(s\) and \(s'\), then this provides a small positive reward for transitioning from \(s\) to \(s'\), thus encouraging actions that transition from \(s\) to \(s'\) in future exploitation. If we define \(F(s,s') < 0\) for states \(s\) and \(s'\), then this provides a small negative reward for transitioning from \(s\) to \(s'\), thus discouraging actions that transition like this in future exploitation.
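To make this concrete, here is a minimal sketch of a tabular 1-step Q-learning update that incorporates an arbitrary shaping function \(F(s,s')\). The names (q_table, shaping_function, shaped_q_update) are illustrative only and are not part of the implementation used later in this chapter:

from collections import defaultdict

# Q(s, a) stored as a dictionary, defaulting to 0 for unseen state-action pairs
q_table = defaultdict(float)


def shaped_q_update(state, action, reward, next_state, actions,
                    shaping_function, alpha=0.1, gamma=0.9):
    # The shaped reward is the received reward plus the additional reward F(s, s')
    shaped_reward = reward + shaping_function(state, next_state)

    # Standard 1-step Q-learning update, applied to the shaped reward
    max_next = max(q_table[(next_state, a)] for a in actions)
    delta = shaped_reward + gamma * max_next - q_table[(state, action)]
    q_table[(state, action)] += alpha * delta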

Potential-based Reward Shaping#

Potential-based reward shaping is a particular type of reward shaping with nice theoretical guarantees. In potential-based reward shaping, \(F\) is of the form:

\[F(s,s') = \gamma \Phi(s') - \Phi(s)\]

We call \(\Phi\) the potential function and \(\Phi(s)\) is the potential of state \(s\).

So, instead of defining \(F : S \times S \to \mathbb{R}\), we define \(\Phi : S \to \mathbb{R}\), which is some heuristic measure of the value of each state \(s \in S\).

Theoretical guarantee: this will still converge to the optimal policy under the assumption that all state-action pairs are sampled infinitely often.

This is quite straightforward to show as follows. Consider an episode with shaped reward \(G^{\Phi}\):

\[\begin{split} \begin{array}{lll} G^{\Phi} & = & \sum_{i=0}^{\infty} \gamma^i (r_i + F(s_i,s_{i+1}))\\ & = & \sum_{i=0}^{\infty} \gamma^i (r_i + \gamma\Phi(s_{i+1}) - \Phi(s_i))\\ & = & \sum_{i=0}^{\infty} \gamma^i r_i + \sum_{i=0}^{\infty}\gamma^{i+1}\Phi(s_{i+1}) - \sum_{i=0}^{\infty}\gamma^i\Phi(s_i)\\ & = & G + \sum_{i=0}^{\infty}\gamma^{i}\Phi(s_{i}) - \Phi(s_0) - \sum_{i=0}^{\infty}\gamma^i\Phi(s_i)\\ & = & G - \Phi(s_0) \end{array} \end{split}\]

where \(G\) is the unshaped reward for the entire episode, and \(s_0\) is the starting state of the episode. What this says is that the shaped reward \(G^{\Phi}\) is just the unshaped reward \(G\) minus the potential of the initial state \(s_0\). Because this difference depends only on the starting state and not on the actions that are taken, the shaped Q-function, which we refer to as \(Q^{\Phi}\), can be defined as just \(Q^{\Phi}(s,a) = Q(s,a) - \Phi(s)\). Subtracting \(\Phi(s)\) shifts the Q-values of all actions in state \(s\) by the same amount, so any optimal policy extracted from \(Q^{\Phi}\) will be equivalent to any optimal policy extracted from \(Q\).
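We can check this identity numerically on a small made-up trajectory. The sketch below assumes the potential of the terminal state is 0 (as it is in the implementation later in this chapter), so the telescoping sum works out exactly; the state names, rewards, and potentials are invented purely for illustration:

gamma = 0.9

# A made-up potential function over four states, with zero potential at the terminal state
potentials = {"s0": 0.4, "s1": 0.6, "s2": 0.8, "terminal": 0.0}

# A made-up episode as a list of (state, reward received, next state) steps
trajectory = [("s0", 0.0, "s1"), ("s1", 0.0, "s2"), ("s2", 1.0, "terminal")]

# Unshaped reward for the episode
G = sum(gamma ** i * r for i, (_, r, _) in enumerate(trajectory))

# Shaped reward for the episode, using F(s, s') = gamma * Phi(s') - Phi(s)
G_phi = sum(
    gamma ** i * (r + gamma * potentials[s_next] - potentials[s])
    for i, (s, r, s_next) in enumerate(trajectory)
)

print(G_phi)                 # 0.41 (up to floating-point rounding)
print(G - potentials["s0"])  # 0.41 -- identical to G - Phi(s0)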

However! While it provides guarantees about the end result, potential-based reward shaping may either increase or decrease the time taken to learn. A well-designed potential function decreases the time to convergence.

Example – Potential Reward Shaping for GridWorld#

For GridWorld, we use the Manhattan distance to the goal to define the potential function, normalised by the size of the grid:

\[ \Phi(s) = 1 - \frac{|x(g) - x(s)| + |y(g) - y(s)|}{width + height - 2} \]

in which \(x(s)\) and \(y(s)\) return the \(x\) and \(y\) coordinates of the agent respectively, \(g\) is the goal state, and \(width\) and \(height\) are the width and height of the grid respectively. Note that the coordinates are indexed from 0, so we subtract 2 from the denominator.
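As a quick sketch, we can compute this potential directly for the standard 4 × 3 GridWorld from earlier chapters, whose +1 goal is at (3,2); the helper name phi is illustrative only:

GOAL = (3, 2)        # the +1 goal of the 4x3 GridWorld
WIDTH, HEIGHT = 4, 3


def phi(state):
    # 1 minus the Manhattan distance to the goal, normalised by the
    # maximum possible distance (width + height - 2)
    distance = abs(GOAL[0] - state[0]) + abs(GOAL[1] - state[1])
    return 1 - distance / (WIDTH + HEIGHT - 2)


print(phi((1, 2)))   # 1 - 2/5 = 0.6
print(phi((2, 2)))   # 1 - 1/5 = 0.8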

Even in the very first episode, a greedy strategy such as \(\epsilon\)-greedy will favour actions that move towards states closer to the +1 reward. From state (1,2) with \(\gamma=0.9\), if we go Right, we get:

\[\begin{split} \begin{array}{lll} F((1,2), (2,2)) & = & \gamma\Phi(2,2) - \Phi(1,2)\\ & = & 0.9 \cdot (1 - \frac{1}{5}) - (1 - \frac{2}{5})\\ & = & 0.12 \end{array} \end{split}\]

We can compare the new Q-values for the four different possible moves that could have been taken from (1,2), using \(\alpha=0.1\) and \(\gamma=0.9\):

\[\begin{split} \begin{array}{lllcc} \hline \textbf{Action} & r & F(s,s') & \gamma \max_{a'}Q(s',a') & \textrm{New}~ Q(s,a)\\ \hline Up & 0 & 0.9(1 - \frac{2}{5}) - (1 - \frac{2}{5}) = -0.06 & 0 & -0.006\\ Down & 0 & 0.9(1 - \frac{2}{5}) - (1 - \frac{2}{5}) = -0.06 & 0 & -0.006\\ Right & 0 & 0.9(1 - \frac{1}{5}) - (1 - \frac{2}{5}) = \phantom{-}0.12 & 0 & \phantom{-}0.012\\ Left & 0 & 0.9(1 - \frac{3}{5}) - (1 - \frac{2}{5}) = -0.24 & 0 & -0.024\\ \hline \end{array} \end{split}\]

Thus, we can see that our potential reward function rewards actions that go towards the goal and penalises actions that go away from the goal. Recall that state (1,2) is in the top row, so action Up just leaves us in state (1,2), and similarly Down, because we cannot move through walls.
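We can reproduce the table above with a short calculation. This sketch assumes a deterministic outcome for each action from (1,2), with Up and Down leaving the agent where it is, and all Q-values still 0; phi is the same illustrative helper as before, repeated so the snippet stands alone:

alpha, gamma = 0.1, 0.9


def phi(state):
    # 1 minus the normalised Manhattan distance to the goal at (3, 2)
    return 1 - (abs(3 - state[0]) + abs(2 - state[1])) / 5


# Resulting state for each action from (1, 2); Up and Down leave us in (1, 2)
outcomes = {"Up": (1, 2), "Down": (1, 2), "Right": (2, 2), "Left": (0, 2)}

for action, next_state in outcomes.items():
    F = gamma * phi(next_state) - phi((1, 2))
    # New Q(s, a) after one update, starting from Q(s, a) = 0
    new_q = 0 + alpha * (0 + F + gamma * 0 - 0)
    print(f"{action}: F = {F:.2f}, new Q = {new_q:.3f}")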

But! It will not always work. Compare states (0,0) and (0,1). Our potential function will reward (0,1) because it is closer to the goal, but we know from our value iteration example that (0,0) is a higher-value state than (0,1). This is because our potential function does not consider the negative reward.

In practice, it is non-trivial to derive a perfect potential function – it is the same problem as deriving a perfect search heuristic. If we could do this, we would not need to use reinforcement learning at all – we could just do a greedy search guided by the potential function.

Implementation#

To implement potential-based reward shaping, we need to first implement a potential function. We implement potential functions as subclasses of PotentialFunction. For the GridWorld example, the potential function is 1 minus the normalised distance from the goal:

from potential_function import PotentialFunction
from gridworld import GridWorld


class GridWorldPotentialFunction(PotentialFunction):
    def __init__(self, mdp):
        self.mdp = mdp

    def get_potential(self, state):
        if state != GridWorld.TERMINAL:
            # Use a reference point just beyond the goal corner of the grid
            goal = (self.mdp.width, self.mdp.height)
            x = 0
            y = 1
            # 1 minus the normalised Manhattan distance to the goal,
            # scaled by 0.1 to keep the shaping rewards small
            return 0.1 * (
                1 - ((goal[x] - state[x] + goal[y] - state[y]) / (goal[x] + goal[y]))
            )
        else:
            # The terminal state has no potential
            return 0.0
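As a quick usage sketch (assuming GridWorld() constructs the default 4 × 3 grid used in earlier chapters), we can query the potential of individual states. Note that this implementation scales the potential by 0.1 and normalises slightly differently from the formula above, so the values are smaller than \(\Phi\), but they still increase as the agent gets closer to the goal:

mdp = GridWorld()  # assumed: the default 4x3 grid from earlier chapters
potential = GridWorldPotentialFunction(mdp)

print(potential.get_potential((1, 2)))  # approximately 0.043
print(potential.get_potential((2, 2)))  # approximately 0.057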

Reward shaping for Q-learning is then a simple extension of the QLearning class, overriding the get_delta method:

from qlearning import QLearning


class RewardShapedQLearning(QLearning):
    def __init__(self, mdp, bandit, potential, qfunction):
        super().__init__(mdp, bandit, qfunction=qfunction)
        self.potential = potential

    def get_delta(self, reward, state, action, next_state, next_action):
        q_value = self.qfunction.get_q_value(state, action)
        next_state_value = self.state_value(next_state, next_action)
        # Additional reward F(s, s') = gamma * Phi(s') - Phi(s)
        state_potential = self.potential.get_potential(state)
        next_state_potential = self.potential.get_potential(next_state)
        potential = self.mdp.discount_factor * next_state_potential - state_potential
        # Standard TD error, but using the shaped reward r + F(s, s')
        delta = reward + potential + self.mdp.discount_factor * next_state_value - q_value
        return delta

We can run this on a GridWorld example with more states, to make the problem harder:

from qtable import QTable
from qlearning import QLearning
from reward_shaped_qlearning import RewardShapedQLearning
from gridworld_potential_function import GridWorldPotentialFunction
from q_policy import QPolicy
from multi_armed_bandit.epsilon_greedy import EpsilonGreedy


mdp = GridWorld(width = 15, height = 12, goals = [((14,11), 1), ((13,11), -1)])
qfunction = QTable()
potential = GridWorldPotentialFunction(mdp)
RewardShapedQLearning(mdp, EpsilonGreedy(), potential, qfunction).execute(episodes=200)
policy = QPolicy(qfunction)
mdp.visualise_q_function(qfunction)
mdp.visualise_policy(policy)
reward_shaped_rewards = mdp.get_rewards()

Now, we compare this with Q-learning without reward shaping:

mdp = GridWorld(width = 15, height = 12, goals = [((14,11), 1), ((13,11), -1)])
qfunction = QTable()
QLearning(mdp, EpsilonGreedy(), qfunction).execute(episodes=200)
policy = QPolicy(qfunction)
mdp.visualise_q_function(qfunction)
mdp.visualise_policy(policy)
q_learning_rewards = mdp.get_rewards()

If we plot the average episode length during training, we see that reward shaping reduces the length of the early episodes because it has knowledge nudging it towards the goal:

from tests.plot import Plot

Plot.plot_episode_length(
    ["Tabular Q-learning", "Reward shaping"],
    [q_learning_rewards, reward_shaped_rewards],
)

Q-value initialisation#

An approach related to reward shaping is Q-value initialisation. Recall that TD learning methods can start with an arbitrary Q-function. The closer our initial Q-values are to the optimal Q-values, the quicker learning will converge.

Imagine if we happened to initialise our Q-values to the optimal Q-values. Learning would converge immediately!

Q-value initialisation is similar to reward shaping: we use heuristics to assign higher values to ‘better’ states. If we just define \(\Phi(s) = V_0(s)\), then they are equivalent. In fact, if our potential function is static (the definition does not change during learning), then Q-value initialisation and reward shaping are equivalent[1].
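As a minimal sketch of this idea, we can pre-populate a simple dictionary-based Q-table so that every \(Q(s,a)\) starts at \(\Phi(s)\) rather than 0. This does not use the QTable class from the implementation above; the helper names (and the assumption that the obstacle is at (1,1)) are illustrative only:

def phi(state):
    # The GridWorld potential: 1 minus the normalised Manhattan distance to (3, 2)
    return 1 - (abs(3 - state[0]) + abs(2 - state[1])) / 5


def initialise_q_values(states, actions, potential):
    # Initialise each Q(s, a) to the potential of s rather than to 0
    return {(state, action): potential(state) for state in states for action in actions}


# All states of the 4x3 GridWorld, assuming the obstacle is at (1, 1)
states = [(x, y) for x in range(4) for y in range(3) if (x, y) != (1, 1)]
actions = ["Up", "Down", "Left", "Right"]

q_table = initialise_q_values(states, actions, phi)
print(q_table[((1, 2), "Right")])  # 0.6 instead of 0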

Example – Q-value Initialisation in GridWorld#

Using the Manhattan-distance potential function from before, we can define an initial Q-function for state (1,2) as follows:

\[\begin{split} \begin{array}{llllr} Q((1,2), Up) & = & 0.9(1 - \frac{2}{5}) - (1 - \frac{2}{5}) & = & -0.06\\ Q((1,2), Down) & = & 0.9(1 - \frac{2}{5}) - (1 - \frac{2}{5}) & = & -0.06\\ Q((1,2), Right) & = & 0.9(1 - \frac{1}{5}) - (1 - \frac{2}{5}) & = & 0.12\\ Q((1,2), Left) & = & 0.9(1 - \frac{3}{5}) - (1 - \frac{2}{5}) & = & -0.24 \end{array} \end{split}\]

Once we start learning over episodes, we will select actions with a higher heuristic value, and because we start closer to the optimal Q-function, learning will converge faster. As with reward shaping though, this depends entirely on having a good potential function! A poor potential function will give inaccurate initial Q-values, from which learning may take longer to converge.

Takeaways#

Takeaways

  • A weakness of model-free methods is that they spend a lot of time exploring at the start of learning. It is not until they find some rewards that learning really begins. This is particularly problematic when rewards are sparse.

  • Reward shaping takes in some domain knowledge that “nudges” the learning algorithm towards more positive actions.

  • Q-value initialisation is a “guess” of the initial Q-values to guide early exploration.

  • Reward shaping and Q-value initialisation are equivalent if our potential function is static.

  • Potential-based reward shaping guarantees that learning will converge to the same policy as it would without reward shaping.