Happy New Year! I sure enjoyed my holiday. I visited my girlfriend’s city in southern China and spent three happy days with her :).

Over the past two days I have been reading about dynamic programming. This is the first time I have ever systematically studied its principles, so I thought I should write my thoughts down.

When to use dynamic programming? In my opinion, when the problem fits the following requirements:

- It can be divided into a certain number of subproblems.
- The subproblems can be solved in a certain order.
- There are not too many variables.

“Dynamic programming” is really a fancy phrase for solving subproblems in an order. I think it is essentially recursion with memoization. I’ll write solutions to two sample problems here. One is the famous knapsack problem; the other is finding the shortest path in a graph with negative edges.

**Knapsack problem:** Given a “bag” that can only hold items with total weight less than or equal to *W*, and the weights *w* and values *v* of *n* items, how should we pick the items so that we have the largest possible total value in the “bag”?

Here we have two variables with which we divide the problem: the number of items we are choosing from and the largest weight we can carry. We use these two variables because when they are small, the problem is trivial: when we can only choose from 1 item, we have to choose that one; when the weight available is small, we can only choose from “light” items. Another reason is that we can solve the subproblems in an order: from a small number of items up to the whole group, and from 0 weight up to *W*. Understanding this is the spirit of dynamic programming, because it is what guides how we divide the problem into subproblems.

Now we consider whether or not to choose an item. Assume we are looking for the largest possible value for the first i items and weight limit t – call it OPT(i, t) (OPT stands for “optimal”, not optional practical training lol). If the optimal plan includes item i, then OPT(i, t) = OPT(i − 1, t − *w*i) + *v*i. If it does not, OPT(i, t) = OPT(i − 1, t). When t is less than *w*i, only the latter is possible. In this way, we have our recurrence:

OPT(i, t) = MAX(OPT(i − 1, t − *w*i) + *v*i, OPT(i − 1, t))

The rest is easy.
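For concreteness, here is a minimal sketch of the recurrence as a table-filling routine in Python (the function name and the sample items are my own, just for illustration):

```python
def knapsack(weights, values, W):
    """0/1 knapsack: largest total value with total weight <= W."""
    n = len(weights)
    # opt[i][t] = best value using the first i items with weight limit t
    opt = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for t in range(W + 1):
            opt[i][t] = opt[i - 1][t]          # item i not chosen
            if weights[i - 1] <= t:            # item i fits, try choosing it
                opt[i][t] = max(opt[i][t],
                                opt[i - 1][t - weights[i - 1]] + values[i - 1])
    return opt[n][W]

# With weights [2, 3, 4], values [3, 4, 5], and W = 5,
# the best choice is the first two items: value 3 + 4 = 7.
print(knapsack([2, 3, 4], [3, 4, 5], 5))  # → 7
```

Note that row i of the table only depends on row i − 1, exactly as the recurrence says, which is why we can fill the table in order.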

**Shortest Path Problem:** In a graph with negative edges but without negative cycles, find the shortest path from vertex s to t.

(I should note here that MacBook keyboard sucks. It feels so bad.)

We use OPT(i, v) to represent the least weight of a path from vertex v to t using at most i edges. If the optimal path P contains at most i − 1 edges, OPT(i, v) = OPT(i − 1, v). If P contains exactly i edges, its first edge goes from v to some neighbor w, so OPT(i, v) = OPT(i − 1, w) + cost(v, w), minimized over all neighbors w.

Therefore, OPT(i, v) = MIN(OPT(i − 1, v), MIN over all neighbors w of (OPT(i − 1, w) + cost(v, w)))
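This recurrence can be sketched in Python too (this is essentially the Bellman–Ford idea; the edge-list representation and names are my own assumptions):

```python
import math

def shortest_path(n, edges, s, t):
    """Least-weight path from s to t in a graph with n vertices,
    possibly negative edges, but no negative cycles.
    edges: list of (v, w, cost) meaning an edge from v to w."""
    INF = math.inf
    # opt[v] after round i = OPT(i, v): least weight v -> t using at most i edges
    opt = [INF] * n
    opt[t] = 0
    for i in range(1, n):            # a simple path has at most n - 1 edges
        new = opt[:]                 # start from OPT(i - 1, v)
        for v, w, cost in edges:     # try using edge (v, w) as the first edge
            if opt[w] + cost < new[v]:
                new[v] = opt[w] + cost
        opt = new
    return opt[s]

# 0 -> 1 (4), 0 -> 2 (2), 2 -> 1 (-1), 1 -> 3 (3):
# the cheapest 0 -> 3 path is 0 -> 2 -> 1 -> 3 with weight 2 - 1 + 3 = 4.
print(shortest_path(4, [(0, 1, 4), (0, 2, 2), (2, 1, -1), (1, 3, 3)], 0, 3))  # → 4
```

Because there are no negative cycles, some optimal path uses at most n − 1 edges, so n − 1 rounds are enough.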

To be honest, I don’t quite understand how this works. I can see that it does the job, but I can’t yet form an intuition for the algorithm in my mind. Anyway, I think with practice I will be able to master dynamic programming.
