This is the tenth post in an article series about MIT's lecture course "Introduction to Algorithms." In this post I will review lecture fifteen, which introduces the concept of Dynamic Programming and applies it to the Longest Common Subsequence problem.

Dynamic programming is a design technique similar to divide and conquer. Divide-and-conquer algorithms partition the problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem. Dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subsubproblems. A dynamic-programming algorithm solves every subsubproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subsubproblem is encountered.

Dynamic programming was systematized by Richard E. Bellman, who began its systematic study in 1955. The word "programming," both here and in linear programming, refers to the use of a tabular solution method and not to writing computer code.

Dynamic programming is typically applied to optimization problems. In such problems there can be many possible solutions. Each solution has a value, and we want to find a solution with the optimal (minimum or maximum) value. We call such a solution an optimal solution, as opposed to the optimal solution, since there may be several solutions that achieve the optimal value.

Dynamic programming can be effectively applied to solve the longest common subsequence (LCS) problem. The problem is stated as follows: given two sequences (strings) x and y, find a maximum-length common subsequence of x and y. Note that a subsequence, unlike a substring, does not have to be contiguous.

For example, given two sequences x = "ABCBDAB" and y = "BDCABA", the longest common subsequences are "BCBA", "BDAB", and "BCAB". As you can see, there can be several optimal solutions.

Lecture fifteen introduces dynamic programming via the longest common subsequence problem. It first gives a brute-force, exponential-time algorithm for solving it. The idea of the algorithm is to check every subsequence of x[1..m] (where m is the length of sequence x) to see whether it is also a subsequence of y[1..n] (where n is the length of sequence y). Each check takes O(n) time, and there are 2^m subsequences of x, so the running time is exponential: O(n·2^m). That is no good for large sequences, so the lecture continues with a simplification: first compute just the length of a longest common subsequence, and then extend the algorithm to find the LCS itself. The simplified algorithm is recursive in nature and repeatedly solves the same subproblems. At this point two hallmarks of dynamic programming are stated:

1. Optimal substructure: an optimal solution to a problem contains optimal solutions to its subproblems.

2. Overlapping subproblems: a recursive solution contains a "small" number of distinct subproblems repeated many times.
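To make the length-only recursion concrete, here is a minimal Python sketch (my own illustration, not code from the lecture). Without caching it recomputes the same subproblems over and over, which is exactly the overlapping-subproblems hallmark in action:

```python
def lcs_length(x, y):
    """Length of a longest common subsequence of x and y (naive recursion)."""
    # Base case: an empty sequence has an LCS of length 0 with anything.
    if not x or not y:
        return 0
    # If the last characters match, they extend a common subsequence.
    if x[-1] == y[-1]:
        return lcs_length(x[:-1], y[:-1]) + 1
    # Otherwise, drop the last character of one sequence or the other
    # and take the better of the two results.
    return max(lcs_length(x[:-1], y), lcs_length(x, y[:-1]))

print(lcs_length("ABCBDAB", "BDCABA"))  # prints 4
```

This version still takes exponential time in the worst case, because the two recursive calls overlap heavily; the next step is to cache the answers.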

As the subproblems are overlapping, the lecture introduces the concept of memoization (note that it's not "memorization"). A better-known word for memoization is caching: the solved subproblems are cached (memoized) so that they are not recomputed over and over again.
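Memoization can be sketched in Python like this (an illustration of the idea, here using the standard library's `functools.lru_cache` as the cache). Each (i, j) subproblem is solved once, so the running time drops from exponential to O(mn):

```python
from functools import lru_cache

def lcs_length(x, y):
    """Length of a longest common subsequence of x and y, memoized."""
    @lru_cache(maxsize=None)
    def c(i, j):
        # c(i, j) is the LCS length of the prefixes x[:i] and y[:j].
        if i == 0 or j == 0:
            return 0
        if x[i - 1] == y[j - 1]:
            return c(i - 1, j - 1) + 1
        return max(c(i - 1, j), c(i, j - 1))

    return c(len(x), len(y))

print(lcs_length("ABCBDAB", "BDCABA"))  # prints 4
```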

The lecture ends by constructing a dynamic programming table for the LCS problem and explaining how to find an LCS from this table.
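The bottom-up table construction, together with the walk back through the table that recovers an actual LCS, can be sketched as follows (my own Python rendering of the standard approach; when there are ties, it recovers one of the several possible answers):

```python
def lcs(x, y):
    """Return one longest common subsequence of x and y."""
    m, n = len(x), len(y)
    # c[i][j] holds the LCS length of the prefixes x[:i] and y[:j].
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])

    # Walk back from c[m][n] to reconstruct one LCS.
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])  # matched character belongs to the LCS
            i -= 1
            j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1  # the value came from the cell above
        else:
            j -= 1  # the value came from the cell to the left
    return "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))  # one of "BCBA", "BDAB", "BCAB"
```

This runs in O(mn) time and uses O(mn) space for the table; if only the length is needed, the space can be reduced to two rows.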

This course is taught from the CLRS book, also known as "Introduction to Algorithms." Chapter 15, called "Dynamic Programming," covers the topics of this lecture. It also explains the assembly-line scheduling problem, the matrix-chain multiplication problem, elements of dynamic programming, and optimal binary search trees.

The next post will be about graphs, greedy algorithms and minimum spanning trees.

Thanks for reading my post. If you enjoyed it and would like to receive my posts automatically, you can subscribe to new posts via the RSS feed or by email: