2/3, Dynamic Programming


If a problem satisfies one of the following three conditions, it is very likely solvable with dynamic programming:

  • Find a maximum or minimum: int maxOrMin()
  • Decide whether something is feasible: boolean isWin()
  • Count the number of ways: int numOfWays()

The four elements of DP:

  • State
    The creative part: how to store the results of smaller subproblems

  • Function
    The relation between states: how to compute a larger state from smaller ones

  • Initialization
    The smallest boundary states: the starting point

  • Answer
    The largest state: the end point
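As a small illustration of the four elements (Climbing Stairs is my example, not one from these notes), each element maps directly onto a line of code:

```java
// Climbing Stairs: count the distinct ways to reach step n,
// taking 1 or 2 steps at a time. Annotated with the four DP elements.
public class ClimbStairs {
    public static int numOfWays(int n) {
        if (n <= 1) return 1;
        int[] dp = new int[n + 1];  // State: dp[i] = number of ways to reach step i
        dp[0] = 1;                  // Initialization: the smallest boundary states
        dp[1] = 1;
        for (int i = 2; i <= n; i++) {
            dp[i] = dp[i - 1] + dp[i - 2];  // Function: build a larger state from smaller ones
        }
        return dp[n];               // Answer: the largest state
    }

    public static void main(String[] args) {
        System.out.println(numOfWays(5)); // 8
    }
}
```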

Basic problem types:

  • Coordinate (grid) DP
  • Single-sequence DP
  • Two-sequence DP

What is dynamic programming?

Using nested loops to achieve what recursion with memoized search does.

Some insights from algorithm notes

Dynamic programming problems are more varied and generally trickier, because solving them depends on a clear understanding of the problem's state space and subproblem structure, and on correctly defining and caching subproblem results. Even though the final code may be just a few loops, most DP problems are conceptually harder than brute-force search or binary-tree problems.

A basic tell when first reading a problem: DP problems usually return a boolean / int, ask for a max / min, and do not allow the original input order to be rearranged.

Dynamic programming rests on two important definitions: "optimal substructure" and "overlapping subproblems". Sorting and tree problems use the divide & conquer idea of splitting a problem into several "disjoint" subproblems and solving them recursively. Disjoint subproblems show up most clearly in tree problems: the left subtree is one problem and the right subtree is another. Tree problems therefore focus more on combining disjoint subproblems and handling non-contiguous subproblems.

The central idea of dynamic programming, by contrast, is that the search tree is full of "overlapping subproblems", and the task is to design a caching strategy based on the problem structure so that overlapping subproblems are never recomputed.

According to CLRS, dynamic programming comes in two flavors:

  • top-down with memoization (recursive memoized search)

    • equivalent to plain recursion with a cache added

    • DFS over the search tree, which is closer to how one naturally thinks about a new problem

  • bottom-up (iterative)

    • processes subproblems in "reverse topological order"

    • a subproblem is computed only after every subproblem it depends on has been finished

    • usually relies on a natural notion of "size" that orders the subproblems
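Both flavors can be shown side by side on one problem. Coin Change (fewest coins to make a given amount) is my example here, not one from the text:

```java
import java.util.Arrays;

// Coin Change: fewest coins needed to make `amount`, or -1 if impossible.
public class CoinChange {
    // Top-down with memoization: DFS over the search tree plus a cache.
    static int topDown(int[] coins, int amount, int[] memo) {
        if (amount == 0) return 0;
        if (amount < 0) return -1;
        if (memo[amount] != 0) return memo[amount];  // 0 means "not yet computed"
        int best = Integer.MAX_VALUE;
        for (int c : coins) {
            int sub = topDown(coins, amount - c, memo);
            if (sub >= 0) best = Math.min(best, sub + 1);
        }
        memo[amount] = (best == Integer.MAX_VALUE) ? -1 : best;
        return memo[amount];
    }

    // Bottom-up: iterate amounts in increasing "size" order, so every
    // subproblem is finished before the problems that depend on it.
    static int bottomUp(int[] coins, int amount) {
        int[] dp = new int[amount + 1];
        Arrays.fill(dp, amount + 1);  // sentinel meaning "unreachable"
        dp[0] = 0;
        for (int a = 1; a <= amount; a++)
            for (int c : coins)
                if (c <= a) dp[a] = Math.min(dp[a], dp[a - c] + 1);
        return dp[amount] > amount ? -1 : dp[amount];
    }

    public static void main(String[] args) {
        int[] coins = {1, 2, 5};
        System.out.println(topDown(coins, 11, new int[12])); // 3 (5 + 5 + 1)
        System.out.println(bottomUp(coins, 11));             // 3
    }
}
```

Both return the same answer; the bottom-up version simply replaces the recursion with a loop over amounts in increasing order.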

Dynamic programming has two key properties:

  • overlapping subproblems

    • easy to see: draw the recursion tree, and clearly some subproblems are computed repeatedly

  • optimal substructure

    • a problem is said to have optimal substructure if an optimal solution can be constructed efficiently from optimal solutions of its subproblems

What is optimal / non-optimal substructure? A problem is said to have optimal substructure if an optimal solution can be constructed efficiently from optimal solutions of its subproblems. This property is used to determine the usefulness of dynamic programming and greedy algorithms for a problem.

What are local and global optima? A local optimum of an optimization problem is a solution that is optimal (either maximal or minimal) within a neighboring set of candidate solutions. A global optimum is the optimal solution among all possible solutions, not just those in a particular neighborhood.

How do we prove that a greedy algorithm yields the global optimum? Usually by induction: a greedy algorithm is typically used to solve a problem with optimal substructure if it can be proved by induction that the greedy choice is optimal at each step. Otherwise, provided the problem also exhibits overlapping subproblems, dynamic programming is used. To prove that an optimization problem can be solved using a greedy algorithm, we need to prove that the problem has the following:

  • Optimal substructure property: an optimal global solution contains the optimal solutions of all its subproblems.

  • Greedy choice property: a globally optimal solution can be obtained by greedily selecting a locally optimal choice.

  • Matroids can also, in some cases, be used to mechanically prove that a particular problem can be solved with a greedy approach.
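To make the greedy choice property concrete, activity selection is a standard example (mine, not from these notes) where the locally optimal choice — always take the interval that finishes earliest — is provably globally optimal by an exchange argument:

```java
import java.util.Arrays;

// Activity selection: choose the maximum number of non-overlapping intervals.
// Greedy choice: always take the interval that finishes earliest; an exchange
// (cut-and-paste) argument shows this local choice leads to a global optimum.
public class ActivitySelection {
    public static int maxActivities(int[][] intervals) {
        Arrays.sort(intervals, (a, b) -> Integer.compare(a[1], b[1])); // by finish time
        int count = 0;
        int lastEnd = Integer.MIN_VALUE;
        for (int[] iv : intervals) {
            if (iv[0] >= lastEnd) {  // starts after the previous pick finishes
                count++;
                lastEnd = iv[1];
            }
        }
        return count;
    }
}
```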

You will find yourself following a common pattern in discovering optimal substructure:

  1. You show that a solution to the problem consists of making a choice, such as choosing an initial cut in a rod or choosing an index at which to split the matrix chain. Making this choice leaves one or more subproblems to be solved.

  2. You suppose that for a given problem, you are given the choice that leads to an optimal solution. You do not concern yourself yet with how to determine this choice. You just assume that it has been given to you.

  3. Given this choice, you determine which subproblems ensue and how to best characterize the resulting space of subproblems.

  4. You show that the solutions to the subproblems used within an optimal solution to the problem must themselves be optimal by using a “cut-and-paste” technique. You do so by supposing that each of the subproblem solutions is not optimal and then deriving a contradiction. In particular, by “cutting out” the nonoptimal solution to each subproblem and “pasting in” the optimal one, you show that you can get a better solution to the original problem, thus contradicting your supposition that you already had an optimal solution. If an optimal solution gives rise to more than one subproblem, they are typically so similar that you can modify the cut-and-paste argument for one to apply to the others with little effort.
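The steps above can be sketched on rod cutting, the CLRS example mentioned in step 1 (the code below is my sketch, not CLRS's):

```java
// Rod cutting: prices[i] is the price of a rod piece of length i + 1.
// Find the maximum revenue obtainable by cutting a rod of length n.
public class RodCutting {
    public static int maxRevenue(int[] prices, int n) {
        int[] dp = new int[n + 1];  // dp[len] = best revenue for a rod of length len
        for (int len = 1; len <= n; len++) {
            int best = Integer.MIN_VALUE;
            // The "choice" (step 1): the length i of the first piece cut off.
            // The remaining subproblem (step 3) is a rod of length len - i.
            for (int i = 1; i <= len; i++) {
                best = Math.max(best, prices[i - 1] + dp[len - i]);
            }
            // By the cut-and-paste argument (step 4), dp[len - i] must itself
            // be optimal, so taking the max over all first cuts is correct.
            dp[len] = best;
        }
        return dp[n];
    }
}
```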

How do you explain a DP problem clearly in an interview?

This is a dynamic programming question. Usually, solving and fully understanding a dynamic programming problem is a 4 step process:

  1. Start with the recursive backtracking solution
  2. Optimize by using a memoization table (top-down dynamic programming)
  3. Remove the need for recursion (bottom-up dynamic programming)
  4. Apply final tricks to reduce the time / memory complexity
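The four steps can be walked through on one small problem. House Robber (maximum sum of non-adjacent elements) is my example, not one from the text:

```java
// House Robber: max sum of non-adjacent elements, through all four steps.
public class Robber {
    // Step 1: recursive backtracking — exponential time.
    static int bruteForce(int[] a, int i) {
        if (i < 0) return 0;
        return Math.max(bruteForce(a, i - 1),          // skip house i
                        bruteForce(a, i - 2) + a[i]);  // rob house i
    }

    // Step 2: top-down with a memoization table — O(n) time.
    static int memoized(int[] a, int i, Integer[] memo) {
        if (i < 0) return 0;
        if (memo[i] != null) return memo[i];
        return memo[i] = Math.max(memoized(a, i - 1, memo),
                                  memoized(a, i - 2, memo) + a[i]);
    }

    // Step 3: bottom-up, no recursion. Step 4: since dp[i] depends only on
    // dp[i-1] and dp[i-2], keep two rolling variables -> O(1) extra memory.
    static int bottomUp(int[] a) {
        int prev2 = 0, prev1 = 0;  // dp[i-2], dp[i-1]
        for (int x : a) {
            int cur = Math.max(prev1, prev2 + x);
            prev2 = prev1;
            prev1 = cur;
        }
        return prev1;
    }

    public static void main(String[] args) {
        int[] a = {2, 7, 9, 3, 1};
        System.out.println(bruteForce(a, a.length - 1)); // 12
        System.out.println(bottomUp(a));                 // 12
    }
}
```

All three versions produce the same result; they differ only in run time and memory, which is exactly the progression the four steps describe.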

All solutions presented below produce the correct result, but they differ in run time and memory requirements.
