Are DP problems backtracking with memoization?

Top-down, Bottom-up, Divide & Conquer, Dynamic Programming, Caching (Memoization), Branch-and-Bound, Greedy

Transcript

1 2.2 Design paradigms: top-down, bottom-up, divide & conquer, dynamic programming, caching (memoization), branch-and-bound, greedy

2 Top-down: break down the given problem into sub-steps; break down the sub-steps into sub-sub-steps; break down the sub-sub-steps further, and so on.

3 Top-Down: when implementing a sub-step, use its sub-sub-steps as black boxes (specification!). Process the sub-steps in order of fewest cross-references. Avoid side effects.


5 Bottom-up: solve partial problems and combine them into an overall solution. Possible even with incomplete specifications. Better test suites can be built in parallel with development (design & implementation). Code reuse!?


7 Divide & Conquer: given a problem of size n. Divide: break the problem recursively into k sub-problems of size n/k. Conquer: solve small problems directly. Combine the partial solutions to form the overall solution.

8 Divide & Conquer: mostly k = 2. Recursive implementation. Intuitively easy to understand (the actual algorithm is hidden in the procedure calls). Effort described by recurrence equations (master theorem).

9 Find MinMax iteratively
MinMax(A[1..n])
  min ← 1; max ← 1; i ← 2
  while i ≤ n do
    if A[i] < A[min] then min ← i
    if A[i] > A[max] then max ← i
    i ← i + 1
  return (A[min], A[max])
T(n) = 2(n − 1) = 2n − 2 comparisons (= 3n − 3 operations counting the increments)

10 Find MinMax D&C
MinMax(A[a..b])
  if b − a ≤ 1 then
    return (Min(A[a], A[b]), Max(A[a], A[b]))
  else
    (u1, v1) ← MinMax(A[a..(a + b)/2])
    (u2, v2) ← MinMax(A[(a + b)/2 + 1..b])
    return (Min(u1, u2), Max(v1, v2))

11 Find MinMax D&C
T(n) = 1 for n = 2; T(n) = 2T(n/2) + 2 for n > 2
T(n) = 2T(n/2) + 2 = 4T(n/4) + 4 + 2 = ... = (n/2)·T(2) + (n − 2) = n/2 + n − 2 = 3n/2 − 2
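The divide & conquer MinMax of slide 10 translates directly to Python. This is a minimal sketch; the function name min_max and the 0-based index handling are my own choices, not from the slides:

```python
def min_max(a, lo=None, hi=None):
    """Return (minimum, maximum) of a[lo..hi] by divide & conquer."""
    if lo is None:                         # first call: whole array
        lo, hi = 0, len(a) - 1
    if hi - lo <= 1:                       # base case: one or two elements
        return (min(a[lo], a[hi]), max(a[lo], a[hi]))
    mid = (lo + hi) // 2
    u1, v1 = min_max(a, lo, mid)           # conquer left half
    u2, v2 = min_max(a, mid + 1, hi)       # conquer right half
    return (min(u1, u2), max(v1, v2))      # combine partial solutions
```

As on the slide, each combine step costs two comparisons, which is where the 3n/2 − 2 bound of slide 11 comes from.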


13 Dynamic programming: reduce the given problem to smaller sub-problems; solve the sub-problems; save the sub-solutions in a table; combine the sub-solutions into the total solution.

14 Dynamic programming, typical pattern: Wanted: the best solution L(1, n) for the range 1 to n. Find optimal partial solutions L(1, k) and L(k+1, n) for k = 1, ..., n − 1. Find the best combination of L(1, k) and L(k+1, n).

15 Dynamic programming: in contrast to divide & conquer, no top-down decomposition is carried out but a bottom-up combination. Iterative implementation. Intermediate results are stored in a table, thereby avoiding double calculations.

16 Example matrix multiplication: A = [aij] ∈ R^(p×q), B = [bjk] ∈ R^(q×r), C = [cik] = A·B ∈ R^(p×r), with cik = Σj aij·bjk. To calculate C, p·q·r scalar multiplications are necessary.

17 Example matrix multiplication: A ∈ R^(p×q), B ∈ R^(q×r), C ∈ R^(r×s). D = (A·B)·C = A·(B·C). (A·B)·C takes p·q·r + p·r·s multiplications; A·(B·C) takes q·r·s + p·q·s multiplications.

18 Example matrix multiplication: A ∈ R^(10×1), B ∈ R^(1×10), C ∈ R^(10×1). D = (A·B)·C = A·(B·C). (A·B)·C: 10·1·10 + 10·10·1 = 200 multiplications; A·(B·C): 1·10·1 + 10·1·1 = 20 multiplications.

19 Example matrix multiplication: Ai ∈ R^(r[i−1]×r[i]) for i = 1, ..., n. M(1, n): minimum number of scalar multiplications to calculate B = A1·A2·...·An. M(i, j): minimum number of multiplications to calculate Ai·...·Aj.

20 Example matrix multiplication: M(1, n) = min over 1 ≤ k < n of ( M(1, k) + M(k+1, n) + r[0]·r[k]·r[n] )

21 MatrixMultiply()
MatrixMultiply(r[0..n])
  for i ← 1 to n do
    M[i, i] ← 0
  for k ← 1 to n − 1 do
    for i ← 1 to n − k do
      min ← i; val ← M[i+1, i+k] + r[i−1]·r[i]·r[i+k]
      for j ← i+1 to i+k−1 do
        if M[i, j] + M[j+1, i+k] + r[i−1]·r[j]·r[i+k] < val then
          min ← j; val ← M[i, j] + M[j+1, i+k] + r[i−1]·r[j]·r[i+k]
      M[i, i+k] ← val
  return M[1, n]

22 MatrixMultiply(): the first loop (M[i, i] ← 0 for i = 1 to n) is the initialization.

23 MatrixMultiply(): k in the outer loop is the length index of the sub-chains considered.

24 MatrixMultiply(): the loops over i and j calculate M[i, i+k].

25 Computational effort:
Inner loop: for j ← i+1 to i+k−1 → O((i+k−1) − i − 1) = O(k − 2) = O(k)
Middle loop: for i ← 1 to n−k → O((n−k)·O(k)) = O(nk − k²) = O(nk)
Outer loop: for k ← 1 to n−1 → O(n·O(nk)) = O(n³)
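The MatrixMultiply() pseudocode of slides 21–24 can be sketched in Python. The function name matrix_chain and the 1-based table layout are my own choices; the fill order (by chain length k, as on slide 23) follows the pseudocode:

```python
def matrix_chain(r):
    """r[0..n]: matrix A_i has dimensions r[i-1] x r[i].

    Returns M(1, n), the minimum number of scalar multiplications
    needed to compute the product A_1 * A_2 * ... * A_n.
    """
    n = len(r) - 1
    # M[i][j]: minimal multiplications for A_i ... A_j (1-based indices)
    M = [[0] * (n + 1) for _ in range(n + 1)]   # M[i][i] = 0 (initialization)
    for k in range(1, n):                       # k: length index of the sub-chain
        for i in range(1, n - k + 1):           # compute M[i][i+k]
            M[i][i + k] = min(
                # split A_i..A_{i+k} after A_j: left part + right part + combine
                M[i][j] + M[j + 1][i + k] + r[i - 1] * r[j] * r[i + k]
                for j in range(i, i + k)
            )
    return M[1][n]
```

On the 10×1, 1×10, 10×1 example of slide 18 (r = [10, 1, 10, 1]) this yields 20, matching the cheaper parenthesization A·(B·C).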


27 Memoization: efficiency increase in dynamic programming by saving the intermediate results in a table; avoids double calculations. A general concept (worthwhile whenever the table access is faster than the recalculation).

28 Fibonacci (for the last time)
MemoFibonacci(n)
  F ← new F[0..n]
  for i ← 0 to n do F[i] ← −1
  return Fibo(F, n)
Fibo(F, n)
  if F[n] < 0 then
    if n > 1 then F[n] ← Fibo(F, n−1) + Fibo(F, n−2)
    else F[n] ← n
  return F[n]
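The MemoFibonacci/Fibo pair translates almost directly to Python. A minimal sketch, assuming −1 is the "not yet computed" marker, as the test F[n] < 0 on the slide suggests:

```python
def memo_fibonacci(n):
    """Memoized Fibonacci, mirroring the slide's MemoFibonacci/Fibo pair."""
    F = [-1] * (n + 1)                    # -1 marks "not yet computed"

    def fibo(k):
        if F[k] < 0:                      # compute only on the first request
            F[k] = fibo(k - 1) + fibo(k - 2) if k > 1 else k
        return F[k]                       # otherwise: table access

    return fibo(n)
```

Each value F[k] is computed exactly once, so the exponential recursion of the naive version collapses to linear time.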


30 Branch-and-Bound: used when searching for an optimal solution in an infinite or very large discrete set. Dividing the problem set into two or more parts (branching) yields the branching tree. Last In First Out ⇔ depth-first search; First In First Out ⇔ breadth-first search.

31 Branch-and-Bound: recognize suboptimal solutions at an early stage through lower/upper bounds. E.g., after finding a feasible solution, its value can serve as a bound. Example: finding the nearest neighbors.

32 Satisfiability of Boolean formulas: given a Boolean formula with variables X1, X2, ... and the operators ¬, ∧ and ∨. Is there an assignment of the Xi with true or false so that the formula is true? Example: X1 ∧ (X2 ∨ ¬X1). Branching is based on the value of a variable (i.e., division into two sets); inadmissible combinations limit the search space. Important: a clever choice of variables!
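Slide 32's branching can be sketched as a backtracking search over variable assignments that cuts a subtree as soon as some clause is already falsified (the "inadmissible combinations"). The list-of-clauses CNF encoding (a positive int v for Xv, −v for ¬Xv) and the name sat_branch are assumptions, not from the slides; a real solver would also choose the branching variable cleverly, as the slide stresses:

```python
def sat_branch(clauses, n_vars):
    """Branch-and-bound style SAT search over a CNF formula.

    clauses: list of clauses, each a list of nonzero ints; literal v
    means X_v, -v means not X_v (an assumed DIMACS-like encoding).
    Returns a satisfying assignment {var: bool} or None.
    """
    def falsified(assign):
        # Prune: some clause has every literal already assigned to false.
        return any(
            all(abs(l) in assign and assign[abs(l)] != (l > 0) for l in c)
            for c in clauses
        )

    def branch(assign, v):
        if falsified(assign):             # bound: cut this subtree early
            return None
        if v > n_vars:                    # all variables assigned, no clause falsified
            return assign
        for value in (True, False):       # branching on the value of X_v
            result = branch({**assign, v: value}, v + 1)
            if result is not None:
                return result
        return None

    return branch({}, 1)
```

The two recursive calls per variable are exactly the "division into two sets" of the slide; the falsified() check is the bound that keeps the tree from growing to all 2^n leaves.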


34 Greedy: heuristically select the best option from the current point of view. Generally leads only to a locally optimal solution; in special cases optimality can be guaranteed. Example: the knapsack problem is solved optimally for divisible goods (fractional knapsack) but only locally optimally for indivisible goods (0-1 knapsack).

35 Fractional Knapsack: greedy can select fractions. There are three items with the following sizes: A (3 units), B (5 units), C (7 units). The capacity of the knapsack is 8 units. Greedy first selects item C; the remaining capacity is 1 unit. Greedy then picks 1/5 of B. The knapsack is optimally filled.

36 0-1 Knapsack: greedy can only select whole items. Same items A (3 units), B (5 units), C (7 units); the capacity of the knapsack is 8 units. Greedy first selects item C; the remaining capacity is 1 unit. Greedy can no longer select an item, so the knapsack is not filled optimally (A + B would fill all 8 units).
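The two knapsack walkthroughs can be sketched with one greedy routine. Selecting by size alone (largest first) matches the slides' example; note that the usual fractional knapsack greedily maximizes value per unit of size, which the slides' size-only items reduce to. The function name and the fractional flag are my own:

```python
def greedy_fill(sizes, capacity, fractional):
    """Greedy fill: repeatedly take the largest remaining item.

    With fractional=True a final partial item is allowed (fractional
    knapsack); with fractional=False items are all-or-nothing (0-1).
    Returns the total filled size.
    """
    filled = 0.0
    for size in sorted(sizes, reverse=True):   # greedy: largest item first
        free = capacity - filled
        if size <= free:
            filled += size                     # item fits completely
        elif fractional and free > 0:
            filled += free                     # take only a fraction of it
    return filled
```

On the slides' instance (sizes 3, 5, 7; capacity 8) the fractional variant fills all 8 units (C plus 1/5 of B), while the 0-1 variant gets stuck at 7 after taking C.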
