Both the greedy approach and dynamic programming are used for solving optimisation problems. However, we often need DP, since a greedy algorithm cannot always guarantee an optimal solution.

**Consider Knapsack problem**

We could solve it by brute force, generating all 2^n possibilities of choosing a subset of n items and selecting the best one. But the time complexity of this solution is exponential, \Omega(2^n) (**Big Omega** denotes a lower bound).
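The exhaustive search can be sketched with a bitmask over subsets. The weights, values, and capacity used below are a made-up instance purely for illustration:

```cpp
#include <algorithm>
#include <vector>

// Brute-force 0/1 knapsack: try every one of the 2^n subsets of items.
int brute_force_knapsack(const std::vector<int>& weight,
                         const std::vector<int>& value, int capacity) {
    int n = weight.size(), best = 0;
    for (int mask = 0; mask < (1 << n); mask++) {  // bit i of mask = take item i or not
        int w = 0, v = 0;
        for (int i = 0; i < n; i++) {
            if (mask & (1 << i)) { w += weight[i]; v += value[i]; }
        }
        if (w <= capacity) best = std::max(best, v);
    }
    return best;
}
```

Since each of the n items is either in or out of a subset, the outer loop runs 2^n times, which is why this approach is exponential.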

A greedy algorithm also fails to find an optimal solution for the 0/1 Knapsack problem.
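To see why, here is a sketch of the usual greedy heuristic (pick items by value/weight ratio) on a small made-up instance, capacity 50 with (weight, value) pairs (10, 60), (20, 100), (30, 120). The greedy choice returns 160, while the optimal subset {(20, 100), (30, 120)} is worth 220:

```cpp
#include <algorithm>
#include <vector>

// Greedy 0/1 knapsack by value/weight ratio -- NOT optimal in general.
int greedy_knapsack(std::vector<std::pair<int,int>> items /* (weight, value) */,
                    int capacity) {
    // Consider items in decreasing order of value-per-unit-weight.
    std::sort(items.begin(), items.end(), [](const auto& a, const auto& b) {
        return (double)a.second / a.first > (double)b.second / b.first;
    });
    int value = 0;
    for (const auto& [w, v] : items) {
        if (w <= capacity) { capacity -= w; value += v; }  // local choice
    }
    return value;
}
```

Because items cannot be taken fractionally, committing to the best-ratio item first can block a better combination later.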

For a dynamic programming solution, see: Knapsack Problem (Dynamic Programming)
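As a preview, the standard recurrence — the best value either skips item i or takes it and solves the smaller capacity — can be sketched with a one-dimensional table (an illustrative sketch, not the linked article's exact code):

```cpp
#include <algorithm>
#include <vector>

// 0/1 knapsack DP sketch: dp[c] = best value achievable with capacity c.
int knapsack_dp(const std::vector<int>& weight,
                const std::vector<int>& value, int capacity) {
    std::vector<int> dp(capacity + 1, 0);
    for (size_t i = 0; i < weight.size(); i++) {
        // Iterate capacity downwards so each item is used at most once.
        for (int c = capacity; c >= weight[i]; c--) {
            dp[c] = std::max(dp[c], dp[c - weight[i]] + value[i]);
        }
    }
    return dp[capacity];
}
```

On the same instance as above (capacity 50, items (10, 60), (20, 100), (30, 120)) this yields the optimal 220.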

###### Dynamic Programming

DP algorithms use the fact that an optimal solution to a problem of size **n** is composed of optimal solutions to problems of size **n′ < n**, *and use this to build the solution bottom up, i.e. from the smallest problems to the required size.*

###### Greedy Algorithm

A greedy algorithm looks at a local point and makes a choice based on the data at that point. For some problems, this local choice leads to an optimal solution.
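Making change with canonical coin denominations (25, 10, 5, 1) is a classic case where the local choice, always take the largest coin that fits, happens to be globally optimal. The function below is an illustrative sketch (the denominations are an assumption, and greedy is not optimal for arbitrary coin systems):

```cpp
#include <vector>

// Greedy coin change for canonical denominations, passed sorted descending.
// Repeatedly takes as many of the largest remaining coin as possible.
int min_coins_greedy(int amount, const std::vector<int>& coins) {
    int count = 0;
    for (int c : coins) {
        count += amount / c;  // local choice: use the biggest coin greedily
        amount %= c;
    }
    return count;
}
```

For example, 63 = 25 + 25 + 10 + 1 + 1 + 1 uses six coins, which is optimal for this coin system.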

**Examples of Dynamic Programming and Greedy Algorithm**

Dijkstra’s algorithm

Uses a greedy approach to find the shortest paths between nodes in a graph, which may represent, for example, road networks.

Bellman-Ford algorithm

Uses dynamic programming to compute shortest paths from a single source vertex to all of the other vertices in a weighted digraph.
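A minimal sketch of the Bellman-Ford relaxation loop (names and graph layout are illustrative): after pass i, `dist[v]` holds the shortest path from the source that uses at most i edges, which is exactly the smaller subproblem the DP reuses.

```cpp
#include <climits>
#include <vector>

struct Edge { int from, to, weight; };

// Bellman-Ford sketch: relax every edge n-1 times.
// Subproblem: shortest distance using at most `pass` edges.
std::vector<long long> bellman_ford(int n, const std::vector<Edge>& edges, int src) {
    const long long INF = LLONG_MAX / 4;  // large sentinel, safe from overflow
    std::vector<long long> dist(n, INF);
    dist[src] = 0;
    for (int pass = 0; pass < n - 1; pass++) {
        for (const auto& e : edges) {
            if (dist[e.from] + e.weight < dist[e.to])
                dist[e.to] = dist[e.from] + e.weight;  // relax the edge
        }
    }
    return dist;
}
```

Unlike Dijkstra's greedy selection, this works even with negative edge weights (as long as there are no negative cycles).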

##### Dynamic Programming (Two approaches)

DP ~ “careful brute force”

DP ~ subproblems + reuse

**Bottom up** (Tabulation) : First solve the smaller subproblems, then solve larger subproblems using the solutions to the smaller ones.

**Top-down** (Memoization) : It is similar to plain recursion, but stores the solutions to subproblems. It solves the problem in a natural manner (recursion) and checks whether the solution to a subproblem has already been calculated before.

###### Example – Fibonacci

**Fibonacci sequence** is a sequence such that each number is the sum of the two preceding ones, starting from 0 and 1.

0 1 1 2 3 5 8 13 21… and so on

**Bottom Up (Tabulation) approach**

In this approach, we successively build the solution from the results of smaller problems. We solve smaller problems and save their results (tabulation), then compute the solution of the larger problem using the stored results.

Optimal substructure:

F_{n} = F_{n-1} + F_{n-2}

**Code Implementation**

```
#include <vector>

int solve(int N) {
    if (N <= 1) return N;        // base cases: F(0) = 0, F(1) = 1
    std::vector<int> dp(N + 1);  // dp[i] holds the i-th Fibonacci number
    dp[0] = 0;
    dp[1] = 1;
    for (int i = 2; i <= N; i++) {
        dp[i] = dp[i-1] + dp[i-2];  // build up from smaller subproblems
    }
    return dp[N];
}
```

**Top down** (**Memoization**) **approach**

This approach is similar to the **Bottom-Up** approach, but here we start from the large problem and save the solutions to the subproblems we encounter as we proceed. Thus, when we run into the same subproblem again, we can use our saved solution instead of re-calculating it. This way we compute each subproblem exactly once.

**Code Implementation**

```
const int MAXN = 1000;       // upper bound on N, fixed at compile time
int dp[MAXN + 1] = {0};      // 0 marks "not computed yet" (F(n) > 0 for n >= 1)

int solve(int N) {
    if (N <= 1) {
        return N;            // base cases: F(0) = 0, F(1) = 1
    }
    if (dp[N] == 0) {        // compute each subproblem only once
        dp[N] = solve(N-1) + solve(N-2);
    }
    return dp[N];
}
```
```