Problem Statement
You are given an m x n grid of non-negative integers. Starting at the top-left cell (0, 0), you must reach the bottom-right cell (m-1, n-1) by moving only right or down at each step. Your task is to return the minimum possible sum of the values along any valid path, including both the starting and ending cells.
Constraints:
- m == grid.length
- n == grid[i].length
- $1 \leq m, n \leq 200$
- $0 \leq grid[i][j] \leq 200$
Examples
| Input | Output | Explanation |
|---|---|---|
| [[2,1,3],[4,0,2],[1,5,1]] | 6 | One optimal path is 2 → 1 → 0 → 2 → 1, which totals 6. |
| [[1,2,1,3],[4,1,0,2]] | 6 | Dropping down at the right moment allows using 0, giving total 6. |
| [[5,1,2],[4,3,1],[2,1,8],[0,2,1]] | 13 | The best route avoids the 8 and reaches the end with sum 13. |
Why Recursive Approach is Impractical
A path’s sum depends only on the cells you step on, and every move reduces the remaining distance to the goal, so the problem naturally suggests comparing different routes.
A straightforward brute-force idea is: from each cell, recursively try moving right and down, compute the minimum sum to the end, and take the smaller result. This works, but it repeats the same subproblems many times.
This solution has an exponential, roughly $O(2^{m+n})$ time complexity because at most positions you branch into two choices (right/down), generating a huge number of paths. The space complexity of this solution is $O(m+n)$ for recursion depth (the longest path length), ignoring output.
Because this repeated work is the main bottleneck, we switch to a method that reuses results instead of recomputing them.
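The brute-force recursion described above can be sketched as follows (the function and helper names are illustrative, not part of the original problem statement):

```python
def min_path_sum_bruteforce(grid):
    """Exponential-time recursion: from each cell, try both moves and keep the cheaper."""
    m, n = len(grid), len(grid[0])

    def best(i, j):
        # Base case: the destination contributes only its own value.
        if i == m - 1 and j == n - 1:
            return grid[i][j]
        # Stepping off the grid is invalid; make this branch never win the min.
        if i >= m or j >= n:
            return float("inf")
        # Branch into the two allowed moves: down and right.
        return grid[i][j] + min(best(i + 1, j), best(i, j + 1))

    return best(0, 0)
```

Note that `best(i, j)` is called once for every distinct path prefix reaching `(i, j)`, not once per cell, which is exactly the repeated work the next section eliminates.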
Optimal Solution using Dynamic Programming
We use the idea that to arrive at the cell (i, j), the previous step must be either from (i-1, j) (from above) or (i, j-1) (from left). If we already know the cheapest cost to reach those neighbors, the cheapest cost to reach (i, j) is just the current cell value plus the smaller of the two neighbor costs. This leads to the standard update: grid[i][j] += min(grid[i-1][j], grid[i][j-1]) once the borders are prepared.
Algorithmic Steps
This algorithm is implemented as follows:
- Store the number of rows and columns of the grid in variables `m` and `n`, respectively.
- Iterate through the grid, row by row, using an outer loop over `i`.
  - For each row, iterate through its columns using an inner loop over `j`.
    - When you are at the starting cell `(0, 0)`, leave it unchanged and move to the next iteration.
    - If the cell is in the first row (`i == 0` and `j > 0`), it can only be reached from the left, so add the value of the left cell to it.
    - If the cell is in the first column (`j == 0` and `i > 0`), it can only be reached from above, so add the value of the above cell to it.
    - For all other cells, take the smaller of the accumulated sums from the top and left neighbors and add it to the current cell.
- After filling the grid, return the value in the bottom-right cell, since it represents the minimum path sum to the destination.
The brute-force approach is slow because it repeatedly recomputes the best cost from the same cell. Dynamic programming addresses this issue by storing, for each cell, the minimum path sum required to reach it, ensuring that every cell is solved only once.
Python Implementation
Let’s implement the algorithm as discussed above:
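The steps above translate directly into Python; the function name `min_path_sum` is our choice of identifier:

```python
def min_path_sum(grid):
    """In-place DP: grid[i][j] becomes the minimum path sum to reach (i, j)."""
    m, n = len(grid), len(grid[0])
    for i in range(m):
        for j in range(n):
            if i == 0 and j == 0:
                continue  # starting cell: its cost is just its own value
            elif i == 0:
                grid[i][j] += grid[i][j - 1]  # first row: reachable only from the left
            elif j == 0:
                grid[i][j] += grid[i - 1][j]  # first column: reachable only from above
            else:
                # Interior cell: take the cheaper of the two incoming directions.
                grid[i][j] += min(grid[i - 1][j], grid[i][j - 1])
    return grid[m - 1][n - 1]
```

Running it on the examples from the table above yields 6, 6, and 13, respectively. Because the grid is updated in place, the caller's input is modified; pass a copy if the original values must be preserved.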
Time Complexity
The time complexity of the solution is $O(m \cdot n)$ because each cell in the grid is updated exactly once.
Space Complexity
The space complexity of the solution is $O(1)$ because the input grid is reused to store the running minimum sums; apart from a constant number of loop variables, no auxiliary space is allocated. Note that this comes at the cost of mutating the caller's grid.