
70. Climbing Stairs

Counting paths on a staircase is a classic way to practice turning a simple scenario into a clean algorithmic solution. Even though the rules are minimal (only 1-step or 2-step moves), the number of possible move sequences grows quickly as the staircase gets taller. This makes the problem a great example of why smart counting (rather than generating every path) matters in coding interviews and real programs.

Problem statement

You’re climbing a staircase with n steps. On each move, you can climb either 1 step or 2 steps. How many different sequences of moves are there that will get you exactly to the top?

The tricky part is that the number of valid sequences grows quickly, and a naive “try all possibilities” approach ends up recomputing the same intermediate counts many times, which makes it far slower than it needs to be.

Constraints:

  • 1 ≤ n ≤ 45

Examples

Input (n)   Output (ways)   Explanation (all sequences using 1 or 2)
1           1               1
2           2               1+1, 2
3           3               1+1+1, 1+2, 2+1
4           5               1+1+1+1, 1+1+2, 1+2+1, 2+1+1, 2+2

Why trying every path doesn’t scale

A naive solution treats the problem like a branching choice at every step. From any stair, you can either:

  • take 1 step, or
  • take 2 steps.

So the algorithm explores both options repeatedly, creating a decision tree of possibilities. As n grows, that tree grows exponentially because each position spawns up to two more calls.

The bigger issue is repeated work. Different paths often lead you to the same step, and brute force recomputes the answer from that step again and again. For example, the question:

“How many ways are there to climb from step 10 to the top?”

might get computed from multiple branches that all eventually land on step 10. That duplication compounds across the tree, and runtime quickly becomes dominated by solving the same subproblems repeatedly.

This is why brute force becomes impractical even for moderately large n.
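As a sketch, the naive recursion described above might look like this (the function name is illustrative). Each call branches into two more calls, so the call tree grows exponentially, and subproblems like "ways from step 10" get recomputed many times:

```python
def climb_stairs_brute(n: int) -> int:
    # Base cases: 1 way to finish a 1-step staircase, 2 ways for a 2-step one.
    if n <= 2:
        return n
    # Branch on the final move: it was either a 1-step or a 2-step.
    # Both branches recurse without sharing results, which is the
    # repeated work that makes this approach exponential.
    return climb_stairs_brute(n - 1) + climb_stairs_brute(n - 2)
```

Even at moderate n this becomes slow: the number of calls itself grows like the answer, which is exponential in n.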

Optimized approach using Dynamic Programming

Instead of exploring every possible path, Dynamic Programming counts the number of ways by reusing results.

To reach step i, the final move must be:

  • from i - 1 (a 1-step move), or
  • from i - 2 (a 2-step move)

So the recurrence is:

ways(i) = ways(i - 1) + ways(i - 2)

To do this in O(1) space, you do not store a full list of results for every step. Instead, you keep only the two most recent counts:

  • the number of ways to reach the previous step, and
  • the number of ways to reach the step before that.

For each new step, you compute the number of ways by adding those two values, since any way to reach the current step must come from one of those two positions. After computing the new value, you move forward by updating your two stored values: the previous step becomes the “two steps back,” and the newly computed value becomes the new “previous step.”

Because you always store only two numbers, the space stays constant, and because you process each step once, the runtime stays linear.

Solution steps

  1. If n is 1, return 1.
  2. Initialize two variables for the base cases:
    1. prev2 = 1 (ways to reach step 1)
    2. prev1 = 2 (ways to reach step 2)
  3. For each step from 3 through n:
    1. Compute current as prev1 + prev2 (ways to reach the current step)
    2. Shift forward:
      1. set prev2 to the old prev1
      2. set prev1 to current
  4. Return prev1, which holds the number of ways to reach step n.

Take a look at the illustration below to understand the solution more clearly.


Python Implementation

Let’s look at the code for the solution we just discussed.
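A minimal Python sketch of the iterative, O(1)-space approach, following the solution steps above (variable names prev1 and prev2 match the steps):

```python
def climb_stairs(n: int) -> int:
    # Step 1: smallest staircase has exactly one sequence (a single 1-step).
    if n == 1:
        return 1
    # Step 2: base cases for the recurrence.
    prev2 = 1  # ways to reach step 1
    prev1 = 2  # ways to reach step 2
    # Step 3: build up from step 3 to step n, keeping only two values.
    for _ in range(3, n + 1):
        current = prev1 + prev2   # ways(i) = ways(i-1) + ways(i-2)
        prev2, prev1 = prev1, current  # shift the window forward
    # Step 4: prev1 now holds ways(n).
    return prev1
```

For example, climb_stairs(4) returns 5, matching the five sequences listed in the examples table.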

Time complexity

The time complexity is O(n) because the algorithm computes the number of ways for each step from 3 to n exactly once, doing constant work per step.

Space complexity

The space complexity is O(1) because the solution stores only two previous results and one current value, regardless of how large n is.

Edge cases

  • Scenario: Smallest valid staircase
    • Example: n = 1
    • Why the solution handles it: It returns 1 immediately, matching the single valid move (1 step).
  • Scenario: First step where recurrence begins
    • Example: n = 2
    • Why the solution handles it: The initialization encodes ways(2) = 2, so the loop is skipped and 2 is returned.
  • Scenario: Larger n within bounds
    • Example: n = 45
    • Why the solution handles it: The loop remains linear and uses constant space, so it stays fast and within memory limits.

Common pitfalls

  • Off-by-one errors in initialization, like setting base cases for ways(0) without aligning the loop and return value correctly.
  • Writing a recursive solution without caching, which recomputes the same subproblems and can time out.
  • Using an array unnecessarily, which works but is more memory than needed for this recurrence.
  • Returning the wrong variable after the loop (mixing up prev1 and prev2).

Frequently Asked Questions

How do I recognize this as a Dynamic Programming problem?

If you can define the answer for n in terms of answers for smaller values (here n-1 and n-2) and many sub-results would repeat in a naive exploration, it’s a strong DP signal.

Why does it look like Fibonacci numbers?

Because the recurrence is the same: each value is the sum of the two previous ones. The only difference is the starting values based on the staircase interpretation.

Is recursion with memoization acceptable?

Yes. It achieves the same O(n) time, but iterative DP is often preferred for simplicity and to avoid recursion depth concerns.
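For illustration, a memoized recursive version can be sketched with Python's functools.lru_cache, which caches each subresult so every ways(i) is computed only once (the function name is illustrative):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def climb_stairs_memo(n: int) -> int:
    # Same base cases and recurrence as the iterative version,
    # but the cache eliminates the repeated subproblems.
    if n <= 2:
        return n
    return climb_stairs_memo(n - 1) + climb_stairs_memo(n - 2)
```

With n ≤ 45, the recursion depth stays well within Python's default limit, but for much larger inputs the iterative version avoids that concern entirely.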

Can we do better than O(n) time?

For this constraint range, O(n) is ideal and simplest. There are mathematical techniques to compute Fibonacci-like sequences faster, but they’re overkill here.
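One such technique is fast doubling, which computes Fibonacci-like values in O(log n) multiplications. Since ways(n) equals the Fibonacci number F(n+1) under the standard indexing F(0) = 0, F(1) = 1, a sketch could look like this (again, overkill for n ≤ 45, and shown only to make the FAQ answer concrete):

```python
def fib_pair(k: int):
    # Fast doubling: returns (F(k), F(k+1)) with F(0)=0, F(1)=1, using
    # the identities F(2m) = F(m)*(2*F(m+1) - F(m)) and
    # F(2m+1) = F(m)^2 + F(m+1)^2.
    if k == 0:
        return (0, 1)
    a, b = fib_pair(k // 2)
    c = a * (2 * b - a)   # F(2m)
    d = a * a + b * b     # F(2m+1)
    if k % 2 == 0:
        return (c, d)
    return (d, c + d)

def climb_stairs_fast(n: int) -> int:
    # ways(n) = F(n+1) under this indexing.
    return fib_pair(n + 1)[0]
```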
