
1292. Maximum Side Length of a Square with Sum Less than or Equal to Threshold

At first glance, Maximum Side Length of a Square with Sum Less than or Equal to Threshold looks like a routine matrix problem. But in FAANG and similar top-tier interviews, it is really a test of whether you can recognize hidden inefficiencies in brute-force grid traversal.

Problem statement

You’re given a 2D grid of non-negative integers, mat, and a number called threshold. Your task is to find the largest possible square (same number of rows and columns) you can take from the grid such that the sum of all values inside that square is less than or equal to threshold. Return the square’s side length, or 0 if no valid square exists.

Constraints:

  • m == mat.length
  • n == mat[i].length
  • 1 <= m, n <= 300
  • 0 <= mat[i][j] <= 10^4
  • 0 <= threshold <= 10^5

Examples

| mat | threshold | Output | Explanation |
| --- | --- | --- | --- |
| [[2,1,1,0],[1,1,0,1],[1,0,1,1]] | 3 | 2 | The maximum side length is 2: a 2×2 square such as [[1,1],[1,0]] has sum 3 (≤ 3), but every 3×3 square has sum > 3. |
| [[5,1,1],[1,1,1],[1,1,1]] | 3 | 1 | A 1×1 square containing 1 works (sum = 1 ≤ 3), but every 2×2 square includes at least three 1s plus something else, so the smallest 2×2 sum is 4, which is > 3. |
| [[3,3,3,3],[3,3,3,3],[3,3,3,3]] | 2 | 0 | Every 1×1 cell is 3, and 3 > threshold (2), so no square can satisfy the condition; the answer is 0. |

Why checking every square is inefficient

A direct approach would try every top-left corner and every possible side length, then compute each square’s sum by walking all of its cells. The problem is that this repeats an enormous amount of work: neighboring squares overlap heavily, but the naive method still recalculates almost the same totals from scratch each time. On a grid as large as 300×300, the number of candidate squares grows quickly, and once you factor in the cost of re-summing each interior, the runtime becomes unrealistic for interview constraints.

If we first build a 2D prefix sum array, we can compute the sum of any sub-rectangle (including any square) in constant time. Instead of adding up L×L cells every time we test a square, we use four prefix values to “include–exclude” the area we want.

This eliminates the repeated work found in brute force: each square’s sum becomes an O(1) query, so checking a square no longer depends on its size.
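As a quick sketch, here is what building the prefix-sum table and answering one rectangle query in O(1) might look like in Python (the grid is the first example's; the helper name `rect_sum` is ours):

```python
# Build an (m+1) x (n+1) prefix-sum table for a small example grid.
mat = [[2, 1, 1, 0],
       [1, 1, 0, 1],
       [1, 0, 1, 1]]
m, n = len(mat), len(mat[0])

# ps[i][j] = sum of mat[0..i-1][0..j-1]; the extra zero row/column
# removes special cases along the grid's top and left edges.
ps = [[0] * (n + 1) for _ in range(m + 1)]
for i in range(m):
    for j in range(n):
        ps[i + 1][j + 1] = mat[i][j] + ps[i][j + 1] + ps[i + 1][j] - ps[i][j]

def rect_sum(r1, c1, r2, c2):
    """Sum of mat[r1..r2-1][c1..c2-1] by inclusion-exclusion: O(1)."""
    return ps[r2][c2] - ps[r1][c2] - ps[r2][c1] + ps[r1][c1]

print(rect_sum(0, 0, 2, 2))  # top-left 2x2 square: 2+1+1+1 = 5
```

Note that the query cost is the same four lookups whether the rectangle covers 4 cells or 4 million.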

After that, the remaining problem is to find the largest side length that can work. Because all numbers in the grid are non-negative, feasibility has a monotonic behavior: if some square of side length L has sum ≤ threshold, then any smaller square nested inside it has a sum that is at most as large, so every smaller side length is also feasible. This monotonic “works / doesn’t work” pattern lets us apply binary search over possible side lengths from 0 to min(m, n).

For each candidate side length L chosen by binary search, we perform a feasibility check: we scan all possible top-left positions (r, c) where an L×L square fits in the matrix, and for each position we compute its square sum using the prefix sums in O(1). If we find even one square whose sum is ≤ threshold, then L is feasible and we try a larger L; otherwise we reduce L. In the end, binary search converges to the maximum side length that satisfies the condition efficiently.
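The binary-search shell described above can be sketched on its own; `max_true` and the toy predicate below are our stand-ins (in the real solution, the predicate scans all L×L squares via prefix sums — the shape of the loop is the point):

```python
def max_true(hi, feasible):
    """Largest L in [0, hi] with feasible(L) True, assuming feasible is
    monotone: True up to some cutoff, False afterwards."""
    lo = 0
    while lo < hi:
        mid = (lo + hi + 1) // 2  # upper mid, so lo = mid always makes progress
        if feasible(mid):
            lo = mid      # mid works; try something bigger
        else:
            hi = mid - 1  # mid fails; everything >= mid fails too
    return lo

# Toy monotone predicate: "L*L fits under 30" — largest such L is 5.
print(max_true(10, lambda L: L * L <= 30))  # -> 5
```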

Solution steps

  1. Let m and n be the number of rows and columns in mat.
  2. Build a (m + 1) × (n + 1) prefix-sum array ps (with an extra top row and left column of zeros) where ps[i][j] stores the sum of the submatrix from (0, 0) to (i - 1, j - 1).
  3. Define a helper square_sum(r, c, L) that returns the sum of the L×L square starting at (r, c) using: ps[r+L][c+L] - ps[r][c+L] - ps[r+L][c] + ps[r][c].
  4. Define feasible(L):
    1. If L == 0, return True.
    2. For every top-left position (r, c) where an L×L square fits (0 ≤ r ≤ m-L, 0 ≤ c ≤ n-L), compute square_sum(r, c, L).
    3. If any square sum ≤ threshold, return True; otherwise return False.
  5. Binary search the answer L between 0 and min(m, n):
    1. Compute mid = (lo + hi + 1) // 2.
    2. If feasible(mid) is True, set lo = mid (try bigger squares).
    3. Else set hi = mid - 1 (try smaller squares).
  6. Return lo as the maximum side length that satisfies the threshold.


Solution implementation
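A possible Python implementation of the steps above (the function name `max_side_length` is our choice):

```python
def max_side_length(mat, threshold):
    m, n = len(mat), len(mat[0])

    # Step 2: (m+1) x (n+1) prefix sums with a zero border.
    ps = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            ps[i + 1][j + 1] = mat[i][j] + ps[i][j + 1] + ps[i + 1][j] - ps[i][j]

    # Step 3: O(1) sum of the L x L square with top-left corner (r, c).
    def square_sum(r, c, L):
        return ps[r + L][c + L] - ps[r][c + L] - ps[r + L][c] + ps[r][c]

    # Step 4: does any L x L square fit under the threshold?
    def feasible(L):
        if L == 0:
            return True
        return any(square_sum(r, c, L) <= threshold
                   for r in range(m - L + 1)
                   for c in range(n - L + 1))

    # Steps 5-6: binary search the largest feasible side length.
    lo, hi = 0, min(m, n)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if feasible(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

print(max_side_length([[2, 1, 1, 0], [1, 1, 0, 1], [1, 0, 1, 1]], 3))  # -> 2
```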

Time complexity

Building the 2D prefix sum takes O(m * n) time because each cell is processed once. The binary search performs O(log(min(m, n))) checks, and each feasibility check scans all possible L×L top-left positions, which is O(m * n) in the worst case. Overall, the runtime is O(m * n * log(min(m, n))).

Space complexity

The prefix sum array uses O(m * n) extra space (technically (m+1) * (n+1)), which dominates the memory usage. Other variables use only constant extra space, so worst-case space is O(m * n).

Edge cases

  • Threshold too small to fit any cell
    Example: mat = [[2]], threshold = 1
    Why it works: the feasibility check for L=1 fails, so binary search returns 0.
  • Threshold is 0 with zeros present
    Example: mat = [[0, 1], [0, 0]], threshold = 0
    Why it works: prefix sums correctly compute zero-sum squares; feasibility finds the largest all-zero square.
  • Single row or single column matrix
    Example: mat = [[1, 1, 1]], threshold = 2
    Why it works: the maximum square side cannot exceed 1, and the search bounds handle that (min(m, n)).
  • Maximum possible square fits
    Example: mat = [[1,1],[1,1]], threshold = 4
    Why it works: feasibility will succeed for L=2, and binary search returns the full size.

Common pitfalls

  • Off-by-one errors in prefix sum indexing: Using 1-based padding avoids negative indices and makes rectangle queries consistent.
  • Incorrect square sum formula: The inclusion-exclusion must subtract the top and left rectangles and add back the overlap.
  • Binary search mid calculation: Using the upper-mid (lo + hi + 1) // 2 prevents infinite loops when moving lo up.
  • Not handling L=0: Treating L=0 as feasible keeps the binary search logic clean and guarantees a valid baseline.
  • Scanning bounds wrong: The last valid top-left is (m - L, n - L); missing the +1 drops candidates.

Frequently Asked Questions

Why not just compute sums for every square directly?

Because summing each square cell-by-cell repeats work massively. Even moderate grid sizes become too slow when you do nested loops plus an inner loop for summing.

How do I recognize that 2D prefix sums are a good fit here?

Anytime you need many submatrix sums over the same grid, and you want each query to be fast, 2D prefix sums are a strong signal.

Why is binary search on the side length valid?

With non-negative cell values, increasing the side length can only increase (or keep) the sum. So if no square of size L can fit the threshold, larger sizes won’t fit either.

Would this still work if the matrix had negative values?

The prefix sums would still compute square sums correctly, but the monotonic property for binary search can fail with negatives, so the “binary search on size” reasoning would not be reliable.
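A tiny counterexample (our own construction) makes this concrete: with negative values, a larger square can have a smaller sum than every cell inside it, so feasibility is no longer monotone in L.

```python
mat = [[-1, -1],
       [-1, -1]]
threshold = -3

best_1x1 = min(v for row in mat for v in row)  # -1, which is > -3
full_sum = sum(v for row in mat for v in row)  # -4, which is <= -3

print(best_1x1 <= threshold)  # False: no 1x1 square fits the threshold
print(full_sum <= threshold)  # True: the 2x2 square does
```

Here L = 2 is feasible while L = 1 is not, which is exactly the pattern binary search on size cannot handle.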
