This problem trips up many candidates because it sits at the intersection of linked list manipulation and efficient multi-source merging. While merging two sorted lists is straightforward, scaling that idea to k lists without blowing up your runtime requires careful thought about how to structure the merging process. The core pattern here is the k-way merge, a technique that appears frequently in interviews whenever you need to combine multiple sorted sequences into one. Mastering this problem gives you a reusable framework for tackling streaming data, external sorting, and priority-based selection problems.

Problem statement

You are given an array called lists that contains k singly linked lists. Each linked list is already sorted in ascending (non-decreasing) order on its own.

Your job is to combine all k linked lists into one single linked list that is also sorted in ascending order. Return the head of this merged linked list.

If the input array is empty or all lists within it are empty, you should return an empty result.

Constraints

  • k == lists.length
  • 0 ≤ k ≤ 10^3
  • 0 ≤ lists[i].length ≤ 500
  • -10^3 ≤ lists[i][j] ≤ 10^3
  • Each lists[i] is sorted in ascending order
  • The total number of nodes across all lists will not exceed 10^3

Examples

  • Input: lists = [[2, 5, 7], [3, 6, 8], [1, 4, 9]]
    Output: [1, 2, 3, 4, 5, 6, 7, 8, 9]
    Explanation: Three sorted lists are merged by repeatedly selecting the smallest available head across all lists, producing a fully sorted combined list.
  • Input: lists = []
    Output: []
    Explanation: No linked lists are provided, so the result is an empty list.
  • Input: lists = [[]]
    Output: []
    Explanation: A single empty linked list is given, so the merged output is also empty.
  • Input: lists = [[-3, -1, 2], [-5, 0, 4]]
    Output: [-5, -3, -1, 0, 2, 4]
    Explanation: Negative values are handled correctly; the merge picks the smallest value at each step regardless of sign.
  • Input: lists = [[1], [1], [1], [1]]
    Output: [1, 1, 1, 1]
    Explanation: Four single-element lists with identical values merge into a list of four ones, verifying that duplicates across lists are preserved.

Why the naive approaches don’t scale

The most naive way to approach this is to gather every node value from all k lists, dump them into a single collection, sort that collection, and build a new linked list from the result. While this works for small inputs, the sorting step alone costs O(n log n), where n is the total number of nodes. More importantly, this approach discards the valuable information that each input list is already sorted.
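A minimal sketch of this collect-and-sort baseline, assuming the usual singly linked ListNode class (the name merge_k_lists_naive is chosen here for illustration):

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def merge_k_lists_naive(lists):
    # Collect every value from every list into one flat array.
    values = []
    for head in lists:
        while head:
            values.append(head.val)
            head = head.next
    values.sort()  # O(n log n): ignores that each input list is pre-sorted
    # Rebuild a fresh linked list from the sorted values.
    dummy = tail = ListNode()
    for v in values:
        tail.next = ListNode(v)
        tail = tail.next
    return dummy.next
```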

Another tempting approach is to scan all k list heads on every step, pick the minimum, and advance that pointer. This “linear scan” merge processes n total nodes, and each step requires examining up to k heads. That gives O(n * k) time, which becomes expensive when k is large.
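The linear-scan merge can be sketched as follows; this is only a baseline for comparison, and the function name is illustrative:

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def merge_k_lists_scan(lists):
    heads = [h for h in lists if h]  # drop empty lists up front
    dummy = tail = ListNode()
    while heads:
        # O(k) scan to find the list whose current head is smallest.
        i = min(range(len(heads)), key=lambda j: heads[j].val)
        tail.next = heads[i]
        tail = tail.next
        heads[i] = heads[i].next
        if heads[i] is None:
            heads.pop(i)  # this list is exhausted
    return dummy.next
```

Every one of the n output nodes costs a scan over up to k heads, which is exactly the O(n * k) behavior described above.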

Both approaches either ignore the existing structure or perform redundant comparisons. We need a strategy that leverages the sorted property of each list while keeping the number of comparisons per node low.

Optimized approach

The divide and conquer strategy works beautifully here. Instead of trying to merge all k lists simultaneously, we pair them up and merge each pair. This is the same principle behind merge sort: split the work in half, solve each half, and combine.

In the first round, we merge list 0 with list 1, list 2 with list 3, and so on. This produces roughly k/2 merged lists. In the next round, we merge those results pairwise again, producing k/4 lists. We continue until only one list remains.

Merging two sorted linked lists is a classic two-pointer operation that runs in time proportional to the combined length of both lists. By structuring the overall process as a binary tree of merges, we ensure that each node participates in at most log k merge rounds.
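The two-pointer merge of a single pair might look like this; a sketch, with the helper named merge_2_lists to match the step-by-step algorithm later in this article:

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def merge_2_lists(a, b):
    dummy = ListNode()          # anchors the result; avoids head special-casing
    prev = dummy
    while a and b:
        if a.val <= b.val:      # <= keeps the merge stable for equal values
            prev.next = a
            a = a.next
        else:
            prev.next = b
            b = b.next
        prev = prev.next
    prev.next = a if a else b   # attach whichever list still has nodes
    return dummy.next
```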

Why the optimized approach works well

Each pairwise merge is linear in the total number of nodes involved, and we only perform log k rounds of merging. This means every node is touched O(log k) times across the entire process, giving us O(n log k) total work.

The approach also uses O(1) extra space because we rearrange existing linked list nodes in place rather than creating new ones. The dummy node used during each two-list merge is a constant-space trick that simplifies pointer management without allocating memory proportional to the input.

This divide-and-conquer structure naturally avoids the redundant comparisons that plague the linear scan approach. Instead of comparing against k candidates at every step, each merge operation only compares two candidates at a time, and the hierarchical pairing ensures we converge quickly.

Step-by-step algorithm

  1. If the input lists array is empty, return None immediately.
  2. Initialize a variable step to 1. This controls which pairs of lists get merged in the current round.
  3. While step is less than the length of lists, perform pairwise merging:
    • Iterate through indices from 0 to len(lists) - step, incrementing by step * 2.
    • For each index i, merge the list at position i with the list at position i + step using the merge_2_lists helper.
    • Store the merged result back at position i.
  4. After completing one full pass of pairwise merges, double the step value.
  5. Once the loop finishes, the fully merged list is stored at lists[0]. Return it.
  6. The merge_2_lists helper creates a dummy node to anchor the merged result.
  7. A pointer prev starts at the dummy node and tracks where to attach the next node.
  8. While both input heads are non-null, compare their values. Attach the smaller node to prev.next, advance that input head, and move prev forward.
  9. When one list is exhausted, attach the remainder of the other list to prev.next.
  10. Return dummy.next as the head of the merged list.

Python implementation

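A complete implementation following the steps above might look like this; a sketch, assuming the usual singly linked ListNode class:

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def merge_2_lists(a, b):
    # Standard two-pointer merge using a dummy node to anchor the result.
    dummy = ListNode()
    prev = dummy
    while a and b:
        if a.val <= b.val:
            prev.next, a = a, a.next
        else:
            prev.next, b = b, b.next
        prev = prev.next
    prev.next = a if a else b  # attach the remainder of the non-empty list
    return dummy.next

def merge_k_lists(lists):
    if not lists:
        return None
    step = 1
    while step < len(lists):
        # Merge pairs that are `step` apart: (0, step), (2*step, 3*step), ...
        for i in range(0, len(lists) - step, step * 2):
            lists[i] = merge_2_lists(lists[i], lists[i + step])
        step *= 2  # each round halves the number of remaining lists
    return lists[0]
```

After the final round, lists[0] holds the head of the fully merged list; the other slots hold stale intermediate references that are simply ignored.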

Time complexity

Let n be the total number of nodes across all k linked lists. The divide and conquer approach merges lists in log k rounds. During each round, every node is visited exactly once across all the pairwise merges happening in that round, contributing O(n) work per round. Therefore, the overall time complexity is O(n log k).

Space complexity

The space complexity is O(1) in terms of auxiliary memory. The algorithm rearranges existing linked list nodes by updating their next pointers rather than allocating new nodes proportional to the input size. The only extra allocations are the dummy nodes created during each pairwise merge, which are constant-size and discarded after each merge completes. There is no recursion stack involved since the implementation uses an iterative loop.

Edge cases

  • Empty input array (lists = []): No lists to merge, so the function returns None. This must be handled before accessing lists[0].
  • Array containing a single empty list (lists = [[]]): The single list head is None, so the function returns None directly without entering the merge loop.
  • All lists contain a single element with the same value: The merge should preserve all duplicates and produce a flat list of repeated values, verifying that equal comparisons do not drop nodes.
  • One very long list and several empty lists: The algorithm should efficiently skip empty lists during pairwise merging and return the long list without unnecessary traversals.
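These edge cases are quick to verify with a small self-contained harness. The sketch below repeats compact versions of the merge routines so it can run on its own; build and to_list are illustrative helpers for converting between Python lists and linked lists:

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def merge_2_lists(a, b):
    dummy = prev = ListNode()
    while a and b:
        if a.val <= b.val:
            prev.next, a = a, a.next
        else:
            prev.next, b = b, b.next
        prev = prev.next
    prev.next = a if a else b
    return dummy.next

def merge_k_lists(lists):
    if not lists:
        return None
    step = 1
    while step < len(lists):
        for i in range(0, len(lists) - step, step * 2):
            lists[i] = merge_2_lists(lists[i], lists[i + step])
        step *= 2
    return lists[0]

def build(values):
    head = None
    for v in reversed(values):
        head = ListNode(v, head)
    return head

def to_list(head):
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out

# Empty input array: nothing to merge.
assert merge_k_lists([]) is None
# A single empty list: its head is None and is returned directly.
assert merge_k_lists([None]) is None
# Duplicates across lists are all preserved.
assert to_list(merge_k_lists([build([1]), build([1]), build([1]), build([1])])) == [1, 1, 1, 1]
# One long list plus empty lists: the long list survives intact.
assert to_list(merge_k_lists([build([1, 2, 3]), None, None])) == [1, 2, 3]
```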

Common pitfalls

  • Forgetting to handle the empty lists array: Accessing lists[0] without checking whether the array is empty causes an index error.
  • Not using a dummy node in the two-list merge: Without a dummy node, you need special-case logic to determine the head of the merged list, which leads to messy and error-prone code.
  • Mutating list pointers incorrectly: Since we are rearranging existing nodes, forgetting to update prev after attaching a node causes nodes to be lost or creates cycles.
  • Confusing the step-based iteration bounds: The inner loop range range(0, len(lists) - step, step * 2) is critical. Getting this wrong can cause out-of-bounds access or missed merges.
  • Assuming all lists are non-empty: Any list in the input can be None (empty), and the merge_2_lists function must handle this gracefully by returning the other list.