The Sort List problem brings together two fundamental ideas in algorithm design: linked list manipulation and efficient sorting. Unlike arrays, linked lists don’t support random access, which immediately rules out index-based techniques such as heapsort and makes others, like quicksort, awkward to apply. Instead, the problem challenges you to apply a sorting algorithm that works naturally with linked list structure.
This question commonly appears in technical interviews because it evaluates whether you understand linked list traversal patterns and can choose an algorithm appropriate to the data structure.
Problem
You are given the head of a singly linked list. Your task:
Sort the list in ascending order and return the head of the sorted list.
Constraints:
- You must achieve \( O(n \log n) \) time complexity.
- You must use constant extra space, aside from the recursion stack.
- The list may contain positive, negative, or duplicate values.
Example
Input:
head = [4, 2, 1, 3]
Output:
[1, 2, 3, 4]
Input:
head = [-1, 5, 3, 4, 0]
Output:
[-1, 0, 3, 4, 5]
Input:
head = []
Output:
[]
Solution
Arrays can be sorted with many techniques, but linked lists cannot be indexed directly. That means algorithms relying on random access—like quicksort—are inefficient or overly complex for this structure.
Instead, the optimal approach is to use merge sort, because:
- It works naturally with linked lists.
- Splitting a list into halves is efficient using slow/fast pointers.
- Merging two sorted lists is linear and pointer-friendly.
- It satisfies the required \( O(n \log n) \) runtime.
Key idea
Use merge sort:
- Split the list into two halves using the slow/fast pointer technique.
- Recursively sort each half.
- Merge the two sorted halves.
This recursive divide-and-conquer structure ensures the time complexity remains \( O(n \log n) \).
Why merge sort works on linked lists
- You don’t need contiguous memory blocks; the list can be sliced simply by redirecting pointers.
- Merging two sorted linked lists is extremely efficient—just pointer manipulation.
- Unlike quicksort, merge sort does not require random access or complex pivot operations.
This makes merge sort the most natural and optimal sorting algorithm for linked lists.
Algorithm outline
Step 1: Base case
If the list is empty or contains a single node, it is already sorted.
Step 2: Split the list
Use two pointers:
- slow pointer → moves one step at a time
- fast pointer → moves two steps at a time
Start fast one node ahead of slow. When fast reaches the end, slow sits at the midpoint (the last node of the left half). Starting fast ahead guarantees that a two-node list splits into two one-node halves, which prevents infinite recursion.
Split the list into:
- Left half: head → mid
- Right half: mid.next → end
Set mid.next = None to break the list.
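The split step can be sketched as follows. `ListNode` is a minimal node class assumed here (the original does not define one; LeetCode's standard class has the same shape), and `split` is a helper name chosen for illustration:

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def split(head):
    """Split a list of 2+ nodes into two halves.

    `fast` starts one node ahead of `slow`, so `slow` stops at the
    last node of the left half even for even-length lists.
    """
    slow, fast = head, head.next
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    right = slow.next   # first node of the right half
    slow.next = None    # break the link so the halves are independent
    return head, right
```

Splitting `4 → 2 → 1 → 3` this way yields the halves `4 → 2` and `1 → 3`.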
Step 3: Recursively sort each half
Call the sort function on both halves.
Step 4: Merge the two sorted halves
Use a helper function that performs the classic merge step of merge sort:
- Compare node values
- Append the smaller node to the result
- Repeat until one list is exhausted
Return the merged sorted list.
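That merge step can be sketched like this, again assuming a minimal `ListNode` class with `val` and `next` (a name not defined in the original):

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def merge(a, b):
    """Merge two sorted lists by relinking existing nodes."""
    dummy = ListNode()   # placeholder head avoids special-casing the first node
    tail = dummy
    while a and b:
        if a.val <= b.val:   # <= keeps equal values in order (stable sort)
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a or b       # attach whichever list still has nodes
    return dummy.next
```

No new nodes are allocated for the data; the dummy head is the only extra node, which is why the merge is pointer-friendly and linear.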
Sample implementation (Python)
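Below is a complete sketch of the approach. The `ListNode` class mirrors LeetCode's standard definition; `sort_list` and the `from_list`/`to_list` helpers are names chosen here for illustration, not part of the original problem statement:

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def sort_list(head):
    """Sort a singly linked list with top-down merge sort."""
    # Base case: empty or single-node lists are already sorted.
    if head is None or head.next is None:
        return head

    # Split: fast starts one node ahead so slow stops at the end
    # of the left half, guaranteeing both halves shrink.
    slow, fast = head, head.next
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    mid = slow.next
    slow.next = None

    # Recursively sort each half.
    left = sort_list(head)
    right = sort_list(mid)

    # Merge the two sorted halves by relinking nodes.
    dummy = tail = ListNode()
    while left and right:
        if left.val <= right.val:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left or right
    return dummy.next

def from_list(values):
    """Build a linked list from a Python list (helper for testing)."""
    dummy = tail = ListNode()
    for v in values:
        tail.next = ListNode(v)
        tail = tail.next
    return dummy.next

def to_list(head):
    """Flatten a linked list back into a Python list (helper for testing)."""
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out
```

For example, `to_list(sort_list(from_list([4, 2, 1, 3])))` returns `[1, 2, 3, 4]`, matching the first example above.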
Time and space complexity
- Time: \( O(n \log n) \) due to repeated splitting and merging.
- Space: \( O(1) \) auxiliary space (the recursion stack adds \( O(\log n) \) overhead, which is allowed).
Interview insight
The Sort List problem highlights your ability to:
- Understand algorithm/data structure compatibility
- Implement linked list merges cleanly
- Apply divide-and-conquer thinking
- Recognize when merge sort is the correct approach
It’s not just about sorting—it’s about seeing why some algorithms naturally fit certain data structures and why others do not.