How it works
Instead of recomputing every subarray from scratch (O(n²) overall), a sliding window keeps two pointers — left and right — that bound the current window. You expand right, and shrink left only when the window violates a constraint.
There are two flavours:
- Fixed size — window is always k elements wide.
- Variable size — window grows until it breaks a rule, then left pointer catches up.
Classic problems
```python
def length_of_longest_substring(s: str) -> int:
    seen = {}   # char → last seen index
    left = 0
    best = 0
    for right, ch in enumerate(s):
        # If char is already in window, shrink left
        if ch in seen and seen[ch] >= left:
            left = seen[ch] + 1
        seen[ch] = right
        best = max(best, right - left + 1)
    return best

# "abcabcbb" → 3 ("abc")
# "pwwkew"   → 3 ("wke")
```
```python
def max_sum_subarray(nums: list, k: int) -> int:
    window = sum(nums[:k])
    best = window
    for i in range(k, len(nums)):
        # Slide: add right element, drop left element
        window += nums[i] - nums[i - k]
        best = max(best, window)
    return best

# [2,1,5,1,3,2], k=3 → 9 (5+1+3)
```
How it works
With a sorted array you can eliminate possibilities cheaply. Start left = 0, right = n-1. Check the pair sum:
- Too small → advance left (increases the sum)
- Too large → retreat right (decreases the sum)
- Match → record and move both
Also used on same-direction problems where a slow pointer tracks a "write head" while a fast pointer scans ahead (e.g. remove duplicates in-place).
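The same-direction variant can be sketched like this (a minimal sketch; remove_duplicates is an illustrative name, and it assumes a sorted input list):

```python
def remove_duplicates(nums: list) -> int:
    # Slow pointer = next write position; fast pointer scans ahead.
    if not nums:
        return 0
    write = 1
    for read in range(1, len(nums)):
        if nums[read] != nums[read - 1]:
            nums[write] = nums[read]   # copy the new value forward
            write += 1
    return write   # length of the deduplicated prefix

# [1,1,2,2,3] → 3, and nums[:3] == [1,2,3]
```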
Classic problems
```python
def two_sum(numbers: list, target: int) -> list:
    left, right = 0, len(numbers) - 1
    while left < right:
        s = numbers[left] + numbers[right]
        if s == target:
            return [left + 1, right + 1]   # 1-indexed
        elif s < target:
            left += 1
        else:
            right -= 1
    return []   # no pair found

# [2,7,11,15], target=9 → [1,2]
```
```python
def max_area(height: list) -> int:
    left, right = 0, len(height) - 1
    best = 0
    while left < right:
        h = min(height[left], height[right])
        best = max(best, h * (right - left))
        # Move the shorter side inward
        if height[left] < height[right]:
            left += 1
        else:
            right -= 1
    return best
```
How it works
Slow moves one node at a time. Fast moves two nodes at a time. If there is a cycle, fast will eventually lap slow and they will meet. If there is no cycle, fast reaches None first.
- Cycle detection — do slow & fast ever point to the same node?
- Middle of list — when fast hits the end, slow is at the midpoint.
- Cycle start — after meeting, reset one pointer to head; both move at speed 1. They meet at the cycle entrance.
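The cycle-start bullet can be sketched as a second phase bolted onto the detection loop (cycle_start is an illustrative name; it assumes nodes with a next attribute):

```python
def cycle_start(head):
    # Phase 1: standard fast/slow detection.
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            # Phase 2: reset one pointer to head; both move at speed 1.
            slow = head
            while slow is not fast:
                slow = slow.next
                fast = fast.next
            return slow   # entrance of the cycle
    return None   # no cycle
```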
Classic problems
```python
def has_cycle(head) -> bool:
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False
```
```python
def middle_node(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    # For even-length lists, slow lands on
    # the second of the two middle nodes.
    return slow

# 1→2→3→4→5 → returns node 3
# 1→2→3→4   → returns node 3
```
How it works
Maintain a search space [lo, hi]. Each iteration compute mid = lo + (hi - lo) // 2. Depending on whether nums[mid] is too small, too large, or exact, you discard half the space.
The real power: Binary search works on any problem where you can write a predicate function that is false for all values below the answer and true for all values at or above (or vice-versa).
- Find target — classic; return index or -1.
- Find leftmost / rightmost — adjust which boundary you move.
- Search on answer — "what is the minimum X such that condition holds?" Binary search on X.
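The leftmost-boundary variant can be sketched like this (leftmost is an illustrative name; it returns the first index whose value is at least target, matching bisect.bisect_left from the standard library):

```python
def leftmost(nums: list, target: int) -> int:
    # Half-open search space [lo, hi); shrink until lo == hi.
    lo, hi = 0, len(nums)
    while lo < hi:
        mid = lo + (hi - lo) // 2
        if nums[mid] < target:
            lo = mid + 1   # answer is strictly to the right
        else:
            hi = mid       # mid could be the answer; keep it
    return lo

# leftmost([1,2,2,2,3], 2) → 1 (first index holding a 2)
```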
Classic problems
```python
def binary_search(nums: list, target: int) -> int:
    lo, hi = 0, len(nums) - 1
    while lo <= hi:
        mid = lo + (hi - lo) // 2
        if nums[mid] == target:
            return mid
        elif nums[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```
```python
# "Koko Eating Bananas" — min eating speed
import math

def min_eating_speed(piles: list, h: int) -> int:
    lo, hi = 1, max(piles)
    while lo < hi:
        mid = lo + (hi - lo) // 2
        hours = sum(math.ceil(p / mid) for p in piles)
        if hours <= h:
            hi = mid        # speed mid is feasible
        else:
            lo = mid + 1    # need faster
    return lo
```
How it works
DFS dives deep along one branch before exploring siblings. The call stack (or an explicit stack) tracks where to backtrack to. On trees it naturally produces pre-order, in-order, and post-order traversals.
- Tree DFS — check a node, recurse left, recurse right.
- Graph DFS — use a visited set to avoid revisiting nodes.
- Backtracking — DFS + undoing the last choice (permutations, subsets, n-queens).
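The backtracking bullet can be sketched with the subsets problem (a minimal sketch; subsets is an illustrative name):

```python
def subsets(nums: list) -> list:
    res = []
    path = []

    def backtrack(start):
        res.append(path[:])       # record the current choice set
        for i in range(start, len(nums)):
            path.append(nums[i])  # choose
            backtrack(i + 1)      # explore
            path.pop()            # un-choose (backtrack)

    backtrack(0)
    return res

# subsets([1,2,3]) yields all 8 subsets, from [] to [1,2,3]
```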
Classic problems
```python
def num_islands(grid: list) -> int:
    rows, cols = len(grid), len(grid[0])
    count = 0

    def dfs(r, c):
        if r < 0 or r >= rows or c < 0 \
           or c >= cols or grid[r][c] != "1":
            return
        grid[r][c] = "0"   # mark visited (sink it)
        dfs(r+1, c); dfs(r-1, c)
        dfs(r, c+1); dfs(r, c-1)

    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == "1":
                dfs(r, c)
                count += 1
    return count
```
```python
def max_depth(root) -> int:
    if not root:
        return 0
    return 1 + max(
        max_depth(root.left),
        max_depth(root.right)
    )
```
How it works
BFS uses a queue (FIFO). Start by enqueuing the source node. At each step dequeue one node, process it, then enqueue all unvisited neighbours. Nodes at distance 1 are fully processed before distance 2, and so on.
- Level-order tree traversal — all nodes at depth d before depth d+1.
- Shortest path — the first time BFS reaches a node is via the shortest route.
- Multi-source BFS — seed the queue with multiple starting nodes simultaneously (e.g. "rotting oranges").
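The level-order bullet can be sketched like this (level_order is an illustrative name; it assumes tree nodes with val, left, and right attributes):

```python
from collections import deque

def level_order(root) -> list:
    if not root:
        return []
    levels, q = [], deque([root])
    while q:
        level = []
        for _ in range(len(q)):   # drain exactly one depth per pass
            node = q.popleft()
            level.append(node.val)
            if node.left:
                q.append(node.left)
            if node.right:
                q.append(node.right)
        levels.append(level)
    return levels
```

Snapshotting len(q) before the inner loop is what separates depth d from depth d+1: only the nodes already in the queue belong to the current level.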
Classic problems
```python
from collections import deque

def oranges_rotting(grid: list) -> int:
    q = deque()
    fresh = 0
    for r, row in enumerate(grid):
        for c, val in enumerate(row):
            if val == 2:
                q.append((r, c, 0))
            if val == 1:
                fresh += 1
    minutes = 0
    while q:
        r, c, t = q.popleft()
        for dr, dc in [(1,0),(-1,0),(0,1),(0,-1)]:
            nr, nc = r+dr, c+dc
            if 0 <= nr < len(grid) and \
               0 <= nc < len(grid[0]) and \
               grid[nr][nc] == 1:
                grid[nr][nc] = 2
                fresh -= 1
                minutes = t + 1
                q.append((nr, nc, t+1))
    return minutes if fresh == 0 else -1
```
How it works
A doubly linked list (DLL) solves the singly-linked-list problem of needing to know the previous node to delete. With node.prev and node.next you can splice out a node in O(1).
The killer interview application is the LRU Cache: combine a DLL (ordered by recency) with a HashMap (O(1) lookup) to get O(1) get and put.
- Most-recently-used sits at the tail.
- Least-recently-used sits at the head — evict it when over capacity.
- On every access: remove the node from its current position, re-insert at tail.
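For comparison, the same recency discipline can be sketched with collections.OrderedDict, which maintains its own doubly linked list internally; move_to_end and popitem(last=False) are the O(1) re-insert and evict operations (LRUCacheOD is an illustrative name):

```python
from collections import OrderedDict

class LRUCacheOD:
    """Same eviction policy, backed by OrderedDict's internal DLL."""
    def __init__(self, capacity: int):
        self.cap = capacity
        self.od = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.od:
            return -1
        self.od.move_to_end(key)         # mark as most recently used
        return self.od[key]

    def put(self, key: int, value: int) -> None:
        if key in self.od:
            self.od.move_to_end(key)
        self.od[key] = value
        if len(self.od) > self.cap:
            self.od.popitem(last=False)  # evict least recently used
```

Interviewers often want the hand-rolled node version (as below), but this is a useful sanity check for its behavior.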
Classic problems
```python
class Node:
    def __init__(self, key=0, val=0):
        self.key = key; self.val = val
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity: int):
        self.cap = capacity
        self.map = {}   # key → node
        # Sentinel head / tail (dummy nodes)
        self.head = Node()
        self.tail = Node()
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node):
        node.prev.next = node.next
        node.next.prev = node.prev

    def _insert_tail(self, node):
        node.prev = self.tail.prev
        node.next = self.tail
        self.tail.prev.next = node
        self.tail.prev = node

    def get(self, key: int) -> int:
        if key not in self.map:
            return -1
        node = self.map[key]
        self._remove(node)
        self._insert_tail(node)
        return node.val

    def put(self, key: int, value: int):
        if key in self.map:
            self._remove(self.map[key])
        node = Node(key, value)
        self._insert_tail(node)
        self.map[key] = node
        if len(self.map) > self.cap:
            lru = self.head.next   # evict LRU
            self._remove(lru)
            del self.map[lru.key]
```
How it works
DP requires two properties: optimal substructure (the optimal solution contains optimal sub-solutions) and overlapping subproblems (the same sub-problem is solved repeatedly in a naive recursion).
Two implementation styles:
- Top-down (memoization) — recursive function + a cache. Write the natural recursion, then add @lru_cache or a dict.
- Bottom-up (tabulation) — fill a table iteratively from base cases up. Usually better space and avoids stack overflow.
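The top-down style can be sketched with climbing stairs (climb_stairs is an illustrative name; the cache turns the exponential recursion into O(n)):

```python
from functools import lru_cache

def climb_stairs(n: int) -> int:
    # Ways to reach step n taking 1 or 2 steps at a time.
    @lru_cache(maxsize=None)
    def ways(i: int) -> int:
        if i <= 1:
            return 1          # base cases: one way to stand still or take step 1
        return ways(i - 1) + ways(i - 2)
    return ways(n)

# climb_stairs(5) → 8
```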
Classic problems
```python
def coin_change(coins: list, amount: int) -> int:
    # dp[i] = min coins to make amount i
    dp = [float("inf")] * (amount + 1)
    dp[0] = 0   # base case
    for i in range(1, amount + 1):
        for coin in coins:
            if coin <= i:
                dp[i] = min(dp[i], dp[i - coin] + 1)
    return dp[amount] if dp[amount] != float("inf") else -1

# coins=[1,5,6,9], amount=11 → 2 (5+6)
```
```python
def lcs(text1: str, text2: str) -> int:
    m, n = len(text1), len(text2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if text1[i-1] == text2[j-1]:
                dp[i][j] = dp[i-1][j-1] + 1
            else:
                dp[i][j] = max(dp[i-1][j], dp[i][j-1])
    return dp[m][n]

# "abcde", "ace" → 3
```
How it works
Python's heapq module is a min-heap. Push elements and the smallest always floats to the top. For a max-heap, negate the values when pushing.
- K largest elements — maintain a min-heap of size k. If new element > heap top, replace top.
- K smallest elements — maintain a max-heap of size k.
- Merge K sorted lists — push (val, list_index, element_index) into heap; always pop the smallest and push its successor.
- Median streaming — two heaps: max-heap for lower half, min-heap for upper half.
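The merge-K bullet can be sketched over plain Python lists (merge_k_sorted is an illustrative name; list_index breaks ties so the heap never compares elements of mismatched types):

```python
import heapq

def merge_k_sorted(lists: list) -> list:
    # Seed the heap with the head of every non-empty list.
    heap = [(lst[0], li, 0) for li, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap:
        val, li, ei = heapq.heappop(heap)   # global minimum across all heads
        out.append(val)
        if ei + 1 < len(lists[li]):
            # Push the popped element's successor from the same list.
            heapq.heappush(heap, (lists[li][ei + 1], li, ei + 1))
    return out

# merge_k_sorted([[1,4,5],[1,3,4],[2,6]]) → [1,1,2,3,4,4,5,6]
```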
Classic problems
```python
import heapq

def k_closest(points: list, k: int) -> list:
    # Max-heap of size k (negate dist for max-heap)
    heap = []
    for x, y in points:
        dist = x*x + y*y   # skip sqrt — monotone
        heapq.heappush(heap, (-dist, x, y))
        if len(heap) > k:
            heapq.heappop(heap)   # evict the farthest
    return [[x, y] for _, x, y in heap]
```
```python
import heapq

class MedianFinder:
    def __init__(self):
        self.lo = []   # max-heap (lower half, negated)
        self.hi = []   # min-heap (upper half)

    def addNum(self, num: int):
        heapq.heappush(self.lo, -num)
        # Balance: lo top ≤ hi top
        if self.lo and self.hi and \
           -self.lo[0] > self.hi[0]:
            heapq.heappush(self.hi, -heapq.heappop(self.lo))
        # Keep sizes equal or lo has one extra
        if len(self.lo) > len(self.hi) + 1:
            heapq.heappush(self.hi, -heapq.heappop(self.lo))
        elif len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def findMedian(self) -> float:
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2
```
How it works
The divide-and-conquer template is always three steps:
- Divide — split the problem at the midpoint.
- Conquer — recurse on both halves (base case: single element).
- Combine — merge (or otherwise combine) the two solved halves.
Because the tree of recursion has log n levels and each level does O(n) work during the merge, the total is O(n log n).
The same pattern drives counting inversions, sorting linked lists in O(n log n) (the bottom-up variant even achieves O(1) auxiliary space, which arrays cannot match; the recursive version still uses O(log n) stack), and external sort for data too large for RAM.
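Counting inversions is a one-line tweak to the merge step: whenever a right-half element wins, every element still waiting in the left half forms an inversion with it (count_inversions is an illustrative name):

```python
def count_inversions(arr: list) -> int:
    # Count pairs i < j with arr[i] > arr[j] while merge-sorting.
    def sort(a):
        if len(a) <= 1:
            return a, 0
        mid = len(a) // 2
        left, x = sort(a[:mid])
        right, y = sort(a[mid:])
        merged, i, j, inv = [], 0, 0, x + y
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
                inv += len(left) - i   # left[i:] are all > right[j]
        return merged + left[i:] + right[j:], inv
    return sort(arr)[1]

# count_inversions([2,4,1,3,5]) → 3  (pairs (2,1), (4,1), (4,3))
```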
Classic problems
```python
def merge_sort(arr: list) -> list:
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return _merge(left, right)

def _merge(a: list, b: list) -> list:
    result = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            result.append(a[i]); i += 1
        else:
            result.append(b[j]); j += 1
    return result + a[i:] + b[j:]
```
```python
def sort_list(head):
    # Assumes a standard ListNode with .val and .next attributes.
    if not head or not head.next:
        return head
    # Split in half using fast & slow pointers
    slow, fast = head, head.next
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    mid = slow.next
    slow.next = None   # cut the list
    left = sort_list(head)
    right = sort_list(mid)
    return merge_lists(left, right)

def merge_lists(l1, l2):
    dummy = cur = ListNode()
    while l1 and l2:
        if l1.val <= l2.val:
            cur.next = l1; l1 = l1.next
        else:
            cur.next = l2; l2 = l2.next
        cur = cur.next
    cur.next = l1 or l2
    return dummy.next
```