How Data Structures Support Algorithms
Algorithms rarely live in isolation. In real code (and in interviews), algorithms rely on data structures to make their operations fast and expressive.
This article connects common algorithm patterns to the data structures that support them.
Sliding Window and Two Pointers
Sliding window and two pointers mainly rely on:
- Arrays / strings – random access, contiguous memory.
- Sometimes a HashMap / HashSet – to track counts or membership inside the window.
Examples:
- Longest substring without repeating characters:
- Use a `HashSet` or `HashMap<char, count>` to maintain which characters are inside the window.
- Minimum window substring:
- Two hash maps: one for required counts, one for current window counts.
Without hash‑based structures, you would need to rescan the window on every move, losing the
O(n) time complexity.
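As a sketch of the first example, here is one possible Java implementation using a `HashSet` as the window's membership structure (the method name `lengthOfLongestSubstring` is just a conventional choice, not from the article):

```java
import java.util.HashSet;
import java.util.Set;

class SlidingWindow {
    // Returns the length of the longest substring of s with all distinct characters.
    static int lengthOfLongestSubstring(String s) {
        Set<Character> window = new HashSet<>();
        int best = 0, left = 0;
        for (int right = 0; right < s.length(); right++) {
            // Shrink from the left until the incoming character is no longer a duplicate.
            while (window.contains(s.charAt(right))) {
                window.remove(s.charAt(left++));
            }
            window.add(s.charAt(right));
            best = Math.max(best, right - left + 1);
        }
        return best;
    }
}
```

Because the set answers "is this character in the window?" in O(1), each character enters and leaves the window at most once, giving the O(n) bound.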
BFS and DFS
- Adjacency lists (graph representation) – typically `List<List<Integer>>` or `Map<Node, List<Node>>`.
- [Queue](/blog/data-structures/linear-structures/queue) – for BFS.
- [Stack](/blog/data-structures/linear-structures/stack) / recursion stack – for DFS.
- Visited set – usually a boolean array or `HashSet`.
Examples:
- Shortest path in an unweighted graph:
- BFS on adjacency lists + queue + visited.
- Topological sort (Kahn’s algorithm variant):
- Queue for nodes with in‑degree 0, adjacency list for outgoing edges.
The combination of graph representation + queue/stack + visited is the backbone of most graph algorithms.
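A minimal sketch of the shortest-path example, combining all three pieces (adjacency list, queue, visited array); the method name and the convention of returning -1 for an unreachable target are assumptions for illustration:

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

class UnweightedShortestPath {
    // Returns the number of edges on a shortest path from src to dst,
    // or -1 if dst is unreachable. The graph is an adjacency list.
    static int shortestPath(List<List<Integer>> adj, int src, int dst) {
        boolean[] visited = new boolean[adj.size()];
        Queue<Integer> queue = new ArrayDeque<>();
        queue.add(src);
        visited[src] = true;
        int dist = 0;
        while (!queue.isEmpty()) {
            int levelSize = queue.size();   // process one BFS layer at a time
            for (int i = 0; i < levelSize; i++) {
                int node = queue.poll();
                if (node == dst) return dist;
                for (int next : adj.get(node)) {
                    if (!visited[next]) {
                        visited[next] = true;
                        queue.add(next);
                    }
                }
            }
            dist++;                          // moving to the next layer
        }
        return -1;
    }
}
```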
Dynamic Programming
DP typically uses plain arrays or matrices:
- `dp[i]` for 1D DP (e.g. climbing stairs, house robber).
- `dp[i][j]` for 2D DP on strings/grids (e.g. edit distance, LCS).
Sometimes additional structures are helpful:
- HashMap for memoization when state space is sparse or keyed by multiple parameters.
- Deque / monotonic queue for optimized DP transitions (sliding window DP).
But at its core, DP is about organizing subproblems into a table or cache and reusing results.
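The climbing-stairs problem mentioned above shows the 1D table in its simplest form; this is one possible sketch (the method name is an assumption):

```java
class ClimbingStairs {
    // dp[i] = number of distinct ways to reach step i taking 1 or 2 steps at a time.
    static int climb(int n) {
        if (n <= 1) return 1;
        int[] dp = new int[n + 1];
        dp[0] = 1;   // one way to stand at the bottom: do nothing
        dp[1] = 1;
        for (int i = 2; i <= n; i++) {
            dp[i] = dp[i - 1] + dp[i - 2];   // the last move was either a 1-step or a 2-step
        }
        return dp[n];
    }
}
```

The whole algorithm is the array plus the recurrence: each entry is computed once and then reused, which is exactly the "organize subproblems into a table" idea.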
Greedy and Sorting
Greedy algorithms often use:
- Arrays / lists + sorting as a preprocessing step.
- Priority queues (heaps) for “always pick the current best” behavior.
Examples:
- Interval scheduling:
- Sort intervals by end time; then one pass over an array.
- Task scheduling / load balancing:
- Use a min‑heap keyed by current load to always pick the least‑loaded machine.
The efficiency of greedy strategies depends heavily on having cheap select‑best and update operations, which heaps provide.
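The interval-scheduling example can be sketched as follows — sort by end time, then make one greedy pass; the method returns the maximum number of non-overlapping intervals (method name assumed for illustration):

```java
import java.util.Arrays;

class IntervalScheduling {
    // Classic greedy: sorting by end time makes "keep the interval that
    // frees up earliest" a single linear scan.
    static int maxNonOverlapping(int[][] intervals) {
        Arrays.sort(intervals, (a, b) -> Integer.compare(a[1], b[1]));
        int count = 0;
        int lastEnd = Integer.MIN_VALUE;
        for (int[] interval : intervals) {
            if (interval[0] >= lastEnd) {   // starts after the last chosen interval ends
                count++;
                lastEnd = interval[1];
            }
        }
        return count;
    }
}
```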
Graph Algorithms and Advanced Structures
More advanced algorithms lean on more specialized structures:
- Union‑find (disjoint set):
- For dynamic connectivity, Kruskal's MST, counting components.
- Heaps / indexed heaps:
- [Dijkstra](/blog/algorithms/bfs-graph-traversal/dijkstra) and [Prim's Algorithm](/blog/algorithms/minimum-spanning-tree/prim) for shortest paths / MST.
- Segment trees / Fenwick trees:
- For range queries and point updates at scale.
- Tries:
- For prefix search, autocomplete, word dictionary problems.
These structures are often the difference between O(n log n) and O(n²) — or between
“passes large inputs” and “times out”.
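As one example of these specialized structures, here is a standard union-find sketch with path compression and union by rank — the two optimizations that make `find`/`union` effectively constant time (the class and method names follow common convention, not the article):

```java
class UnionFind {
    private final int[] parent;
    private final int[] rank;

    UnionFind(int n) {
        parent = new int[n];
        rank = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;   // each node starts as its own root
    }

    // Find with path compression: point every visited node straight at the root.
    int find(int x) {
        if (parent[x] != x) parent[x] = find(parent[x]);
        return parent[x];
    }

    // Union by rank; returns false if x and y were already connected.
    boolean union(int x, int y) {
        int rx = find(x), ry = find(y);
        if (rx == ry) return false;
        if (rank[rx] < rank[ry]) { int t = rx; rx = ry; ry = t; }
        parent[ry] = rx;   // attach the shallower tree under the deeper one
        if (rank[rx] == rank[ry]) rank[rx]++;
        return true;
    }

    boolean connected(int x, int y) {
        return find(x) == find(y);
    }
}
```

In Kruskal's MST, `union` returning false is precisely the signal to skip an edge that would create a cycle.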
System Design Connections
In backend systems, many familiar components are built on core data structures:
- Database indexes – trees (often B‑trees or variants).
- Caches – hash maps + linked lists (e.g. LRU), plus heaps for eviction policies.
- Message queues – queue / priority queue abstractions on top of logs and storage.
- Search engines – inverted indexes built on hash tables and trees.
Understanding data structures at this level makes it easier to:
- Reason about performance and scaling behavior.
- Choose the right component or design for a given workload.
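The "hash map + linked list" cache mentioned above is concrete enough to sketch: Java's `LinkedHashMap` combines exactly those two structures, and in access order it yields a minimal LRU cache (the class name `LruCache` is an assumption):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal LRU cache: LinkedHashMap keeps a hash table for O(1) lookup
// plus a doubly linked list for recency order, evicting the
// least-recently-used entry once capacity is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true);   // accessOrder = true: get() moves entries to the back
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```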
Takeaways
- Algorithms and data structures are two sides of the same coin.
- Each algorithm pattern leans on a small set of core structures:
- Arrays / strings, lists, queues, stacks, hash tables, trees, graphs, heaps, union‑find, and a few advanced range/query structures.
- When approaching a new problem, think in both directions:
- “Which algorithm pattern fits?” and
- “Which data structure makes that pattern efficient and clean?”
The rest of the Data Structures section will dive into each structure type in more detail, with operations, complexity, and real‑world use cases.