A comprehensive reference of terminology used throughout the StructWeave algorithm repository. Terms are organized by category to help you build a systematic understanding of algorithmic concepts.
Complexity Analysis

Big O Notation: Mathematical notation that describes how an algorithm's runtime or space requirements grow relative to input size. For example, O(n) means linear growth and O(n²) means quadratic growth.
Time Complexity: Measure of how an algorithm's execution time increases as input size grows. Helps predict performance and compare algorithm efficiency.
Space Complexity: Measure of how much additional memory an algorithm requires relative to input size. Includes auxiliary space for variables, recursive call stacks, and temporary data structures.
Amortized Analysis: Technique for analyzing the average performance of operations over a sequence of executions. Used when occasional operations are expensive but most are cheap (e.g., dynamic array resizing).
Best Case: The minimum time/space an algorithm requires on the most favorable input. Often less useful than average or worst case for practical analysis.
Average Case: Expected time/space complexity across all possible inputs of a given size. Requires understanding input distribution and probability.
Worst Case: Maximum time/space an algorithm requires on the most challenging input. Most commonly used for algorithm analysis because it guarantees an upper bound.
Constant Time, O(1): Operations that take the same amount of time regardless of input size. Examples include array access by index or hash table lookup.
Linear Time, O(n): Operations whose time grows proportionally with input size. Examples include iterating through an array or traversing a linked list.
Logarithmic Time, O(log n): Operations whose time grows logarithmically with input size. Common in divide-and-conquer algorithms like binary search or balanced tree operations.
Quadratic Time, O(n²): Operations whose time grows with the square of input size. Often seen in nested loops processing all pairs of elements.
Data Structures

Array: Contiguous block of memory storing elements of the same type. Provides O(1) random access but a fixed size in most languages.
Dynamic Array: Resizable array that automatically grows when capacity is exceeded. Amortized O(1) append despite occasional O(n) resizing.
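The doubling strategy behind amortized O(1) appends can be sketched as follows (a minimal illustration in Python, which is assumed here since the repository's language is not specified; real list implementations differ in detail):

```python
class DynamicArray:
    """Minimal dynamic array sketch: doubles capacity when full."""

    def __init__(self):
        self._capacity = 1
        self._size = 0
        self._data = [None] * self._capacity

    def append(self, value):
        if self._size == self._capacity:
            self._resize(2 * self._capacity)  # occasional O(n) copy
        self._data[self._size] = value        # usual O(1) write
        self._size += 1

    def _resize(self, new_capacity):
        new_data = [None] * new_capacity
        for i in range(self._size):           # copy existing elements
            new_data[i] = self._data[i]
        self._data = new_data
        self._capacity = new_capacity

    def __getitem__(self, i):
        if not 0 <= i < self._size:
            raise IndexError(i)
        return self._data[i]

    def __len__(self):
        return self._size
```

Because each resize doubles capacity, the total copying cost over n appends is O(n), i.e., O(1) per append on average.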
Linked List: Linear collection of nodes where each node contains data and a reference to the next node. Enables O(1) insertion/deletion at a known node but O(n) access by position.
Doubly Linked List: Linked list where each node has references to both the next and previous nodes. Allows bidirectional traversal and O(1) removal when you have a node reference.
Stack: Last-In-First-Out (LIFO) data structure supporting push and pop operations. Used for recursion, parsing, and depth-first traversal.
Queue: First-In-First-Out (FIFO) data structure supporting enqueue and dequeue operations. Used for breadth-first traversal and task scheduling.
Deque: Double-ended queue allowing insertion and removal at both ends. Useful for sliding window problems and maintaining monotonic sequences.
Hash Map: Data structure mapping keys to values using a hash function, giving O(1) average-case lookup, insertion, and deletion. Critical for complement search patterns.
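The complement search pattern mentioned above can be sketched with the classic two-sum problem (Python assumed; function name is illustrative):

```python
def two_sum(nums, target):
    """Return indices of two numbers summing to target, else None. O(n)."""
    seen = {}                       # value -> index of earlier element
    for i, x in enumerate(nums):
        complement = target - x
        if complement in seen:      # O(1) average-case lookup
            return seen[complement], i
        seen[x] = i
    return None
```

One pass with O(1) lookups replaces the O(n²) check of all pairs.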
Hash Set: Collection of unique elements using hashing for O(1) average-case membership testing. Used to track visited elements or detect duplicates.
Heap: Binary tree structure where parent nodes satisfy a priority relationship with their children. A min-heap keeps the smallest element at the root; a max-heap keeps the largest.
Priority Queue: Abstract data structure serving elements by priority rather than insertion order. Typically implemented with a heap for O(log n) operations.
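In Python (assumed here), the standard library's `heapq` module provides min-heap operations on a plain list, which is one common way to back a priority queue:

```python
import heapq

def k_smallest(nums, k):
    """Return the k smallest items using a min-heap."""
    heap = list(nums)
    heapq.heapify(heap)             # build the heap in O(n)
    return [heapq.heappop(heap)     # each pop is O(log n)
            for _ in range(k)]
```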
Tree: Hierarchical data structure with a root node and child nodes forming parent-child relationships. No cycles allowed.
Binary Tree: Tree where each node has at most two children (left and right). Foundation for many efficient data structures and algorithms.
Binary Search Tree (BST): Binary tree where each node's left subtree contains smaller values and its right subtree contains larger values. Supports O(log n) search when balanced.
Balanced Tree: Tree maintaining height bounds through rotations or restructuring. Examples include AVL trees and Red-Black trees, with guaranteed O(log n) operations.
Trie (Prefix Tree): Tree structure for storing strings where each path from the root represents a prefix. Enables efficient prefix matching and autocomplete functionality.
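A minimal trie sketch (Python assumed; method names are illustrative) showing insertion, exact lookup, and the prefix query that powers autocomplete:

```python
class Trie:
    """Each node maps a character to a child node."""

    def __init__(self):
        self.children = {}    # char -> Trie
        self.is_word = False  # marks the end of an inserted word

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

    def contains(self, word):
        node = self
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_word

    def starts_with(self, prefix):
        node = self
        for ch in prefix:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return True
```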
Graph: Collection of nodes (vertices) connected by edges. Can be directed (one-way) or undirected (two-way), weighted or unweighted.
Directed Graph: Graph where edges have direction, meaning edge (u,v) doesn't imply edge (v,u). Used to model one-way relationships or dependencies.
Undirected Graph: Graph where edges are bidirectional. If node A connects to B, then B connects to A.
Weighted Graph: Graph where edges carry associated numerical values (weights) representing cost, distance, or capacity.
Adjacency List: Graph representation using an array of lists where each index stores the neighbors of that vertex. Space-efficient for sparse graphs.
Adjacency Matrix: Graph representation using a 2D array where matrix[i][j] indicates edge presence or weight. Enables O(1) edge lookup but uses O(V²) space.
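Both representations can be built from the same edge list; a small sketch (Python assumed, undirected graph, function names illustrative):

```python
def to_adjacency_list(n, edges):
    """Adjacency list for an undirected graph on vertices 0..n-1."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return adj

def to_adjacency_matrix(n, edges):
    """Adjacency matrix: mat[u][v] == 1 iff an edge exists."""
    mat = [[0] * n for _ in range(n)]
    for u, v in edges:
        mat[u][v] = mat[v][u] = 1
    return mat
```

The list uses O(V + E) space; the matrix always uses O(V²) but answers "is (u,v) an edge?" in O(1).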
Algorithmic Patterns and Techniques

Two Pointers: Technique using two indices to traverse a data structure, often from opposite ends or at different speeds. Can reduce nested loops from O(n²) to O(n).
Sliding Window: Pattern maintaining a contiguous subrange (window) that slides through the data. Optimizes many subarray/substring problems from O(n²) to O(n).
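A representative sliding window sketch (Python assumed): the longest substring without repeated characters, where the left edge jumps forward whenever a duplicate enters the window:

```python
def longest_unique_substring(s):
    """Length of the longest substring with no repeated characters. O(n)."""
    last_seen = {}   # char -> most recent index
    left = best = 0
    for right, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1          # shrink past the duplicate
        last_seen[ch] = right
        best = max(best, right - left + 1)    # window is s[left:right+1]
    return best
```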
Fast and Slow Pointers: Two-pointer variant where the pointers move at different speeds. Used for cycle detection and finding middle elements.
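Floyd's cycle detection ("tortoise and hare") is the standard example: if the list has a cycle, a pointer moving two steps at a time must eventually meet one moving one step at a time. A sketch (Python assumed):

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.next = None

def has_cycle(head):
    """True iff the linked list starting at head contains a cycle. O(n), O(1) space."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next            # one step
        fast = fast.next.next       # two steps
        if slow is fast:            # pointers meet inside a cycle
            return True
    return False
```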
Binary Search: Divide-and-conquer algorithm that repeatedly halves the search space on sorted data. Achieves O(log n) search time.
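A standard iterative sketch (Python assumed):

```python
def binary_search(nums, target):
    """Index of target in sorted nums, or -1 if absent. O(log n)."""
    lo, hi = 0, len(nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if nums[mid] == target:
            return mid
        if nums[mid] < target:
            lo = mid + 1            # discard the left half
        else:
            hi = mid - 1            # discard the right half
    return -1
```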
Breadth-First Search (BFS): Graph/tree traversal exploring all nodes at the current depth before moving deeper. Uses a queue; finds shortest paths in unweighted graphs.
Depth-First Search (DFS): Graph/tree traversal exploring as deep as possible before backtracking. Uses a stack (or recursion); useful for path finding and cycle detection.
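The two traversals differ only in the frontier they maintain, a queue versus a stack (here recursion). A sketch over an adjacency-list graph (Python assumed):

```python
from collections import deque

def bfs_order(adj, start):
    """Visit order of BFS from start; adj maps vertex -> list of neighbors."""
    seen = {start}
    order = []
    q = deque([start])
    while q:
        u = q.popleft()             # FIFO: nearest layers first
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return order

def dfs_order(adj, start, seen=None):
    """Visit order of recursive DFS from start."""
    if seen is None:
        seen = set()
    seen.add(start)
    order = [start]
    for v in adj[start]:
        if v not in seen:
            order += dfs_order(adj, v, seen)  # go deep before siblings
    return order
```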
Dynamic Programming: Optimization technique that breaks a problem into overlapping subproblems, solves each once, and stores the results. Trades space for time efficiency.
Backtracking: Algorithmic technique for exploring all possible solutions by building candidates incrementally and abandoning those that fail constraints.
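The choose/explore/un-choose structure of backtracking is easiest to see on subset generation, a sketch (Python assumed):

```python
def subsets(nums):
    """All subsets of nums, built incrementally."""
    result = []
    path = []                       # current partial candidate

    def backtrack(start):
        result.append(path[:])      # record the candidate built so far
        for i in range(start, len(nums)):
            path.append(nums[i])    # choose
            backtrack(i + 1)        # explore
            path.pop()              # un-choose (backtrack)

    backtrack(0)
    return result
```

Constraint checks (pruning) would go just before the recursive call; here every candidate is valid, so none are abandoned.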
Greedy Algorithm: Approach making locally optimal choices at each step, hoping to find a global optimum. Works when local choices provably lead to a global solution.
Divide and Conquer: Strategy dividing a problem into smaller independent subproblems, solving them recursively, then combining the results. Examples include merge sort and quicksort.
Topological Sort: Linear ordering of a directed graph's vertices where every edge goes from earlier to later in the sequence. Only possible for acyclic graphs.
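Kahn's algorithm is one common way to compute a topological order, repeatedly removing vertices with in-degree zero; a sketch (Python assumed):

```python
from collections import deque

def topological_sort(n, edges):
    """Topological order of vertices 0..n-1, or None if the graph has a cycle."""
    adj = [[] for _ in range(n)]
    indegree = [0] * n
    for u, v in edges:              # edge u -> v
        adj[u].append(v)
        indegree[v] += 1
    q = deque(i for i in range(n) if indegree[i] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            indegree[v] -= 1        # "remove" u's outgoing edges
            if indegree[v] == 0:
                q.append(v)
    return order if len(order) == n else None   # leftover vertices => cycle
```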
Union-Find (Disjoint Set): Data structure tracking elements partitioned into disjoint sets. Supports efficient union and find operations for connectivity problems.
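A sketch with two standard optimizations, path compression (here the path-halving variant) and union by rank (Python assumed):

```python
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))   # each element starts as its own root
        self.rank = [0] * n

    def find(self, x):
        """Root of x's set, halving paths along the way."""
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        """Merge the sets of a and b; False if already in the same set."""
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.rank[ra] < self.rank[rb]:   # attach shorter tree under taller
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True
```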
Prefix Sum: Precomputed array where each index stores the sum of all elements up to that point. Enables O(1) range sum queries.
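A sketch using the common one-extra-slot convention, so the range formula needs no special cases (Python assumed):

```python
def build_prefix_sums(nums):
    """prefix[i] = sum of nums[:i]; prefix has len(nums) + 1 entries."""
    prefix = [0]
    for x in nums:
        prefix.append(prefix[-1] + x)
    return prefix

def range_sum(prefix, left, right):
    """Sum of nums[left..right] inclusive, in O(1)."""
    return prefix[right + 1] - prefix[left]
```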
Monotonic Stack: Stack maintaining elements in monotonic (increasing or decreasing) order. Solves next greater/smaller element problems in O(n).
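The next-greater-element problem shows the pattern: each index is pushed once and popped at most once, giving O(n) overall. A sketch (Python assumed):

```python
def next_greater(nums):
    """result[i] = first element to the right of i that is greater, else -1."""
    result = [-1] * len(nums)
    stack = []                      # indices whose values are decreasing
    for i, x in enumerate(nums):
        while stack and nums[stack[-1]] < x:
            result[stack.pop()] = x   # x is the answer for those indices
        stack.append(i)
    return result
```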
Sweep Line: Algorithm processing events in sorted order (usually by coordinate). Used for interval problems and computational geometry.
Problem-Solving Concepts

Brute Force: Straightforward approach checking all possibilities without optimization. Often O(n²) or worse, but useful as a starting point.
Optimal Solution: Most efficient solution, minimizing time and/or space complexity. Often the target after developing a brute-force approach.
Edge Case: Unusual or extreme input scenario that may break normal logic. Examples include empty input, a single element, all duplicates, or maximum constraints.
Base Case: Simplest problem instance in recursion that can be solved directly without further recursive calls. Prevents infinite recursion.
Recursive Case: Problem instance solved by reducing it to smaller instances of the same problem. Core of recursive problem solving.
Invariant: Property or condition that remains true throughout algorithm execution. Useful for proving correctness and designing loops.
Subproblem: Smaller instance of the original problem. Foundation of divide-and-conquer and dynamic programming approaches.
Overlapping Subproblems: Characteristic where the same subproblem appears multiple times. Indicates that dynamic programming may be applicable.
Optimal Substructure: Property where an optimal solution contains optimal solutions to its subproblems. Required for dynamic programming to work.
Memoization: Top-down dynamic programming technique caching the results of expensive function calls. Typically implemented with hash maps or arrays.
Tabulation: Bottom-up dynamic programming technique filling a table of subproblem solutions. Builds from the smallest subproblems to the final answer.
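The two DP styles can be contrasted on Fibonacci numbers, a sketch (Python assumed; the tabulated version keeps only the last two table entries):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    """Top-down: recurse on subproblems, caching each result."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_tab(n):
    """Bottom-up: build from the smallest subproblems upward."""
    if n < 2:
        return n
    prev, curr = 0, 1               # fib(0), fib(1)
    for _ in range(n - 1):
        prev, curr = curr, prev + curr
    return curr
```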
State: Current configuration of the problem at any point. In DP, represents a unique subproblem identified by its parameters.
State Transition: Relationship between states showing how to compute one state from others. Defines the recurrence relation in dynamic programming.
Recurrence Relation: Mathematical equation expressing a solution in terms of solutions to smaller instances. Foundation of recursive and DP approaches.
In-Place Algorithm: Algorithm modifying its input directly without allocating proportional auxiliary space. Typically O(1) extra space.
Stable Sort: Sorting algorithm preserving the relative order of equal elements. Important when sorting by multiple criteria.
Comparison-Based Sort: Algorithm making decisions only by comparing elements. Subject to an Ω(n log n) lower bound on worst-case comparisons.
Constraints: Limits on input size, value ranges, or problem parameters. Critical for choosing appropriate algorithms and data structures.
Trade-off: Compromise between competing objectives such as time vs. space, simplicity vs. performance, or memory vs. speed.
Time-Space Trade-off: Common optimization choice between using more memory for faster execution or less memory with slower execution.
Interview and Practice Terms

Follow-Up Question: Additional question exploring variations, optimizations, or extensions of the original problem. Tests depth of understanding.
Clarifying Question: Question asked of the interviewer to understand problem requirements, constraints, or expected behavior. Shows careful problem analysis.
Test Case: Specific input-output pair used to verify algorithm correctness. Should cover normal cases, edge cases, and corner cases.
Corner Case: Rare combination of edge conditions. More specific than an edge case, e.g., empty input with a special flag set.
Dry Run: Manual step-through of an algorithm with sample input to verify its logic. Essential debugging and understanding technique.
Pseudocode: High-level description of an algorithm using informal programming syntax. Helps with planning before implementing.
Code Smell: Pattern in code suggesting potential problems, inefficiency, or poor design. Indicates that refactoring may be needed.
Refactoring: Restructuring code without changing its external behavior, to improve readability, maintainability, or performance.
Additional Concepts

Bottleneck: Part of an algorithm consuming the most resources (time or space). Primary target for optimization efforts.
Auxiliary Space: Extra space used beyond input storage. Distinguishes an algorithm's own space usage from the space needed to hold its input.
Heuristic: Practical approach using experience or rules of thumb to find good-enough solutions when optimal solutions are impractical.
Pruning: Optimization eliminating unnecessary branches in a search tree. Dramatically improves backtracking and search algorithm performance.
Window: Contiguous subrange of a data structure. Central concept in the sliding window pattern for substring/subarray problems.
Complement: Value that, when combined with the current element, produces the target. Key concept in two-sum style problems using hash tables.
Pivot: Element used to partition data in divide-and-conquer algorithms. Its choice affects performance in algorithms like quicksort.
Sentinel: Special marker value simplifying boundary conditions. Examples include dummy head nodes or infinity values.
Cycle: Path in a graph or linked list that returns to its starting point. Cycle detection is a common interview problem.
Connected Component: Maximal set of vertices in an undirected graph where each pair is connected by some path. Found using BFS or DFS traversal.
Strongly Connected Component: Set of vertices in a directed graph where every vertex is reachable from every other vertex. Found with specialized algorithms like Tarjan's.
In-Degree / Out-Degree: For directed graphs, in-degree counts incoming edges and out-degree counts outgoing edges. Important for topological sort and dependency graphs.
This glossary is designed to be referenced while working through problems in the repository. Each term is explained with practical context to help you understand not just what concepts mean, but how they apply to real problem-solving.
When encountering unfamiliar terms in problem descriptions or strategy guides, check this glossary for clear definitions and usage examples.