Transcript for:
Key Patterns for Coding Interviews

These are the eight most important patterns for coding interviews, and this is backed by data. These patterns are split into the two major categories of LeetCode problems: linear, which work with linear data structures like arrays, linked lists, and strings, and nonlinear, which work with nonlinear data structures like trees and graphs. In this video, we're not only going to learn these patterns, but we're going to use pre-built code templates for these patterns that you can use for any LeetCode problem. There are, of course, lesser-used patterns we will not be addressing in this video. To see those, as well as a lot more in-depth content explaining everything you need to know about LeetCode, check out our website at algo.monster.

All patterns for linear structures build off of two pointers, which is why we're starting with it. The two-pointers pattern is important because it allows you to significantly reduce the time complexity of problems that involve traversing a linear structure, like arrays, linked lists, or strings. Instead of the brute force approach, where you might check every combination of elements, which could lead to O(n²) time complexity, two pointers often allows you to solve these problems in linear time, O(n), making it incredibly efficient.

There are two main methods of using two pointers: the pointers can go in the same direction or in opposite directions, and each is used for different kinds of problems. When the pointers move in the same direction, the technique is ideal for problems where you're processing or scanning the data in a single pass. A common use case is the fast and slow pointer approach for detecting cycles in linked lists or for finding the middle of a list. The slow pointer moves one step at a time, while the fast pointer moves two steps. This allows you to efficiently check for cycles or find key elements without re-traversing the structure.

When the pointers move in opposite directions, the technique is typically used in problems that involve finding pairs or comparing elements from opposite ends of the data structure. A classic example is finding two numbers in a sorted array that sum up to a target value. One pointer starts at the beginning of the array and the other at the end. By adjusting the pointers inward based on the sum, you can quickly zero in on the correct pair, eliminating unnecessary comparisons.

The sliding window pattern is an extension of the two-pointers pattern, but with a more focused purpose. While two pointers involves using two separate pointers to traverse and manipulate a data structure, sliding window refines this by maintaining a window of elements within the data, dynamically adjusting its size as you progress through the structure. Essentially, you're still using two pointers, but the goal is to manage a range or subset of elements that satisfies a specific condition, like a subarray or substring. In the sliding window approach, one pointer marks the start of the window and the other marks the end. As the end pointer advances, the window slides over the data structure. Depending on the problem, you either expand the window by moving the end pointer or contract it by moving the start pointer to meet certain criteria, like maximizing the length of a substring or maintaining a sum within a target value.

For example, in a problem where you need to find the longest substring without repeating characters, the window expands by moving the end pointer to include new characters. If the window encounters a repeated character, the start pointer moves forward to shrink the window and eliminate the duplicate. This dynamic process allows you to efficiently solve problems that require analyzing contiguous segments of an array or string without re-traversing the structure multiple times. Sliding window is extremely powerful for problems that require tracking contiguous sequences, whether it's for strings, subarrays, or linked lists. It adds more versatility to the two-pointers pattern by giving you control over a range of elements at any given point, and it's often combined with hash maps to keep track of the elements within the window. For a deeper dive into common problems solved with sliding window, check out algo.monster.
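To make this concrete, here's a minimal Python sketch of that longest-substring solution — an illustrative version written for this transcript, not the exact algo.monster template, and the function name is our own. The end pointer is the loop variable, the start pointer marks the left edge of the window, and a hash set tracks the characters currently inside it.

```python
def longest_substring_without_repeats(s: str) -> int:
    window = set()   # characters currently inside the window
    start = 0        # the start pointer: left edge of the window
    longest = 0
    for end, char in enumerate(s):   # the end pointer advances one step per pass
        while char in window:        # repeated character: shrink from the left
            window.remove(s[start])
            start += 1
        window.add(char)
        longest = max(longest, end - start + 1)
    return longest
```

For example, longest_substring_without_repeats("abcabcbb") returns 3, corresponding to the substring abc.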
Binary search is a fundamental search algorithm that allows you to find a target value in a sorted array by repeatedly dividing the list in half. It is also technically an extension of the two-pointers pattern, because it uses a left and a right pointer that halve the list step by step. Binary search starts by looking at the middle element and checking if that's the target. If it is the target, success: you can return it. If it's smaller than the target, you know the target is somewhere to the right, so we repeat the process on only the right half. If it's larger than the target, we repeat the process on the left half. Eventually, we converge on the target. This is an efficient algorithm because it allows us to find our target in logarithmic time, O(log n), rather than searching each element, which is O(n).

However, binary search is far more applicable than just sorted numerical lists. What makes binary search incredibly powerful is that it can be used on any list where there is a monotonic function. A monotonic function is any type of pattern where there is a consistent increase or decrease, which extends beyond just numbers. Put more simply, we are looking for any type of condition that can be used for sorting, not just strictly increasing numbers. For example, if we had a sorted list of only trues and falses, we could use binary search to find the first true occurrence. The binary search would work like this: you look at the middle of the list. If the middle value is true, you know that the first true must either be here or somewhere to the left, so you move your search to the left half. If the middle value is false, you know that the first true must be to the right, so you focus on the right half. This type of searching, based on where a condition is true and where it is false, can be reused for every single binary search problem. In fact, we have a free template for this on our website, algo.monster. All we need to do is find out what makes something feasible.

For example, the common LeetCode problem Minimum in Rotated Sorted Array doesn't appear to follow binary search at first. After all, the array isn't sorted. However, there's a monotonic condition we can use to apply binary search: any element that is less than or equal to the last element of the array is part of the rotated section, while any element greater than the last element is part of the original sorted section. We can mentally convert these to trues and falses, and just like that, we have the same true-and-false question we just solved. This can be applied to a ton of different questions. For more practice and an in-depth explanation, check out algo.monster.
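Here's a minimal sketch of that feasibility-style search applied to Minimum in Rotated Sorted Array — again an illustrative version, so the exact algo.monster template may differ. The feasible condition is whether an element is less than or equal to the last element of the array, and we look for the first index where it holds.

```python
def find_min_in_rotated(nums: list[int]) -> int:
    left, right = 0, len(nums) - 1
    boundary = -1                     # index of the first "true" seen so far
    while left <= right:
        mid = (left + right) // 2
        if nums[mid] <= nums[-1]:     # "true": part of the rotated section
            boundary = mid            # first true is here or to the left
            right = mid - 1
        else:                         # "false": first true must be to the right
            left = mid + 1
    return nums[boundary]
```

On [4, 5, 6, 7, 0, 1, 2], the search converges on index 4 and returns 0; on an array that was never rotated, it simply returns the first element.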
Now we'll take a look at nonlinear structure problems, which work with trees and graphs. In fact, instead of treating trees and graphs as two entirely different structures, you should imagine that a tree is just a graph with no cycles. The only difference between the two is that for a graph, which could have cycles, you need a visited set to keep track of nodes you've already visited. This makes the patterns we're about to talk about applicable to both.

For nonlinear problems, we'll start with breadth-first search, as it's the easiest algorithm to understand and build off of. Breadth-first search is an algorithm used to explore nodes in a graph or tree level by level. The idea is that it starts at a given node, commonly called the root, and explores all its immediate neighbors first before moving on to the neighbors' neighbors, and so on. To visit nodes in the correct order, BFS uses a queue, a data structure that works like a line: first in, first out. This ensures that nodes are visited starting from the closest to the root and moving outward. BFS is such a common algorithm that you will use it all the time. Luckily for you, instead of having to derive it from scratch each time, we have a template that we can reuse. If you want to see more about how we got this template, check it out at algo.monster.

Now let's see BFS in action. We're going to use this template to solve a common LeetCode problem: Level Order Traversal of a Binary Tree. The problem asks us to give all of the levels of the tree at each depth. This means we need to visit the nodes of the tree level by level, starting from the root and moving down, visiting all the nodes at each level before proceeding to the next. For example, given a tree with the root node 3 and children 9 and 20, where 20 has children 15 and 7, we need to return the values of the nodes level by level in the following order: first 3, then 9 and 20, and finally 15 and 7.

We can solve this problem using BFS, as it's ideal for level-by-level traversal. The idea is to use a queue to help explore the tree one level at a time. First, we add the root node to the queue. Then, for each level, we process all nodes currently in the queue by removing them and adding their values to the result. For each node we process, we also add its left and right children to the queue, so they can be processed at the next level. This ensures that we visit all nodes on one level before moving to the next. By repeating this process until the queue is empty, we traverse the entire tree level by level and collect the node values in the correct order. Finally, we return the list of node values grouped by level.
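Here's what that solution can look like in Python, assuming a basic TreeNode class — a minimal sketch along the lines of the description above rather than the exact algo.monster template.

```python
from collections import deque

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def level_order(root):
    if root is None:
        return []
    result = []
    queue = deque([root])            # start with the root in the queue
    while queue:
        level = []
        for _ in range(len(queue)):  # process exactly the nodes of one level
            node = queue.popleft()
            level.append(node.val)
            if node.left:            # children wait in line for the next level
                queue.append(node.left)
            if node.right:
                queue.append(node.right)
        result.append(level)
    return result
```

For the example tree, level_order(TreeNode(3, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))) returns [[3], [9, 20], [15, 7]].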
Depth-first search is an algorithm that builds on the foundation of BFS but explores the tree or graph in a different way. While BFS focuses on exploring nodes level by level, DFS dives as deep as possible into one path before exploring the next path. In BFS, we use a queue to process nodes in a breadth-first manner, ensuring all nodes at one level are explored before moving to the next. In contrast, DFS uses a stack to explore nodes in a depth-first manner, where you keep going down one branch until you reach the end, and then you come back to explore other branches. Most often, the stack DFS uses is the call stack, as most DFS implementations use recursion.

Where BFS is ideal for finding the shortest path or exploring nodes layer by layer, DFS is more suited for problems where you need to explore all paths or check every possible configuration, such as finding all paths in a tree, searching for a specific condition deep in the structure, or detecting cycles. The depth-first nature of DFS makes it more memory-efficient in some cases, but it's less suited for problems where the shortest path or shallow exploration is required, as it doesn't guarantee finding the closest solution first. Like BFS, DFS is a common algorithm that you will use all the time, so we have a template that we can reuse. Again, if you're curious how this works, check it out at algo.monster.

Now, let's solve a LeetCode problem with DFS. A common LeetCode problem that uses this pattern is called Number of Islands. This problem asks us to count the number of islands in a 2D grid where 1 represents land and 0 represents water. An island is formed by connecting adjacent 1s horizontally or vertically. We need to determine how many distinct islands exist by identifying connected components of 1s in the grid. For example, in a grid where some 1s are grouped together, each connected group is considered an island. Our goal is to traverse the grid, find all islands, and count them.

We can solve this problem using DFS to explore each island. The idea is to iterate through each cell in the grid. When we encounter a 1 (land), it indicates the start of an island. From there, we perform DFS to visit all connected 1s, all parts of the same island, by exploring the neighboring cells in all four directions, as deep as they go. During the DFS, we mark each visited cell by changing its value to 0 to ensure we don't count it again. Once all connected cells for that island are visited, we increment the island count and continue searching the rest of the grid. This process ensures that we explore each island exactly once and count it correctly. By the end, the total number of initiating DFS calls we make, not including the recursive calls, equals the number of islands.
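Here's a minimal sketch of that approach — illustrative rather than the exact algo.monster template. It follows the description directly: each initiating dfs call sinks one whole island by overwriting its 1s with 0s.

```python
def num_islands(grid: list[list[str]]) -> int:
    if not grid or not grid[0]:
        return 0
    rows, cols = len(grid), len(grid[0])

    def dfs(r, c):
        # stop at the grid edge, at water, or at already-visited cells
        if r < 0 or r >= rows or c < 0 or c >= cols or grid[r][c] != "1":
            return
        grid[r][c] = "0"      # mark visited so this cell is never counted again
        dfs(r + 1, c)         # explore all four directions as deep as they go
        dfs(r - 1, c)
        dfs(r, c + 1)
        dfs(r, c - 1)

    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == "1":   # a new, unvisited island
                count += 1
                dfs(r, c)           # sink the whole island
    return count
```

For instance, num_islands([["1", "1", "0"], ["0", "0", "0"], ["0", "0", "1"]]) returns 2: the two connected 1s in the top row form one island, and the lone 1 in the corner forms another.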
Backtracking is a pattern that many people struggle with, but in reality, it's just an extension of depth-first search. In DFS, we typically traverse a pre-built structure like a tree or graph where the nodes and connections are already defined. With backtracking, however, you often have to build the solution tree yourself as you explore different options, especially in combinatorial problems, where the structure of the tree isn't given explicitly but is generated dynamically as you make decisions. In a backtracking problem, you explore all possible solutions by making a series of decisions. Each decision represents a node in the tree, and each potential decision forms a branch. As you explore a decision, you either reach a valid solution or you hit a dead end. When a decision leads to a dead end, you backtrack to the previous decision, undo it, and try another path. Just like with all of our other patterns, we also have a template we are going to use here.

Now, let's walk through it by solving a LeetCode problem: Letter Combinations of a Phone Number. In this problem, we're given a string of digits, typically from 2 to 9, and we need to return all the possible letter combinations that the digits could represent using the mapping of a phone keypad. For example, if we have the digits 23, each digit corresponds to a set of letters: 2 maps to A, B, C, and 3 maps to D, E, F. Our goal is to generate every possible combination of these letters. So, for 23, the outputs are AD, AE, AF, BD, BE, BF, and so on.

Backtracking is the optimal approach for solving this problem because it allows us to explore each possible letter one at a time, pruning invalid configurations early. Rather than brute-forcing through every possible combination, backtracking lets us build the solution letter by letter, discarding incomplete paths that no longer make sense. The solution starts by checking if the input is empty, returning an empty list if so. It initializes an empty list, ans, to store the results, and defines a recursive helper function, dfs, to perform the depth-first search. The dfs function tracks the current position in the digits string using start_index and builds combinations step by step by adding letters to the path list. If start_index reaches the length of the digits, the current path is a complete combination, so it's joined into a string and added to ans. For each digit, the function iterates over its possible letters, retrieved from a digit-to-character dictionary, adds each letter to path, and calls dfs recursively for the next digit. After exploring a letter, it backtracks by removing the last letter from path to explore the next possibility. The DFS starts at the first digit with dfs(0, []) — the first index and an empty path — and once all combinations are generated, it returns the result list, ans. The overall time complexity is O(4^n), where n is the length of the input digits, as each digit can map to up to four letters. This approach efficiently explores all possible letter combinations and ensures that we backtrack after each combination is formed to explore the next one. If you want to see more problems solved with backtracking, or a more in-depth explanation of the algorithm, make sure to check out our website at algo.monster.
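The walkthrough above describes the code in prose; reconstructed in Python from that description, it looks roughly like this (the on-screen template may differ slightly in its details).

```python
def letter_combinations(digits: str) -> list[str]:
    if not digits:                    # empty input: return an empty list
        return []
    digit_to_char = {
        "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
        "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
    }
    ans = []                          # stores completed combinations

    def dfs(start_index, path):
        if start_index == len(digits):    # path is a complete combination
            ans.append("".join(path))
            return
        for letter in digit_to_char[digits[start_index]]:
            path.append(letter)           # choose a letter for this digit
            dfs(start_index + 1, path)    # recurse on the next digit
            path.pop()                    # backtrack: undo the choice

    dfs(0, [])
    return ans
```

Calling letter_combinations("23") produces ad, ae, af, bd, be, bf, cd, ce, cf — all nine combinations.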
You may notice a lot of questions with similar titles: anytime you see a question related to top k, k smallest, or k largest, you always use a priority queue. The most common implementation of a priority queue is the heap, which can be used to solve all of these questions. A heap is a special type of tree where the elements are organized in a specific way. There are two kinds of heaps. In a min heap, the smallest value is always at the top, aka the root, and each parent is smaller than its children. In a max heap, the largest value is at the root, and each parent is larger than its children.

One thing that is counterintuitive is that we use a min heap to find the k largest values and a max heap to find the k smallest values. The reason is best explained with an example. Let's say we want to find the three smallest values in a list of seven. We first create a max heap of size three, then go through and add the first three values to it, 3, 6, and 1, to fill it out. Once we hit the fourth value, how do we know whether to add it or not? Well, in a max heap, the largest value is at the root, and because we want the three smallest values, all we need to know is whether this new value is smaller than our current largest value. In this case, it is, so we remove the value at the root and add our new value. As you can see, this is why we use a max heap for finding the k smallest values, and vice versa for the k largest values. The reason heaps are efficient is that we always know where the largest or smallest value is: at the root. Heaps allow access to the smallest or largest element in O(1) time, and inserting or removing elements takes O(log n) time, both of which are better than having to go through every element in a list.

Lastly, dynamic programming, or DP, is considered the hardest pattern to learn. This is mostly because there are so many different variations and nuances to the problems that fall under this category. DP problems typically involve optimizing a solution by breaking it down into overlapping subproblems, storing the results of these subproblems, and reusing them to avoid redundant computations. There are two major ways to do DP: top-down and bottom-up.

In a top-down approach, you start from the main problem and recursively solve its subproblems, storing the results of those subproblems to avoid solving the same subproblem multiple times. This may seem complicated, but you actually already know how to do most of this, because top-down is basically just backtracking with the added step of memoization. Memoization is just a fancy word for a simple concept: it means saving a previous function call's result in a dictionary and reading from it when we make the exact same call again.

In a bottom-up approach, you solve the smallest subproblems first and use their solutions to build up to the solution for the larger problem. Instead of relying on recursion, bottom-up dynamic programming typically involves filling out a table where each entry represents the solution to a smaller subproblem. You start with the base cases and iteratively compute the solutions to progressively larger subproblems until you reach the final solution. This approach avoids the overhead of recursion and memoization, often resulting in more efficient time and space usage. Bottom-up is commonly used when you know the dependencies between subproblems and can structure the solution iteratively.

Dynamic programming is far too complex to dive into fully in this video. Luckily, we have a full section dedicated to it on our website at algo.monster, as well as a free video dedicated to it. Both links are in the description.
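Although the full details are deferred to the website, memoization itself is small enough to show here. Below is a minimal sketch of both approaches using the classic Climbing Stairs problem — our own illustrative example, not one covered in the video.

```python
def climb_stairs_top_down(n, memo=None):
    # Top-down: recursion (the backtracking skeleton) plus memoization.
    if memo is None:
        memo = {}
    if n <= 2:
        return n              # base cases: 1 way for one step, 2 ways for two
    if n in memo:
        return memo[n]        # reuse a previously computed subproblem
    memo[n] = climb_stairs_top_down(n - 1, memo) + climb_stairs_top_down(n - 2, memo)
    return memo[n]

def climb_stairs_bottom_up(n):
    # Bottom-up: fill a table from the base cases toward the answer.
    if n <= 2:
        return n
    dp = [0] * (n + 1)
    dp[1], dp[2] = 1, 2
    for i in range(3, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]   # each entry builds on smaller subproblems
    return dp[n]
```

Both return 8 for n = 5; the top-down version is recursion with a dictionary cache bolted on, while the bottom-up version never recurses at all.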