The Fast Fourier Transform (FFT): Most Ingenious Algorithm Ever? | Reducible | 2020-11-14

In this video, we take a look at one of the most beautiful algorithms ever created: the Fast Fourier Transform (FFT). This is a tricky algorithm to understand, so we take a look at it in a context that we are all familiar with: polynomial multiplication. You will see how the core ideas of the FFT can be "discovered" through asking the right questions. The key insight presented in this video is that polynomial multiplication can be improved significantly by multiplying polynomials in a special value representation. The challenge that presents itself is the problem of converting a polynomial from the standard coefficient representation to the value representation.
We see that the FFT is an incredibly efficient recursive algorithm that performs this task, and we also discover that a slightly tweaked FFT (Inverse FFT) can also solve the reverse problem of interpolation. If this video doesn't blow your mind, I don't know what will.
0:00 Introduction 2:19 Polynomial Multiplication 3:36 Polynomial Representation 6:06 Value Representation Advantages 7:07 Polynomial Multiplication Flowchart 8:04 Polynomial Evaluation 13:49 Which Evaluation Points? 16:30 Why Nth Roots of Unity? 18:28 FFT Implementation 22:47 Interpolation and Inverse FFT 26:49 Recap
Also, a subtle mistake that a commenter made me aware of: at 26:40, instead of replacing w with (1/n * e^{-2 pi i / n}), the right way to do this is to take the final output of the IFFT at the end of the recursion and divide by n.
So the full change is w = e^{-2 pi i / n}, and then, somewhere outside the scope of the IFFT function, ifft_result = 1/n * IFFT(values).
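For readers following along in code, here is a minimal sketch of what the corrected version looks like in Python. This is not the video's implementation; the function names and the sign convention for w are assumptions, but it shows the 1/n scaling happening exactly once, outside the recursion, as described above.

```python
import cmath

def fft(coeffs, inverse=False):
    # Recursive radix-2 FFT; len(coeffs) must be a power of 2.
    n = len(coeffs)
    if n == 1:
        return list(coeffs)
    # Forward transform uses w = e^{2 pi i / n}; the inverse uses
    # w = e^{-2 pi i / n}, with the 1/n factor applied by the caller.
    w = cmath.exp((-2j if inverse else 2j) * cmath.pi / n)
    even = fft(coeffs[0::2], inverse)
    odd = fft(coeffs[1::2], inverse)
    out = [0j] * n
    for k in range(n // 2):
        twiddle = w ** k * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

def ifft(values):
    # Per the correction: divide by n once, after the recursion finishes.
    n = len(values)
    return [v / n for v in fft(values, inverse=True)]
```

Round-tripping a coefficient list through fft and then ifft recovers the original coefficients, which is a quick way to sanity-check the normalization.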
The treatment of the FFT in this video is inspired by several well known references, mainly Introduction to Algorithms (Cormen et al.) and Algorithms (Papadimitriou et al.).
Music: Lift Motif by Kevin MacLeod is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/...) Source: http://incompetech.com/music/royalty-... Artist: http://incompetech.com
Chapters: 0:00 Introduction and Graph Representation 1:37 Greedy Approach 4:12 Uniform Cost Search (UCS) 7:13 Greedy vs UCS 8:52 A* Search 11:09 Optimality of A* Search 14:45 Sponsorship
In this video, we explore the algorithms behind how modern mapping applications find routes that optimize time, distance, and cost. We motivate A* search from a first principle approach, building up to showing how it is a harmonious combination of both a careful, rigorous approach and a greedy, opportunistic approach.
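The "careful, rigorous" UCS component and the "greedy, opportunistic" heuristic component combine into A* by ordering the frontier on g(n) + h(n). A minimal sketch, assuming an adjacency-list graph and a caller-supplied heuristic function (both encodings are my own, not the video's code):

```python
import heapq

def a_star(graph, h, start, goal):
    # graph: {node: [(neighbor, edge_cost), ...]}; h: heuristic function.
    # Returns (cost, path), or (inf, []) if the goal is unreachable.
    frontier = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if g > best.get(node, float('inf')):
            continue  # stale entry; a cheaper route was found already
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best.get(nbr, float('inf')):
                best[nbr] = ng
                # Priority = cost so far + heuristic estimate to goal.
                heapq.heappush(frontier, (ng + h(nbr), ng, nbr, path + [nbr]))
    return float('inf'), []
```

With h identically zero this degrades to uniform cost search, which mirrors how the video builds A* up from UCS.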
Animations created jointly by Nipun Ramakrishnan and Jesús Rascón
This video wouldn't be possible without the open source library manim created by 3blue1brown and maintained by Manim Community. The Manim Community Developers. (2024). Manim – Mathematical Animation Framework (Version v0.18.1) [Computer software]. https://www.manim.community/
Here is link to the repository that contains the code used to generate the animations in this video: github.com/nipunramk/Reducible
Special Thanks to the Following Patreons: Brian Cloutier George Sharabidze kerrytazi Maggie Nguyen Adam Dřínek Andreas justin Matt Q Ram Kanhirotentavida Rocky Winston Durand Asha Ramakrishnan Eugene Tulushev Michael Nawenstein Richard Wells Zac Landis

The Discrete Fourier Transform: Most Important Algorithm Ever? | Reducible | 2023-01-23

Go to nordvpn.com/reducible to get the two year plan with an exclusive deal PLUS 1 bonus month free! It’s risk free with NordVPN’s 30 day money back guarantee!
The Discrete Fourier Transform (DFT) is one of the most essential algorithms that power modern society. In this video, we go through a discovery-oriented approach to deriving the core ideas of the DFT.
We start by defining ideal conditions and properties of our transform. We define a similarity measure and come up with an idea that the transform we are looking for is fundamentally a matrix multiplication. Within the context of simple cosine waves, we develop an initial version of our transform using cosine wave analysis frequencies that seems to fit the parameters of what we are looking for. But we discover some key issues with that transform involving the phase of the signal.
To solve the phase problem, we take a look a sine wave analysis frequencies and observe how using a combination of sine and cosine wave analysis frequencies perfectly solves the phase problem. The final step involves representing these sine and cosine wave analysis frequencies as complex exponentials. We finish the video by analyzing some interesting properties of the DFT and their implications.
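The "transform as matrix multiplication" idea lands on rows of complex exponentials: row k measures similarity against the k-th analysis frequency. A minimal sketch, assuming the common sign convention X[k] = sum_n x[n] e^{-2 pi i k n / N} (the video's convention may differ):

```python
import cmath

def dft_matrix(n):
    # Row k is the k-th analysis frequency as a complex exponential;
    # taking a dot product with it measures similarity to that frequency.
    return [[cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n)]
            for k in range(n)]

def dft(signal):
    # The DFT is literally a matrix-vector multiplication.
    n = len(signal)
    m = dft_matrix(n)
    return [sum(m[k][j] * signal[j] for j in range(n)) for k in range(n)]
```

Because each row pairs a cosine (real part) with a sine (imaginary part), the phase problem discussed above disappears: a shifted cosine still produces a peak of the same magnitude at its frequency bin.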
Chapters: 0:00 Intro 1:50 Sampling Continuous Signals 3:41 Shannon-Nyquist Sampling Theorem 4:36 Frequency Domain Representations 5:38 Defining Ideal Behavior 6:00 Measuring SImilarity 6:57 Analysis Frequencies 8:58 Cosine Wave Analysis Frequency Transform 9:58 A Linear Algebraic Perspective 13:51 Sponsored Segment 15:20 Testing our "Fake Fourier Transform" 18:33 Phase Problems 19:18 Solving the Phase Problem 21:26 Defining the True DFT 28:21 DFT Recap/Outro
Animations created jointly by Nipun Ramakrishnan and Jesús Rascón.
This video wouldn't be possible without the open source library manim created by 3blue1brown and maintained by Manim Community. The Manim Community Developers. (2022). Manim – Mathematical Animation Framework (Version v0.11.0) [Computer software]. https://www.manim.community/
Here is link to the repository that contains the code used to generate the animations in this video: github.com/nipunramk/Reducible
Music in this video comes from Jesús Rascón and Aakash Gandhi
Big thanks to the community of Patreons that support this channel. Special thanks to the following Patreons: Nicolas Berube kerrytazi Brian Cloutier Andreas Matt Q Winston Durand Adam Dřínek Burt Humburg Ram Kanhirotentavida Jorge Dan Eugene Tulushev Mutual Information Sebastian Gamboa Zac Landis Richard Wells Asha Ramakrishnan

The Traveling Salesman Problem: When Good Enough Beats Perfect | Reducible | 2022-07-26

Use the code "reducible" to get CuriosityStream for less than $15 a year! curiositystream.com/reducible
The Traveling Salesman Problem (TSP) is one of the most notorious problems in all of computer science. In this video, we dive into why the problem presents such a challenge for computer scientists and some of the clever methods used to solve the problem.
We start by showing why brute-force solutions, and even optimizations for exact solutions, can't reliably be used for large instances of the problem. We then discuss some heuristic-based approaches, such as nearest neighbors, greedy, and Christofides, that produce solutions reasonably close to the optimal one.
But after finding a candidate solution, we also show how one might improve this solution via local search. We discuss some interesting algorithms for tour improvements including 2-opt, random swapping, and 3-opt improvements. Finally, we show some clever ways to analyze the search space, including simulated annealing and ant colony optimization.
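A minimal sketch of the 2-opt improvement mentioned above: repeatedly reverse a segment of the tour whenever doing so shortens it, which is exactly the move that uncrosses two crossing edges. The distance-matrix encoding is my own assumption, not the video's code:

```python
def tour_length(tour, dist):
    # Total cycle length, including the edge closing the tour.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, dist):
    # Local search: reverse tour[i:j] whenever that shortens the tour,
    # until no single reversal improves it (a 2-opt local optimum).
    improved = True
    tour = tour[:]
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(candidate, dist) < tour_length(tour, dist) - 1e-12:
                    tour, improved = candidate, True
    return tour
```

On a unit square, a crossing tour like 0-2-1-3 gets uncrossed to the perimeter tour of length 4.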
This video wouldn't be possible without the open source library manim created by 3blue1brown and maintained by Manim Community. The Manim Community Developers. (2022). Manim – Mathematical Animation Framework (Version v0.11.0) [Computer software]. https://www.manim.community/
Here is a link to the repository that contains the code used to generate the animations in this video: github.com/nipunramk/Reducible
Music in this video comes from Jesús Rascón and Aakash Gandhi
Big thanks to the community of Patreons that support this channel. Special thanks to the following Patreons: Andjela Arsic Andreas Adam Dřínek Burt Humburg Brian Cloutier Eugene Tulushev kerrytazi Matt Q Mutual Information Ram K Richard Wells Sebastian Gamboa Winston Durand Zac Landis

PageRank: A Trillion Dollar Algorithm | Reducible | 2022-05-23

Visit brilliant.org/Reducible to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription.
In the late 1990s, two PhD students, Larry Page and Sergey Brin, came up with an algorithm that revolutionized search: PageRank. In this video, we discuss some of the beautiful mathematical ideas and complexities of PageRank. Fundamentally, PageRank is all about calculating stationary distributions of Markov chains. We talk about some of the challenges of computing these distributions, as well as the adjustments PageRank made to these ideas that let it dominate the search landscape.
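The stationary distribution can be computed by power iteration on the adjusted ("Google matrix") chain. A minimal sketch, assuming the usual damping/teleportation formulation; the parameter names and the uniform handling of dangling pages are my own choices:

```python
def pagerank(links, damping=0.85, iters=100):
    # links: {page: [pages it links to]}. Power iteration: with
    # probability `damping` follow a random outgoing link; otherwise
    # teleport to a uniformly random page. The teleportation term is the
    # adjustment that guarantees a unique stationary distribution.
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # Dangling page: spread its rank uniformly over all pages.
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank
```

On a symmetric two-page graph the ranks settle to 0.5 each, and the ranks always sum to 1, since they form a probability distribution.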
Animations created jointly by Nipun Ramakrishnan and Jesús Rascón.
References:
Original PageRank Paper: http://ilpubs.stanford.edu:8090/422/1/1999-66.pdf
Proof of the Ergodic Theorem: https://math.uchicago.edu/~may/VIGRE/VIGRE2007/REUPapers/FINALFULL/Casarotto.pdf
General inspiration/further reading for Markov chains: Ch 1 of Probability in Electrical Engineering and Computer Science by Jean Walrand
Markov chains and PageRank: https://www2.math.upenn.edu/~kazdan/312F12/JJ/MarkovChains/markov_google.pdf
This video wouldn't be possible without the open source library manim created by 3blue1brown and maintained by Manim Community. The Manim Community Developers. (2022). Manim – Mathematical Animation Framework (Version v0.11.0) [Computer software]. https://www.manim.community/
Here is a link to the repository that contains the code used to generate the animations in this video: github.com/nipunramk/Reducible
Music in this video comes from Jesús Rascón and Aakash Gandhi
Big thanks to the community of Patreons that support this channel. Special thanks to the following Patreons: Andreas Adam Dřínek Burt Humburg Eugene Tulushev Matt Q Winston Durand Andjela Arsic Mutual Information Richard Wells Sebastian Gamboa Zac Landis

How PNG Works: Compromising Speed for Quality | Reducible | 2022-03-29

Visit brilliant.org/Reducible to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription.
Chapters: 0:00 Introduction 1:35 Exploiting redundancy 2:09 Huffman Codes 4:22 Run Length Encoding 5:23 Lempel-Ziv Schemes (LZSS) 13:50 Transforming the problem 14:36 Filtering 17:46 How PNG Picks Filters 18:58 Heuristics for Filtering 21:06 PNG Encoding/Decoding 23:04 The Compromise: PNG Is Slow 23:31 Quite OK Image (QOI) Format 28:26 Benchmark Results 29:27 Final Thoughts 30:36 Brilliant Sponsorship
Developed in the mid-1990s, the PNG encoding standard for images is now the most used image format on the web. One aspect that makes PNG quite attractive for users is that image quality is perfectly preserved as a result of lossless compression. In this video, we dive into how exactly PNG is able to compress images so well while also retaining all the information in the image.
One thing we discover during this journey is how slow PNG is in practice. Towards the end, we discuss a relatively new image format called QOI that achieves compression somewhat comparable to PNG, but is 20-50x faster. Perhaps the most remarkable aspect of QOI is how a simple set of rules leads to surprisingly good compression on a variety of images.
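As one concrete taste of the filtering step PNG applies before compression, here is a sketch of the "Sub" filter, which stores each byte as its difference (mod 256) from the corresponding byte of the pixel to its left; a smooth gradient row becomes mostly small, highly compressible values. The helper names and the single-byte-per-pixel default are my own simplifications:

```python
def sub_filter(row, bpp=1):
    # PNG "Sub" filter: each byte minus the byte `bpp` positions earlier
    # (treated as 0 at the left edge), reduced mod 256.
    return bytes((row[i] - (row[i - bpp] if i >= bpp else 0)) % 256
                 for i in range(len(row)))

def sub_unfilter(filtered, bpp=1):
    # Decoding reverses the filter byte by byte, left to right.
    out = bytearray()
    for i, b in enumerate(filtered):
        left = out[i - bpp] if i >= bpp else 0
        out.append((b + left) % 256)
    return bytes(out)
```

Note how the slowly varying row below filters to mostly tiny deltas, which is exactly what the later LZSS/Huffman stages exploit.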
Animations created jointly by Nipun Ramakrishnan and Jesús Rascón.
*Minor error in QOI section: QOI_DIFF_SMALL should have a tag of 01, not 10 (QOI_DIFF_MED has the tag 10).*
This video wouldn't be possible without the open source library manim created by 3blue1brown and maintained by Manim Community. The Manim Community Developers. (2022). Manim – Mathematical Animation Framework (Version v0.11.0) [Computer software]. https://www.manim.community/
Here is a link to the repository that contains the code used to generate the animations in this video: github.com/nipunramk/Reducible
Music in this video comes from Jesús Rascón, Aakash Gandhi, and Myuu.
Big thanks to the community of Patreons that support this channel. Special thanks to the following Patreons: Andreas Adam Dřínek Burt Humburg Matt Q Winston Durand Andjela Arsic Mutual Information Richard Wells Sebastian Gamboa Zac Landis

The Unreasonable Effectiveness of JPEG: A Signal Processing Approach | Reducible | 2022-01-19

Visit brilliant.org/Reducible to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription.
Chapters: 00:00 Introducing JPEG and RGB Representation 2:15 Lossy Compression 3:41 What information can we get rid of? 4:36 Introducing YCbCr 6:10 Chroma subsampling/downsampling 8:10 Images represented as signals 9:52 Introducing the Discrete Cosine Transform (DCT) 11:32 Sampling cosine waves 12:43 Playing around with the DCT 17:38 Mathematically defining the DCT 21:02 The Inverse DCT 22:45 The 2D DCT 23:49 Visualizing the 2D DCT 24:35 Introducing Energy Compaction 26:05 Brilliant Sponsorship 27:23 Building an image from the 2D DCT 28:20 Quantization 30:23 Run-length/Huffman Encoding within JPEG 32:56 How JPEG fits into the big picture of data compression
The JPEG algorithm is rather complex, and in this video we break down its core parts: color spaces, YCbCr, chroma subsampling, the discrete cosine transform, quantization, and lossless encoding. The majority of the focus is on the mathematical and signal processing insights that led to advancements in image compression, and the big themes in compression as a whole that we can take away from it.
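To make the DCT step concrete, here is a sketch of an unnormalized 1D DCT-II, the transform at the heart of JPEG. JPEG's actual version adds normalization factors and operates on 8x8 blocks in 2D, both of which are omitted here; this is an illustration, not the standard's definition:

```python
import math

def dct_ii(signal):
    # Unnormalized DCT-II: X[k] = sum_n x[n] * cos(pi/N * (n + 0.5) * k).
    # Sampling the cosine at half-integer offsets matches the video's
    # "sampling cosine waves" picture.
    n = len(signal)
    return [sum(signal[j] * math.cos(math.pi / n * (j + 0.5) * k)
                for j in range(n))
            for k in range(n)]
```

Energy compaction is easy to see: a flat 8-sample signal puts all of its energy into the single DC coefficient, with every other coefficient exactly zero.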
Animations created jointly by Nipun Ramakrishnan and Jesús Rascón.
This video wouldn't be possible without the open source library manim created by 3blue1brown and maintained by Manim Community. The Manim Community Developers. (2021). Manim – Mathematical Animation Framework (Version v0.11.0) [Computer software]. https://www.manim.community/
Here is a link to the repository that contains the code used to generate the animations in this video: github.com/nipunramk/Reducible
All music in the video is from Aakash Gandhi

How Computers Draw Weird Shapes (Marching Squares) | Reducible | 2021-11-05

In this video, we start with an interesting animation of blobby objects, which we introduce as metaballs. There are a lot of surprisingly intricate ideas behind making these objects render on a screen. We'll see how folks in computer graphics attempted to solve this problem through a really elegant algorithm called marching squares. Marching squares is a really powerful algorithm that allows you to render any implicit function. But what's even more impressive, in my opinion, is the many clever shifts in perspective that allowed a vague problem such as this one to be transformed into a clear, well-defined, and solvable problem.
0:00 Introduction 3:29 Circles and Ellipses 4:57 Defining the Problem 6:00 A Guessing Game 8:29 Contours around Two Points 10:35 Sampling The Space 12:32 Breaking Down Cases 15:00 A Clever Optimization 17:20 How Marching Squares Works 18:59 Parallel Marching Squares 20:21 How Do Metaballs Work? 24:59 Marching Cubes 25:58 Some Parting Thoughts
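The case-breakdown step can be sketched as follows: sample the implicit function at the four corners of each grid cell and pack the inside/outside results into a 4-bit case index, which then selects the contour segments to draw. The corner ordering here is my own assumption; any consistent ordering works:

```python
def marching_squares_cases(f, xs, ys, iso=0.0):
    # For each grid cell, build a 4-bit index from which corners satisfy
    # f(x, y) <= iso. Index 0 (all outside) and 15 (all inside) mean the
    # contour does not cross the cell; the other 14 cases each map to a
    # fixed set of line segments in the full algorithm.
    cases = {}
    for i in range(len(xs) - 1):
        for j in range(len(ys) - 1):
            corners = [(xs[i], ys[j]), (xs[i + 1], ys[j]),
                       (xs[i + 1], ys[j + 1]), (xs[i], ys[j + 1])]
            idx = 0
            for bit, (x, y) in enumerate(corners):
                if f(x, y) <= iso:
                    idx |= 1 << bit
            cases[(i, j)] = idx
    return cases
```

Running this on an implicit circle shows far-away cells classified as case 0 while cells straddling the boundary get one of the mixed cases.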
This video wouldn't be possible without the open source library manim created by 3blue1brown and maintained by Manim Community. The Manim Community Developers. (2021). Manim – Mathematical Animation Framework (Version v0.11.0) [Computer software]. https://www.manim.community/
Here is a link to the repository that contains the code used to generate the animations in this video: github.com/nipunramk/Reducible
All music in the video is from Aakash Gandhi

Huffman Codes: An Information Theory Perspective | Reducible | 2021-07-30

Huffman Codes are one of the most important discoveries in the field of data compression. When you first see them, they almost feel obvious in hindsight, mainly due to how simple and elegant the algorithm ends up being. But there's an underlying story, often missed, of how Huffman discovered them and how he built the idea from early work in information theory. This video is all about how information theory inspired the first algorithms in data compression, which later provided the groundwork for Huffman's landmark discovery.
0:00 Intro 2:02 Modeling Data Compression Problems 6:20 Measuring Information 8:14 Self-Information and Entropy 11:03 The Connection between Entropy and Compression 16:47 Shannon-Fano Coding 19:52 Huffman's Improvement 24:10 Huffman Coding Examples 26:10 Huffman Coding Implementation 27:08 Recap
This video is supported by a community of Patreons Special Thanks to the following Patreons: Burt Humburg Michael Nawenstein Richard Wells Sebastian Gamboa Zac Landis
Corrections: At 9:34, the entropy was calculated with log base 10 instead of the expected log base 2. The correct values should be H(P) = 1.49 bits and H(P) = 0.47 bits.
At 16:23, all logarithms should be negated, I totally forgot about the negative sign.
At 27:24, I should have said the least likely symbols should have the *longest* encodings.
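With those corrections in mind, the entropy computation is a one-liner; note the negative sign and the log base 2, the two things the corrections above fix:

```python
import math

def entropy(probs):
    # Shannon entropy in bits: H(P) = -sum p * log2(p), skipping p = 0
    # terms (their contribution tends to 0).
    return -sum(p * math.log2(p) for p in probs if p > 0)
```

A fair coin gives exactly 1 bit, four equally likely symbols give 2 bits, and a certain outcome gives 0.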
This video wouldn't be possible without the open source library manim created by 3blue1brown: github.com/3b1b/manim
Here is a link to the repository that contains the code used to generate the animations in this video: github.com/nipunramk/Reducible
Music attributions: Midsummer Sky by Kevin MacLeod is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/... Source: http://incompetech.com/music/royalty-... Artist: http://incompetech.com Luminous Rain by Kevin MacLeod is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/... Source: http://incompetech.com/music/royalty-... Artist: http://incompetech.com
This video put together a variety of great resources on information theory, data compression, Shannon-Fano codes, and Huffman codes. Here are some links that I found most helpful while doing research.
A really nice resource on data compression and information theory: http://www.ece.virginia.edu/~ffh8x/moi/compression.html
Implementation of Huffman Codes: techiedelight.com/huffman-coding

A Strange But Elegant Approach to a Surprisingly Hard Problem (GJK Algorithm) | Reducible | 2021-03-24

In 1988, three engineers came together and developed one of the most clever solutions to the problem of detecting when two complex objects collide. Their solution, the Gilbert-Johnson-Keerthi (GJK) algorithm, named after the authors, made an incredible impact in the fields of robotics, control, and computer graphics. This video is about understanding this ingenious algorithm from first principles.
The video covers a broad range of topics, from Minkowski sums and differences to support functions to the full implementation of the 2D GJK algorithm. But what I hope you get out of this is an appreciation of the incredible shifts in perspective that lead to the final algorithm. Coming up with the algorithm is an amazing feat and useful for specific applications, but the overarching problem solving techniques that come through in the journey to the solution are truly invaluable.
0:00 Introducing the Problem 2:02 Convexity 3:15 Infinite Point Perspective 4:07 Minkowski Sums and Differences 6:37 Triangles inside Minkowski Differences 7:41 Simplexes 8:57 Support Functions 13:05 Core GJK Algorithm: Broad Perspective 17:15 Remaining Key Questions 17:56 How to determine if a point passed the origin? 19:10 The line case 20:41 The triangle case 26:25 GJK Implementation 29:38 Recap and quick note about original GJK paper
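The support-function idea can be sketched for convex polygons given as point lists: the support point of the Minkowski difference A - B in direction d is the farthest point of A along d minus the farthest point of B along -d, which is what lets GJK probe the difference without ever constructing it. The function and parameter names are my own, not the video's:

```python
def support(shape_a, shape_b, direction):
    # Support point of the Minkowski difference A - B along `direction`:
    # farthest vertex of A along d, minus farthest vertex of B along -d.
    dx, dy = direction
    far_a = max(shape_a, key=lambda p: p[0] * dx + p[1] * dy)
    far_b = max(shape_b, key=lambda p: -(p[0] * dx + p[1] * dy))
    return (far_a[0] - far_b[0], far_a[1] - far_b[1])
```

GJK calls this repeatedly with updated directions to build a simplex that tries to enclose the origin; the origin lies inside the Minkowski difference exactly when the shapes overlap.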
This video is supported by a community of Patreons Special Thanks to the following Patreons: Burt Humburg Justin Hiester Michael Nawenstein Richard Wells Sebastian Gamboa Zac Landis
We start off with the basics of animation and then branch off into ideas in discrete and continuous collision detection. We break them down in the context of some simple simulations of a small number of particles, but scaling up these simulations is another challenge entirely. We present big ideas in broad phase optimization schemes for speeding up collision detection including the sweep and prune algorithm, uniform grids, K-D trees, and bounding volume hierarchies.
0:00 Introduction 1:25 Intro to Animation 2:46 Discrete Collision Detection and Response 5:50 Implementation 6:50 Discrete Collision Detection Limitations 8:10 Continuous Collision Detection 11:42 Two Particle Simulations 13:13 Scaling Up Simulations 15:42 Sweep and Prune Algorithm 19:05 Uniform Grid Space Partitioning 20:43 KD Trees 23:51 Bounding Volume Hierarchies 27:12 Recap
Correction: At 9:02, the linear interpolation equations should be x(t) = t * x(1) + (1 - t) * x(0) and y(t) = t * y(1) + (1 - t) * y(0). All subsequent derivations have the x(0) switched with x(1). All y(0) should also be switched with y(1) for the same reason.
Post-correction 2: When I ran the experiment with the naive solution of checking every pair of particles, there was an added inefficiency in rendering the animations, so the comparison wasn't fair; that's why the number was so high. The actual speedup is still fairly significant, but not THAT significant.
Minor correction: at 6:28, p.vel is updated and then used in the next line; p.vel and p.pos should be updated simultaneously.
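With the corrected interpolation equations from the note above, a particle's position within a frame can be sketched as follows (names are my own):

```python
def lerp_position(p0, p1, t):
    # Corrected linear interpolation: x(t) = t * x1 + (1 - t) * x0
    # (same for y), so t = 0 gives the frame-start position p0 and
    # t = 1 gives the frame-end position p1. Continuous collision
    # detection solves for the t where two interpolated paths meet.
    return (t * p1[0] + (1 - t) * p0[0],
            t * p1[1] + (1 - t) * p0[1])
```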
This video is supported by a community of Patreons Special Thanks to the following Patreons: Burt Humburg Justin Hiester Michael Nawenstein Richard Wells Zac Landis
This video wouldn't be possible without the open source manim library created by 3blue1brown: github.com/3b1b/manim
Here is a link to the repository that contains the code used to generate the animations in this video: github.com/nipunramk/Reducible
Music: All music by Aakash Gandhi

Breadth First Search (BFS): Visualized and Explained | Reducible | 2020-09-26

In this video, we break down the BFS algorithm in a visual manner with examples and key intuition. We then show the implementation of the algorithm with code, and finish off the video by demonstrating how you can use the BFS algorithm to solve the Flood Fill problem.
0:00 Introduction 0:45 BFS Intuition/Examples 2:39 BFS Implementation 5:19 Flood Fill Problem
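A minimal sketch of BFS-based flood fill on a grid, in the spirit of the problem above (this is my own sketch, not the video's code):

```python
from collections import deque

def flood_fill(grid, start, new_color):
    # BFS outward from `start`, recoloring the 4-connected region of
    # cells that share the start cell's original color.
    rows, cols = len(grid), len(grid[0])
    r0, c0 = start
    old = grid[r0][c0]
    if old == new_color:
        return grid  # nothing to do; avoids an infinite re-fill
    queue = deque([start])
    grid[r0][c0] = new_color
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == old:
                grid[nr][nc] = new_color
                queue.append((nr, nc))
    return grid
```

Marking a cell as visited by recoloring it at enqueue time (not dequeue time) prevents the same cell from entering the queue twice.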
All other music by Aakash Gandhi

5 Simple Steps for Solving Dynamic Programming Problems | Reducible | 2020-08-16

In this video, we go over five steps that you can use as a framework to solve dynamic programming problems. You will see how these steps are applied to two specific dynamic programming problems: the longest increasing subsequence problem and optimal box stacking. The five steps, in order, are as follows:
1. Visualize examples
2. Find an appropriate subproblem
3. Find relationships among subproblems
4. Generalize the relationship
5. Implement by solving subproblems in order
After taking an in depth look at these problems, at the end of the video we will also have a discussion about common subproblems that you may encounter while solving dynamic programming problems.
Error correction: for the box problem, using a dictionary of subproblem solutions only works if we are given unique boxes; using a list of subproblems would be a better way to solve it if we wanted to handle duplicate boxes (similar to how we did the longest increasing subsequence).
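The five steps play out on the longest increasing subsequence problem roughly as follows; this sketch stores subproblems in a list indexed by position, the approach the correction above recommends:

```python
def longest_increasing_subsequence(nums):
    # Subproblem (step 2): lis[i] = length of the longest increasing
    # subsequence that ends at index i.
    # Relationship (steps 3-4): lis[i] = 1 + max(lis[j]) over all j < i
    # with nums[j] < nums[i], or 1 if no such j exists.
    # Implementation (step 5): solve subproblems left to right.
    if not nums:
        return 0
    lis = [1] * len(nums)
    for i in range(1, len(nums)):
        for j in range(i):
            if nums[j] < nums[i]:
                lis[i] = max(lis[i], lis[j] + 1)
    return max(lis)
```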
All other music by Aakash Gandhi

Depth First Search (DFS) Explained: Algorithm, Examples, and Code | Reducible | 2020-07-05

In this video, I explain the fundamental ideas behind the Depth First Search (DFS) graph algorithm. We first introduce the concept of a graph traversal. We then go through several examples of DFS to provide intuition. Afterwards, we go through both a recursive and an iterative implementation with provided code. We discuss the differences between the implementations and also make a distinction between a preorder and a postorder DFS traversal. We finish the video off with some practical and fun applications of depth first search in graph theory.
0:00 Intro and Preview 0:50 Graph Traversal 1:20 DFS Walkthrough and Examples 6:26 Recursive Implementation 11:08 Iterative Implementation 15:06 Preorder vs Postorder DFS 17:01 DFS Applications
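A minimal sketch of the recursive preorder traversal discussed above (the adjacency-list encoding is my own assumption, not the video's code):

```python
def dfs(graph, start, visited=None):
    # Recursive preorder DFS: record a vertex when it is first reached,
    # then recurse into each unvisited neighbor in turn.
    if visited is None:
        visited = []
    visited.append(start)
    for nbr in graph.get(start, []):
        if nbr not in visited:
            dfs(graph, nbr, visited)
    return visited
```

Moving the append to after the neighbor loop would give the postorder variant mentioned in the chapters (with visited tracked in a separate set).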
This video wouldn't be possible without the open source manim library created by 3blue1brown: github.com/3b1b/manim
Here is a link to the repository that contains the code used to generate the animations in this video: github.com/nipunramk/Reducible

Introduction to Graph Theory: A Computer Science Perspective | Reducible | 2020-06-14

In this video, I introduce the field of graph theory. We first answer the important question of why someone should even care about studying graph theory, from an applications perspective. Afterwards, we introduce definitions and essential terminology in graph theory, followed by a discussion of the types of graphs you may encounter. We then define several ways to represent graphs as a data structure, and finish off the video with a discussion of the types of interesting questions you can ask about graphs, to help motivate the ideas in future videos.
Typo correction: at 5:12 the vertex set V should be {0, 1, 2, 3, 4} instead of {0, 1, 2, 3, 4, 5} (there is no vertex 5).
Big thanks to Dániel László Bertalan for making the closed captions for this video!
This video wouldn't be possible without the open source manim library created by 3blue1brown: github.com/3b1b/manim
Here is a link to the repository that contains the code used to generate the animations in this video: github.com/nipunramk/Reducible
Music: October by Kai Engel https://freemusicarchive.org/music/Ka... November by Kai Engel https://freemusicarchive.org/music/Ka... Cobweb Morning by Kai Engel https://freemusicarchive.org/music/Ka...

Towers of Hanoi: A Complete Recursive Visualization | Reducible | 2020-05-26

This video is an in-depth look at one of the most challenging recursive problems for computer science students: Towers of Hanoi. We first take the perspective of how we would solve it if it were just a puzzle, looking specifically at developing a general strategy. Afterwards, we convert this strategy into a complete recursive solution to the problem. On the way to this solution, we learn a framework for thinking about and solving tough recursive problems like this one. We finish the video by taking a step back and analyzing the recursive solution and how the recursion unravels.
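The general strategy described above reduces to a few lines of recursion: clear the top n-1 disks onto the spare peg, move the largest disk, then move the n-1 disks back on top of it. A sketch (peg names are my own):

```python
def hanoi(n, source, target, spare, moves=None):
    # Returns the list of (from_peg, to_peg) moves that transfers
    # n disks from `source` to `target` using `spare`.
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)  # clear n-1 disks
        moves.append((source, target))              # move largest disk
        hanoi(n - 1, spare, target, source, moves)  # re-stack n-1 disks
    return moves
```

Unraveling the recursion shows the solution always takes exactly 2^n - 1 moves.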