Convolution and polynomial multiplication: FFT roadmap
The final arc gives the deep structural payoff: convolving coefficient lists is equivalent to multiplying polynomials and collecting like powers [1]. If
$$f(x)=\sum_i a_i x^i,\quad g(x)=\sum_j b_j x^j,$$
then the coefficient of $x^n$ in $f(x)g(x)$ is exactly
$$\sum_{i+j=n} a_i b_j = (a \ast b)_n.$$
So diagonal sums in the product table are not a coincidence; they are the algebraic definition of how like terms combine.
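The identity above can be checked directly in a few lines. Here is a minimal sketch of the direct convolution, with coefficient lists ordered lowest power first (the function name and example values are illustrative, not from the lecture):

```python
def convolve(a, b):
    """Direct convolution: (a*b)_n = sum of a_i * b_j over all i + j == n."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj  # each pair (i, j) lands on diagonal i + j
    return out

# f(x) = 1 + 2x, g(x) = 3 + 4x + 5x^2
# f(x)g(x) = 3 + 10x + 13x^2 + 10x^3
print(convolve([1, 2], [3, 4, 5]))  # [3, 10, 13, 10]
```

The double loop makes the diagonal-sum picture literal: every pairwise product $a_i b_j$ is accumulated into the slot for $x^{i+j}$.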

This immediately explains the baseline complexity story. Direct convolution forms every pairwise product $a_i b_j$ and sums along diagonals, giving $O(n^2)$ work for length-$n$ inputs. The lecture then reframes the problem in value space: evaluate both polynomials at selected points, multiply the values pointwise (cheap), and interpolate to recover coefficients. Naively, evaluating at enough points still costs $O(n^2)$, but choosing roots of unity as the evaluation points creates so much algebraic redundancy that the FFT and inverse FFT carry out both changes of representation in $O(n\log n)$ [1][4].
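The evaluate-multiply-interpolate pipeline can be sketched with NumPy's FFT routines (a sketch under the assumption of integer inputs; the helper name and padding choice are mine, not the lecture's):

```python
import numpy as np

def fft_convolve(a, b):
    """Convolve via FFT: go to value space, multiply pointwise, come back."""
    n = len(a) + len(b) - 1              # exact length of the product's coefficients
    size = 1 << (n - 1).bit_length()     # pad to a power of two for the FFT
    fa = np.fft.rfft(a, size)            # evaluate a at roots of unity
    fb = np.fft.rfft(b, size)            # evaluate b at the same points
    vals = fa * fb                       # pointwise multiplication in value space
    out = np.fft.irfft(vals, size)[:n]   # inverse FFT interpolates the coefficients
    return np.rint(out).astype(int)      # round off floating-point noise

print(fft_convolve([1, 2], [3, 4, 5]))  # [ 3 10 13 10]
```

Both transforms cost $O(n\log n)$ and the pointwise multiply costs $O(n)$, which is where the overall speedup comes from.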

This is why fftconvolve can beat direct convolution by orders of magnitude on long arrays while producing the same mathematical output. For short inputs, constant factors may favor direct methods. For large inputs in probability, signal processing, image pipelines, or big-integer arithmetic, the FFT route is usually the professional default. The lecture closes by pointing to the continuous case as the next conceptual step.
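The equivalence claim is easy to verify empirically, assuming SciPy is available (`scipy.signal.fftconvolve` is the routine named above; the array sizes here are arbitrary):

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
a = rng.standard_normal(4096)
b = rng.standard_normal(4096)

direct = np.convolve(a, b)        # O(n^2) direct method
fast = fftconvolve(a, b)          # O(n log n) FFT route
print(np.allclose(direct, fast))  # True, up to floating-point tolerance
```

The two outputs agree to floating-point tolerance; only the running time differs, and the gap widens as the arrays grow.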