Kernels in action: smoothing and edge detection

Intermediate Mathematics
Created by Best · 12.04.2026 at 20:18 UTC

The middle of the lecture generalizes the same overlap rule from short number lists to long signals and then to images. A small list (or small grid) of weights is called a kernel. If its entries are nonnegative and sum to 1, then each output is a local weighted average, so high-frequency wiggles are damped and the signal smooths out [1].
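This local weighted averaging can be sketched in a few lines of numpy. The signal and the specific kernel weights below are illustrative choices, not from the lecture; any nonnegative kernel summing to 1 behaves the same way.

```python
import numpy as np

# A hypothetical noisy signal: a ramp with a high-frequency wiggle on top.
x = np.array([0., 2., 1., 3., 2., 4., 3., 5.])

# Smoothing kernel: nonnegative entries that sum to 1.
k = np.array([0.25, 0.5, 0.25])

# Each output sample is a local weighted average of its neighbors,
# so rapid up-down wiggles are damped.
y = np.convolve(x, k, mode="same")

print(np.round(y, 2))
```

The wiggle shrinks because averaging cancels alternating ups and downs: the mean absolute sample-to-sample jump of `y` is smaller than that of `x`.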

In 2D, that same averaging principle becomes image blur. A uniform $3\times3$ kernel gives box blur; a Gaussian-shaped $5\times5$ kernel gives a more lens-like blur because central pixels get larger weight than far neighbors. The lecture then contrasts this with signed kernels whose entries sum to 0. Those suppress constant regions but react strongly to left-right or up-down changes, which is exactly edge detection [1][3].
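Both behaviors can be seen on a tiny synthetic image. The helper below is a minimal "valid"-mode 2D sliding-window sum (technically cross-correlation, which matches convolution for the symmetric box kernel); the step image and the Sobel-style kernel are illustrative assumptions, not the lecture's exact examples.

```python
import numpy as np

def conv2_valid(img, ker):
    """Minimal 'valid'-mode 2D cross-correlation: slide the kernel over the
    image and take a weighted sum at each fully-overlapping position."""
    kh, kw = ker.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

# Uniform 3x3 box-blur kernel: nonnegative, sums to 1.
box = np.ones((3, 3)) / 9.0

# Sobel-style vertical-edge detector: signed entries that sum to 0.
sobel_x = np.array([[-1., 0., 1.],
                    [-2., 0., 2.],
                    [-1., 0., 1.]])

# Synthetic image: dark left half, bright right half (a vertical edge).
img = np.zeros((6, 6))
img[:, 3:] = 1.0

print(conv2_valid(img, box))      # flat regions stay flat, the step softens
print(conv2_valid(img, sobel_x))  # zero on flat regions, strong at the edge
```

The box output equals the input in constant regions (the weights average identical values), while the Sobel output is exactly zero there and spikes only in the columns straddling the step.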

Two practical edge cases from this section matter for real projects. First, output size: full convolution of a length-$n$ signal with a length-$k$ kernel produces $n + k - 1$ samples, so image APIs typically pad ("same") or crop ("valid") to keep dimensions predictable. Second, orientation: convolution flips the kernel by definition, while cross-correlation does not. For symmetric kernels the blur is identical either way, but for directional kernels (like edge detectors) the sign and direction of the response depend on this distinction.
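Both edge cases show up directly in numpy, which exposes convolution as `np.convolve` and cross-correlation as `np.correlate`. The step signal and the directional kernel below are illustrative choices; note that for this antisymmetric kernel the two operations give opposite signs.

```python
import numpy as np

x = np.array([0., 0., 1., 1., 1.])   # a rising step
k = np.array([1., -1.])              # directional difference kernel

# Output size: full convolution grows the output to n + k - 1 samples.
full = np.convolve(x, k, mode="full")
print(len(x), len(full))             # 5 vs 6

# Orientation: convolution flips the kernel before sliding; correlation
# does not, so a directional kernel responds with opposite sign.
conv = np.convolve(x, k, mode="valid")
corr = np.correlate(x, k, mode="valid")
print(conv)   # positive spike at the rising step
print(corr)   # negative spike at the same position
```

With a symmetric kernel such as `[0.25, 0.5, 0.25]` the flip is invisible and both calls return identical results, which is why blur code rarely cares about the distinction.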


Tasks
Question 1

Why does a kernel with nonnegative entries summing to 1 act like smoothing?

Question 2

What is the signature behavior of a kernel whose entries sum to 0?

Question 3

When does convolution vs cross-correlation matter most in practice?

Card Info
  • Topic: Mathematics
  • Difficulty: Intermediate
Creator
Best