
Regularization in Neural Networks

Jun 15, 2025

Overview

This lecture explains why regularization helps reduce overfitting and variance in neural networks, with intuition, examples, and implementation tips.

Why Regularization Reduces Overfitting

  • Overfitting occurs when a large, complex neural network models training data too closely, capturing noise.
  • Regularization adds a penalty to the cost function, discouraging large weights in the network (the regularized cost is written out after this list).
  • A high regularization parameter (lambda) pushes weights toward zero, simplifying the network.
  • Simpler networks are less likely to overfit, behaving more like shallow models (e.g., logistic regression).
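
For reference, the regularized cost the bullets describe can be written out explicitly. A minimal formulation, assuming an L-layer network with per-layer weight matrices W^[l] and m training examples:

```latex
% L2-regularized cost: data loss plus the Frobenius-norm penalty on every weight matrix
J = \frac{1}{m} \sum_{i=1}^{m} \mathcal{L}\!\left(\hat{y}^{(i)}, y^{(i)}\right)
    + \frac{\lambda}{2m} \sum_{l=1}^{L} \lVert W^{[l]} \rVert_F^2,
\qquad
\lVert W^{[l]} \rVert_F^2 = \sum_{i}\sum_{j} \left( w_{ij}^{[l]} \right)^2
```

Larger λ makes the penalty term dominate, which is what pushes the weights toward zero.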

Intuition From Activation Functions

  • For tanh activation, small weights keep the pre-activation values (Z) in the near-linear region around zero (see the numerical check after this list).
  • If every layer's activation stays near-linear, the whole network collapses to an approximately linear function of its input, regardless of depth, so it can only model simple decision boundaries.
  • Small weights across layers keep activations in the linear range, reducing the network’s ability to fit complex, nonlinear boundaries.
  • This limits overfitting by preventing the model from capturing intricate patterns in noise.
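
A quick numerical check makes the near-linear region concrete; this is only an illustrative sketch (NumPy assumed, z values chosen arbitrarily):

```python
import numpy as np

# When weights are small, Z = W·a + b stays near zero, where tanh(z) ≈ z (near-linear).
# For larger z, tanh saturates and behaves very nonlinearly.
for z in [0.05, 0.5, 2.0]:
    print(f"z = {z:4.2f}   tanh(z) = {np.tanh(z):.4f}   |tanh(z) - z| = {abs(np.tanh(z) - z):.4f}")
```

At z = 0.05 the gap is about 4e-5, while at z = 2.0 tanh has saturated near 1; keeping Z small therefore keeps each layer approximately linear.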

Implementation Tips for Regularization

  • The regularized cost function J includes both the loss and the regularization penalty.
  • When plotting the cost during gradient descent, use the new (regularized) definition of J.
  • Gradient descent minimizes the regularized J, so plotting only the original loss may make it look as if the cost is not decreasing monotonically (see the monitoring sketch after this list).
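
A sketch of this monitoring pattern, assuming NumPy and hypothetical per-layer weight matrices (W1, W2) and regularization strength lambd; not the lecture's exact code:

```python
import numpy as np

def regularized_cost(data_loss, weights, lambd, m):
    """Value to plot during gradient descent: the data loss plus the L2 (Frobenius) penalty."""
    l2_penalty = (lambd / (2 * m)) * sum(np.sum(np.square(W)) for W in weights)
    return data_loss + l2_penalty

# Inside the training loop, log this regularized J rather than the bare loss,
# since it is the quantity gradient descent is actually minimizing:
#   costs.append(regularized_cost(loss, [W1, W2], lambd=0.7, m=X.shape[1]))
```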

Key Terms & Definitions

  • Overfitting – When a model learns noise and details in the training data, reducing its generalization ability.
  • Regularization – Technique that penalizes large model weights to simplify the model and prevent overfitting.
  • L2 Regularization (Frobenius norm) – Adds the sum of squared weight-matrix entries, scaled by λ/2m, to the cost function.
  • Lambda (λ) – The regularization parameter controlling penalty strength; larger values shrink weights more.
  • Variance – Model sensitivity to small fluctuations in the training set.
  • Tanh Activation Function – A nonlinear function used in neural networks, near-linear around zero.

Action Items / Next Steps

  • When implementing regularization, always plot the cost including the penalty term.
  • Explore dropout regularization as covered in the next lecture.