PAC Bound

Creator: Seonglae Cho
Created: 2025 Mar 23 23:28
Edited: 2025 Jun 2 16:29

Generalization bounds are a safety check

They give a theoretical guarantee on the performance of a learning algorithm on any unseen data.
Just because a model fits the training data well doesn't guarantee it will perform well in practice. However, we can mathematically bound how well it generalizes.
The PAC-Bayes bound provides a probabilistic guarantee for this, showing that the gap between training error and test error can be bounded by a complexity term based on the Divergence Distance between the Prior and the Posterior. This can be simply expressed as follows:

$$L(Q) \le \hat{L}(Q) + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{n}{\delta}}{2(n-1)}}$$

where $L(Q)$ and $\hat{L}(Q)$ are the expected true and training errors under the posterior $Q$, $n$ is the sample count, and $\delta$ is the probability of being misled by the training set.
With high probability, the generalization error of a hypothesis is at most something we can control and even compute: the true error is upper bounded by adding a margin to the training error, and the guarantee holds simultaneously for any posterior $Q$.
  • Prior here means the exploration mechanism over the hypothesis space
  • Posterior here means the prior twisted after confronting the data
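
As a concrete check, the bound above can be evaluated directly. Below is a minimal sketch, assuming the McAllester form with the $\ln(n/\delta)$ complexity term; the function name and the numbers plugged in are illustrative, not taken from any of the cited papers.

```python
import math

def mcallester_bound(train_error: float, kl: float, n: int, delta: float = 0.05) -> float:
    """Upper bound on the expected true error of the posterior Q.

    train_error: empirical error of Q on the n training samples
    kl:          KL(Q || P) between posterior and prior
    n:           sample count
    delta:       probability of being misled by the training set
    """
    margin = math.sqrt((kl + math.log(n / delta)) / (2 * (n - 1)))
    return train_error + margin

# More data tightens the margin; a larger KL(Q || P) widens it.
print(mcallester_bound(train_error=0.05, kl=10.0, n=10_000))   # ≈ 0.083
print(mcallester_bound(train_error=0.05, kl=10.0, n=100_000))  # ≈ 0.061
```

The margin shrinks as $O(\sqrt{1/n})$, so every quantity in the bound is either observed (training error, $n$) or chosen by us (prior, posterior, $\delta$).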

Prototypical bound (McAllester, 1998, 1999)

Analysis of the expected error over a probability distribution $Q$ instead of a single hypothesis $h$. The complexity term $\mathrm{KL}(Q \,\|\, P)$ recovers the Occam term $\ln\frac{1}{P(h)}$ when we think of $Q$ as a point mass on $h$.
  • The bound depends on the distance between the prior and the posterior
  • A better prior (one closer to the posterior) leads to a tighter bound
  • Learn the prior $P$ with part of the data
  • Introduce the learnt prior in the bound
Dziugaite and Roy (2017) and Neyshabur et al. (2017) have derived some of the tightest deep learning bounds in this way.
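
To illustrate why a learnt prior helps, here is a minimal sketch assuming diagonal Gaussian prior and posterior over network weights, in the spirit of Dziugaite and Roy's setup; the weight vectors and variances below are synthetic stand-ins, not their actual procedure.

```python
import numpy as np

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    """Closed-form KL(Q || P) between diagonal Gaussians over weights."""
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

rng = np.random.default_rng(0)
w_post = rng.normal(size=1000)                           # posterior mean (trained weights)
w_prior_learnt = w_post + 0.05 * rng.normal(size=1000)   # prior learnt on a held-out split
w_prior_fixed = np.zeros(1000)                           # data-independent prior

var = np.full(1000, 0.1)
print(kl_diag_gaussians(w_post, var, w_prior_learnt, var))  # ≈ 12: small KL, tight bound
print(kl_diag_gaussians(w_post, var, w_prior_fixed, var))   # ≈ 5000: huge KL, vacuous bound
```

The caveat is that the prior must not see the samples the bound is evaluated on, which is why the data is split: the prior is learnt on one part and the bound is computed on the rest.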