Learning Algorithm Generalization Error Bounds via Auxiliary Distributions

10/02/2022
by Gholamali Aminian, et al.

Generalization error bounds are essential for understanding how well machine learning models perform. In this work, we propose a novel approach, the Auxiliary Distribution Method, for deriving new upper bounds on the generalization error in supervised learning scenarios. We show that our general upper bounds can be specialized, under certain conditions, to new bounds involving the generalized α-Jensen-Shannon information and the α-Rényi information (0 < α < 1) between a random variable modeling the set of training samples and another random variable modeling the set of hypotheses. Our upper bounds based on generalized α-Jensen-Shannon information are always finite. Additionally, we demonstrate how our auxiliary distribution method can be used to derive upper bounds on the generalization error under distribution mismatch in supervised learning, where the mismatch between the distributions of the test and training data samples is measured by the α-Jensen-Shannon or α-Rényi divergence (0 < α < 1). Finally, we outline the circumstances under which our proposed upper bounds can be tighter than earlier bounds.
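For reference, a minimal sketch of the information measures involved, assuming the standard definitions of the α-Rényi and skew (α-)Jensen-Shannon divergences; the notation below is our assumption and may differ from the paper's:

\[
R_{\alpha}(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1} \log \int \left(\frac{\mathrm{d}P}{\mathrm{d}Q}\right)^{\alpha} \mathrm{d}Q, \qquad 0 < \alpha < 1,
\]
\[
\mathrm{JS}_{\alpha}(P \,\|\, Q) \;=\; \alpha\, \mathrm{KL}\!\bigl(P \,\big\|\, \alpha P + (1-\alpha) Q\bigr) \;+\; (1-\alpha)\, \mathrm{KL}\!\bigl(Q \,\big\|\, \alpha P + (1-\alpha) Q\bigr),
\]

where the mixture $\alpha P + (1-\alpha) Q$ plays the role of an auxiliary distribution. Since $\mathrm{d}P / \mathrm{d}(\alpha P + (1-\alpha) Q) \le 1/\alpha$, it follows that $\mathrm{JS}_{\alpha}(P \,\|\, Q) \le \alpha \log(1/\alpha) + (1-\alpha)\log(1/(1-\alpha))$, which is why bounds built on the α-Jensen-Shannon information remain finite. The corresponding information between the training set $S$ and the hypothesis $W$ is presumably obtained by taking $P = P_{W,S}$ and $Q = P_W \otimes P_S$.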
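A small numeric illustration of that finiteness property, sketched in Python with NumPy (the helper names kl and js_alpha are hypothetical, not from the paper): even for distributions with disjoint supports, where the KL divergence is infinite, the α-Jensen-Shannon divergence stays bounded.

import numpy as np

def kl(p, q):
    # KL divergence (nats) between discrete distributions; terms with
    # p[i] == 0 contribute nothing, and p[i] > 0 with q[i] == 0 gives inf.
    mask = p > 0
    with np.errstate(divide="ignore"):
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js_alpha(p, q, alpha):
    # Generalized alpha-Jensen-Shannon divergence via the mixture
    # (auxiliary distribution) m = alpha*p + (1 - alpha)*q.
    m = alpha * p + (1 - alpha) * q
    return alpha * kl(p, m) + (1 - alpha) * kl(q, m)

p = np.array([1.0, 0.0])    # disjoint supports
q = np.array([0.0, 1.0])
print(kl(p, q))             # inf
print(js_alpha(p, q, 0.5))  # log(2) ≈ 0.6931, finite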
