An Information-Theoretic Analysis for Transfer Learning: Error Bounds and Applications

07/12/2022
by   Xuetong Wu, et al.

Transfer learning, or domain adaptation, is concerned with machine learning problems in which training and testing data come from possibly different probability distributions. In this work, we give an information-theoretic analysis of the generalization error and excess risk of transfer learning algorithms, following a line of work initiated by Russo and Xu. Our results suggest, perhaps as expected, that the Kullback-Leibler (KL) divergence D(μ||μ') plays an important role in the characterizations, where μ and μ' denote the distributions of the training data and the test data, respectively. Specifically, we provide generalization error upper bounds for the empirical risk minimization (ERM) algorithm where data from both distributions are available in the training phase. We further apply the analysis to approximated ERM methods such as the Gibbs algorithm and stochastic gradient descent. We then generalize the mutual information bound with ϕ-divergence and the Wasserstein distance. These generalizations lead to tighter bounds and can handle the case when μ is not absolutely continuous with respect to μ'. Furthermore, we apply a new set of techniques to obtain an alternative upper bound, which gives a fast (and optimal) learning rate for some learning problems. Finally, inspired by the derived bounds, we propose the InfoBoost algorithm, in which the importance weights for source and target data are adjusted adaptively in accordance with information measures. Empirical results show the effectiveness of the proposed algorithm.
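The abstract describes ERM when samples from both the source (training) and target (test) distributions are available, and an InfoBoost scheme that re-weights the two data sources. The sketch below is only an illustration of that general idea, assuming a squared loss, a plain gradient-descent solver, and a fixed hypothetical weight `alpha`; it does not reproduce the paper's InfoBoost algorithm or its information-measure-driven weight updates.

```python
import numpy as np

# Illustrative weighted-ERM sketch (not the paper's InfoBoost): combine
# target-domain samples (drawn from mu') with source-domain samples (drawn
# from mu). The mixing weight `alpha` is a hypothetical knob; in the paper
# the weighting is adjusted adaptively using information measures.

def weighted_erm_loss(w, X_tgt, y_tgt, X_src, y_src, alpha=0.5):
    """Weighted empirical risk: (1 - alpha) * target risk + alpha * source risk."""
    tgt_risk = np.mean((X_tgt @ w - y_tgt) ** 2)
    src_risk = np.mean((X_src @ w - y_src) ** 2)
    return (1.0 - alpha) * tgt_risk + alpha * src_risk

def fit(X_tgt, y_tgt, X_src, y_src, alpha=0.5, lr=0.01, steps=500):
    """Gradient descent on the weighted empirical risk (squared loss)."""
    w = np.zeros(X_tgt.shape[1])
    n_t, n_s = len(y_tgt), len(y_src)
    for _ in range(steps):
        grad_t = 2.0 * X_tgt.T @ (X_tgt @ w - y_tgt) / n_t
        grad_s = 2.0 * X_src.T @ (X_src @ w - y_src) / n_s
        w -= lr * ((1.0 - alpha) * grad_t + alpha * grad_s)
    return w
```

Intuitively, a larger divergence between μ and μ' would argue for a smaller source weight; the bounds in the paper make this trade-off quantitative.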
