Analyzing Training Using Phase Transitions in Entropy—Part I: General Theory

12/02/2020
by Kang Gao, et al.

We analyze phase transitions in the conditional entropy of a sequence caused by a change in the conditioning variables. Such transitions occur, for example, when training is used to learn the parameters of a system: the switch from the training phase to the data phase causes a discontinuous jump in the conditional entropy of the measured system response. For large-scale systems, we present a method of computing a bound on the mutual information obtained with one-shot training, and show that this bound can be calculated as the difference between two derivatives of a conditional entropy. The system model does not require Gaussianity or linearity in the parameters, worst-case noise approximations, or explicit estimation of any unknown parameters. The model applies to a broad range of algorithms and methods in communication, signal processing, and machine learning that employ training as part of their operation.
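To make the transition concrete, here is a minimal numerical sketch. It is our own toy construction, not the authors' model: a scalar channel y_t = theta*x_t + w_t with a Gaussian prior on the unknown gain theta, known unit pilots during training, and unknown Gaussian data symbols afterward. The per-symbol conditional entropy (the derivative of the cumulative entropy with respect to the symbol index) jumps discontinuously when pilots give way to data, and the difference between the two slopes is the kind of quantity the paper's bound is built from. All parameter values and names (s_theta2, mu_hat, n_pilots, etc.) are illustrative assumptions.

```python
import numpy as np

# Toy model parameters (all assumptions of this sketch)
s_theta2 = 1.0    # prior variance of the unknown gain theta
s_w2 = 0.1        # noise variance
n_pilots = 8      # number of training (pilot) symbols
mu_hat = 1.0      # assumed posterior mean of theta after the pilots


def posterior_var(n):
    """Posterior variance of theta after n unit pilots (Gaussian conjugacy)."""
    return 1.0 / (1.0 / s_theta2 + n / s_w2)


def h_gauss(var):
    """Differential entropy of a scalar Gaussian, in nats."""
    return 0.5 * np.log(2.0 * np.pi * np.e * var)


# Entropy slope just BEFORE the transition: the next pilot observation is
# predicted as a Gaussian with variance (posterior variance + noise).
h_slope_train = h_gauss(posterior_var(n_pilots - 1) + s_w2)

# Entropy slope just AFTER the transition: the next symbol x ~ N(0, 1) is
# unknown, so given theta, y = theta*x + w is N(0, theta^2 + s_w2). We
# marginalize theta over its posterior N(mu_hat, v_post) on a grid.
v_post = posterior_var(n_pilots)
theta = mu_hat + np.linspace(-6.0, 6.0, 2001) * np.sqrt(v_post)
dtheta = theta[1] - theta[0]
p_theta = np.exp(-(theta - mu_hat) ** 2 / (2.0 * v_post)) / np.sqrt(
    2.0 * np.pi * v_post)

y = np.linspace(-10.0, 10.0, 4001)
dy = y[1] - y[0]
var_y = theta ** 2 + s_w2   # conditional variance of y given theta
p_y_given_theta = np.exp(-y[:, None] ** 2 / (2.0 * var_y[None, :])) / np.sqrt(
    2.0 * np.pi * var_y[None, :])
p_y = (p_y_given_theta * p_theta[None, :]).sum(axis=1) * dtheta
h_slope_data = -(p_y * np.log(np.maximum(p_y, 1e-300))).sum() * dy

print(f"entropy slope before transition: {h_slope_train:.4f} nats/symbol")
print(f"entropy slope after transition:  {h_slope_data:.4f} nats/symbol")
print(f"jump at the transition:          {h_slope_data - h_slope_train:.4f} nats/symbol")
```

Running this prints the per-symbol entropy slope on each side of the training-to-data boundary and their difference (roughly 1.1 nats per symbol with the toy values above); the jump reflects the extra uncertainty injected once the data symbols are unknown, which is what one-shot training buys back.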
