Supervised learning with probabilistic morphisms and kernel mean embeddings

05/10/2023
by Hông Vân Lê, et al.

In this paper I propose a concept of a correct loss function in a generative model of supervised learning for an input space 𝒳 and a label space 𝒴, both of which are measurable spaces. A correct loss function in a generative model of supervised learning must accurately measure the discrepancy between elements of a hypothesis space ℋ of possible predictors and the supervisor operator, even when the supervisor operator does not belong to ℋ. To define correct loss functions, I propose a characterization of a regular conditional probability measure μ_𝒴|𝒳 for a probability measure μ on 𝒳×𝒴 relative to the projection Π_𝒳: 𝒳×𝒴 → 𝒳 as a solution of a linear operator equation. If 𝒴 is a separable metrizable topological space with the Borel σ-algebra ℬ(𝒴), I propose an additional characterization of μ_𝒴|𝒳 as a minimizer of the mean square error on the space of Markov kernels, referred to as probabilistic morphisms, from 𝒳 to 𝒴. This characterization utilizes kernel mean embeddings. Building on these results, and using inner measure to quantify the generalizability of a learning algorithm, I extend a result of Cucker and Smale on the learnability of a regression model to the setting of conditional probability estimation. Finally, I present a variant of Vapnik's regularization method for solving stochastic ill-posed problems that incorporates inner measure, and showcase its applications.
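As a brief illustration of the two characterizations (the notation Σ_𝒳, k, Φ_k, H_k, and R below is mine, not necessarily the paper's, and the exact functional used in the paper may differ): the defining disintegration identity that is recast as a linear operator equation reads

    μ(A × B) = ∫_A μ_𝒴|𝒳(B | x) dμ_𝒳(x)   for all A ∈ Σ_𝒳, B ∈ ℬ(𝒴),

where μ_𝒳 = (Π_𝒳)_*μ is the marginal of μ on 𝒳. Writing Φ_k(ν) = ∫_𝒴 k(·, y) dν(y) ∈ H_k for the mean embedding of a probability measure ν on 𝒴 with respect to a bounded measurable kernel k with reproducing kernel Hilbert space H_k, a standard reading of the mean-square characterization is that μ_𝒴|𝒳 minimizes

    R(T) = ∫_{𝒳×𝒴} ‖k(·, y) − Φ_k(T(x))‖²_{H_k} dμ(x, y)

over Markov kernels T from 𝒳 to 𝒴; for the minimizer to identify μ_𝒴|𝒳 uniquely, k should be characteristic.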
