On Regression in Extreme Regions

03/06/2023
by Nathan Huet, et al.

In the classic regression problem, the value of a real-valued random variable Y is to be predicted based on the observation of a random vector X, taking its values in ℝ^d with d ≥ 1, say. The statistical learning problem consists in building a predictive function f̂:ℝ^d→ℝ based on independent copies of the pair (X,Y), so that Y is approximated by f̂(X) with minimum error in the mean-squared sense. Motivated by various applications, ranging from environmental sciences to finance or insurance, special attention is paid here to the case of extreme (i.e. very large) observations X. Because of their rarity, such observations contribute only negligibly to the (empirical) error, and the predictive performance of empirical quadratic risk minimizers can consequently be very poor in extreme regions. In this paper, we develop a general framework for regression in the extremes. It is assumed that X's conditional distribution given Y belongs to a nonparametric class of heavy-tailed probability distributions. It is then shown that an asymptotic notion of risk can be tailored to appropriately summarize predictive performance in extreme regions of the input space. It is also proved that minimization of an empirical and non-asymptotic version of this 'extreme risk', based solely on a fraction of the largest observations, yields regression functions with good generalization capacity. In addition, numerical results providing strong empirical evidence of the relevance of the proposed approach are displayed.
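To make the idea of an empirical 'extreme risk' concrete, here is a minimal sketch in Python. It assumes the extreme region is defined by the Euclidean norm of X exceeding an empirical quantile, keeps only the fraction of observations with the largest ||X||, and minimizes the empirical squared error on that sub-sample with a plain least-squares fit. The fraction kept, the norm-based definition of "extreme", the simulated heavy-tailed data, and the linear model class are all illustrative assumptions, not the exact construction studied in the paper.

```python
# Illustrative sketch: empirical squared-error minimization restricted to the
# observations whose inputs are "extreme" (largest ||X_i||). All modelling
# choices below are assumptions made for the example, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Simulated heavy-tailed inputs (Pareto-type margins) and a response.
n, d = 10_000, 3
X = rng.pareto(a=2.5, size=(n, d)) + 1.0
Y = X @ np.array([1.0, -0.5, 2.0]) + rng.normal(scale=1.0, size=n)

# Empirical extreme region: the fraction of points with the largest norm.
k = int(0.05 * n)                       # keep the top 5% (tuning parameter)
norms = np.linalg.norm(X, axis=1)
extreme_idx = np.argsort(norms)[-k:]    # indices of the k largest ||X_i||
X_ext, Y_ext = X[extreme_idx], Y[extreme_idx]

# Minimize the empirical squared error on the extreme sub-sample only,
# a non-asymptotic proxy for the risk conditional on ||X|| being large.
design = np.hstack([X_ext, np.ones((k, 1))])          # add an intercept
coef, *_ = np.linalg.lstsq(design, Y_ext, rcond=None)

def f_hat(x):
    """Predictor fitted on the extreme sub-sample only."""
    x = np.atleast_2d(x)
    return np.hstack([x, np.ones((x.shape[0], 1))]) @ coef

# Report performance on the extreme region, not on the whole sample.
print("MSE on extreme sub-sample:", np.mean((Y_ext - f_hat(X_ext)) ** 2))
```

Read as a caricature of the approach: ordinary empirical risk minimization averages the error over all n points, so the k extreme ones barely influence the fit, whereas restricting the training criterion to the largest observations targets exactly the region where predictions are needed.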

