Algorithmic stability is an important notion that has proven powerful fo...
This paper deals with the problem of efficient sampling from a stochasti...
Mirror descent, introduced by Nemirovski and Yudin in the 1970s, is a pr...
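The snippet above is cut off, but mirror descent itself is standard background; below is a minimal sketch of the classical update with the negative-entropy mirror map on the probability simplex (the exponentiated-gradient form). The gradient oracle, step size `eta`, and iteration count are illustrative placeholders, not anything recovered from the truncated abstract.

```python
import numpy as np

def entropic_mirror_descent(grad, x0, eta=0.1, n_steps=100):
    """Mirror descent on the probability simplex with the
    negative-entropy mirror map (exponentiated gradient).

    grad : callable returning a (sub)gradient at the current point
    x0   : starting point on the simplex (nonnegative, sums to 1)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        g = grad(x)
        x = x * np.exp(-eta * g)   # multiplicative step: mirror step for neg-entropy
        x /= x.sum()               # Bregman projection back onto the simplex
    return x

# Example: minimize the linear loss <c, x> over the simplex;
# mass concentrates on the coordinate with the smallest cost.
c = np.array([3.0, 1.0, 2.0])
x_star = entropic_mirror_descent(lambda x: c, np.ones(3) / 3)
```

With the Euclidean mirror map the same scheme reduces to projected gradient descent; the mirror map is what adapts the method to the geometry of the constraint set.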
Heavy-tail phenomena in stochastic gradient descent (SGD) have been repo...
Injecting noise into gradient descent has several desirable features. ...
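As a point of reference for the noise-injection discussion, here is a minimal sketch of gradient descent with isotropic Gaussian perturbations at each step; choosing the noise scale as sqrt(2 * eta) recovers the unadjusted Langevin discretization. The function name and hyperparameters are illustrative assumptions, not the paper's method.

```python
import numpy as np

def noisy_gradient_descent(grad, x0, eta=0.01, sigma=0.1, n_steps=1000, seed=0):
    """Gradient descent with isotropic Gaussian noise injected at each
    step; with sigma = sqrt(2 * eta) this is the (unadjusted) Langevin
    dynamics discretization.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - eta * grad(x) + sigma * rng.standard_normal(x.shape)
    return x
```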
Recent studies have shown that heavy tails can emerge in stochastic opti...
Uncertainty sampling in active learning is heavily used in practice to r...
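Uncertainty sampling is simple enough to state in a few lines; the sketch below scores unlabeled pool points by predictive entropy and returns the most uncertain ones. It assumes a scikit-learn-style classifier exposing `predict_proba`; the small floor inside the log and the batch handling are illustrative choices.

```python
import numpy as np

def uncertainty_sample(model, X_pool, batch_size=1):
    """Pick the pool points whose predicted class distribution has the
    highest Shannon entropy (a standard uncertainty-sampling criterion).

    model : any classifier exposing predict_proba (e.g. scikit-learn)
    """
    probs = model.predict_proba(X_pool)                    # (n, n_classes)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[-batch_size:]               # indices of most uncertain
```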
Online forecasting under a changing environment has been a problem of in...
Model selection requires repeatedly evaluating models on a given dataset...
Stein discrepancies (SDs) monitor convergence and non-convergence in app...
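For readers unfamiliar with the construction, a Stein discrepancy in its common Langevin form is written as follows; this is the standard definition, not necessarily the exact variant of the truncated abstract:

$$ \mathrm{SD}(q, p, \mathcal{F}) \;=\; \sup_{f \in \mathcal{F}} \; \Big| \, \mathbb{E}_{x \sim q}\big[ \nabla \log p(x)^{\top} f(x) + \nabla \!\cdot\! f(x) \big] \, \Big|, $$

which vanishes when $q = p$ (for rich enough $\mathcal{F}$, under mild conditions) and, crucially, depends on $p$ only through the score $\nabla \log p$, so the normalizing constant of $p$ is never needed.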
The problem of inferring the direct causal parents of a response variabl...
We consider stochastic gradient methods under the interpolation regime w...
Given a loss function F:X→R^+ that can be written as the sum of losses o...
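The truncation cuts off the definition, but the setup it begins to state is presumably the usual finite-sum objective; for reference, that standard form is

$$ F(x) \;=\; \frac{1}{n} \sum_{i=1}^{n} f_i(x), \qquad f_i : X \to \mathbb{R}^{+}, $$

with $f_i$ the loss on the $i$-th sample (whether the abstract normalizes by $n$ is not recoverable from the snippet).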
We present a novel single-stage procedure for instrumental variable (IV)...
Kernel two-sample testing is a useful statistical tool in determining wh...
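The workhorse statistic behind kernel two-sample tests is the maximum mean discrepancy (MMD); below is a minimal sketch of its unbiased squared estimator with an RBF kernel, in the spirit of Gretton et al. (2012). The bandwidth `gamma` is an illustrative choice; in practice it is often set by the median heuristic.

```python
import numpy as np

def mmd2_unbiased(X, Y, gamma=1.0):
    """Unbiased estimate of squared MMD with an RBF kernel
    k(a, b) = exp(-gamma * ||a - b||^2)."""
    def rbf(A, B):
        sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * sq)

    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = rbf(X, X), rbf(Y, Y), rbf(X, Y)
    np.fill_diagonal(Kxx, 0.0)          # drop i = j terms (unbiasedness)
    np.fill_diagonal(Kyy, 0.0)
    return (Kxx.sum() / (m * (m - 1))
            + Kyy.sum() / (n * (n - 1))
            - 2 * Kxy.mean())
```

A permutation test over the pooled samples then turns this statistic into a p-value for the null hypothesis that the two samples share a distribution.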
We introduce Regularized Kernel and Neural Sobolev Descent for transport...
In recent years, many variance-reduced algorithms for empirical risk min...
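As one concrete instance of the variance-reduction idea, here is a minimal sketch of SVRG (Johnson and Zhang, 2013); this is standard background, not the algorithm of the truncated abstract. The step size, inner-loop length, and the `grad_i` oracle interface are illustrative assumptions.

```python
import numpy as np

def svrg(grad_i, w0, n, eta=0.01, n_epochs=10, m=None, seed=0):
    """SVRG: each inner step uses grad_i(w, i) - grad_i(w_snap, i) plus
    the full gradient at the snapshot, an unbiased gradient estimate
    whose variance shrinks as w approaches the optimum.

    grad_i : callable (w, i) -> gradient of the i-th loss at w
    n      : number of summands in the finite-sum objective
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    m = m or 2 * n                       # conventional inner-loop length
    for _ in range(n_epochs):
        w_snap = w.copy()
        mu = sum(grad_i(w_snap, i) for i in range(n)) / n   # full gradient
        for _ in range(m):
            i = rng.integers(n)
            w = w - eta * (grad_i(w, i) - grad_i(w_snap, i) + mu)
    return w
```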
Two popular examples of first-order optimization methods over linear spa...
We propose a new Integral Probability Metric (IPM) between distributions...
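For context, the generic IPM (of which the paper proposes a new instance) is defined by a supremum over a function class $\mathcal{F}$:

$$ \mathrm{IPM}_{\mathcal{F}}(P, Q) \;=\; \sup_{f \in \mathcal{F}} \Big| \, \mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{y \sim Q}[f(y)] \, \Big|. $$

Taking $\mathcal{F}$ to be the 1-Lipschitz functions gives the Wasserstein-1 distance; taking the unit ball of an RKHS gives the MMD.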
We propose a new framework for deriving screening rules for convex optim...
The goal of domain adaptation is to adapt models learned on a source dom...
In this paper, we propose subspace alignment based domain adaptation of ...
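The snippet references subspace-alignment-based adaptation; a minimal sketch of the classic procedure in the style of Fernando et al. (2013) is below. The subspace dimension `d` and the use of scikit-learn's PCA are illustrative assumptions; a downstream classifier would then be trained on the aligned source features and applied to the projected target features.

```python
import numpy as np
from sklearn.decomposition import PCA

def subspace_alignment(Xs, Xt, d=20):
    """Subspace-alignment domain adaptation: learn d-dimensional PCA
    bases for each domain and rotate the source basis onto the target
    one via the alignment matrix M = Ps^T Pt."""
    Ps = PCA(n_components=d).fit(Xs).components_.T   # (D, d) source basis
    Pt = PCA(n_components=d).fit(Xt).components_.T   # (D, d) target basis
    M = Ps.T @ Pt                                    # alignment matrix
    Xs_aligned = Xs @ Ps @ M                         # source into target subspace
    Xt_proj = Xt @ Pt
    return Xs_aligned, Xt_proj
```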
Domain adaptation techniques aim at adapting a classifier learnt on a so...
The general perception is that kernel methods are not scalable, and neur...