We propose a novel algorithm for solving the composite Federated Learnin...
This paper proposes a locally differentially private federated learning ...
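As context for this entry, a common building block in locally differentially private federated learning is to clip each client's update and add calibrated noise on-device before anything is shared with the server. The Python sketch below illustrates only that generic mechanism; the helper name `privatize_update` and the clip/noise values are illustrative assumptions, not the algorithm proposed in the paper.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.3, rng=None):
    """Clip a client's model update and add Gaussian noise before it
    leaves the device (the standard Gaussian-mechanism building block)."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

# Each client privatizes its update locally; the server only ever sees
# the noisy versions and averages them.
clients = [np.random.randn(10) for _ in range(5)]
noisy = [privatize_update(u) for u in clients]
server_estimate = np.mean(noisy, axis=0)
```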
We present an efficient algorithm for regularized optimal transport. In ...
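For context, the most common baseline for entropy-regularized optimal transport is the Sinkhorn matrix-scaling iteration. The Python sketch below shows that standard iteration on a toy problem; the function name `sinkhorn`, the regularization value, and the iteration count are illustrative assumptions, and the paper's own algorithm may differ.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iter=1000):
    """Standard Sinkhorn iterations for the entropy-regularized problem
    min_P <P, C> - reg * H(P)  s.t.  P 1 = a, P^T 1 = b."""
    K = np.exp(-C / reg)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # match column marginals
        u = a / (K @ v)                  # match row marginals
    return u[:, None] * K * v[None, :]   # transport plan

# Tiny example: uniform marginals, squared-distance cost.
x, y = np.linspace(0, 1, 4), np.linspace(0, 1, 5)
C = (x[:, None] - y[None, :]) ** 2
a, b = np.full(4, 0.25), np.full(5, 0.2)
P = sinkhorn(a, b, C)
print(P.sum(axis=1), P.sum(axis=0))      # approximately a and b
```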
Facing the upcoming era of Internet-of-Things and connected intelligence...
In scalable machine learning systems, model training is often paralleliz...
A theoretical, and potentially also practical, problem with stochastic g...
We develop a fast and reliable method for solving large-scale optimal tr...
We introduce novel convergence results for asynchronous iterations which...
Many popular learning-rate schedules for deep neural networks combine a ...
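A typical example of such a composite schedule is a linear warmup followed by cosine decay. The Python sketch below shows one such combination; the name `warmup_cosine_lr` and all hyperparameter values are illustrative assumptions, not the schedules analyzed in the paper.

```python
import math

def warmup_cosine_lr(step, base_lr=0.1, warmup_steps=500, total_steps=10_000):
    """One common composite schedule: linear warmup to base_lr, then
    cosine decay towards zero over the remaining steps."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * min(1.0, progress)))

# The schedule is typically queried once per optimizer step.
lrs = [warmup_cosine_lr(s) for s in range(10_000)]
```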
The convergence of stochastic gradient descent is highly dependent on th...
Stochastic gradient algorithms are often unstable when applied to functi...
Motivated by large-scale optimization problems arising in the context of...
The increasing scale of distributed learning problems necessitates the d...
Stochastic gradient methods with momentum are widely used in application...
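For reference, the baseline method in question is the stochastic heavy-ball update v_{k+1} = beta * v_k + g_k, x_{k+1} = x_k - lr * v_{k+1}. Below is a minimal Python sketch with an assumed stochastic-gradient oracle and arbitrary hyperparameters; it illustrates the generic method, not any analysis or variant from the paper.

```python
import numpy as np

def sgd_momentum(grad, x0, lr=0.01, beta=0.9, n_steps=1000, rng=None):
    """Plain stochastic heavy-ball iteration:
    v_{k+1} = beta * v_k + g_k,   x_{k+1} = x_k - lr * v_{k+1}."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    v = np.zeros_like(x)
    for _ in range(n_steps):
        g = grad(x, rng)        # stochastic gradient oracle
        v = beta * v + g
        x = x - lr * v
    return x

# Example: noisy gradients of f(x) = 0.5 * ||x||^2; iterates approach 0.
noisy_grad = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
print(sgd_momentum(noisy_grad, np.ones(3)))
```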
Anderson acceleration is a well-established and simple technique for spe...
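Anderson acceleration itself is standard: keep a short history of iterates of a fixed-point map, solve a small least-squares problem for mixing weights, and combine the mapped iterates. The Python sketch below implements that textbook version (the function name, window size, and tolerances are illustrative assumptions), not whatever refinement the paper may introduce.

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, n_iter=50, tol=1e-10):
    """Anderson acceleration of the fixed-point iteration x_{k+1} = g(x_k).
    Keeps the last m residuals, solves a small least-squares problem for the
    mixing weights, and combines the corresponding g-values."""
    x = np.asarray(x0, dtype=float)
    X_hist, G_hist = [], []              # past iterates and their images under g
    for _ in range(n_iter):
        gx = g(x)
        X_hist.append(x); G_hist.append(gx)
        X_hist, G_hist = X_hist[-(m + 1):], G_hist[-(m + 1):]
        F = np.array([gi - xi for gi, xi in zip(G_hist, X_hist)])  # residuals
        if len(X_hist) == 1:
            x_new = gx                                   # plain fixed-point step
        else:
            # Unconstrained difference form of the mixing-weight problem.
            dF = (F[1:] - F[:-1]).T
            gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
            dG = (np.array(G_hist)[1:] - np.array(G_hist)[:-1]).T
            x_new = gx - dG @ gamma
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: accelerate the slowly converging fixed-point map g(x) = cos(x).
print(anderson_accelerate(np.cos, np.array([1.0])))      # ~0.739085
```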
We present StochasticPrograms.jl, a user-friendly and powerful open-sour...
This paper introduces an efficient algorithm for finding the dominant ge...
This paper introduces an efficient second-order method for solving the e...
The event-driven and elastic nature of serverless runtimes makes them a ...
We present POLO --- a C++ library for large-scale parallel optimization ...
Distributed training of massive machine learning models, in particular d...
Asynchronous computation and gradient compression have emerged as two ke...
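One widely used compression operator in this setting is top-k sparsification, where each worker transmits only the largest-magnitude gradient entries. The Python sketch below illustrates that generic operator with illustrative names and sizes; it is not necessarily the compression scheme analyzed in the paper.

```python
import numpy as np

def top_k_compress(grad, k):
    """Keep only the k largest-magnitude entries of a gradient and return
    them in sparse (indices, values) form, as typically exchanged by workers."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def decompress(idx, vals, dim):
    out = np.zeros(dim)
    out[idx] = vals
    return out

# A worker sends roughly k/dim of the gradient; the server reconstructs it.
g = np.random.randn(1_000)
idx, vals = top_k_compress(g, k=50)
g_hat = decompress(idx, vals, g.size)
```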
Motivated by the success of reinforcement learning (RL) for discrete-tim...
This paper presents an asynchronous incremental aggregated gradient algo...
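For context, the serial incremental aggregated gradient (IAG) update keeps a table of the most recently computed component gradients and steps along their average; an asynchronous variant additionally allows the table entries to be stale. The Python sketch below shows only the serial baseline, with illustrative names, step size, and test problem, not the algorithm proposed in the paper.

```python
import numpy as np

def iag(grads, x0, lr=0.05, n_epochs=300):
    """Serial incremental aggregated gradient: refresh one stored component
    gradient per step and move along the average of all stored gradients."""
    x = np.asarray(x0, dtype=float).copy()
    n = len(grads)
    table = [g(x) for g in grads]             # stored component gradients
    aggregate = sum(table)
    for _ in range(n_epochs):
        for i in range(n):                    # cyclic component selection
            new_g = grads[i](x)
            aggregate += new_g - table[i]     # refresh one table entry
            table[i] = new_g
            x = x - lr * aggregate / n
    return x

# Example: f(x) = sum_i 0.5 * ||x - c_i||^2; the minimizer is mean(c_i).
centers = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([3.0, 1.0])]
grads = [lambda x, c=c: x - c for c in centers]
print(iag(grads, np.zeros(2)))                # approximately [1.33, 1.0]
```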
Mini-batch optimization has proven to be a powerful paradigm for large-s...
We generalize stochastic subgradient descent methods to situations in wh...
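The baseline being generalized here is the plain stochastic subgradient method with a diminishing step size. Below is a minimal Python sketch on a least-absolute-deviations problem; the function name, step-size rule, and data are illustrative assumptions rather than the setting treated in the paper.

```python
import numpy as np

def stochastic_subgradient(data, x0, lr0=0.5, n_steps=5000, rng=None):
    """Baseline stochastic subgradient method for the nonsmooth problem
    min_x (1/n) * sum_i |a_i^T x - b_i|, with a 1/sqrt(k) step size."""
    rng = np.random.default_rng() if rng is None else rng
    A, b = data
    x = np.asarray(x0, dtype=float).copy()
    for k in range(1, n_steps + 1):
        i = rng.integers(len(b))                  # sample one term
        r = A[i] @ x - b[i]
        g = np.sign(r) * A[i]                     # a subgradient of |a_i^T x - b_i|
        x = x - (lr0 / np.sqrt(k)) * g
    return x

# Least-absolute-deviations regression on synthetic, noiseless data.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
print(stochastic_subgradient((A, b), np.zeros(3), rng=rng))   # approaches x_true
```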