This paper proposes a computational framework for the design optimizatio...
Skip connections and normalisation layers form two standard architectura...
Compiler optimization level recognition can be applied to vulnerability ...
Training very deep neural networks is still an extremely challenging tas...
The use of deep neural network (DNN) models as surrogates for linear and...
Our ability to know when to trust the decisions made by machine learning...
Annealed importance sampling (AIS) and related algorithms are highly eff...
This paper establishes a central limit theorem under the assumption that...
Minimax optimization has recently gained a lot of attention as adversari...
This paper presents a consistent computational framework for multiscale ...
The theory of integral quadratic constraints (IQCs) allows the certifica...
Smooth game optimization has recently attracted great interest in machin...
Overparameterization has been shown to benefit both the optimization and...
Many tasks in modern machine learning can be formulated as finding equil...
Increasing the batch size is a popular way to speed up neural network tr...
Model-based reinforcement learning (MBRL) is widely seen as having the p...
Natural gradient descent has proven effective at mitigating the effects ...
Reducing the test time resource requirements of a neural network while p...
A novel computational framework for designing metamaterials with negativ...
Variational Bayesian neural networks (BNNs) perform variational inferenc...
The choice of batch-size in a stochastic optimization algorithm plays a ...
Variational Bayesian neural networks combine the flexibility of deep lea...
Weight decay is one of the standard tricks in the neural network toolbox...
The generalization properties of Gaussian processes depend heavily on th...
Combining the flexibility of deep learning with Bayesian uncertainty est...
Convolutional neural networks (CNNs) are inherently limited to model geo...