research ∙ 03/20/2023
How (Implicit) Regularization of ReLU Neural Networks Characterizes the Learned Function – Part II: the Multi-D Case of Two Layers with Random First Layer
Randomized neural networks (randomized NNs), where only the terminal lay...
research ∙ 12/31/2021
Infinite width (finite depth) neural networks benefit from multi-task learning unlike shallow Gaussian Processes – an exact quantitative macroscopic characterization
We prove in this paper that optimizing wide ReLU neural networks (NNs) w...
research ∙ 02/26/2021
NOMU: Neural Optimization-based Model Uncertainty
We introduce a new approach for capturing model uncertainty for neural n...
research ∙ 11/07/2019