Preserving training dynamics across batch sizes is an important tool for...
The mechanisms behind the success of multi-view self-supervised learning...
Multiview Self-Supervised Learning (MSSL) is based on learning invarianc...
Training stability is of great importance to Transformers. In this work,...
Self-supervised representation learning (SSL) methods provide an effecti...
Transformers have gained increasing popularity in a wide range of applic...
Image augmentations applied during training are crucial for the generali...
While state-of-the-art contrastive Self-Supervised Learning (SSL) models...
In this work we examine how fine-tuning impacts the fairness of contrast...
Despite the success of a number of recent techniques for visual self-sup...
Episodic and semantic memory are critical components of the human memory...
Videos are a rich source of multi-modal supervision. In this work, we le...
Modern neural network training relies on piece-wise (sub-)differentiable...
Image classification with deep neural networks is typically restricted t...
Continual learning is the ability to sequentially learn over time by acc...
Knowledge Matters: Importance of Prior Information for Optimization [7],...
We propose a new method for input variable selection in nonlinear regres...
Lifelong learning is the problem of learning multiple consecutive tasks ...