As Large Language Models quickly become ubiquitous, it becomes critical ...
This paper describes our zero-shot approaches for the Visual Word Sense ...
Neural networks for computer vision extract uninterpretable features des...
Instruction tuning is an effective technique to align large language mod...
With the rise of Large Language Models (LLMs) and their ubiquitous deplo...
As LLMs become commonplace, machine-generated text has the potential to ...
Images generated by diffusion models like Stable Diffusion are increasin...
Watermarking the outputs of generative models is a crucial technique for...
In an era of widespread web scraping, unlearnable dataset methods have t...
Self-supervised learning, dubbed the dark matter of intelligence, is a p...
Recently developed text-to-image diffusion models make it easy to edit o...
Typical diffusion models are trained to accept a particular form of cond...
The strength of modern generative models lies in their ability to be con...
Potential harms of large language models can be mitigated by watermarkin...
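One common family of schemes behind the watermarking mentioned in this entry biases generation toward a pseudorandom "green" subset of the vocabulary and then tests text for an excess of green tokens. The detector sketch below is illustrative only; the hash construction and the parameter gamma are assumptions, not details taken from the excerpt.

```python
import hashlib

def is_green(prev_token, token, gamma=0.25):
    # A token counts as "green" if a hash keyed by the previous token lands
    # in the lowest gamma fraction of hash space (assumed construction).
    h = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return int.from_bytes(h[:4], "big") / 2**32 < gamma

def green_fraction(token_ids, gamma=0.25):
    # Watermarked text, whose sampler softly favored green tokens at
    # generation time, should show a fraction well above gamma; a one-sided
    # z-test against gamma then gives the detection decision.
    pairs = list(zip(token_ids[:-1], token_ids[1:]))
    return sum(is_green(p, t, gamma) for p, t in pairs) / max(len(pairs), 1)
```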
Recent trends in language modeling have focused on increasing performanc...
Cutting-edge diffusion models produce images with high quality and custo...
Sharpness-Aware Minimization (SAM) has recently emerged as a robust tech...
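The SAM procedure named in this entry perturbs the weights toward the locally worst-case direction before taking the descent step. The sketch below is a minimal illustration on a toy NumPy loss; the loss function, radius rho, and learning rate are assumptions rather than details from the excerpt.

```python
import numpy as np

def loss_and_grad(w):
    # Toy nonconvex loss and its gradient (stand-in for a network loss).
    loss = np.sum(np.sin(w) + 0.1 * w**2)
    grad = np.cos(w) + 0.2 * w
    return loss, grad

def sam_step(w, rho=0.05, lr=0.1):
    # 1) Ascend to the (approximately) worst-case point in an L2 ball of radius rho.
    _, g = loss_and_grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # 2) Take the gradient at the perturbed point, but apply it to the original weights.
    _, g_adv = loss_and_grad(w + eps)
    return w - lr * g_adv

w = np.random.randn(10)
for _ in range(100):
    w = sam_step(w)
```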
As industrial applications are increasingly automated by machine learnin...
Federated learning is particularly susceptible to model poisoning and ba...
Despite the clear performance benefits of data augmentations, little is ...
Many applications require robustness, or ideally invariance, of neural n...
Standard diffusion models involve an image transform – adding Gaussian n...
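The Gaussian noising transform referred to in this entry is, in DDPM-style models, a closed-form corruption of the clean image. The NumPy sketch below is illustrative only; the linear beta schedule and the image shape are assumptions, not details from the excerpt.

```python
import numpy as np

def forward_diffuse(x0, t, betas):
    # Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I).
    alpha_bar = np.cumprod(1.0 - betas)[t]
    noise = np.random.randn(*x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

betas = np.linspace(1e-4, 0.02, 1000)   # assumed linear noise schedule
x0 = np.random.rand(32, 32, 3)          # stand-in "image" with values in [0, 1]
xt = forward_diffuse(x0, t=500, betas=betas)
```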
The prevalence of data scraping from social media as a means to obtain d...
Imperceptible poisoning attacks on entire datasets have recently been to...
Federated learning (FL) has rapidly risen in popularity due to its promi...
A central tenet of Federated learning (FL), which trains models without ...
Federated learning has quickly gained popularity with its promises of in...
It is widely believed that the implicit regularization of stochastic gra...
Differentiable architecture search (DARTS) is a widely researched tool f...
The adversarial machine learning literature is largely partitioned into ...
Many applications require the robustness, or ideally the invariance, of ...
Data poisoning and backdoor attacks manipulate training data to induce s...
Data poisoning is a threat model in which a malicious actor tampers with...
Large organizations such as social media companies continually release d...
Data poisoning and backdoor attacks manipulate victim models by maliciou...
Data poisoning attacks involve an attacker modifying training data to ma...
Matching and partitioning problems are fundamental to computer vision a...
Data poisoning–the process by which an attacker takes control of a model...
The idea of federated learning is to collaboratively train a neural netw...
State-of-the-art adversarial attacks on neural networks use expensive it...
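The "expensive iterative" optimization alluded to in this entry usually means a projected-gradient-descent (PGD) style loop, whose cost scales with the number of iterations. The sketch below is a generic illustration; the `grad_loss_wrt_input` oracle, step sizes, and toy usage are assumptions, not details from the excerpt.

```python
import numpy as np

def pgd_attack(x, grad_loss_wrt_input, eps=8/255, step=2/255, iters=40):
    # Iteratively ascend the victim loss w.r.t. the input, projecting back
    # into an L-infinity ball of radius eps around the clean image x.
    x_adv = x.copy()
    for _ in range(iters):
        g = grad_loss_wrt_input(x_adv)             # gradient of the loss w.r.t. the input
        x_adv = x_adv + step * np.sign(g)          # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep pixel values valid
    return x_adv

# Toy stand-in oracle: pushes the input away from an arbitrary reference image.
reference = np.zeros((8, 8))
x_clean = np.full((8, 8), 0.5)
x_adv = pgd_attack(x_clean, lambda x: x - reference)
```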
We empirically evaluate common assumptions about neural networks that ar...
Energy minimization methods are a classical tool in a multitude of compu...
Many tasks in imaging can be modeled via the minimization of a nonconvex...
The idea of video super-resolution is to use different viewpoints of a ...