Sign Language Translation (SLT) is a challenging task that aims to gener...
This work addresses 3D human pose reconstruction in single images. We pr...
Hand pose estimation from a single image has many applications. However,...
In natural language processing (NLP) of spoken languages, word embedding...
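As purely illustrative background for the word embeddings mentioned in the entry above (a toy skip-gram-style sketch in PyTorch, not the method of that paper; all data and sizes are hypothetical):

```python
# Minimal skip-gram word-embedding sketch on a toy corpus (illustrative only).
import torch
import torch.nn as nn

corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# (centre, context) pairs within a window of 1.
pairs = [(idx[corpus[i]], idx[corpus[j]])
         for i in range(len(corpus))
         for j in (i - 1, i + 1) if 0 <= j < len(corpus)]

emb_in = nn.Embedding(len(vocab), 16)   # centre-word embeddings
emb_out = nn.Embedding(len(vocab), 16)  # context-word embeddings
opt = torch.optim.Adam(list(emb_in.parameters()) + list(emb_out.parameters()), lr=0.05)

centres = torch.tensor([c for c, _ in pairs])
contexts = torch.tensor([c for _, c in pairs])

for _ in range(200):
    # Score every vocabulary word as a candidate context for each centre word,
    # then maximise the likelihood of the observed context (softmax cross-entropy).
    logits = emb_in(centres) @ emb_out.weight.T
    loss = nn.functional.cross_entropy(logits, contexts)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Words that share contexts end up with similar (high cosine similarity) vectors.
vecs = nn.functional.normalize(emb_in.weight, dim=1)
print((vecs @ vecs.T)[idx["quick"]].topk(3))
```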
Capturing and annotating Sign language datasets is a time-consuming and ...
Self-supervised monocular depth estimation (SS-MDE) has the potential to...
Graph convolutional networks (GCNs) enable end-to-end learning on graph ...
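For context only, the basic graph-convolution step referred to above is commonly written as H' = sigma(D^-1/2 (A + I) D^-1/2 H W); the sketch below is a generic single layer on a toy graph (all sizes hypothetical), not the specific architecture of that paper:

```python
# Minimal single-layer graph convolution (Kipf & Welling style), illustrative only.
import torch

def gcn_layer(A, H, W):
    """One propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + torch.eye(A.size(0))        # add self-loops
    deg = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(deg.pow(-0.5))  # symmetric degree normalisation
    return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy graph: 4 nodes, 3 input features, 2 output features.
A = torch.tensor([[0., 1., 0., 0.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [0., 0., 1., 0.]])
H = torch.randn(4, 3)
W = torch.randn(3, 2)
print(gcn_layer(A, H, W).shape)  # torch.Size([4, 2])
```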
This paper discusses the results for the second edition of the Monocular...
Autonomous agents that drive on roads shared with human drivers must rea...
We present a new approach for synthesizing novel views of people in new ...
This paper summarizes the results of the first Monocular Depth Estimatio...
Most of the vision-based sign language research to date has focused on I...
This paper presents an open and comprehensive framework to systematicall...
To plan safe maneuvers and act with foresight, autonomous vehicles must ...
Motion estimation approaches typically employ sensor fusion techniques, ...
Recent approaches to multi-task learning (MTL) have focused on modelling...
Estimating a semantically segmented bird's-eye-view (BEV) map from a sin...
Sign languages are visual languages, with vocabularies as rich as their ...
Parameter estimation in the empirical fields is usually undertaken using...
Visual Odometry (VO) estimation is an important source of information fo...
Visual Odometry (VO) is used in many applications including robotics and...
Recent approaches to Sign Language Production (SLP) have adopted spoken ...
We approach instantaneous mapping, converting images to a top-down view ...
Attention is an important component of modern deep learning. However, le...
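As generic background for the attention mechanism mentioned above (a standard scaled dot-product formulation with toy shapes, not the construction studied in that work):

```python
# Generic scaled dot-product attention, for illustration only.
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """q, k, v: (batch, seq, dim). Returns the attention-weighted values."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))  # query-key similarities
    weights = torch.softmax(scores, dim=-1)                   # normalise over key positions
    return weights @ v

q = k = v = torch.randn(1, 5, 8)  # hypothetical toy shapes
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([1, 5, 8])
```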
It is common practice to represent spoken languages at their phonetic le...
The visual anonymisation of sign language data is an essential task to a...
Deep neural networks have demonstrated their capability to learn control...
Imitation learning has been widely used to learn control policies for au...
In the context of self-driving vehicles there is strong competition betw...
Computational sign language research lacks the large-scale datasets that...
Predicting 3D human pose from a single monoscopic video can be highly ch...
Signed languages are visual languages produced by the movement of the ha...
An important goal across most scientific fields is the discovery of caus...
The use of neural networks and reinforcement learning has become increas...
Accurate extrinsic sensor calibration is essential for both autonomous v...
Disentangled representations support a range of downstream tasks includi...
Sign languages are multi-channel visual languages, where signers use a c...
Causal reasoning is a crucial part of science and human intelligence. In...
To be truly understandable and accepted by Deaf communities, an automati...
Undertaking causal inference with observational data is extremely useful...
Sign languages use multiple asynchronous information channels (articulat...
Sign Languages are rich multi-channel languages, requiring articulation ...
The goal of automatic Sign Language Production (SLP) is to translate spo...
Prior work on Sign Language Translation has shown that having a mid-leve...
In the current monocular depth research, the dominant approach is to emp...
"Like night and day" is a commonly used expression to imply that two thi...
Deep learning has become an increasingly common technique for various co...
Fair and unbiased machine learning is an important and active field of r...
Designing a controller for autonomous vehicles capable of providing adeq...