Much of the previous work towards digital agents for graphical user inte...
The remarkable capabilities of large language models have been accompani...
In-context learning (ICL), teaching a large language model (LLM) to perf...
Dynamic evaluation of language models (LMs) adapts model parameters at t...
Language models (LMs) now excel at many tasks such as few-shot learning,...
A common recent approach to semantic parsing augments sequence-to-sequen...
Despite their strong performance on many tasks, pre-trained language mod...
Generic unstructured neural networks have been shown to struggle on out-...
The dominant paradigm for semantic parsing in recent years is to formula...
Sequence-to-sequence models excel at handling natural language variation...
Model-based reinforcement learning (RL) is appealing because (i) it enab...
Language model pre-training has been shown to capture a surprising amoun...
We consider the task of mapping pseudocode to long programs that are fun...
Semantic parsing using hierarchical representations has recently been pr...
The web provides a rich, open-domain environment with textual, structura...
Reinforcement learning (RL) agents improve through trial-and-error, but ...
To learn a semantic parser from denotations, a learning algorithm must s...
Our goal is to learn a semantic parser that maps natural language uttera...
A core problem in learning semantic parsers from denotations is picking ...
We consider the task of learning a context-dependent mapping from uttera...
Two important aspects of semantic parsing for question answering are the...