Interactive Grounded Language Understanding in a Collaborative Environment



Humans have the remarkable ability to adapt to new tasks and environments quickly. Starting from a very young age, humans acquire new skills and learn how to solve new tasks either by imitating the behavior of others or by following natural language instructions. Studies in developmental psychology have shown that natural language communication is an effective method for transmitting generic knowledge between individuals, including infants. This form of learning can even accelerate the acquisition of new skills by avoiding the trial and error that comes with learning purely from observation.

Inspired by this, the AI research community has been working to develop grounded, interactive, embodied agents that are capable of engaging in natural back-and-forth dialog with humans to assist them in completing real-world tasks. Notably, such an agent needs to understand when to ask for clarification if communication fails or instructions are unclear, and it must be able to learn new domain-specific vocabulary.

Despite all these efforts, the task is far from solved.

For that reason, we propose the IGLU competition: Interactive Grounded Language Understanding in a collaborative environment.

Specifically, the goal of our competition is to approach the following scientific challenge: 

How can we build interactive embodied agents that learn to solve tasks while being provided with grounded natural language instructions in a collaborative environment?

By "interactive agent" we mean that the agent can: (1) follow instructions correctly, (2) ask for clarification when needed, and (3) quickly adapt newly acquired skills to new situations. The IGLU challenge is naturally related to two fields of study that are highly relevant to the NeurIPS community: Natural Language Understanding and Generation (NLU / NLG) and Reinforcement Learning (RL).

Please see the IGLU NeurIPS 2022 proposal for a more detailed description of the task and application scenario.

📅 Timeline

🏆 Prizes

The challenge features a total cash prize pool of $15,000 USD.

This prize pool is divided as follows:

Task winners: For each task, submissions will be evaluated as described in the Evaluation section. The three highest-scoring teams per task will receive prizes of $4,000, $1,500, and $500, respectively.

Research prizes: We have reserved $3,000 of the prize pool to be given out at the organizers’ discretion to submissions that we think made a particularly interesting or valuable research contribution. If you wish to be considered for a research prize, please include some details on interesting research-relevant results in the README for your submission. We expect to award around 2-5 research prizes in total.
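As a quick sanity check on the numbers above, the per-task prize tiers applied to both the RL and NLP tasks, plus the research prize reserve, account for the full pool (assuming the same three tiers for each task, which is how we read the breakdown):

```python
# Sanity check of the prize pool arithmetic (tier amounts taken from
# the breakdown above; the symmetric split across tasks is assumed).
task_prizes = [4000, 1500, 500]  # 1st, 2nd, 3rd place per task
num_tasks = 2                    # RL task and NLP task
research_pool = 3000             # research prizes, at organizers' discretion

total = num_tasks * sum(task_prizes) + research_pool
print(total)  # 15000, matching the $15,000 USD pool
```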

Authorship: In addition to the cash prizes, we will invite the top three teams from both the RL and NLP tasks to co-author a summary manuscript at the end of the competition. At our discretion, we may also give honorable mentions to academically interesting approaches; honorable mentions will be invited to contribute a shorter section to the paper and have their names included inline.