IGLU
Interactive Grounded Language Understanding in a Collaborative Environment
News:
REACT: Redefining Embodied Agents Capabilities through Interactive Grounded Language Instructions. IGLU datasets and data collection tools are published here
SOLVE: RL baseline is published here
SAFE: Simple and Fast Environment for Embodied Dialog Agents is published here
We are excited to announce the winners of the NLP and RL tracks here.
Want to know how the data was collected for the competition? Check out our paper, Collecting Interactive Multi-modal Datasets for Grounded Language Understanding, which will be presented at the NeurIPS InterNLP workshop on Dec 3, 2022!
The current baseline for the IGLU RL task is described in our preprint, "Learning to Solve Voxel Building Embodied Tasks from Pixels and Natural Language Instructions". Check it out!
Join our Slack workspace for discussions and asking questions!
The IGLU RL environment has been accepted at the Embodied AI workshop at CVPR! (arXiv preprint) A minimal usage sketch appears below the news.
IGLU has been accepted to the NeurIPSConf competition track for the second year! This year we are hosting a new NLP task as well as an RL task (NeurIPS 2022 proposal).
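For newcomers, here is a minimal interaction sketch with the IGLU gridworld environment. It assumes the `gridworld` package from the iglu-contest/gridworld repository is installed and that importing it registers the `IGLUGridworld-v0` environment id; check the repository README for the exact, up-to-date API.

```python
# Minimal random-agent rollout in the IGLU gridworld -- a sketch, not the
# official baseline. Assumes the package is installed, e.g. via
# `pip install git+https://github.com/iglu-contest/gridworld.git`.
import gym
import gridworld  # importing the package registers the IGLU environments with gym

env = gym.make('IGLUGridworld-v0')  # env id assumed from the gridworld repo

obs = env.reset()
done = False
episode_reward = 0.0
while not done:
    action = env.action_space.sample()           # placeholder policy: uniform random actions
    obs, reward, done, info = env.step(action)   # classic gym 4-tuple step API
    episode_reward += reward
env.close()
print(f'episode reward: {episode_reward}')
```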
About
Humans have the remarkable ability to adapt to new tasks and environments quickly. Starting from a very young age, humans acquire new skills and learn how to solve new tasks either by imitating the behavior of others or by following natural language instructions. Studies in developmental psychology have shown that natural language communication is an effective method for transmitting generic knowledge between individuals as young as infants. This form of learning can even accelerate the acquisition of new skills by avoiding the trial-and-error inherent in learning purely from observations.
Inspired by this, the AI research community is attempting to develop grounded interactive embodied agents that are capable of engaging in natural back-and-forth dialog with humans to assist them in completing real-world tasks. Notably, the agent needs to understand when to initiate feedback requests if communication fails or instructions are unclear, and it needs to learn new domain-specific vocabulary along the way.
Despite all these efforts, the task is far from solved.
For that reason, we propose the IGLU competition: Interactive Grounded Language Understanding in a collaborative environment.
Specifically, the goal of our competition is to approach the following scientific challenge:
How to build interactive embodied agents that learn to solve a task while provided with grounded natural language instructions in a collaborative environment?
By "interactive agent" we mean that the agent can: (1) follow the instructions correctly, (2) ask for clarification when needed, and (3) quickly adapt newly acquired skills. The IGLU challenge is naturally related to two fields of study that are highly relevant to the NeurIPS community: Natural Language Understanding and Generation (NLU / NLG) and Reinforcement Learning (RL).
Please see the IGLU NeurIPS 2022 proposal for a more detailed description of the task and the application scenario.
📅 Timeline
July: Release of materials: IGLU framework and baseline code.
25th July: The warm-up phase of the competition begins! Participants are invited to start submitting their solutions.
13th August: End of the warm-up phase! The official competition begins.
22nd October: Submission deadline for the RL task. Submissions are closed and the organizers begin the evaluation process.
1st November: Submission deadline for the NLP task. Submissions are closed and the organizers begin the evaluation process.
November: Winners are announced and are invited to contribute to the competition writeup.
2nd-3rd of December: Presentation at NeurIPS 2022 (online/virtual).
🏆 Prizes
The challenge features a total cash prize pool of $15,000 USD.
This prize pool is divided as follows (2 × $6,000 in task prizes plus $3,000 in research prizes):
NLP Task
1st place: $4,000 USD
2nd place: $1,500 USD
3rd place: $500 USD
RL Task
1st place: $4,000 USD
2nd place: $1,500 USD
3rd place: $500 USD
Research prizes: $3,000 USD
Task Winners: For each task, we will evaluate submissions as described in the Evaluation section. The three teams that score highest on this evaluation will receive prizes of $4,000, $1,500, and $500.
Research prizes: We have reserved $3,000 of the prize pool to be given out at the organizers’ discretion to submissions that we think made a particularly interesting or valuable research contribution. If you wish to be considered for a research prize, please include some details on interesting research-relevant results in the README for your submission. We expect to award around 2-5 research prizes in total.
Authorship: In addition to the cash prizes, we will invite the top three teams from both the RL and NLP tasks to co-author a summary manuscript at the end of the competition. At our discretion, we may also include honorable mentions for academically interesting approaches. Honorable mentions will be invited to contribute a shorter section to the paper and have their names included inline.