Guest Lecture
Speaker: Jianwei Yang
Title: Learning Visual Curiosity for an Agent through Language and Embodiment
Abstract: Nowadays, deep neural networks have become a prevalent choice in NLP, CV, and general AI. However, most of these models are trained offline and then make passive, one-shot predictions. In the real world, a model needs to interact with humans and the environment continually. In this talk, I will present our two works on how to empower an agent with “curiosity” when interacting with humans through language and with the environment through embodiment. To understand the visual world, the agent must continuously interact with humans to acquire useful information. Likewise, the agent needs to move around a target object, as humans do, to understand it better in a 3D environment. Both of these entail “curiosity” – the ability of a deep learning model to actively acquire information in realistic scenarios. This talk will introduce our two initial efforts toward this goal and, hopefully, inspire more thoughts and explorations along this direction.
When: September 2nd, 9 am - 10 am PST
You can find the recording below: