HAZARD Challenge: Embodied Decision Making in Dynamically Changing Environments

1 UMass Amherst 2 Tsinghua University 3 Peking University 4 MIT 5 MIT-IBM Watson AI Lab
* Equal Contribution

Abstract

Recent advances in high-fidelity virtual environments serve as one of the major driving forces for building intelligent embodied agents that perceive, reason, and interact with the physical world. Typically, these environments remain unchanged unless agents interact with them. However, in real-world scenarios, agents may also face dynamically changing environments characterized by unexpected events and need to take action rapidly in response. To bridge this gap, we propose a new simulated embodied benchmark, called HAZARD, specifically designed to assess the decision-making abilities of embodied agents in dynamic situations. HAZARD consists of three unexpected disaster scenarios (fire, flood, and wind) and specifically supports the use of large language models (LLMs) to assist common-sense reasoning and decision making. The benchmark enables us to evaluate the decision-making capabilities of autonomous agents across various pipelines, including reinforcement learning (RL), rule-based, and search-based methods, in dynamically changing environments. As a first step toward addressing this challenge with large language models, we further develop an LLM-based agent and perform an in-depth analysis of its promise and the challenges it faces in solving these tasks.
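To give a concrete sense of how an LLM can assist this kind of decision making, the sketch below formats the currently visible objects into a text prompt and asks a language model which one to rescue next. The function names, prompt format, object fields, and `query_llm` callable are illustrative assumptions only, not the actual pipeline evaluated in HAZARD.

```python
# Illustrative only: a minimal LLM-in-the-loop target selector.
# `query_llm` stands in for whatever chat/completion API is available.
def choose_next_target(visible_objects, disaster, query_llm):
    """Ask an LLM which visible object to rescue next.

    visible_objects: list of dicts, e.g. {"name": "purse", "value": 3, "distance_m": 2.4}
    disaster: one of "fire", "flood", "wind"
    query_llm: callable mapping a prompt string to the model's text reply
    """
    lines = [f"- {o['name']} (value {o['value']}, {o['distance_m']:.1f} m away)"
             for o in visible_objects]
    prompt = (
        f"A {disaster} is spreading through the scene. You can rescue one object at a "
        "time. Which object should you pick up next to maximize the total value saved?\n"
        + "\n".join(lines)
        + "\nAnswer with the object name only."
    )
    answer = query_llm(prompt).strip().lower()
    # Fall back to the nearest object if the reply does not match any visible object.
    for o in visible_objects:
        if o["name"].lower() in answer:
            return o
    return min(visible_objects, key=lambda o: o["distance_m"])
```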


Overview

The HAZARD challenge includes fire, flood, and wind scenarios, presenting embodied agents with dynamically changing environments.


Task

In the HAZARD challenge, an embodied agent needs to rescue a given set of objects from a disaster.
The agent's observations include RGB-D signals, temperature or water-level signals, target object information, and segmentation masks. To further challenge perception, we also provide a perceptual version of HAZARD that excludes segmentation masks from the observations. The action space consists of four high-level actions: Pick Up, Explore, Drop, and Walk To, each of which compresses multiple low-level actions. The final performance of an agent is measured by three metrics: value, step, and damage.
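As a rough illustration of a rule-based baseline over this observation/action interface, the sketch below walks to and picks up the nearest visible target and otherwise explores. The observation keys and action strings are assumptions for illustration only; see the benchmark code for the actual API.

```python
# Minimal sketch of a rule-based baseline; all names below are assumed, not the real API.
import numpy as np

class NearestTargetAgent:
    """Walk to and pick up the nearest visible target; otherwise explore."""

    def act(self, obs):
        # Assumed observation layout:
        #   obs["rgb"], obs["depth"]  : per-frame visual signals
        #   obs["field"]              : temperature (fire) or water level (flood)
        #   obs["seg"]                : segmentation mask (absent in the perceptual version)
        #   obs["targets"]            : list of {"id", "name", "position", "visible"}
        #   obs["agent_position"]     : current agent location
        visible = [t for t in obs["targets"] if t.get("visible")]
        if not visible:
            return {"action": "Explore"}
        here = np.asarray(obs["agent_position"])
        nearest = min(visible, key=lambda t: np.linalg.norm(np.asarray(t["position"]) - here))
        if np.linalg.norm(np.asarray(nearest["position"]) - here) > 1.0:
            # Too far to grasp: move closer first.
            return {"action": "Walk To", "object_id": nearest["id"]}
        return {"action": "Pick Up", "object_id": nearest["id"]}
```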

More Agent Examples

* Rescue in the fire scenario. Targets: bag, pocketbook, purse, hairbrush
* Rescue in the fire scenario. Targets: bag, pocketbook, purse, chocolate candy, apple, key
* Rescue in the flood scenario. Targets: bag, pocketbook, purse, chocolate candy, apple, key
* Rescue in the flood scenario. Targets: toothbrush, bowl, plate, banana
* Rescue in the wind scenario. Targets: suitcase, bag, pocketbook, purse, basket, box, bottle, ipod
* Rescue in the wind scenario. Targets: money, box, backpack, bag, pocketbook, purse

Dataset Preview

* You can select different samples from different scenarios and observe how the environment changes.
* We only provide a preview of 10 samples.


Submit Result


