Large Language Models (LLMs) trained on massive corpora have shown remarkable success in knowledge-intensive tasks. Yet, most of them rely on pre-stored knowledge. Inducing new general knowledge from a specific environment and performing reasoning with the acquired knowledge---situated inductive reasoning---is crucial and challenging for machine intelligence. In this paper, we design Mars, an interactive environment devised for situated inductive reasoning. It introduces counter-commonsense game mechanisms by modifying terrain, survival settings, and task dependencies while adhering to certain principles. In Mars, agents need to actively interact with their surroundings, derive useful rules, and perform decision-making tasks in specific contexts. We conduct experiments on various RL-based and LLM-based methods and find that they all struggle on this challenging situated inductive reasoning benchmark. Furthermore, we explore Induction from Reflection, where we instruct agents to perform inductive reasoning from their historical trajectories. Its superior performance underscores the importance of inductive reasoning in Mars. Through Mars, we aim to galvanize advancements in situated inductive reasoning and set the stage for the next generation of AI systems that can reason in an adaptive and context-sensitive way.
Imagine a scenario: in the United States, you drive on the right side of the road. When you travel to the UK, you might initially find it strange how people drive. However, you soon realize that driving on the left is the norm there and adapt to the new rule. This is inductive reasoning: the capacity to identify underlying rules, mechanisms, or general claims from past observations and experience.
Mars, an open-world environment for situated inductive reasoning, involves inductive reasoning through active interaction and applying newly acquired rules to make context-sensitive decisions.
First, we build Mars on Crafter by introducing counter-commonsense elements. Agents interact with the environment and accumulate historical trajectories. For example, an agent might observe that, regardless of time or location, mining stone always yields diamonds and placing a table always consumes 2 diamonds. Consequently, the agent can induce the rules "Mining stone yields diamond" and "Placing a table consumes 2 diamonds". When tasked with making a wooden pickaxe, the agent can apply these rules to plan and execute specific actions in different contexts.
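The induction step above can be sketched as a simple pattern-mining pass over the trajectory. This is a minimal illustration, not the actual Mars implementation; the `(action, outcome)` step format and the `min_support` threshold are assumptions for the example.

```python
from collections import Counter

def induce_rules(trajectory, min_support=3):
    """Induce candidate rules from (action, outcome) steps (hypothetical format).

    A rule is kept only if the outcome followed the action every time it was
    tried, and the action was tried often enough to trust the pattern.
    """
    support = Counter(trajectory)                  # how often each pair co-occurred
    attempts = Counter(a for a, _ in trajectory)   # how often each action was tried
    rules = []
    for (action, outcome), n in support.items():
        if n == attempts[action] and n >= min_support:
            rules.append(f"{action} -> {outcome}")
    return rules

steps = [("mine stone", "gain diamond")] * 3 + [("place table", "lose 2 diamonds")] * 3
print(induce_rules(steps))
```

A single counterexample (e.g. one "mine stone" step that yields nothing) breaks the universal pattern and suppresses the corresponding rule.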
To challenge the agent with an environment that deviates from prior (parametric) knowledge and necessitates situated inductive reasoning, we introduce targeted modifications to typical commonsense elements on the foundation of Crafter, classified into three categories: (1) Terrain: altering the predictable terrain distributions; (2) Survival: modifying the behavior of non-player characters, which affects the agent's status levels (e.g., health); (3) Task Dependency: changing the dependencies between tasks.
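A sampled world can be thought of as a configuration drawn from these three categories. The field names below are purely illustrative, not the paper's actual world-sampling schema:

```python
# Hypothetical world-modification config; keys and values are illustrative.
modifications = {
    # Terrain: alter the predictable terrain distribution
    "terrain": {"swap": [("grass", "sand")]},
    # Survival: NPC behavior changes that affect agent status levels
    "survival": {"zombie_touch_health_delta": +1},  # zombies heal instead of harm
    # Task Dependency: rewire crafting/collection prerequisites
    "task_dependency": {"table": {"requires": {"diamond": 2}}},
}
```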
While we can sample numerous new worlds following the above procedure, we carefully designed several strict principles so that the worlds are not completely fantastical and remain playable. We guarantee that each collectible item has at least one method of acquisition and each tool has a practical use, motivating the agent to engage in crafting. For every event that increases a resource, there is a corresponding event that decreases it, maintaining balance. The quantity of items required for task achievements must not exceed what the world provides, and we develop an automated program to verify that each achievement is achievable.
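The achievability check can be sketched as a reachability computation over item dependencies: repeatedly add any item whose prerequisites are already obtainable, then confirm every achievement was reached. This is an assumed simplification of the paper's automated program, with illustrative item names.

```python
def all_achievable(achievements, recipes, start_items=()):
    """Fixed-point reachability over item dependencies (illustrative sketch).

    `recipes` maps each obtainable item to the set of items it requires.
    """
    have = set(start_items)
    changed = True
    while changed:
        changed = False
        for item, needs in recipes.items():
            if item not in have and needs <= have:  # all prerequisites obtainable
                have.add(item)
                changed = True
    return all(a in have for a in achievements)

recipes = {"wood": set(), "table": {"wood"}, "wood_pickaxe": {"wood", "table"}}
print(all_achievable({"wood_pickaxe"}, recipes))  # → True
```

A world whose recipes form an unsatisfiable cycle (e.g. a table requiring a pickaxe that requires a table) would fail this check and be rejected.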
Building on the JARVIS-1 framework, we further introduce an induction-from-reflection (IfR) module into the controller. Given the selected task and the agent's observation, the planner decomposes the task into a sequence of subgoals, and the controller then outputs specific actions to accomplish them. Successful plans are stored in the skill library, while failed plans prompt the agent to perform self-explanation and replan. Whenever the controller finishes a subgoal (with status "succeed", "failed", or "timeout"), we prompt the LLM to engage in reflective thinking and induce possible game mechanisms from the agent's historical trajectory. The derived rules are stored in a rule library that the task proposer, planner, and controller can all consult.
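The control flow of this loop can be summarized as follows. `planner`, `controller`, and `reflect` stand in for LLM calls; the skill library and self-explanation steps are abbreviated, so this is a sketch of the structure rather than the actual IfR implementation.

```python
def run_episode(task, planner, controller, reflect, rule_library):
    """Sketch of the induction-from-reflection (IfR) loop (placeholder components)."""
    subgoals = planner(task, rule_library)  # decompose the task using known rules
    for subgoal in subgoals:
        status, trajectory = controller(subgoal, rule_library)
        # After every subgoal terminates ("succeed", "failed", or "timeout"),
        # reflect on the trajectory to induce candidate game mechanisms.
        rule_library.extend(reflect(trajectory))
        if status != "succeed":
            return "replan"  # failed plans trigger self-explanation and replanning
    return "done"
```

The key design choice is that reflection runs after every subgoal, regardless of outcome, so rules induced early in an episode are already available to later planning and control calls.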
We systematically evaluate various RL-based and LLM-based methods on Mars. For LLMs that cannot accept image inputs, we provide a wrapper that produces text descriptions of the gameplay screen. Quantitative results on Mars are presented below.
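Such a wrapper renders the symbolic game state as text. The observation fields below (`visible`, `inventory`, `health`) are illustrative assumptions, not the actual Mars wrapper interface:

```python
def describe_observation(obs):
    """Render a symbolic observation dict as text for text-only LLMs (sketch)."""
    lines = [
        "You see: " + ", ".join(obs["visible"]) + ".",
        "Inventory: " + ", ".join(f"{k} x{v}" for k, v in obs["inventory"].items()) + ".",
        f"Health: {obs['health']}.",
    ]
    return "\n".join(lines)

obs = {"visible": ["tree", "stone"], "inventory": {"wood": 2}, "health": 9}
print(describe_observation(obs))
```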
We find that all baseline models exhibit a performance decline when transitioning from the Default to Mars scenarios, with the extent of the decline dependent on the type (e.g., terrain, survival, and task dependency) and the number of modifications. This underscores that Mars presents significant challenges for current methodologies. We also explore the performance of the Induction from Reflection module, which outperforms the baseline models, demonstrating the importance of inductive reasoning in a counter-commonsense environment.
@inproceedings{tang2024mars,
title={Mars: Situated Inductive Reasoning in an Open-World Environment},
author={Tang, Xiaojuan and Li, Jiaqi and Liang, Yitao and Zhu, Song-chun and Zhang, Muhan and Zheng, Zilong},
booktitle={38th Conference on Neural Information Processing Systems (NeurIPS 2024) Track on Datasets and Benchmarks},
year={2024}
}