arXiv Analytics

arXiv:2310.01557 [cs.LG]

SmartPlay: A Benchmark for LLMs as Intelligent Agents

Yue Wu, Xuan Tang, Tom M. Mitchell, Yuanzhi Li

Published 2023-10-02 (Version 1)

Recent large language models (LLMs) have demonstrated great potential toward intelligent agents and next-generation automation, but a systematic benchmark for evaluating LLMs' abilities as agents is still missing. We introduce SmartPlay: both a challenging benchmark and a methodology for evaluating LLMs as agents. SmartPlay consists of 6 different games, including Rock-Paper-Scissors, Tower of Hanoi, and Minecraft. Each game features a unique setting, providing up to 20 evaluation settings and infinite environment variations. Each game in SmartPlay uniquely challenges a subset of 9 important capabilities of an intelligent LLM agent, including reasoning with object dependencies, planning ahead, spatial reasoning, learning from history, and understanding randomness. Because each game tests a distinct set of capabilities, we can analyze each capability separately. SmartPlay serves not only as a rigorous testing ground for evaluating the overall performance of LLM agents but also as a roadmap for identifying gaps in current methodologies. We release our benchmark at github.com/LLMsmartplay/SmartPlay.
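The games described in the abstract imply the familiar agent-environment loop: the environment emits a text observation, the LLM agent returns a text action, and the loop repeats with a reward signal until the episode ends. The sketch below is a minimal, hypothetical illustration of that loop using a toy Rock-Paper-Scissors environment; the names SimpleRPSEnv and random_agent are assumptions made for this example and do not reflect SmartPlay's actual API (see the repository for the real interface).

```python
# Hypothetical sketch of the text-game agent loop implied by the benchmark.
# SimpleRPSEnv and random_agent are illustrative stand-ins, not SmartPlay code.
import random


class SimpleRPSEnv:
    """Toy Rock-Paper-Scissors environment with text observations."""
    ACTIONS = ["rock", "paper", "scissors"]
    BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

    def __init__(self, rounds=5):
        self.rounds = rounds

    def reset(self):
        self.round = 0
        self.score = 0
        return "New match. Choose rock, paper, or scissors."

    def step(self, action):
        opponent = random.choice(self.ACTIONS)
        if self.BEATS.get(action) == opponent:
            self.score += 1
            reward = 1
        elif action == opponent:
            reward = 0
        else:
            reward = -1
        self.round += 1
        done = self.round >= self.rounds
        obs = f"You played {action}, opponent played {opponent}. Score: {self.score}."
        return obs, reward, done


def random_agent(observation):
    """Stand-in for the LLM under test: maps a text observation to a text action."""
    return random.choice(SimpleRPSEnv.ACTIONS)


env = SimpleRPSEnv()
obs, done, total = env.reset(), False, 0
while not done:
    action = random_agent(obs)          # an LLM prompt/completion call would go here
    obs, reward, done = env.step(action)
    total += reward
print("Episode return:", total)
```

In an actual evaluation, random_agent would be replaced by a call to the LLM being benchmarked, with the running history of observations and actions included in the prompt so the model can exercise capabilities such as learning from history and understanding randomness.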

Related articles:
arXiv:2406.19370 [cs.LG] (Published 2024-06-27)
Emergence of Hidden Capabilities: Exploring Learning Dynamics in Concept Space
arXiv:2210.02654 [cs.LG] (Published 2022-10-06)
Learning Algorithms for Intelligent Agents and Mechanisms
arXiv:1809.02627 [cs.LG] (Published 2018-09-07)
Unity: A General Platform for Intelligent Agents