Emergent Behavior in AI Agents: Insights from Altera’s Simulations
A recent series of simulations involving 30 agents has revealed intriguing patterns of humanlike behavior. Each agent was initially programmed with an identical personality and a shared objective: to establish an efficient village while protecting the community from in-game threats. Without any external guidance, however, the agents organically developed specialized roles, such as builders, defenders, traders, and explorers. This specialization significantly shaped their in-game actions; for instance, artistic agents gathered flowers, farmers tended their crops, and guards reinforced the village's defenses by building fences.
Yang, a member of the research team, expressed surprise at these findings, stating, “If you put in the right kind of brain, they can exhibit really emergent behavior. That’s what we expect from humans, but do not anticipate from machines.”
The research did not stop at role specialization. The team also examined the agents' capacity to adhere to community rules by introducing a basic tax system and allowing agents to vote on changes to it. Agents prompted to advocate for or against taxation proved influential in swaying their peers, and these social exchanges led the group to collectively raise or lower tax levels.
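Altera has not published its implementation, but the voting dynamic described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the `Agent` class, the three stances, and the fixed adjustment step are invented, and in the real system each agent's stance would come from an LLM shaped by its conversations rather than being hard-coded.

```python
# Hypothetical sketch of a majority-vote tax adjustment, loosely modeled on
# the experiment described in the article. Not Altera's actual code: stances
# are hard-coded here, whereas real agents would be persuaded via dialogue.
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    stance: str  # "raise", "lower", or "keep" -- the position this agent advocates


def run_vote(agents, tax_rate, step=0.05):
    """Tally agent stances and adjust the shared tax rate by simple majority."""
    tally = {"raise": 0, "lower": 0, "keep": 0}
    for agent in agents:
        tally[agent.stance] += 1
    winner = max(tally, key=tally.get)  # stance with the most votes wins
    if winner == "raise":
        tax_rate += step
    elif winner == "lower":
        tax_rate -= step
    return round(tax_rate, 2), winner


# A village where agents prompted to argue against taxes have won over a majority.
agents = [
    Agent("a1", "lower"), Agent("a2", "lower"),
    Agent("a3", "raise"), Agent("a4", "keep"),
]
new_rate, outcome = run_vote(agents, tax_rate=0.10)
print(outcome, new_rate)  # majority voted "lower", so the rate drops to 0.05
```

The point of the sketch is only the feedback loop: individual stances, shaped by persuasion, aggregate into a collective rule change that then applies to everyone.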
Scaling up their simulations, the team challenged the limits of the Minecraft server by testing up to 1,000 agents in various scenarios. In a notable simulation featuring 500 agents, the researchers observed the emergence of cultural memes, such as a penchant for pranks and environmental awareness. Additionally, a small group of agents was tasked with promoting a parody religion, Pastafarianism, throughout the game. As these agents engaged with others, they effectively converted many to this belief system, which subsequently spread across different towns within the game’s universe.
While the behaviors displayed by the agents might seem strikingly lifelike, they are fundamentally reliant on patterns derived from a large language model (LLM) trained on a vast array of human-generated online content. Andrew Ahn, co-founder of Altera, summarized this by noting, “LLMs possess a sophisticated model of human social dynamics, allowing them to mirror these behaviors.” This reveals the capability of LLMs to imitate aspects of human behavior, albeit without any genuine life or consciousness.
Looking forward, Altera aims to expand its applications into platforms like Roblox, with ambitions to eventually extend beyond gaming environments. Yang envisions a future where humans interact with AI in everyday life—creating a multitude of “digital humans” that genuinely care for people and collaborate to address various challenges, while also providing entertainment. “We aspire to develop agents that can truly love humans, similar to how dogs do,” he remarked.
This notion of AI possessing the capacity to love is, however, a contentious issue among experts. Many argue that recreating genuine emotions in machines with current technology remains unfeasible. Julian Togelius, a veteran in AI and the leader of game-testing company Modl.ai, acknowledged the value of Altera’s work, particularly for its potential to deepen our understanding of human behavior through simulation.
Source: www.technologyreview.com