Inside UW–Madison’s AI Terrarium: Recreating Human Belief in a Digital World

Our modern world is defined by perpetual change, with new developments, products and technologies reshaping how people communicate and form opinions every day. Inspired by this ever-evolving environment, UW–Madison researchers have created a controlled artificial intelligence (AI) ecosystem designed to better understand human beliefs and biases toward persuasive messages.

The project, led by Yun-Shiuan Chuang, centers on a system known as the “AI Terrarium”: a simulated world of AI agents built to mimic human thought processes. With the help of the AI Terrarium, researchers hope to better understand social influence, social media dynamics and message effects.

Where the AI Terrarium began:

What started as a “risky” idea from Chuang has since evolved into an interdisciplinary research project with real potential for positive impact on society. Sijia Yang, assistant professor at the School of Journalism and Mass Communication (SJMC) and a co-principal investigator of the AI Terrarium project, applauds Chuang’s vision and his efforts thus far.

“I could say this whole grand project wouldn’t really exist without his inspiration,” Yang said.

Struck by the chaos of the COVID-19 pandemic and its aftermath, Chuang realized that an artificial environment built to simulate real-world interactions could help researchers prepare for future crises.

“There was a lot of information—and misinformation—being spread on social media, and people were arguing past each other,” said Chuang. “That’s when I began to think about whether it’s possible to predict and simulate how information spreads and devise better intervention strategies.”

So, as a PhD student, Chuang proposed his idea to various professors and researchers in the hopes of getting a team together. To his relief, the project began gaining traction among psychology and communication scholars and soon took off. And Chuang’s influence and drive didn’t stop there.

“He’s been really one of the major architects,” Yang said. “He thinks about ‘how do we manage the whole data structure on the back end? What kind of projects do we run?’”

After completing his PhD, Chuang continues to guide the AI Terrarium’s technical direction as an unpaid Honorary Fellow at UW–Madison.

What is the AI Terrarium, and how does it work?

Other studies and projects have explored similar ideas, but the AI Terrarium builds on earlier agent-based approaches by powering its agents with large-scale, ChatGPT-like models. Before these large-language-model agents can communicate or engage in persuasive conversations, the research team must first build them with human characteristics and demographics, creating what they call “digital twins.” These AI counterparts are designed to reflect the ways real humans think, disagree and respond to information. Chuang describes a digital twin as “an AI agent powered by large language models (LLMs) such as ChatGPT, designed to stand in for a real person in a simulation. Instead of being a generic chatbot, each digital twin role-plays a specific individual with their own background, political leanings, and a rich web of beliefs learned from human data and then updates those beliefs using psychologically grounded rules.”

Once created, the model agents role-play people with different decision-making processes, including distinct cognitive or conformity biases. The researchers can then place these agents into different situations, with different visual and written messages, to observe how beliefs and persuasion evolve through interaction.
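For readers curious how such a simulation might be wired together, here is a minimal, illustrative sketch in Python. It is not the team’s actual code: the persona fields, the `query_llm` stub and the simple conformity-weighted belief update are assumptions standing in for the real LLM calls and the psychologically grounded rules the researchers describe.

```python
from dataclasses import dataclass


@dataclass
class DigitalTwin:
    """An illustrative 'digital twin': a persona plus a belief the agent updates."""
    name: str
    demographics: str        # e.g. "45-year-old rural nurse"
    political_leaning: str   # e.g. "moderate conservative"
    belief: float            # stance on the issue, from -1 (oppose) to +1 (support)
    conformity_bias: float   # 0 = ignores peers, 1 = strongly swayed by peers


def query_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM such as ChatGPT.
    A real implementation would send the persona-conditioned prompt to a model;
    here we return a canned reply so the sketch runs on its own."""
    return "I see the point, but I'm weighing it against my own experience."


def discussion_round(agents: list[DigitalTwin], message: str, persuasiveness: float) -> None:
    """One round: every agent reads the message, 'hears' its peers and updates its belief."""
    peer_mean = sum(a.belief for a in agents) / len(agents)
    for agent in agents:
        prompt = (f"You are {agent.name}, a {agent.demographics} who leans "
                  f"{agent.political_leaning}. You just read: '{message}'. Respond in character.")
        reply = query_llm(prompt)  # in the real system, the LLM role-plays the persona here
        # Illustrative belief update (an assumed, simplified rule): nudged toward the
        # message, then pulled toward peers in proportion to the agent's conformity bias.
        agent.belief += 0.3 * (persuasiveness - agent.belief)
        agent.belief += agent.conformity_bias * 0.2 * (peer_mean - agent.belief)
        agent.belief = max(-1.0, min(1.0, agent.belief))


agents = [
    DigitalTwin("A", "45-year-old rural nurse", "moderate conservative", -0.4, 0.6),
    DigitalTwin("B", "22-year-old college student", "progressive", 0.5, 0.3),
    DigitalTwin("C", "60-year-old retired teacher", "independent", 0.0, 0.8),
]
for round_num in range(3):
    discussion_round(agents, "Vaccination protects the people you care about.", persuasiveness=0.7)
    print(round_num, [round(a.belief, 2) for a in agents])
```

In the actual Terrarium, the canned reply and the hand-tuned update rule would be replaced by LLM-generated, persona-specific responses and the psychologically grounded belief rules Chuang describes.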

Why is the AI Terrarium important?

So why doesn’t the research team simply test these persuasive messages on real people? Within the controlled environment of the AI Terrarium, the researchers can screen persuasive messages faster and at lower cost. For public health campaigns, testing strategies and gathering human data can be rigorous, time-consuming and inefficient. Yang believes the team’s work could give organizations a far more scalable way to test their messages.

“[Public health agencies and organizations] don’t have the money, the funding or, really, the personnel to help with this type of rigorous message testing and the campaign planning skills and experiences. So partially, if this technology or this whole approach is useful, I can imagine it could be a very scalable approach to really improve their messaging capacity,” Yang said.

By testing messages about tobacco control, the HPV vaccine and abortion within the AI Terrarium, researchers can narrow hundreds of potential public health strategies down to the 10 most effective. And public health isn’t the only area of application. Policymakers, brands, newsrooms and even everyday social media users could draw on the AI Terrarium’s findings when crafting future messages.
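As a rough illustration of how that narrowing-down step could work, the sketch below scores each candidate message by the average belief shift it produces across a simulated agent population and keeps the top 10. The scoring function here is a toy stand-in, not the researchers’ method; in the real workflow the score would come from the LLM-agent simulation itself.

```python
import random
import zlib


def simulated_belief_shift(message: str, num_agents: int = 100) -> float:
    """Toy stand-in for running one message through a population of digital twins
    and measuring how much, on average, their beliefs move toward the message."""
    rng = random.Random(zlib.crc32(message.encode()))  # stable per message, for the demo
    return sum(rng.uniform(-0.1, 0.4) for _ in range(num_agents)) / num_agents


# Hypothetical pool of draft messages; a real campaign would supply its own candidates.
candidate_messages = [f"Draft public health message #{i}" for i in range(1, 201)]

# Rank all candidates by their simulated average belief shift and keep the top 10.
ranked = sorted(candidate_messages, key=simulated_belief_shift, reverse=True)
for msg in ranked[:10]:
    print(msg, round(simulated_belief_shift(msg), 3))
```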

“This is a quintessential Wisconsin story,” SJMC Professor Dhavan Shah, another co-principal investigator, said. “We combined cognitive theory, computational modeling and communication science to make AI societies more human-like and therefore more useful for the public good.”

The future of the AI Terrarium:

Now the team hopes to fine-tune the concept by creating smaller-group discussions among agents and programming them with more specific human traits. The AI Terrarium also has the potential to test persuasive images, which would add a visual dimension to its evaluation of how messages shape conversation.

With help from departments across campus, this ecosystem of AI agents has the potential to drive real change in our modern world.

“It’s a huge bet, and it was also very risky in the start, so I really appreciate Dhavan, Sijia, Tim and everyone’s support in this vision,” said Chuang.

As the AI Terrarium continues to evolve, its potential extends far beyond the lab. From improving public health campaigns to helping policymakers and communicators understand human behavior, this AI ecosystem could transform the way messages shape beliefs in the real world.