AI Moderated Research: Why Humans Stay in the Loop
- Juan Jose (JJ) Ayala

I recently tested the Team Percepto AI Moderator in a real-world setting using the controversy surrounding César Chávez. My goal was to see whether an automated agent could handle a sensitive, current topic. I ran eight sessions with real participants, each lasting between 12 and 14 minutes. The test showed that AI-led sessions are legitimate, but as humans we must invest time in training our agents; it takes work to make sure they are exactly what you need them to be. I see AI as a calculator: a tool I designed, coded, and trained to gather precise emotional data. Participant feedback was fantastic, even though they knew they were talking to a machine.
This research matters to your business because it demonstrates a strong alternative to all-human interviews. By keeping a human in the loop, I can use technology to surface deep truths that often stay hidden. I wanted to see whether people could still support the farm workers' movement even after losing respect for the man who started it. I also looked at whether the community felt the government acted out of justice or simply to avoid a scandal. To get these answers, I used a mix of methods to bridge the gap between what people say and what they believe. The AI agent mapped how specific events connect to deep values.
The numbers tell a clear story. 100% of the adults I spoke with want the Chávez name removed from public honors immediately, and around 75% felt a sense of shock or betrayal. The data suggests the name is no longer a symbol of labor but a trigger of trauma. In practical terms, people are performing a "surgical amputation" of a tainted icon. One participant said, "Cesar Chavez will be erased off the map." Another shared, "Public recognition of someone who's done horrible things like him is not a good way to acknowledge those horrible things." This shift moves people toward collective symbols instead of individual ones. I also found that people are skeptical of successor icons: when I asked about Dolores Huerta, one person said, "Hell, no. She's just as complicit as he was."
My research found that a common mistake is moving too fast just to avoid backlash. One person told me, "They're only doing it because... it would make them look bad." Another error is assuming you can simply swap one leader for another; the data shows a fear that any person-based symbol will eventually fail. I believe this project brings value by clarifying questions and surfacing insights that help teams understand shifts in public sentiment. It confirms that while I use AI to gather data, the human touch in training and analysis is what makes the results precise.
Team Percepto, A Consumer Research Insights & AI Adoption Company, April 2026