
How AI Surveys with Personality Boost Participant Responses

“I feel like we clicked.” That’s not something anyone says about a survey. But one research participant said exactly that after completing an interview with a conversational AI bot. What made the difference? The AI chatbot had a personality.

Conversational AI has already transformed research, generating 8-10 times more input than standard open-ended questions. But most platforms use the same neutral tone for every participant. The next evolution of conversational AI? Making those conversations feel genuinely human.

“Engaged people have more to say and they think deeper on your issues instead of giving sort of an offhand answer,” says Frank Kelly, Market Research Practice Lead at Virtual Incentives. “So if engagement is the ultimate goal, then it leads me to wonder if there are ways to adapt that interview style.”


Virtual Incentives partnered with outset.ai to answer that question, testing three distinct AI personalities against the standard neutral voice. The findings reveal a fundamental shift in how conversational AI should work.

How Personality-Adapted AI Gets Participants Talking

The experiment was straightforward: 250 participants across four groups, all answering the same 12 questions about where they choose to live. Same topic, same incentive, same questions. The only variable was how the AI interviewer communicated.

We tested three distinct personalities – authoritative, empathetic, and humorous – against a standard neutral control. The control group experienced what most conversational AI platforms offer today: polite, neutral, and as one participant put it, “just like a machine, I guess.” These standard interviews averaged 15 minutes and about 1,000 words per respondent. Here’s what happened when we gave the AI a personality.

Download the infographic here.


The Humorous Interviewer was witty, relatable, and engaging, using light humor and casual language to keep the conversation flowing.

  • 53% increase in engagement over control
  • 1,553 words (vs. 1,012 control)
  • 22 minutes (vs. 15 minutes control)

The Authoritative Interviewer was direct, calm, and professional, getting straight to the point without small talk.

  • 31% increase in engagement over control
  • 1,321 words (vs. 1,012 control)
  • 22 minutes (vs. 15 minutes control)

The Empathetic Interviewer was warm, conversational, and receptive, making participants feel heard and comfortable.

  • 21% increase in engagement over control
  • 1,226 words (vs. 1,012 control)
  • 19 minutes (vs. 15 minutes control)

The numbers are compelling, but the qualitative feedback reveals something deeper. One participant in the humorous group said, “I feel like we clicked. Everything I said was understood very easily. It flowed freely, and I enjoyed it.”

Compare that to the control group’s “just like a machine” response, and it’s easy to see what personality adaptation accomplishes: it turns surveys into genuine conversations.

The Path Forward: Choice or Algorithm?

The results show that giving AI chatbots a personality works. The question now is how to scale it. Frank Kelly sees two distinct paths forward.

The first approach puts control in participants’ hands. Before starting an interview, they would design their own interviewer by setting personality preferences – formal to casual, serious to humorous, direct to conversational, reserved to enthusiastic. The AI would then adjust its communication style to match.

“You could have continuums for different personality traits,” Frank explains. “The system scores each dimension and tells the AI: adjust your discussion in this direction to optimize the interview with that person.”
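
One way to picture this is a small scoring step between the participant’s slider choices and the AI interviewer’s prompt. The sketch below is a minimal illustration of that idea, assuming simple 1-to-5 sliders for the four dimensions described above; the class, function names, and wording are hypothetical and not tied to any existing platform.

```python
# A minimal sketch of the participant-choice approach described above.
# The 1-5 slider scales, names, and phrasing are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class PersonalityProfile:
    """Participant-set sliders, each scored 1 (left label) to 5 (right label)."""
    formality: int = 3     # formal (1) .. casual (5)
    humor: int = 3         # serious (1) .. humorous (5)
    directness: int = 3    # direct (1) .. conversational (5)
    enthusiasm: int = 3    # reserved (1) .. enthusiastic (5)


def build_style_instructions(profile: PersonalityProfile) -> str:
    """Turn continuum scores into plain-language guidance for the AI interviewer."""
    scales = [
        (profile.formality, "keep a formal, professional tone",
         "use relaxed, casual language"),
        (profile.humor, "stay serious and to the point",
         "work in light humor where it fits"),
        (profile.directness, "ask questions directly, without small talk",
         "keep the exchange loose and conversational"),
        (profile.enthusiasm, "stay calm and reserved",
         "be upbeat and enthusiastic"),
    ]
    hints = []
    for score, low_hint, high_hint in scales:
        if score <= 2:
            hints.append(low_hint)
        elif score >= 4:
            hints.append(high_hint)
        # a middle score (3) leaves that dimension at the neutral default
    if not hints:
        return "Use a polite, neutral interview style."
    return "Adjust your interview style as follows: " + "; ".join(hints) + "."


# Example: a participant who asks for a casual, humorous interviewer
print(build_style_instructions(PersonalityProfile(formality=5, humor=4)))
```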

The benefit is transparency and consent. People feel empowered when they control their experience. 

But there’s a catch: what people think they want and what actually engages them might be two different things. This leads us to our second approach: letting data make the call.

This would consist of AI analyzing panel profiles, previous interview patterns, and behavioral data to automatically predict which communication style will resonate with each respondent. It’s friction-free: no setup, no decisions, just continuous improvement with each conversation.

But the most effective solution may be letting both work together. Participants start by choosing their preferred style, establishing trust and buy-in. Then, behind the scenes, the AI tracks what actually drives engagement and gradually refines the experience. It’s the best of both worlds: human agency meets machine learning, creating interviews that feel personal because they are.
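
To make the hybrid concrete, here is a rough sketch of one possible feedback rule layered on top of the participant’s initial choice. It assumes words per answer as the engagement signal, uses the control group’s averages above as a rough baseline, and falls back to the best-performing humorous style when engagement drops; the function, thresholds, and signal choice are all illustrative assumptions rather than a described implementation.

```python
# A rough sketch of the hybrid approach: honor the participant's chosen style,
# then nudge it when engagement drops. Thresholds and names are illustrative.
from statistics import mean


def refine_style(chosen_style: str, answer_word_counts: list[int],
                 baseline_words_per_answer: float = 1012 / 12) -> str:
    """Return the style to use for the next question.

    baseline_words_per_answer defaults to the control group's average
    (roughly 1,012 words over 12 questions).
    """
    if len(answer_word_counts) < 3:
        # Not enough signal yet; keep honoring the stated preference.
        return chosen_style
    recent = mean(answer_word_counts[-3:])
    if recent < 0.7 * baseline_words_per_answer:
        # Engagement is trailing off; shift toward the style that
        # performed best overall in the test (humorous, +53%).
        return "humorous"
    return chosen_style


# Example: a participant picked the authoritative style, but answers are shrinking
print(refine_style("authoritative", [90, 60, 40, 35]))  # -> "humorous"
```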

The AI Personality Advantage

The research makes a clear case for moving beyond one-size-fits-all approaches. When AI adapts to how people actually want to communicate, engagement skyrockets — and that means better data quality. 

Participants think deeper about questions, share nuances that closed-end surveys miss, and stay engaged through the end. Perhaps most tellingly, they describe the experience as feeling like a genuine connection rather than a chore.

The technology to make this happen already exists. Conversational AI platforms can adjust tone, vocabulary, and communication style right now. The opportunity isn’t waiting to be built; it’s waiting to be applied.

Of course, the conversation itself is only part of the equation. If we focus on improving respondent experience through engaging surveys, fair incentives, and respect for participants’ time and input, we create the conditions for quality insights. 

This is how we solve the engagement crisis that’s plagued research for decades. Not with unnecessarily high incentives or stricter quality controls, but by making research feel less like a transaction and more like a conversation worth having.

How does adding personality to AI surveys improve data quality?

When the AI’s tone feels more human, participants stay engaged longer and share more. In the study, personality-driven interviews produced more words, longer conversations, and feedback that described the experience as enjoyable and natural.

What does Virtual Incentives and Outset.AI’s research reveal about different AI personalities?

The study tested three AI personalities – humorous, authoritative, and empathetic – against a neutral control. All three outperformed the neutral tone in engagement and word count. The humorous interviewer led with a 53% increase in engagement, followed by the authoritative at 31% and the empathetic at 21%. Participants described these interviews as more natural and conversational, compared to the control group’s “just like a machine” experience.

What are the two main paths for scaling AI personality in research?

One option is to let participants choose their preferred interviewer style before starting. The other is for AI to analyze behavior and automatically adjust tone for each respondent. The most effective approach may combine both: starting with participant choice, then letting AI refine its style based on engagement data over time.
