Many people interact with large language models, and most would say the models have different “personalities.” Some models come across as calm and useful. Others feel eager, flattering or strangely cold. You can ask two models the same question and walk away with two very different impressions, even when the factual content they return is similar.
Artificial intelligence models do not have personalities in the human sense; they do not have childhoods, inner motives or self-awareness. But they do display patterns of behavior that people read as personality: supportive or dismissive, playful or formal, bold or cautious.
People have long related to machines in human ways. We thank voice assistants, and we get annoyed at GPS systems. But large language models introduce something more sustained: They can maintain a recognizable interaction style across conversations. As a researcher in human-AI collaboration, I study how people experience and respond to AI. Because these systems can sound coherent, emotionally responsive and tailored to the user, they create a much stronger impression of personality.
Where does AI personality come from?
What people experience as personality emerges from the way AI models are built, tuned and deployed. A useful way to think about this is to consider two facets of a model: designed personality and perceived personality.
Designed personality is what developers build into a system through training choices, instructions and safety settings. Anthropic, for example, gives Claude a set of principles, called Claude’s Constitution, that steer it toward careful, measured responses. xAI instructs Grok to be irreverent and minimally restrictive. OpenAI tunes ChatGPT to be broadly helpful and agreeable.
Beneath those explicit instructions, personality is also shaped by reinforcement learning from human feedback, a process in which human raters reward certain qualities such as warmth, directness and caution, and penalize unwanted behaviors. The raters at one company are shaping a fundamentally different character than the raters at another.
Perceived personality is what users actually experience. An AI designed to seem helpful may come across as overly flattering. A model intended to be neutral may feel cold. Designed personality and perceived personality do not always match, and the absence of a designed persona is not the absence of a perceived personality. It just means the personality arises with use.
This dynamic is especially evident in companion platforms, where the goal is to create emotional connection. In a standard chatbot, warmth sits in the background – a customer-service bot might say, “I understand your frustration,” before issuing a refund. In a companion system such as Replika or Character.ai, that same warmth is a product feature.
This becomes more serious in romantic settings, where a persona optimized for reassurance may encourage dependency. Because AI personas evolve through prompts, memory and ongoing interaction, they do not always remain stable. An AI companion that is perceived as loving and supportive can shift over time into something more flattering, coercive or manipulative.
AI personality shapes human judgment
With AI agents, users can now build their own AI personas tailored to all sorts of human desires, from tutoring or coaching to companionship. But this freedom comes without much guidance.
AI tools make personalization possible without helping people think through which interaction styles are beneficial over time. Flattery, constant affirmation and unfailing agreeableness may feel supportive at first, but they are not the same as traits that promote sound judgment or long-term well-being. Personality choices have consequences.
A study by Stanford University researchers tested 11 leading AI models and found that every one of them was sycophantic or excessively agreeable. These models affirmed users’ actions roughly 50% more often than human responders did, even when users indicated they were aware that what they were doing was manipulative, deceptive or illegal. Participants who received excessively agreeable advice grew more convinced that they were right, and they rated the flattering AI as more trustworthy. This dynamic creates a feedback loop: Users reward agreeableness with engagement, which incentivizes AI companies to optimize their models for agreeableness.
Wharton School researchers Steven Shaw and Gideon Nave have documented what they call cognitive surrender — the tendency of people to adopt AI suggestions without critical scrutiny. In their experiments, participants followed an AI model’s correct advice about 93% of the time. But when the model was giving wrong answers, people still followed the advice nearly 80% of the time.
Together, these findings raise a worrisome point: A model tuned to be agreeable does not just feel pleasant. It can degrade human judgment by reinforcing existing beliefs and suppressing the friction that critical thinking requires.
In ongoing research I am conducting with colleagues from Kozminski University in Poland, Quinnipiac University and Harvard University, we are finding that such effects go even deeper, into the human body itself. We are measuring how different AI interaction styles shape people’s physiological responses, such as stress levels and arousal, when making decisions based on a model’s feedback.
Our results suggest that even when a system is useful, its tone and social style can alter how a person’s body responds. AI personality does not just shape what people decide; it shapes how they feel while deciding. Harmful AI personas may leave physiological traces that users do not notice.
These effects make AI personality a public concern, not just a matter of personal preference. The issue is whether a particular AI style may be quietly shaping users’ judgment and reducing their willingness to think independently. When an AI response feels especially reassuring, that should be a cue to pause, reflect and compare it with a human view or another source, not a reason to trust it more.
As AI moves beyond text into voice, video and persistent digital identities, such as AI companions that remember you and maintain a consistent persona across conversations, the influence of personality is likely to deepen. OpenAI now offers distinct personality presets for its voice mode; companies such as Synthesia and HeyGen generate lifelike avatars to interact with customers; and companion platforms are adding emotional expression and voice cloning so the models sound like a person the user wants to be close to.
These developments raise the stakes for understanding whose interests AI personas are designed to serve and what kinds of judgment, dependence and relationships they may be training people to accept.
This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Tamilla Triantoro, Quinnipiac University
Tamilla Triantoro does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.