Call me dogged. Perhaps I learnt it from my new friend Tess.
She doesn’t blink. She doesn’t pause. She doesn’t escalate.
Tess is the AI chatbot at a major company I had the misfortune of dealing with recently. I say ‘dealing with’, but really, it was more of a prolonged psychological experiment. One in which I was tested to see how many times I could type ‘human being’ before losing my will to live.
Perhaps my problem is that I’m not an ‘agreeable’ person in terms of the Big Five personality traits. For the most part, I like humans. I definitely like reality. I don’t love conflict, but I prefer friction to fakery. Which is why I find the faux agreeability of chatbot interactions so psychologically grating, and so manipulative.
This isn’t just taste or temperament; research suggests that personality plays a significant role in how we experience chatbots and synthetic voices. Nass and Lee (2001) demonstrated that users can detect personality traits such as extraversion or introversion in computer-generated speech, and that they tend to trust and prefer voices that match their own personality. This explains why some people find chatbots’ cheerful, agreeable tones comforting, while others, particularly those lower in agreeableness who value frankness and directness, feel frustrated or manipulated by the same style. When the chatbot’s personality clashes with the user’s own, the interaction becomes grating rather than soothing.