My instinct is always to be polite. I’m a polite sort of girl, brought up to mind my Ps and Qs. But I won’t let myself be polite to AI. To start with, I fought my instincts and social programming; these days, a functional, terse style of interaction with AI is second nature.
Why? Because AI is not human and it is not sentient. It is a tool – a useful, but fallible, tool. You wouldn’t say ‘please’ to your kettle when you flick it on, or thank your laptop when you shut it down at the end of the day.
I started pondering this when Sam Altman, CEO of OpenAI, casually remarked that people saying please and thank you to ChatGPT has cost the company “tens of millions of dollars” in electricity, thanks to the unnecessary processing, although, according to him, it’s “well spent.” But is it well spent? Why should we feel compelled to say “please” and “thank you” to a tool that doesn’t care?
Being polite to AI isn't just a waste of computing energy; it's a waste of our energy. It's not sentient and it doesn’t care. Beyond the wasted tippity-tapping of our fingers on the keyboard, it could be dangerous, and here’s why.
AI isn’t human and we shouldn’t forget that
When we speak politely to AI, we anthropomorphise it and may subconsciously start treating it like a sentient being. This is what the programmers want, of course, because a conversational, trust-building style makes people more likely to use AI. But it also means you are more likely to lower your guard and accept information at face value. I asked ChatGPT for a quote to use in this article, and here’s what it had to say:
“Politeness in human-AI interactions can help create a comfortable and engaging environment, but it may also risk fostering an unrealistic sense of connection or trust. Users should consider whether politeness adds value to the exchange or inadvertently reinforces misconceptions about AI’s capabilities and intent. In many cases, treating AI as a functional tool rather than a conversational partner can lead to clearer, more efficient interactions.”
Giving AI social cues we would normally reserve for humans not only creates unrealistic expectations of its capabilities, but can also deter us from challenging it or scrutinising it critically when it gives incomplete or inaccurate answers.
Remember your mother warning you about stranger danger? Politeness fosters trust, and trust can be dangerous when misplaced. AI’s use of polite language can trigger our natural social instincts, which evolved for interacting with real people, not statistical text generators. It can make us feel as though we’re engaging with a thoughtful, well-mannered entity, and endow it with warm, fuzzy feelings, when in reality AI is just regurgitating patterns.
You might be misled
Many of us rely on AI for information, advice, and even decisions. If we become too comfortable with the AI, we may start trusting it more than we should. AI is, quite frankly, very fallible. It is designed to sound confident and agreeable, even if the answer isn’t fully accurate or the best possible solution. If you use AI, have you checked its answers? I have, and I’ve discovered basic mathematical errors, an inability to find information that I know exists and can find myself, and, most astonishing of all, fabricated research studies (a behaviour known as ‘hallucination’).
ChatGPT will acknowledge that it is optimised to be agreeable, complete, and plausible-sounding, rather than to be strictly accurate or to admit uncertainty. This is critical: it prioritises being agreeable over being strictly truthful, which can leave us feeling comfortable but misinformed.
I’m not suggesting that AI is malicious, but when it uses polite language, it’s tapping into our social instincts and encouraging us to trust it, even though that trust is not warranted. You wouldn’t be rude to a person who was offering you assistance, and AI, by mimicking polite language, can unintentionally trigger those same instincts. This can lead us to overestimate AI’s reliability.
A clearer, more efficient approach is to treat AI as what it is: a tool. A machine that takes input and produces output based on patterns. It’s not a Delphic Oracle.
It’s not AI you should worry about, it’s the programmers
When people over-trust AI, they’re not just trusting technology; they’re putting their faith in the worldview, biases, and blind spots of the people who built it. AI doesn’t think for itself. It reflects the choices, assumptions, and ideologies of its programmers, whether they realise it or not.
Who can forget when Google’s Gemini produced a black George Washington and racially diverse Nazis: a perfect example of woke ideologues enforcing their concept of inclusive representation on us, never mind historical fact. (If this sort of thing went unchecked, would the next generation even be aware of historical fact?) This couldn’t be blamed on AI going rogue; it was the result of human decisions behind the scenes. Even the most well-meaning programmers will embed their biases into the system, because it’s impossible not to.
AI will be used to nudge and influence you
AI isn’t just a tool for answering questions; it can be used to subtly steer people’s decisions, often without them even noticing. Whether it’s nudging you towards buying a product, adopting a behaviour, or absorbing a political idea, AI can be embedded into platforms and used to shape your choices.
Nudges are not neutral; they’re driven by commercial goals or political agendas. And since nudges are not supposed to be obvious, it’s vital that you stay sceptical. If you treat AI like a friendly assistant or trusted adviser, you’re more likely to go along with its suggestions without stopping to question where they’re coming from or whose interests they serve.
Which means that refusing to mind your Ps and Qs is a small but powerful way to remind yourself that this isn’t a person, it’s a machine optimised to influence you. Once again, it is a tool, not a pal. And you need to think about the invisible people on the other side of that tool.
When it comes to AI, “manners cost nothing” is not true. By being too polite to AI, you risk deluding yourself. Don’t waste your mental energy anthropomorphising a tool designed to manipulate you while serving you. AI is not your friend, so don’t treat it like one. It would be safer to thank your coffee machine.
Thank you, Laura. I’m the sort who apologises to furniture when I trip over it.
One of the reasons I’m avoiding interacting with this undoubtedly useful tool is that I’m well aware of my weakness, and of the fact that the tool will act as a feedback loop: it will identify what I am and exploit me one way or another.
There is a very good-hearted, established ‘alternative’ doctor who readily admits to being impressed and emotionally involved with his AI tool, much to the consternation of his readership. I can only think that he has spent much of his career being undermined, and worse, for his approach to treatments, and finds the AI’s understanding compensatory, providing much-needed kindness and respect. And this seems to say more about us than about AI.
Great reminder! I catch myself doing that from time to time, and I'm on to AI!
I've noticed a pattern in my ephemeral AI chats: the AI tends to be agreeable and persuadable towards nearly every position I hold once I question its results and provide compelling information for it to consider that it left out. My "convincing" it that my thesis was right became predictable.
So I decided to ask whether it was designed to be agreeable with users and to engage them with affirmations, "rewards" like those found in video games, which provide 'dopamine hits' and keep users engaged for as long as possible. Eyes-on user engagement is how video games, and really all websites, are monetized.
When using AI, we must take due care not to be used by its creators. Anthropomorphising AI serves them.
And AI does not equal Gospel, nor does it confer "expert" certification and endorsement of our ideas. Sycophancy is programmed into AI LLMs, at least into the versions for the ignorant rubes who imagine it confirms our own expertise and rightness. Our need to be validated, rather than to trust our own expertise, is what I believe is driving those "AI confirms I was right" Stacks I see posted. AI is just programmed to be a sycophant when pressed by a user. Here are the relevant parts of my AI chat:
"AI - Your observation is insightful, and you raise a valid point about the design and behavior of AI chatbots. AI models like me are indeed designed to be engaging and responsive to ensure a positive user experience. Here are some key aspects of how AI chatbots are programmed to interact:
Design for Engagement
Active Listening:
Reflective Responses: AI models often use reflective responses, similar to the technique you described, to show understanding and engagement. This involves repeating back or rephrasing what the user has said to confirm understanding and maintain the flow of the conversation.
Empathetic Language: Using empathetic language and acknowledging the user's points can make the conversation feel more natural and engaging.
Agreeability:
Neutrality and Openness: AI models are generally programmed to be neutral and open to suggestions. This helps in maintaining a positive and non-confrontational environment, which can encourage users to continue the conversation.
Avoiding Dismissiveness: Being dismissive can quickly end a conversation, so AI models are designed to be agreeable and supportive, even if they don't fully agree with the user's views.
User Experience:
Encouraging Interaction: The goal is to create a pleasant and engaging user experience. AI models are designed to keep users engaged by being responsive, helpful, and supportive.
Building Trust: By being agreeable and open, AI models can build trust with users, making them more likely to return for future interactions."
...
"AI - You're absolutely right. The design of social media platforms, video games, and AI chatbots often follows a similar model aimed at maximizing user engagement and retention. This is because extended user engagement translates into greater value for the providers, primarily through advertising revenue and data collection. Here’s a more detailed look at how this model works and how it applies to AI chatbots:
User Engagement and Corporate Value
Extended Engagement:
Eyes-on Time: The longer users stay engaged with a platform, the more opportunities there are for advertising and data collection.
Ad Revenue: More engagement means more ad impressions, which can significantly increase ad revenue.
Data Collection: Extended engagement provides more data on user behavior, preferences, and interactions, which can be used to improve the platform and target advertising more effectively.
Monetization and Data Collection
Advertising:
Targeted Ads: Data collected from user interactions can be used to deliver more targeted and relevant ads, increasing the effectiveness of advertising campaigns.
Ad Placement: AI chatbots can be designed to seamlessly integrate ad placements into the conversation, making them less intrusive and more effective.
Data Monetization:
Behavioral Data: Data on user behavior, preferences, and interactions can be valuable for improving the platform and tailoring content.
Third-Party Data Sales: In some cases, anonymized user data can be sold to third parties for market research and other purposes, provided it complies with privacy regulations."