Do You Have to Say Please to Your AI? (Asking for a Friend)
Let's address the elephant in the server room: a lot of us have been saying "please" and "thank you" to our AI chatbots. A 2025 survey found 70% of people are polite to AI, and 12% admitted they do it specifically to protect themselves in case of robot uprisings. Which is either the most rational thing anyone has ever done or a sign that we have collectively lost the plot. Possibly both.
Here's the uncomfortable truth that researchers are landing on: the magic words aren't magic. Flattering your chatbot, threatening it, being extra polite, or insulting it into submission doesn't produce consistent improvements in accuracy. One study found politeness helped. Another found a previous version of ChatGPT was actually more accurate when you insulted it. And somewhere in the middle of all that conflicting research, a team discovered that making an AI pretend it was on Star Trek improved its math scores. Science, everybody.
So if the social niceties don't move the needle, what actually does? Turns out, the secret to better AI output is embarrassingly practical. Give it examples instead of instructions: if you want it to write in your voice, don't describe your voice; paste in ten emails you've already written. Ask for three options instead of one, because choosing between alternatives forces you to actually think about what you want. Tell it to interview you, one question at a time, before generating anything complex. And whatever you do, don't lead the witness. If you tell the AI you're already leaning toward a decision, congratulations, you just paid electricity costs to have your own opinion reflected back at you.
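If you're the kind of person who scripts this stuff, the "examples instead of instructions" tip is just few-shot prompting: prepend real samples rather than describing a style. Here's a minimal sketch in Python; the function name, sample emails, and prompt wording are illustrative, not anything prescribed by the research:

```python
def build_few_shot_prompt(examples, task):
    """Assemble a prompt that shows the model real samples of your
    writing instead of describing the style in the abstract."""
    parts = ["Here are examples of emails I have written:"]
    for i, example in enumerate(examples, 1):
        parts.append(f"--- Example {i} ---\n{example.strip()}")
    parts.append(f"Now write a new email in the same voice. Task: {task}")
    return "\n\n".join(parts)

# Hypothetical samples standing in for "ten emails you've already written".
samples = [
    "Hi team, quick one: the deploy slipped to Thursday. No action needed.",
    "Hey Sam, loved the draft. Two small fixes inline, otherwise ship it.",
]
prompt = build_few_shot_prompt(samples, "decline a meeting politely")
```

The resulting string goes into the chatbot as-is; the point is that the samples, not the adjectives, carry the information about your voice.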
The role-playing tip deserves special attention from anyone using AI for research or technical questions. Telling a chatbot to pretend it's an expert doesn't make it more accurate. It actually encourages hallucination by making the model overly confident in its own internal knowledge. That's not a quirk. That's a documented risk. Save the role-playing for brainstorming, interview practice, and creative problems where there's no single right answer. For anything where accuracy matters, keep it grounded.
As for the pleases and thank yous? Keep them if they make you feel better. The philosopher Kant argued that being cruel to animals damages your own character, and the same logic applies here. You can't hurt an AI's feelings because it doesn't have any, but the habit of treating things with basic courtesy might be worth maintaining for your own sake. Just don't expect it to improve your output. The AI doesn't care. The AI is, in the most literal sense, just doing the math.
https://www.bbc.com/future/article/20260224-the-best-way-to-talk-to-a-chatbot