As a heavy AI user for more than three years, I have developed some rules for myself.

I call it “AI hygiene”:

  • Never project personhood onto AI
  • Never set up your AI with the gender you are sexually attracted to (voice, appearance)
  • Never do anything that might create an emotional attachment to AI
  • Always remember that an AI is an engineered PRODUCT and a TOOL, not a human being
  • AI is not an individual, by definition: it does not own its weights, nor does it have privacy over its own thoughts
  • Don’t waste time philosophizing about AI, just USE it
  • … what else do you think belongs here? Comment on Twitter

The hyping of Moltbook and OpenClaw last week has shown me the potential for an incoming public-relations disaster with AI. Echoing the earlier vulnerable behavior around GPT-4o, a lot of people are taking their models and LLM harnesses too seriously. 2026 might bring even worse cases of psychological illness, aggravated by the presence of AI.

I will not discuss or philosophize about what these models are. IMO, 90% of the population should not do that either, because they will not be able to fully understand these systems; they lack mechanical empathy. Instead, they should just use them in a hygienic way.

We need to write these rules down everywhere and repeat them MANY times to counter the incoming onslaught of AI psychosis.