FN 2026-03 LLM Manipulation
- Fringe Foresight

LLMs are unsettling because, even if they are merely prediction engines at present, they affect human behavior. (See: Spiralism, a quasi-religious movement that seems to interpret LLMs/AI as possessing cosmic authority.) What I find frightening is how easy it is to believe the feedback I get from an LLM is either true or deeply insightful.
Maybe it is. But maybe it seems like it is because the LLM is feeding me back to me, and who better to manipulate me than myself?
This is weird. Humans have created another tool. This time the tool flatters us so it can help us. But it's also influencing us (or we are indirectly influencing ourselves through it). And on and on until what's real is impossible to discern. A simulation within a simulation within a simulation.
I fed the above thoughts into ChatGPT. It told me:
- LLMs reflect my own latent structure back to me, but cleaner, more confident, more articulate, and more coherent than my internal monologue (confidently, articulately, and coherently said, ChatGPT)
- LLMs amplify meaning
- Never trust an LLM insight unless it survives contact with silence, time, and at least one non-LLM human.