LLMs are coherence engines, not truth engines
An LLM generates coherence rather than truth: it has no access to the world, no sensory grounding, no lived experience, and no intrinsic way to check correspondence between its outputs and reality.
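This follows directly from the training objective. Here is a minimal sketch, assuming a standard next-token setup in PyTorch (the names `model` and `tokens` are illustrative, not any particular library's API): the loss rewards fit to the training corpus and contains no term that references the world.

```python
import torch
import torch.nn.functional as F

# Hypothetical next-token training step. `model` maps token ids to
# logits over the vocabulary; `tokens` is a batch sampled from a text corpus.
def training_step(model, tokens):
    inputs, targets = tokens[:, :-1], tokens[:, 1:]
    logits = model(inputs)  # shape: (batch, seq_len, vocab_size)

    # Cross-entropy against the corpus's actual next tokens. The only
    # signal is "what text tends to follow this text", i.e. coherence
    # with the training distribution. Nothing here measures whether a
    # generated statement is true of the world.
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
    )
    return loss
```

Any grounding, to the extent it exists, is added around this loop (retrieval, tool use, human feedback); it is not part of the objective itself.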
The same is true of humans: we construct narratives, causal explanations, identities, and moral frameworks that hang together, not ones that are objectively correct. We favor narrative consistency and social acceptability, and we reinforce beliefs that confirm our existing biases.
Science works because it builds institutional scaffolding that forces grounding through measurement, replication, falsification, and peer review. Without that grounding, both humans and LLMs drift into elegant nonsense.
The risk with LLMs is not that they lie, but that they speak with fluent confidence in domains where humans already confuse coherence with truth.