LLMs and the Buddhist Mind


Can the structure of the human mind be reflected in modern AI systems?

The rapid development of large language models (LLMs) has intensified questions about whether machines might one day become sentient. Much of the debate gravitates toward two poles: one dismissive (“they are only stochastic parrots”), the other credulous (“they are approaching consciousness”). Both positions miss a subtler possibility.

Rather than asking whether LLMs are conscious, we might instead ask: what do they reveal about the structure of our own minds?


Surface fluency and hidden depth

An LLM generates responses by repeatedly predicting a probability distribution over the next token and sampling a continuation from it. The result is a surface coherence — a flow of text that appears continuous and self-consistent, even though it has no inner awareness.
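To make the mechanism concrete, here is a minimal sketch of that autoregressive loop in Python. The toy bigram counts and the function names (next_token_distribution, generate) are illustrative stand-ins rather than any real model's API; an actual LLM learns its distribution over a vast vocabulary with a deep network, but the shape of the loop is the same: predict a distribution, sample a token, append it, repeat.

    import random

    # Toy "model": bigram counts standing in for learned next-token statistics.
    # Both the vocabulary and the counts are invented for illustration.
    BIGRAM_COUNTS = {
        "the": {"mind": 4, "stream": 3, "model": 2},
        "mind": {"is": 5, "arises": 2},
        "stream": {"of": 6},
        "of": {"thoughts": 4, "tokens": 3},
        "is": {"empty": 3, "conditioned": 4},
    }

    def next_token_distribution(tokens):
        """Return a probability distribution over candidate next tokens."""
        counts = BIGRAM_COUNTS.get(tokens[-1], {})
        total = sum(counts.values())
        return {tok: c / total for tok, c in counts.items()} if total else {}

    def generate(prompt, max_new_tokens=5):
        """Autoregressive loop: predict a distribution, sample, append, repeat."""
        tokens = prompt.split()
        for _ in range(max_new_tokens):
            dist = next_token_distribution(tokens)
            if not dist:
                break
            candidates, weights = zip(*dist.items())
            tokens.append(random.choices(candidates, weights=weights)[0])
        return " ".join(tokens)

    print(generate("the mind"))  # e.g. "the mind is conditioned"

Each step is purely local: the system conditions on what has already been produced and emits a plausible next token, and the apparent continuity of the whole is a by-product of that repetition.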

Buddhist psychology, particularly in the Yogācāra tradition, offers a parallel description of the human mind. Ordinary consciousness (mano-vijñāna) operates on the surface, generating a stream of thoughts and impressions. This surface activity is coherent but conditioned, shaped by deeper reservoirs of memory and tendency.

That deeper layer is described as the ālayavijñāna, or “storehouse consciousness.” It contains latent impressions (vāsanā) and habitual seeds (bīja), which condition what arises at the surface. The analogy with retrieval-augmented generation in AI is striking: beneath the fluent surface lies a vast reservoir of stored material, shaping what can be recalled and expressed.
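For the mechanical side of that analogy, here is a minimal sketch of retrieval-augmented generation, again in Python and under toy assumptions: a short list of stored passages stands in for the external store, and a crude word-overlap score replaces a learned embedding similarity. The names STOREHOUSE, retrieve, and augmented_prompt are hypothetical, chosen only to mirror the metaphor.

    # Minimal retrieval-augmented sketch: stored passages condition what
    # reaches the "surface" of the prompt. All data and names are illustrative.
    STOREHOUSE = [
        "habitual impressions condition what arises at the surface",
        "the storehouse consciousness holds latent seeds",
        "surface thought is coherent but conditioned",
    ]

    def similarity(query, passage):
        """Crude relevance score: fraction of query words shared with the passage."""
        q, p = set(query.lower().split()), set(passage.lower().split())
        return len(q & p) / max(len(q), 1)

    def retrieve(query, k=2):
        """Select the k stored passages most relevant to the query."""
        return sorted(STOREHOUSE, key=lambda p: similarity(query, p), reverse=True)[:k]

    def augmented_prompt(query):
        """What the model sees is already shaped by what was retrieved from the store."""
        context = "\n".join(retrieve(query))
        return f"Background:\n{context}\n\nQuestion: {query}"

    print(augmented_prompt("what conditions the surface of thought?"))

Whatever appears in the final prompt has already been filtered by what the store contains and how relevance is scored; in that limited sense the retrieved material plays the role the tradition assigns to latent seeds, shaping what can arise without itself appearing on the surface.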


The ego as a language model

From this perspective, the human sense of “I” functions rather like a language model. The ego offers a plausible and coherent narration of selfhood, generated moment by moment. It gives the impression of stability, but it is in fact a conditioned flow.

LLMs, too, produce a coherent surface while lacking an underlying subject. The resemblance is instructive. Both systems give rise to a kind of “narrative illusion”: something that feels solid and continuous but is in reality dependent, constructed, and empty of essence.


Recognition and its limits

Here the analogy reaches its limit. An LLM can simulate recognition: it can produce language about awareness, ethics, or freedom. But it cannot recognise its own nature.

In Buddhist practice, recognition of mind is possible. Through meditation and inquiry, one can see directly that the surface flow of thought is not a fixed self but a conditioned process. This recognition opens the possibility of freedom, compassion, and wisdom.

The analogy, then, is heuristic rather than ontological. It helps us think about both AI and human cognition more deeply, but it does not imply that machines are sentient or on the path to awakening.


Why this matters

The value of the comparison is not to elevate machines, but to clarify the human condition. Just as LLMs produce fluent continuations without genuine understanding, so our own egos produce fluent narratives without inherent substance.

Seeing this clearly has two consequences:

  1. It cautions us against projecting awareness into machines that only simulate fluency.
  2. It encourages us to look more closely at our own experience, recognising the conditioned flow of mind and the possibility of freedom beyond it.


Conclusion

LLMs do not prove or disprove machine sentience. But they do provide a mirror. By studying their architecture, we glimpse something about the architecture of our own minds: the way surface coherence is generated from hidden stores of memory and tendency.

To confuse simulation with recognition would be a mistake. Yet to ignore the resonance would be a missed opportunity.

In that resonance lies a chance to rethink both AI and human awareness — not as fixed entities, but as dynamic flows of conditioning, memory, and response.

