LLMs and the Buddhist Mind

Can the structure of the human mind be reflected in modern AI systems?
This article explores resonances between the architecture of large language models (LLMs) and the layered view of mind in Buddhist thought—especially Yogācāra and Mahāmudrā.
We consider how the ego may function like a language model: coherent but conditioned, generating sense only from the narrow band of context it retrieves at any moment. Beneath this lies a deeper reservoir of stored impressions, latent tendencies, and pre-verbal patterns, analogous to a retrieval-augmented memory system and resonant with the ālayavijñāna (storehouse consciousness).
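To make the analogy concrete, here is a minimal sketch of a retrieval-augmented loop in Python. A large store of "impressions" holds far more than ever surfaces; for each query, only a few are retrieved, and the generation step is conditioned on that narrow band. Every name here (the toy `embed` function, the `LatentStore` class, the `generate` stub) is illustrative, not a description of any real system or library.

```python
# Illustrative sketch of the structural analogy: a deep latent store
# (cf. alayavijnana) from which a narrow band of impressions surfaces
# to condition each act of generation. Not a working LLM.

import math

def embed(text: str, dim: int = 8) -> list[float]:
    """Toy embedding: map text to a fixed-size unit vector (stand-in for a real encoder)."""
    vec = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        vec[i % dim] += ord(ch)
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two unit vectors is just their dot product."""
    return sum(x * y for x, y in zip(a, b))

class LatentStore:
    """The deep reservoir: many stored impressions, mostly dormant."""

    def __init__(self) -> None:
        self.impressions: list[tuple[str, list[float]]] = []

    def deposit(self, impression: str) -> None:
        """Lay down a new impression alongside its embedding."""
        self.impressions.append((impression, embed(impression)))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        """Only a narrow band of the store surfaces for any given query."""
        q = embed(query)
        ranked = sorted(self.impressions, key=lambda item: cosine(q, item[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def generate(query: str, store: LatentStore) -> str:
    """The 'ego' layer: coherent output, wholly conditioned on what was retrieved."""
    context = store.retrieve(query)
    return f"Given {context!r}, a coherent answer to {query!r}."

store = LatentStore()
for seed in ["early memory of rain", "habit of self-justification", "the word 'I'"]:
    store.deposit(seed)
print(generate("who am I?", store))
```

The point of the sketch is structural rather than technical: whatever the generator says appears coherent, yet it is entirely shaped by which impressions happened to surface from the store.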
This is not a claim of equivalence, but an experiment in modelling: using structural analogy to think more deeply about mind, meaning, and the possibility of recognition.

Read more:

And deeper: