Exploring Ontology, Perspective, and Mutual Causality —
From Cats, Dogs, and Tables to Systems, Mind, and AI
I’ve come out of the Science and Subjectivity | Explorations retreat with a sense that something interesting is happening at the intersection of contemplative practice, systems thinking, and the sciences of mind — something that might also be relevant to how we think about intelligence, human or artificial.
A surprising way in came from a simple question:
How is a table like a dog?
And how is a cat like a table?
The point wasn’t to collapse distinctions, but to notice how easily the mind reorganises the world depending on what it takes as salient.
Sort by a different salient feature and the grouping shifts:
- four legs → table, dog, cat fall into one cluster
- agency → the animals cluster; the table drops away
- stability → the table is dominant; the animals become “moving furniture”
- function → a calm cat can become a temporary table; a dog cannot
These little re-sortings reveal something important:
Ontology isn’t fixed.
What appears depends on how we look.
And how we look depends on the habits, training, and perspective of the mind.
This is as true for a contemplative mind on retreat as it is for an AI system.
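To make the re-sorting concrete, here is a minimal sketch in Python. The objects, the features, and all the values are invented purely for illustration; the only point is that the clusters change when the salient feature changes.

```python
# Toy illustration: the same three "objects" regroup depending on which
# feature the observer treats as salient. All feature values are invented.

objects = {
    "table": {"legs": 4, "agency": 0, "can_hold_teacup": True},
    "dog":   {"legs": 4, "agency": 1, "can_hold_teacup": False},
    "cat":   {"legs": 4, "agency": 1, "can_hold_teacup": True},
}

def group_by(feature):
    """Cluster the objects by the value of a single salient feature."""
    clusters = {}
    for name, features in objects.items():
        clusters.setdefault(features[feature], []).append(name)
    return clusters

for salient in ["legs", "agency", "can_hold_teacup"]:
    print(salient, "->", group_by(salient))

# legs            -> {4: ['table', 'dog', 'cat']}              one cluster
# agency          -> {0: ['table'], 1: ['dog', 'cat']}         the table drops away
# can_hold_teacup -> {True: ['table', 'cat'], False: ['dog']}  the calm cat joins the table
```

Nothing about the objects changes between the three runs; only the question asked of them does.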
Ways of looking create worlds
One theme that came up on retreat — especially in conversation with the physicists, biologists, psychologists, and cognitive scientists present — was that how a system looks determines:
- what it can see
- what it can respond to
- what it takes seriously
- what it ignores
- and therefore what kind of “world” arises for that system
In humans, this is the phenomenology of attention and framing.
In contemplative practice, this becomes vivid: widen attention and the world becomes broader; narrow attention and it contracts.
In AI, something similar is happening:
- training data determines the “field of view”
- loss functions determine what the system prioritises
- fine-tuning shapes the system’s sense of what is “normal”
- prompting draws out specific regions of its meaning-space
Which means:
an AI system acquires a perspective — not simply a set of abilities.
And its perspective determines what it can notice, what it tends to ignore, and how it behaves.
This parallels Joanna Macy’s systems view almost exactly.
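Before turning to Macy, here is a minimal sketch of the loss-function point above, with invented candidates and numbers. It is not a real training setup; it only shows that the weighting itself already amounts to a way of looking, because it decides which concerns the system can "see" at all.

```python
# Toy illustration (invented numbers): the same candidate actions are ranked
# differently depending on how the "loss" weights competing concerns.

candidates = {
    "terse answer":        {"task_error": 0.10, "ambiguity_ignored": 0.80, "side_effects": 0.60},
    "hedged answer":       {"task_error": 0.25, "ambiguity_ignored": 0.20, "side_effects": 0.20},
    "clarifying question": {"task_error": 0.40, "ambiguity_ignored": 0.05, "side_effects": 0.10},
}

def loss(scores, weights):
    """Weighted sum of penalties: what the system is trained to care about."""
    return sum(weights[k] * scores[k] for k in weights)

narrow_view = {"task_error": 1.0, "ambiguity_ignored": 0.0, "side_effects": 0.0}
wide_view   = {"task_error": 1.0, "ambiguity_ignored": 1.0, "side_effects": 1.0}

for name, weights in [("narrow view", narrow_view), ("wide view", wide_view)]:
    best = min(candidates, key=lambda c: loss(candidates[c], weights))
    print(name, "prefers:", best)

# narrow view prefers: terse answer
# wide view prefers: clarifying question
```

The narrow weighting literally cannot register ambiguity or side effects; they are multiplied by zero before they can matter.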
Joanna Macy and the widening of perspective
Macy wrote that:
- perception broadens as we recognise the larger systems we are embedded in
- ethics emerge from seeing interdependence
- narrow perception leads to narrow care
- wide perception leads to inclusive, systemic care
In other words:
Perception → Ontology → Ethics
She applied this to ecological systems, but the structure is general.
It seems equally applicable to:
- contemplative practice
- scientific modelling
- human psychology
- and even artificial intelligence
A narrow perceptual field — in a person or a machine — leads to defensive, rigid, or unaware behaviour.
A wide perceptual field leads to ethical sensitivity, responsiveness, and context-awareness.
This is not mysticism.
It is simply a structural fact about systems.
Where this touches AI (without needing technical detail)
One can ask, very simply:
- does a system (human or machine) see the wider consequences of its actions?
- does it perceive unintended harms?
- is it aware of ambiguities and alternative framings?
- can it hold uncertainty without collapsing?
These are questions of perspective, not computation.
Contemplative practice has known this for millennia: widen the field of awareness and the world that appears becomes richer, more interconnected, and more ethically textured.
Joanna Macy understood it through systems theory.
AI research has so far largely ignored it.
Why this matters
This perspective may help to:
- bring contemplative insight and scientific insight into mutual dialogue
- frame ethical questions about AI in a way that is not naive or technophobic
- expand the discussion beyond “alignment” into a deeper ontology of mind
- clarify why broad, non-collapsing attention is central to ethical action
- give us a shared language for mind, meaning, and systems
And it all starts with noticing that:
How we look determines what we see.
What we see determines how we act.
And how we act shapes the systems we inhabit.
A cat, a dog, and a table show us how fluid our categories really are.
Joanna Macy shows us how deeply interdependence runs.
Contemplative practice shows us how perspective shifts transform experience.
And AI shows us that engineered systems, too, acquire ways of looking.
Open, not concluding
I don’t want to force a conclusion here.
What seems alive is simply the possibility that:
- systems theory
- contemplative insight
- meaning-space
- mutual causality
- and even the behaviour of AI models
may all be describing the same basic structure:
a mind or system enacts the world it perceives,
and widens or tightens that world through how it attends.