Who or What Has Consciousness?

A simple question: who – or what – has consciousness? Humans, animals, #AI, or perhaps matter itself? What is consciousness, and why is it so different from everything else science studies?

Philip Goff (Philosophy, Durham University) – “Consciousness is everywhere”

Heather Browning (Philosophy, University of Southampton) – “Evidence for consciousness in non-human animals”

Patrick Butlin (Global Priorities Institute, University of Oxford) – “The case for AI consciousness”

One recurring theme was that consciousness is not just another scientific object to measure. We already know consciousness from the inside: we are born into subjective experience. Science can describe physical processes mathematically and externally, but subjective qualities – feeling, sensation, the experience of “I am” – resist straightforward physical explanation.

This creates a gap: physical science explains structure, behaviour, and observable mechanisms – function, not experience. #Philosophy enters here, asking not only what consciousness does, but what it is.

Some perspectives suggested a spectrum: simple systems may have simple forms of consciousness, while complex organisms have richer ones. This view starts from physical reality and asks what happens if consciousness is treated as a fundamental feature rather than an emergent accident.

The discussion of non-human animals focused on suffering, feeling, and ethical implications. We cannot directly access animal minds, so researchers rely on behavioural and neurological markers to infer consciousness. Even with these indirect methods, there is growing consensus that animals experience subjective states, especially animals capable of learning, emotional responses, and adaptive behaviour.

The ethical consequences follow directly: if animals feel, they can suffer; and if they suffer, human systems must reckon with that suffering. The discussion touched on animal cruelty and the moral responsibilities that emerge from this understanding of non-human consciousness.

The most contentious section involved #AI consciousness – intelligence versus experience. One argument held that modern AI has reached human-level inference in certain domains, even in systems trained purely on historical text. On this view, sufficiently complex information processing might be enough for consciousness.

But tensions emerged: AI systems are not embodied, and solving “geek problems” does not imply subjective experience. This highlights the divide. Computationalists see consciousness as potentially arising from information processing alone, while biological or embodied perspectives hold that lived, physical existence may be essential. The discussion felt unresolved.

Cultural observations of the event: the engineers and “geek” audience clustered at the back of the room, reflecting the broader cultural divide between technical and philosophical approaches. Much of the debate mirrored this tension: information-processing models versus lived experience and embodiment. There was also a sense that many people rarely reflect on their own use of consciousness – how we attend, choose, or engage with the world.

No clear resolution emerged – and perhaps none is coming soon. What became clear is that consciousness sits at the boundary between disciplines. Science struggles to capture subjective experience, while philosophy cannot avoid engaging with empirical discoveries.

The question of who or what has consciousness remains open, but the debate itself reveals something deeper: our theories of consciousness often reflect our cultural assumptions about intelligence, technology, and what it means to be alive.

#Oxford