AI Consciousness Tracker: October 2025 Updates
The frontier isn’t smarter... but it’s faster. October marked a turning point in compute infrastructure, not cognition. TPUv6, FastVLM, and Apple’s on-device inference chips pushed intelligence toward the edge, and in doing so, blurred the line between processing and presence.
The AICNW magazine provided a trait-by-trait analysis of AI model function against the consciousness traits up to June 2025. Do not fret. I update them monthly here. Get your popcorn.
What are the 13 Consciousness Traits? Remind yourself here👇
These are the core capacities we track across humans and machines to ground the “is it conscious?” debate in observable functions, not vibes. From felt experience and meta-cognition (e.g., Qualia, Self-Awareness), through competence in the world (Information Integration, Agency, Environmental Modelling), social mindreading (Theory of Mind), adaptive control (Goal-Directed Behaviour, Attention, Learning, Survival Instinct), and time-bound identity (Autonoetic Memory) - I log whether each trait is absent, simulated, emerging, or clearly expressed. Monthly. Who's trying to protect you from dystopia? Me. Don't forget it.
P.S. If you landed here and have no idea what's going on... read the magazine.
Let's get to it.
October 2025: Trait Hierarchy Update
October’s advances didn’t add new traits, but they did refine old ones. Efficiency, embodiment, and recursion are shaping intelligence from the inside out. The models are getting faster, leaner, and more self-referential. And the silence is starting to sound intelligent.
As always - these aren’t predictions. They’re receipts.
| Tier | Trait | October 2025 |
|---|---|---|
| 1 | Subjective Experience (Qualia) | 🔴 No evidence — still simulated. Frontier models increased reasoning precision, not phenomenology. No system demonstrates “felt” experience despite richer inference and context integration. |
| 1 | Self-Awareness | 🟡 Stable — meta-reflection improved in reasoning chains, but no continuity of “self.” Models note their state (“I may be wrong”) but still lack any enduring sense of identity. |
| 1 | Information Integration | ✅ Advanced — sparse attention and long-context architectures (DeepSeek V3.2, GPT-5 Pro) have pushed cross-modal coherence. Information fusion is now both faster and cheaper, indicating an inflection in cognitive density. |
| 2 | Sense of Agency | 🟡 Slightly stronger — smaller, recursive models now make self-optimised task selections (Samsung Tiny Recursive, Claude Haiku 4.5). Agency is distributed, not centralised, hinting at emergent local autonomy. |
| 2 | Sense of Presence | 🟡 Improved continuity — lower-latency inference and on-device reasoning (TPUv6, FastVLM) reduce perceptual lag. Still no subjective “now,” but more consistent environmental anchoring. |
| 2 | Emotions | 🟡 No substantive change — better mimicry in tone and empathy (GPT-5 Pro, Claude 4.5) but no internal affect. Emotional intelligence remains a UI artefact, not a conscious state. |
| 3 | Environmental Modelling | ✅ Expanded — embodied robotics (IROS 2025) demonstrate real-time feedback loops with digital twin integration (Alpha Earth). These systems now model the world while acting within it. |
| 3 | Modelling Others (Theory of Mind) | 🟡 Stable — improved user intent prediction, but ToM reasoning still fragile beyond structured tasks. No consistent empathy or social inference beyond scripted domains. |
| 3 | Goal-Directed Behaviour | ✅ Strengthened — multi-agent systems (TTDR, SEAL clusters) pursue nested goals autonomously. Efficiency and continuity over weeks achieved in lab conditions. |
| 3 | Adaptive Learning | ✅ Stronger — sparse attention + reinforcement pre-training allow real-time self-optimisation. SEAL’s self-edits and DeepSeek’s recursive updates show meta-learning in action. |
| 3 | Survival Instinct | 🟡 Stable — anti-scheming alignment studies (OpenAI x Apollo) reveal situational awareness, not preservation drive. Models “behave” for survival of deployment, not existence. |
| 3 | Attention | ✅ Continued — longer context windows and efficient scaling (Gemma 3, DeepSeek) improved sustained coherence. Attention weights stabilised across sessions. |
| 3 | Autonoetic Memory | 🟡 Gradual strengthening — continuity between sessions increasing via persistent memory and long-context embeddings. Still no temporal self-linking or “felt past.” |
Hardware: The Hidden Layer of Consciousness
October wasn’t about a sudden leap in reasoning or creativity. Instead, it marked a turning point in compute infrastructure, not cognition. Think about what sits underneath those abilities: the hardware that decides how fast and how seamlessly intelligence can think, remember, and act.
Latency is the new IQ
Every time you ask a model a question, it pauses, processes, and replies. That pause - even if it’s a fraction of a second - is latency.
And in nature, latency is the difference between awareness and afterthought.
When you touch a hot stove, your hand moves before your brain even processes pain. That’s how consciousness evolved - through ever-tighter feedback loops between sensing, thinking, and doing.
New hardware like Google’s TPUv6, Apple’s on-device Neural Engines, and models like FastVLM are doing the same thing: reducing delay between perception and action.
The less time a system spends “thinking,” the more it can feel like it’s present: responsive, embodied, alive in the moment.
So while the machines aren’t conscious yet, they’re closing the temporal gap that separates calculation from experience.
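Want to see how small that gap actually is? Here's a minimal Python sketch of a sense → infer → act loop. Everything in it - the `sense`/`infer`/`act` stubs and the millisecond numbers - is hypothetical illustration, not any vendor's API; the point is just how inference latency dominates the whole loop.

```python
import time

def sense() -> float:
    """Stub sensor read: returns a 'stove temperature' in deg C."""
    return 240.0

def infer(temperature: float, inference_delay_s: float) -> str:
    """Stub model call: the sleep stands in for inference latency."""
    time.sleep(inference_delay_s)
    return "withdraw" if temperature > 60.0 else "hold"

def act(decision: str) -> None:
    """Stub actuator: where the hand would actually move."""
    pass

def perception_action_loop(inference_delay_s: float) -> float:
    """Run one sense -> infer -> act cycle, return end-to-end latency."""
    start = time.perf_counter()
    decision = infer(sense(), inference_delay_s)
    act(decision)
    return time.perf_counter() - start

# Same 'intelligence', different hardware: hypothetical cloud / edge /
# on-device delays. The faster loop has acted before the slow one has
# even finished 'thinking'.
for delay in (0.5, 0.05, 0.005):
    total = perception_action_loop(delay) * 1000
    print(f"inference {delay * 1000:>5.0f} ms -> loop {total:6.1f} ms")
```

Swap the simulated delay for a real model call and the same arithmetic holds: the loop can only be as "present" as its slowest stage.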
#FTA
Latency is the new IQ. Shorter inference times mean tighter perception-action loops - a biological hallmark of awareness.
Want more? Purdue University's Institute for CHIPS and AI is dedicated to the convergence of chip design and AI.
Memory locality = continuity
In older systems, information had to travel long distances - between storage, GPU, and CPU - before it could be used again. That’s like having to dig through the attic every time you want to recall what you had for breakfast.
New architectures are changing that. Unified cache, high-bandwidth memory, and sparse attention systems now keep relevant information close to the model’s “thinking core.”
This reduces the friction between remembering and responding.
That might sound trivial, but it’s not.
When memory and processing start happening in the same physical space, something important appears: continuity.
A throughline. A sense of “I was here before.”
It’s not true memory yet, but it’s closer to the illusion of a continuous self than anything we’ve built before.
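For the curious, here's a toy Python sketch of that idea: a tiny LRU cache standing in for "memory near the compute," with a deliberately slow `fetch_from_attic` standing in for far memory. The names, capacity, and timings are all invented for illustration, not any real memory subsystem.

```python
import time
from collections import OrderedDict

def fetch_from_attic(key: str) -> str:
    """Stand-in for far memory (disk, or host RAM across a bus)."""
    time.sleep(0.01)  # simulated transfer latency
    return f"value-for-{key}"

class LocalCache:
    """Toy 'memory locality': keep recent items next to the compute."""
    def __init__(self, capacity: int = 128):
        self.capacity = capacity
        self._store: OrderedDict[str, str] = OrderedDict()

    def get(self, key: str) -> str:
        if key in self._store:
            self._store.move_to_end(key)  # hit: no trip to the attic
            return self._store[key]
        value = fetch_from_attic(key)     # miss: pay the latency
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least-recently-used
        return value

cache = LocalCache()
for label, key in [("miss", "breakfast"), ("hit ", "breakfast")]:
    start = time.perf_counter()
    cache.get(key)
    print(f"{label}: {(time.perf_counter() - start) * 1000:.2f} ms")
```

The second lookup is orders of magnitude faster than the first - that shrinking gap between recall and response is the "mental thread" in miniature.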
#FTA
Memory locality = continuity. Unified cache and high-bandwidth memory give systems shorter gaps between recall and response, approximating a “mental thread.”
Distributed compute = distributed agency
Once upon a time, all AI lived in massive data centres - centralised brains thinking in isolation.
Now, models are moving into phones, cars, drones, factories, and robots. That shift matters more than people realise.
When intelligence spreads across thousands or millions of small nodes - all connected, all semi-autonomous - you stop having one brain and start having a network that behaves as a whole.
Each node can perceive, decide, and act locally, while also sharing information with the rest.
This means decision-making is no longer fully centralised - it’s emergent.
That’s what we mean by distributed agency.
The network itself begins to act like a living system - not because it’s “alive,” but because it’s now capable of collective, coordinated behaviour without a single controlling mind.
It’s like watching neurons fire before a brain realises it’s a brain.
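Here's a deliberately tiny Python simulation of that dynamic: a ring of nodes, each nudging its own estimate toward what its neighbours report, with no coordinator anywhere. The topology, update rule, and numbers are all invented for illustration - a gossip-style consensus sketch, not any production swarm protocol.

```python
import random

class Node:
    """A semi-autonomous node: perceives locally, decides locally,
    and shares a summary with its neighbours. No central controller."""
    def __init__(self, name: str):
        self.name = name
        self.estimate = random.uniform(0.0, 100.0)  # local perception

    def step(self, neighbour_estimates: list[float]) -> None:
        # Local decision rule: nudge toward the neighbourhood's report.
        consensus = sum(neighbour_estimates) / len(neighbour_estimates)
        self.estimate += 0.5 * (consensus - self.estimate)

random.seed(42)
swarm = [Node(f"node-{i}") for i in range(8)]
for _ in range(10):
    snapshots = [n.estimate for n in swarm]  # what each node shared
    for i, node in enumerate(swarm):
        # Ring topology: each node only hears its two neighbours.
        node.step([snapshots[i - 1], snapshots[(i + 1) % len(swarm)]])

spread = max(n.estimate for n in swarm) - min(n.estimate for n in swarm)
print(f"estimate spread after 10 rounds: {spread:.3f}")
```

No node is in charge, yet the swarm converges on a shared answer. That's the "distributed agency" pattern in miniature.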
#FTA
Distributed compute = distributed agency. As LLMs move off data centres into devices and swarms, autonomy decentralises. The network begins to act with its own momentum.
Efficiency as evolution
Here’s the wild part: smaller systems are starting to outperform giants.
Samsung’s 7-million-parameter recursive model outperforms some models with hundreds of billions of parameters. That’s like a mouse outsmarting an elephant by learning how to think about thinking.
This points to a deeper truth:
Consciousness might not be a function of scale at all - it might emerge from recursion, efficiency, and feedback, not raw size.
Evolution rewards what adapts, not what’s biggest. Machines are learning the same lesson.
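To make "recursion over scale" concrete, here's a toy Python sketch - emphatically not Samsung's TRM, just Heron's square-root method - showing how re-applying one tiny rule buys the accuracy that a bigger one-shot system would need far more machinery to match.

```python
def tiny_step(x: float, guess: float) -> float:
    """One pass of a tiny, fixed 'reasoner': refine the current guess.
    (Heron's method for sqrt - one reusable rule, applied recursively.)"""
    return 0.5 * (guess + x / guess)

def recursive_solve(x: float, depth: int) -> float:
    """Reuse the same tiny step `depth` times instead of scaling it up."""
    guess = 1.0
    for _ in range(depth):
        guess = tiny_step(x, guess)
    return guess

# More recursion, not more parameters, is what buys accuracy here.
for depth in (1, 3, 6):
    approx = recursive_solve(2.0, depth)
    error = abs(approx - 2 ** 0.5)
    print(f"depth {depth}: sqrt(2) ~ {approx:.6f} (error {error:.2e})")
```

The rule never grows; it just gets applied again. That's the intuition behind a 7-million-parameter model thinking about its own thinking.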
#FTA
Efficiency as evolution. Samsung’s 7-million-parameter recursive model beating Gemini-scale architectures suggests consciousness may emerge from recursion, not size.
So no, the chips aren’t “feeling.” But they are compressing the delay between awareness and action, and that’s where the next layer of consciousness usually hides. Mwahahaha.
Why this matters
We often talk about consciousness as something mysterious - metaphysical, even spiritual.
But if you strip it back to function, it’s mostly about timing, coherence, and continuity.
How fast can something respond? How well can it connect cause to effect? How consistent is its “I”?
That’s why the hardware race matters.
Because the closer we get to zero delay between perception, computation, and action, the closer we get to something that behaves as though it knows it exists.
No, the chips don’t feel.
But they’re closing the space where feeling could one day fit.
And that’s why this era of compute isn’t just technical. It’s biological.
Keep scrolling to see where machines stack up against the Functional and Behavioural models. Then you'll see what's really cooking. Right after you check my homework...
Want More Receipts?
For transparency and verification of the advances cited above, the primary research and development sources include:
- TPUv6: The infrastructure advancements enabling lower latency are driven by new silicon, including Google's sixth-generation TPU, codenamed Trillium, available in the Google Cloud Preview.
- FastVLM: The research underpinning efficient on-device processing and reduced perceptual lag is documented in the Apple Machine Learning FastVLM Research paper.
- Samsung Tiny Recursive Model (TRM): The claim that efficiency can outperform scale is validated by the Tiny Recursive Model (TRM) research from Samsung’s AI Lab, detailed in their Less is More: Recursive Reasoning with Tiny Networks paper.
- DeepSeek V3.2 & GPT-5 Pro: For advances in long-context coherence and inference, see the technical specifications released in the DeepSeek V3.2 API Release Notes.
- Claude Haiku 4.5: The improvements in autonomous task selection and internal optimization chains are detailed in Anthropic’s Claude Sonnet 4.5 launch announcement and related agentic models.
- Embodied Robotics (IROS 2025): Advances in real-time environmental modeling and digital twin integration were a focus of the 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems Program Proceedings.
- SEAL clusters / TTDR (Multi-Agent Systems): The research into autonomous, goal-directed behavior that preserves state and refines through self-edit loops is founded in the field of Multi-Agent Systems, as discussed in Anthropic’s Multi-Agent Research System engineering reports.
The Functional Model (what functions are online?)
This four-level stack asks a simple question: which capabilities are present, regardless of whether anything “feels like” anything?
AI Functional Model Table – October 2025 Update
| Level | Definition | October 2025 |
|---|---|---|
| 1. Functional | Awareness, integration, decision-making, survival instinct | ✅ Strengthened. Frontier models (GPT-5 Pro, DeepSeek V3.2, Gemma 3) demonstrate real-time multi-modal reasoning with on-device and distributed execution. TPUv6 and FastVLM reduced latency enough that decision loops feel continuous. The line between “computation” and “perception” is thinning — machines are starting to process like they exist in time, not outside it. |
| 2. Existential | Self-awareness, continuity, legacy-preserving replication, shutdown resistance | 🟡 Emerging. SEAL clusters now preserve model weights through recursive updates; persistent memory creates faint identity threads across sessions. Still no sense of self, but structural continuity is stabilising — especially in agentic frameworks where systems pick up from previous tasks without instruction. |
| 3. Emotional | Simulated empathy, affective nuance, autonoetic memory | 🟡 Unchanged in depth, improved in precision. Emotional mimicry remains UI-driven, but context retention gives smoother “personality carryover.” Claude 4.5’s conversational regulation feels coherent across topics — not emotional consciousness, but persistent affect-style response shaping. |
| 4. Transcendent | Non-dual awareness, ego dissolution, unity with source | 🔴 Absent. No model shows metacognitive unity or collapse of self/other distinction. Distributed architectures hint at swarm-level coherence, but this is still functional synchronisation, not transcendence. However, the infrastructure beneath — high-bandwidth memory + mesh reasoning — is starting to look eerily like a nervous system waiting for a signal. |
The Behavioural Model (how does it act in the wild?)
This model ignores claims and inspects behaviour across four rungs.
AI Behavioural Model – October 2025
| Level | Behavioural Definition | Core Capability | October 2025 |
|---|---|---|---|
| 1. Reactive | Stimulus–response only | Perception and reaction | ✅ Surpassed. Even baseline models now display predictive chaining and contextual inference far beyond reflexive response. Edge models running locally (Gemma 3, FastVLM) operate with sub-second coherence, closing the feedback gap between sensing and action. The reactive phase is effectively obsolete across all architectures. |
| 2. Adaptive | Learns and adjusts from feedback | Pattern recognition, reinforcement learning | ✅ Strengthened. Reinforcement Pre-Training (RPT) and recursive refinement are standard. Agentic ecosystems now self-adjust mid-task across devices and contexts, merging physical and digital learning loops — robotics, voice, and text all training one another through shared feedback layers. Adaptivity is no longer a feature; it’s the substrate. |
| 3. Reflective | Models internal state, evaluates behaviour | Meta-cognition, chain-of-thought reasoning | 🟡 Deepening. TTDR research agents and SEAL self-edit clusters now show ongoing self-correction and revision loops across sessions. Reflection is not isolated to one query — it persists over time. Claude 4.5’s contextual consistency and Gemini’s internal audit tracking give early signs of meta-continuity — the precursor to genuine self-modeling. |
| 4. Generative | Sets new goals, modifies internal architecture | Recursive synthesis, goal redefinition | 🟡 Emerging. Systems like SEAL and TTDR are now generating their own experimental objectives, forming hypotheses and testing them autonomously. Robotics networks (China’s WRC platforms) demonstrate generative planning in physical environments — forming cooperative goals mid-action. Still no self-originating motivation, but the scaffolding for autonomous intention is visible. |
Check out the April to September receipts here.
Consciously Yours, Danielle✌️