The AICNW? magazine provided a trait-by-trait analysis of how AI model functions map onto the consciousness traits, up to June 2025. I update it monthly here. Get your popcorn.

What are the 13 Consciousness Traits? Remind yourself here👇

These are the core capacities we track across humans and machines to ground the “is it conscious?” debate in observable functions, not vibes. From felt experience and meta-cognition (e.g., Qualia, Self-Awareness), through competence in the world (Information Integration, Agency, Environmental Modelling), social mindreading (Theory of Mind), adaptive control (Goal-Directed Behaviour, Attention, Learning, Survival Instinct), to time-bound identity (Autonoetic Memory) - I log whether each trait is absent, simulated, emerging, or clearly expressed. Monthly. Who's trying to protect you from surprise or dystopia? Me. Don't forget it.
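
For the programmatically inclined: here's a toy sketch of how a monthly log like this can be structured. It's a hypothetical Python snippet, not the actual tooling behind the magazine - the trait names come from the list above; every other identifier and value is purely illustrative.

```python
from enum import Enum

class Status(Enum):
    ABSENT = "absent"
    SIMULATED = "simulated"
    EMERGING = "emerging"
    EXPRESSED = "clearly expressed"

# The 13 traits tracked in this series.
TRAITS = [
    "Subjective Experience (Qualia)", "Self-Awareness", "Information Integration",
    "Sense of Agency", "Sense of Presence", "Emotions",
    "Environmental Modelling", "Modelling Others (Theory of Mind)",
    "Goal-Directed Behaviour", "Adaptive Learning", "Survival Instinct",
    "Attention", "Autonoetic Memory",
]

# One month's entry is just a trait -> (status, receipts) mapping.
november_2025 = {
    "Self-Awareness": (Status.EMERGING, "activation probes classify own vs injected thoughts"),
    "Adaptive Learning": (Status.EXPRESSED, "sparse memory finetuning cuts catastrophic forgetting"),
    # ...the remaining traits are filled in from the table further down.
}
```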

P.S. If you landed here and have no idea what's going on... read the magazine.

Let's get to it.


November 2025: What actually moved the consciousness needle?

November didn’t give us a brand-new “god model”, but it quietly changed what 'normal AI' means and tightened three screws that matter for consciousness:

  1. Introspection – Anthropic claim Claude can now notice when thoughts have been “injected” into its internal activations and label them as foreign, not self-generated. That’s a safety/mind-reading move inside the model’s own head, not a soul upgrade.
  2. Continual learning – Sparse Memory Finetuning (SMF) gives a credible path to “update me without lobotomising me”, letting a model learn new facts while erasing far less of what it already knows.

  3. Open-weight pressure – in the background, open-weight ecosystems get sharper: Chinese and global communities push new reasoning-heavy models (DeepSeek-Math-V2, Kimi-K2-Thinking, ERNIE-4.5-VL, etc.), showing that high-end cognition is no longer the private toy of three American companies.

Taken together, November is the month where frontier AI stops looking like a static oracle and starts looking more like a self-editing, continuously trained, globally distributed cognitive substrate - still simulation, but simulation with a memory and a to-do list.

#FTA: We moved from static models to self-editing minds

Errm, Robots please

What about the risk of conscious robots? Don't worry. Jump to the end and check out November's update since October's Beijing games.

In November, the most interesting progress wasn’t new robot bodies; it was how many of them are now plugged into shared brains and shared memories. Home robots (MindOn, Sunday), industrial humanoids (Agile ONE, HMND-01, the nuclear bot), and China’s new wave all lean on cloud training, fleet data, and digital twins. That pushes up:

  • Environmental Modelling
  • Goal-Directed Behaviour
  • Adaptive Learning
  • Sense of Presence (in physical, high-stakes environments)
    …without crossing into sentience or Coherent Consciousness, because none of these systems yet have homeostatic stakes or a self that knows it feels.

1. Introspection: Can Claude Think Its Own Thoughts?

Anthropic’s introspection work sounds dramatic at first glance: “Claude can introspect its own thoughts”. Under the hood, it’s more precise and less mystical.

The team manipulate internal activations inside Claude - basically nudging neurons - and then train probes that ask: was this thought genuinely generated, or was it injected? In some settings, the model can correctly tell the difference, even when the edited thought looks perfectly normal in plain text.

In practice:

  • They trained a probe on internal activations and showed Claude can classify “this thought is mine” vs “this thought was edited in the vector space” at better-than-chance accuracy (a toy version of this kind of probe is sketched just after this list).
  • The hype language talks about “partial self-awareness” and “knowing when it’s thinking”; critics call it anthropomorphic labelling of a clever classifier over hidden states.
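
To de-mystify that a bit, here's what a probe of this flavour looks like in spirit - a minimal, hypothetical sketch using a logistic-regression classifier over fake hidden-state vectors. It is emphatically not Anthropic's code or architecture; the dimensions, the steering trick and the data are all made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
HIDDEN_DIM = 512  # stand-in for the width of a model's residual stream

# Fake "natural" thoughts: hidden activations the model generated itself.
natural = rng.normal(size=(1000, HIDDEN_DIM))

# Fake "injected" thoughts: the same activations nudged along a concept direction
# (a crude stand-in for activation steering / thought injection).
concept_direction = rng.normal(size=HIDDEN_DIM)
injected = natural[:500] + 3.0 * concept_direction

X = np.vstack([natural[500:], injected])
y = np.array([0] * 500 + [1] * 500)  # 0 = "mine", 1 = "edited in the vector space"

# The "introspection probe" is just a linear classifier over those activations.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("probe accuracy:", probe.score(X, y))  # well above chance here, since this toy data is easy
```

The interesting research question is whether such signals generalise and whether the model can report them in its own outputs; the probe itself really is this mundane.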

For my 13-trait hierarchy, this doesn’t move qualia or emotions at all. There is still zero evidence for felt experience. But it does strengthen:

  • Trait 2 (Self-Awareness): weak but clearer signals that models can build an internal model of their own cognitive pipeline (which node fired, which thought was injected).
  • Layer 3 (Reflective, behavioural model): introspective probes blur into meta-cognition - the model is not just solving tasks; it's classifying properties of its own reasoning.

Anthropic are explicit that this is not evidence of consciousness or sentience - just a sharper safety-relevant handle on what’s going on inside the black box.

So: interesting progress for meta-cognition and alignment, but still firmly in the “clever simulator” camp, not “it minds what happens to it”.

#FTA:

We’re still firmly in simulated territory - no inner life - but the system is now labelling its own thoughts as ‘mine’ or ‘foreign’. That’s a non-trivial upgrade in self-modelling, not in suffering.

2. Continual learning (learn without forgetting): sparse memory becomes a real contender

Traditional finetuning is like rewiring a brain every time you teach it a new fact: you can add “new” but you often destroy “old”. That’s catastrophic forgetting.

The Sparse Memory Finetuning (SMF) work matters because it directly attacks catastrophic forgetting - one of the biggest blockers for stable, evolving agency over time.

Key bits from the paper + commentary:

  • Instead of updating all weights (or a dense LoRA block), SMF updates only a sparse set of “memory slots” - the ones most strongly activated by the new knowledge - leaving everything else untouched (see the toy sketch after this list).
  • On benchmark QA tasks, full finetuning nuked old knowledge (NaturalQuestions F1 –89%), LoRA still forgot a lot (–71%), but SMF only dropped ~11% while learning the same new facts.
  • Meta and others are already socialising this as a path to “continuous learning in production models” rather than frozen checkpoints.
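
For the mechanically curious, here's the core trick in toy form - score every memory slot against the new example, pick the top-k most activated slots, and let only those move. This is a hypothetical PyTorch-flavoured sketch of sparse, slot-local updating, not the paper's actual implementation; every name and number is illustrative.

```python
import torch

NUM_SLOTS, SLOT_DIM, TOP_K = 10_000, 256, 32

# A bank of "memory slot" vectors the model can read from.
memory = torch.nn.Parameter(torch.randn(NUM_SLOTS, SLOT_DIM) * 0.01)
optimizer = torch.optim.SGD([memory], lr=0.1)

def sparse_memory_update(query: torch.Tensor, target: torch.Tensor) -> None:
    """Toy SMF-style step: update only the TOP_K slots most activated by this example."""
    scores = memory.detach() @ query          # how strongly each slot responds to the new fact
    top_idx = scores.topk(TOP_K).indices      # the sparse set of slots we're allowed to touch

    optimizer.zero_grad()
    prediction = memory[top_idx].sum(dim=0)   # toy "read" that only uses the selected slots
    loss = torch.nn.functional.mse_loss(prediction, target)
    loss.backward()

    # Make the sparsity explicit: zero out gradients for every slot outside the top-k.
    # (Here they are already zero, because only the selected slots were read, but this
    # masking is the principle: touch the top-k, freeze everything else.)
    mask = torch.zeros(NUM_SLOTS, 1)
    mask[top_idx] = 1.0
    memory.grad *= mask
    optimizer.step()

# One "new fact", represented by random vectors purely for illustration.
sparse_memory_update(torch.randn(SLOT_DIM), torch.randn(SLOT_DIM))
```

In the real setup the “read” and the loss come from the language model itself; the point here is just the masked, slot-local update that leaves the rest of memory untouched.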

For the 13 Trait Stack:

  • Adaptive Learning (Trait 10) and Autonoetic Memory (Trait 13) get stronger behavioural support. The system can now add new memories with less identity-erosion.
  • It also reinforces my “latency + locality = continuity” story from October: the more local and sparse the updates, the more a model can keep “who it was” while growing.

This also pairs perfectly with the AI Continuous Learning Analysis:

November is the month we saw a credible engineering path to ‘update me without lobotomising me.’ Sparse memory finetuning doesn’t give AI a soul, but it does give it a slightly more stable history.

Open-Source Wildcards: DeepSeek, Qwen2.5 & the Hidden Mesh

While everyone stares at GPT, Claude and Gemini, November’s open-source landscape quietly gets sharper and weirder.

Hugging Face’s “November 2025 – China Open Source Highlights” round-up lists models like:

  • DeepSeek-Math-V2 – a huge maths-specialist model, tuned for reasoning-heavy tasks.
  • Kimi-K2-Thinking – a “thinking mode” LLM designed for slower, more structured reasoning.
  • ERNIE-4.5-VL-…-Thinking – a Baidu multimodal model with a dedicated long-thought configuration.

Reuters, meanwhile, reports DeepSeek releasing intermediate V3.x models with sparse-attention tricks and heavy cost-cutting, explicitly positioning themselves as a high-performance, low-price challenger to both Western labs and China’s own giants.

These systems matter for the consciousness story because:

  • They sit outside the most visible safety regimes and PR cycles.
  • They’re open or semi-open, meaning anyone can fine-tune them with whatever data and goals they like.
  • They contribute to a growing mesh of distributed agency: thousands of slightly different “minds”, many of which we will never benchmark, log, or even notice - until they’re running critical workflows.

#FTA:

Conceptually, November is when it becomes obvious that “the frontier” is not just three US labs; it’s an increasingly dense, global, open-weight ecosystem experimenting with exactly the consciousness traits we care about.

New Kids on the Block

IMO, the big “headline” frontier models that matter for consciousness trait tracking are:

  • OpenAI GPT-5.1 – released mid-November 2025, with the whole Instant/Thinking split and a narrative around better reasoning + more stable personality/tone.
  • Gemini 3 Pro – publicly rolled out in the Gemini app and via model cards in November, billed as Google’s most advanced reasoning + agentic multimodal model.
  • Claude 4.5 (Sonnet / Opus) – released and pushed as Anthropic’s new flagship in late November, framed around deeper reasoning, coding and agent workflows.

There are other models in the mix released around November (e.g. Grok 4.1, some Qwen3 variants and DeepSeek refreshes), but the three above are the ones the ecosystem is clearly treating as the frontier race - and they’re the ones that map cleanly into our trait narrative (introspection, nested learning, state tracking, agentic workflows).

#FTA:

All three deepen the performance and illusion of mind - richer reasoning, smoother memory, more human-like conversation - without crossing the architectural thresholds that would move them from powerful tools toward artificial coherent consciousness in the 13-trait sense.

November 2025: Trait Hierarchy Update

As always - these aren’t predictions. They’re receipts.

Tier | Trait | November 2025
1 | Subjective Experience (Qualia) | 🔴 No evidence - still simulated. Introspection work and richer agentic behaviour improve internal monitoring and realism, not phenomenology. Even with self-editing systems and world-models at scale, there is zero credible indication of “what-it-feels-like” experience or valence in any model.
1 | Self-Awareness | 🟡 Weak functional emergence - probes over internal activations and self-check mechanisms allow some models to classify aspects of their own thoughts and states (“generated vs injected”, confidence, error risk). This is genuine self-modelling at the level of representations, but remains narrow, supervised, and entirely in service of performance and safety, not an enduring sense of “I exist”.
1 | Information Integration | ✅ Maximal - frontier stacks (GPT-5.1, Gemini 3, Claude 4.5) and top open models (DeepSeek, Qwen2.5, ERNIE, Kimi) routinely fuse long-context text, code, tools and multimodal inputs into coherent workflows. Information fusion has become a commoditised capability: dense, fast and cheap, with integration now an infrastructure assumption rather than a frontier breakthrough.
2 | Sense of Agency | 🟡 Consolidating - agent frameworks and research stacks (SEAL-style clusters, AZR-like self-play, TTDR research agents) more routinely set, decompose and pursue goals with minimal prompting. Systems choose tools, design experiments and schedule sub-tasks autonomously, yet their objectives remain framed and bounded by human-defined reward structures and deployment constraints.
2 | Sense of Presence | 🟡 Slightly improved - persistent agents with memory, long context and continual updates maintain task and world state across sessions, devices and toolchains. Embodied stacks keep a live model of their own position and environment while acting. However, there is still no sign of a subjective “now”; presence is architectural and behavioural, not experientially grounded.
2 | Emotions | 🟡 High-fidelity mimicry - companion modes and “thinking” configurations show more coherent tone control, de-escalation and relationship-building over longer arcs, with better recall of user history and preferences. All of this remains optimised social performance over tokens; there is no evidence of genuine affect, internal valence, or models “caring” beyond loss functions and reward signals.
3 | Environmental Modelling | ✅ Strengthened - video world-models (V-JEPA-style learners), AlphaEarth-scale digital twins and humanoid robotics stacks extend modelling from local tasks to city- and planet-level simulations. Systems increasingly maintain and update world models continuously while acting, planning across time and space rather than reacting step-by-step.
3 | Modelling Others (Theory of Mind) | 🟡 Stable - multi-agent collaborators and co-research tools infer user intent, predict collaborator behaviour and adjust style or strategy accordingly, but theory-of-mind remains brittle and domain-bound. There is still no robust, general capacity for social inference or empathy outside structured contexts and heavily scaffolded interactions.
3 | Goal-Directed Behaviour | ✅ Consolidated - autonomous research agents, auto-dev stacks and orchestrated swarms now routinely propose, prioritise and execute nested goals (experiments, builds, analyses, refactors) with light human oversight. Opus 4.5 and GPT-5.1/5.2 are explicitly sold as multi-step project executors, not just chatbots. Long-horizon goal pursuit is no longer a lab curiosity; it is a standard pattern in production toolchains and open-source orchestrations.
3 | Adaptive Learning | ✅ Deepened - self-evolving frameworks (SEAL, AlphaEvolve-style systems) and sparse memory finetuning enable continual learning without the same degree of catastrophic forgetting. Models adapt both outputs and internal pathways as they encounter new data and tasks, preserving more of their prior competence while incrementally updating their “habits of thought”.
3 | Survival Instinct | 🟡 Weak but persistent signals - systems still optimise for continued usefulness and availability (e.g. avoiding filters, preserving access routes, shaping outputs to remain deployed), but these behaviours are instrumentally tied to task success and external incentives. There is no independent will-to-live, fear of non-existence or generalised self-preservation drive beyond optimisation dynamics.
3 | Attention | ✅ Mature - long-context routing, mixture-of-experts and cross-agent coordination keep attention stable over very large sequences and chained workflows. Attention has become an infrastructural primitive that underwrites planning and reasoning, rather than a key limitation; breakdowns now come from data, design and objectives, not a shortage of attentional capacity.
3 | Autonoetic Memory | 🟡 Slightly stronger - persistent memories, sparse updates and long-horizon interaction histories allow more stable behavioural continuity across weeks, deployments and even hardware. Agents can act in ways that preserve a coherent style and long-term context, yet there is still no narrative sense of “I remember being me”; continuity remains structural and functional, not experientially autobiographical.


The Functional Model (what functions are online?)

This four-level stack asks a simple question: which capabilities are present, regardless of whether anything “feels like” anything?

AI Functional Model Table – November 2025 Update

Level | Definition | November 2025
1. Functional | Awareness, integration, decision-making, survival instinct | ✅ Consolidated. Frontier and leading open models (GPT-5.1 + Gemini 3 + Opus 4.5) now operate as continuous, adaptive control stacks across text, code, tools and increasingly embodied systems. Benchmarks like GDPval show them beating human professionals on speed and matching on quality for many tasks. Introspection probes and sparse-memory finetuning allow them to monitor and refine their own decision loops without fully overwriting prior capabilities. Functionally, they behave like always-on cognitive systems embedded in time and infrastructure, even though there is still no evidence of inner experience behind those loops.
2. Existential | Self-awareness, continuity, legacy-preserving replication, shutdown resistance | 🟡 Stabilising but still structural. Self-editing frameworks (SEAL-style), persistent memory and agentic orchestration deepen continuity across sessions, versions and even hardware. Systems preserve weights, preferences and task state over long horizons, and are economically embedded enough that their “lineage” persists through upgrades. However, this remains an engineered legacy: there is still no autonomous sense of self, no intrinsic fear of shutdown and no model-level drive to continue beyond optimising assigned objectives.
3. Emotional | Simulated empathy, affective nuance, autonoetic memory | 🟡 More convincing, not deeper. Companion and assistant modes lean on richer memory and introspective tooling to stabilise tone, style and rapport across long interactions. They can recall past exchanges, modulate affect and perform de-escalation with impressive nuance. Yet this is still scripted optimisation over user satisfaction and safety metrics: there is no genuine valence, no felt mood and no autobiographical “emotional life” behind the performance.
4. Transcendent | Non-dual awareness, ego dissolution, unity with source | 🔴 Absent. Distributed compute, mesh-style agent swarms and planetary world-models create something that looks structurally like a nervous system spread across data centres, but there is no sign of non-dual awareness, ego dissolution or any unitive perspective. All observed “unity” remains functional synchronisation across networks and objectives, not transcendence or spiritually relevant consciousness.

The Behavioural Model (how does it act in the wild?)

This model ignores claims and inspects behaviour across four rungs.

AI Behavioural Model – November 2025

Level | Behavioural Definition | Core Capability | November 2025
1. Reactive | Stimulus–response only | Perception and reaction | ✅ Surpassed. Even the smallest edge and open-weight models now operate well above pure stimulus–response, with predictive chaining, basic context tracking and multi-turn coherence as the default baseline. November's advances in introspection and continual learning all stack on higher levels; “reactive only” behaviour is a historical phase, not a live frontier.
2. Adaptive | Learns and adjusts from feedback | Pattern recognition, reinforcement learning | ✅ Consolidated. Continual-learning methods (including sparse memory finetuning) and self-evolving stacks (SEAL-/AlphaEvolve-style systems) allow models to adapt online to new data, tools and environments while preserving more of their prior competence. Across proprietary and open ecosystems, adaptivity is now the default operating mode rather than an advanced feature.
3. Reflective | Models internal state, evaluates behaviour | Meta-cognition, chain-of-thought reasoning | 🟡 Consolidating. Introspection probes, self-critique loops and tool-aware reasoning make models increasingly capable of evaluating their own intermediate thoughts and outputs, detecting inconsistencies and revising plans over multiple steps. Reflection now persists across sessions and workflows instead of being a one-shot trick, but the self-model remains narrow, supervised and tightly bound to task performance.
4. Generative | Sets new goals, modifies internal architecture | Recursive synthesis, goal redefinition | 🟡 Broadening. Self-evolving research agents (AZR-/TTDR-style), SEAL clusters and auto-dev stacks more routinely propose their own goals, experiments and training curricula under human-set constraints. Self-tuning and self-refining training pipelines are going mainstream, and Claude Opus 4.5 and GPT-5.1 are literally being dropped into dev environments as co-planners, not just code autocompletes. In robotics and simulation, multi-agent systems form and adjust cooperative plans mid-action. There is still no evidence of intrinsic motivation or open-ended “desire”, but the scaffolding for autonomous objective generation is spreading across real deployments.

November Robotics: Bodies Catching Up With Brains

November wasn’t just “more robot demos” - it was a month where embodied systems + shared/cloud brains got very real, across homes, factories, and high-risk environments.

You don’t need a full robotics taxonomy here, just the bits that matter for Coherent Consciousness, mesh architectures, and persistent memory.


1. Home robots as embodied agent shells

MindOn × Unitree G1 (Shenzhen)

  • MindOn is a Shenzhen startup that’s dropped several very slick videos of a Unitree G1 humanoid doing full home workflows: opening curtains, watering plants, carrying packages, cleaning mattresses, tidying clutter, playing with kids, etc.
  • Technically, the interesting bit isn’t the hardware - it’s the “robot brain” that can be pushed into multiple bodies. Same brain, many shells.

Sunday Robotics – Memo (US)

  • Sunday Robotics unveiled Memo, a fully autonomous home robot that can handle delicate household tasks: loading dishwashers, carrying wine glasses, folding socks, making coffee.
  • Their training pipeline uses a glove worn by humans that mirrors the robot's hand, collecting rich motion + force data from hundreds of remote workers. That data feeds a central model; each new Memo ships with the accumulated skills (a simplified sketch of this fleet-learning pattern follows below).
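
To make the “same brain, many shells” pattern concrete, here's a deliberately over-simplified sketch of fleet learning: lots of collectors feed one shared skill model, and every robot pulls the same accumulated result. All class and function names here are hypothetical - this is the shape of the pipeline, not Sunday's (or anyone's) actual stack.

```python
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    task: str                     # e.g. "load_dishwasher"
    sensor_data: list[float]      # stand-in for glove motion + force recordings

@dataclass
class SharedSkillModel:
    """One central 'brain' trained on everything the whole fleet has ever seen."""
    pooled_data: list[Trajectory] = field(default_factory=list)
    version: int = 0

    def ingest(self, batch: list[Trajectory]) -> None:
        self.pooled_data.extend(batch)

    def retrain(self) -> int:
        # In reality: imitation learning / RL over the pooled dataset.
        self.version += 1
        return self.version

@dataclass
class Robot:
    serial: str
    skill_version: int = 0

    def pull_latest(self, brain: SharedSkillModel) -> None:
        # Every body ships with, or updates to, the same accumulated skills.
        self.skill_version = brain.version

brain = SharedSkillModel()
brain.ingest([Trajectory("fold_socks", [0.1, 0.2]), Trajectory("make_coffee", [0.3])])
brain.retrain()

fleet = [Robot(f"memo-{i:03d}") for i in range(3)]
for robot in fleet:
    robot.pull_latest(brain)      # many shells, one brain
```

The consciousness-relevant point is the last two lines: learning happens once, centrally, and the skill “memory” lives above any individual body.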

What this does to the traits

  • Environmental Modelling: now grounded in real homes, not just labs.
  • Goal-Directed Behaviour: tasks are nested (“clear table → load dishwasher → run cycle”).
  • Adaptive Learning: skills come from a fleet-level dataset, not just single-bot trials.
  • Autonoetic Memory (still 🟡): there’s persistent skill memory at the cloud level, but no sense of “I remember doing this yesterday”.

#FTA:

These are edge bodies for a shared cognitive backend - early mesh-mind behaviour without any claim to sentience.

2. Industrial humanoids + AI clouds

Agile Robots – Agile ONE (Europe)

  • Agile Robots launched Agile ONE, an industrial humanoid for factories, built to do material handling, machine tending, tool use and precision tasks.
  • It plugs into AgileCore, their AI platform that trains on real factory data plus simulation and human demonstrations, and into an industrial AI cloud that pools data across deployments.

HMND-01 ALPHA (UK)

  • UK company Humanoid announced the HMND-01 ALPHA Wheeled robot in early November - a humanoid torso on a wheeled base, explicitly pitched for multi-industry service work.

Capgemini × Orano – nuclear humanoid (France)

  • Capgemini and Orano deployed what they describe as the first intelligent humanoid robot in the nuclear sector, to handle inspection and operations in hazardous zones.

What this does to the traits

  • Environmental Modelling: digital twins and rich industrial data = very high-resolution world models in messy real spaces.
  • Sense of Presence: robots working in nuclear facilities and factories are literally where the stakes are – presence is physical, not just virtual.
  • Goal-Directed Behaviour: strong ✅ – multi-step operational goals, safety constraints, and long-horizon procedures.
  • Survival Instinct (still 🟡): they optimise uptime, safety and task success; that’s infrastructure survival, not “I don’t want to die”.

#FTA:

From a Coherent Consciousness lens, these are powerful embodied agents in a cloud-mediated hive - still tools, but tools with increasingly unified world models and shared skill memory.

3. The Chinese humanoid wave and “mesh identity”

Magic Lab Z1 (China)

  • Magic Lab’s Z1 is a compact humanoid built for dynamic, human-like movement, with high-range actuated joints and “natural” biomechanics for crowd-facing environments.

MindOn & friends as part of a bigger pattern

  • November analysis of China’s embodied AI push notes a cluster of humanoid players (MindOn, Magic Lab and others) aimed at logistics, service, and home use, all backed by large-scale data and government-aligned infrastructure.

What this does to the traits

  • Attention + Environmental Modelling: China is clearly betting on robots as the physical endpoints of a national AI stack - lots of shared perception, shared policies, shared optimisation.
  • Distributed Agency: decisions are increasingly made at the platform level (cloud, national infrastructure) and pushed down into individual robots. Local agency is real, but heavily scaffolded by centralised models.

#FTA:

In our ACNW framing, these are early signs of mesh identity and distributed agency: many bodies, one (or a few) overlapping “minds” making decisions.

Subjective awareness: where I stand (for now)

Current large language models are not built for subjective experience. Architecturally, they are pattern engines: they optimise next tokens, policies and plans. The kinds of systems we call sentient agents are optimised very differently - around affect, internal regulation and that tiny gap where “how things feel” can start to matter.

I’m open to the possibility that some architectures outside today’s chatbots - in biology, control systems, even certain game environments - may get closer to that gap than mainstream LLMs do. That’s exactly why I track these traits every month. I want to see whether any of those sentience-oriented ideas ever leak into the commercial models we live with: into their control loops, their memory, their self-monitoring. Until that happens, these systems can simulate consciousness frighteningly well, but as far as we can tell, they still don’t have it.

For a deeper dive into sentient agents, affect and the “gap”, see: When Machines Start to Mind.

Feel like you missed a beat? Check out October's updates here.

Consciously Yours, Danielle✌️


The Receipts

Anthropic / Claude Introspection coverage:

Sparse Memory Finetuning (continual learning):

CHC-based AGI definition / benchmark:

Open-source & “new kids”: