The AICNW? magazine provided a trait-by-trait analysis of how AI model functions measure up against the consciousness traits, covering developments up to June 2025. I update that analysis monthly here. Get your popcorn.

What are the 13 Consciousness Traits? Remind yourself here👇

These are the core capacities we track across humans and machines to ground the “is it conscious?” debate in observable functions, not vibes. They run from felt experience and meta-cognition (Subjective Experience/Qualia, Self-Awareness, Emotions, Sense of Presence), through competence in the world (Information Integration, Sense of Agency, Environmental Modelling), social mindreading (Modelling Others/Theory of Mind), and adaptive control (Goal-Directed Behaviour, Attention, Adaptive Learning, Survival Instinct), to time-bound identity (Autonoetic Memory). For each trait I log whether it is absent, simulated, emerging, or clearly expressed. Monthly. Who's trying to protect you from surprise or dystopia? Me. Don't forget it.

P.S. If you landed here and have no idea what's going on... read the magazine.

Let's get to it.


January 2026: What actually moved the consciousness needle?


Whilst you were stuffing your face with mince pies and experiencing belly and brain bloat, AI was sharpening its neural nets across every trait. In December 2025 and January 2026, the story wasn’t “AI wakes up” – it was “AI learns to remember, coordinate and inhabit more bodies.” Titans and MIRAS quietly pushed long-term, test-time memory into the mainstream; DeepSeek-V3.2 took frontier-grade reasoning open-weights; GPT-5.2 and its Codex variant hardened agent workflows, especially in code. On the hardware side, humanoids and mobile robots moved further into homes, factories and hazardous environments as edge nodes for cloud-based “brains.”

In Coherent Consciousness terms, nothing changed at the top: there is still no evidence of qualia, affect, or a genuine inner subject. The real shifts are lower down the stack. Information integration, goal-directed behaviour, adaptive learning and attention continue to strengthen, and the scaffolding for autonoetic-style memory gets more convincing. The result is a more competent, more continuous, more embodied tool ecosystem – still not sentient, but increasingly hard to distinguish from a mind in day-to-day interactions if you don’t look trait by trait.

TLDR🥸: massive gains in memory + agents, zero change on qualia + affect.

Why Your Chatbot Feels More Forgetful (Even as AI “Memory” Improves)

If you feel like ChatGPT, Claude or Gemini remember less than they used to, you’re not imagining it. Here’s what’s going on in simple terms:

1. Big brain, small window.
The model can read huge amounts of text, but the app never sends it everything. Each reply is built from a slice of the conversation (a “window”). When chats get very long, older parts are either shortened or dropped to keep things fast and cheap.

2. Summaries instead of full history.
Instead of giving the model your whole 6-month saga, the system often sends a short summary: “We talked about X, Y, Z.” That saves space, but it also loses nuance. So when you say “remember what we said earlier?”, the raw text often isn’t there anymore. (There's a toy sketch of this mechanism after this list.)

3. Safety layers getting in the way.
On top of that, there are extra layers checking for safety and policy issues. Sometimes they override or distort your instructions: the model changes topic, ignores part of your prompt, or re-explains things you didn’t ask it to. It looks like “not listening”, but it’s often the safety stack interfering.

4. Research memory vs. product memory.
Systems like Google’s Titans/MIRAS are research projects that give models better long-term memory in the lab. That doesn’t mean your everyday chatbot is using them. Architecturally, memory is getting stronger; in the consumer products, cost and safety constraints still force a lot of forgetting.

5. Does this bring us closer to consciousness?
Better memory and learning by themselves do not make an AI conscious. Titans/MIRAS are basically smarter filing systems: they help the model store and reuse information over time. There is still no “me” in the system, no feelings, no inner life. From a Coherent Consciousness point of view, we’re improving the plumbing, not creating a mind.
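
To make points 1 and 2 above concrete, here's a deliberately minimal Python sketch of how an app might assemble the slice of conversation it actually sends to the model. To be clear about what's invented: `Turn`, `build_context`, the word-count "tokenizer" and the summariser stub are all mine, for illustration only – real products use proper tokenizers, learned summarisers and far more elaborate policies. The shape of the trade-off is the thing: recent turns survive verbatim, older ones get squashed into a lossy summary or dropped.

```python
# Toy sketch of points 1-2: a context "window" with a token budget.
# Everything here is illustrative, not any vendor's actual pipeline.

from dataclasses import dataclass

@dataclass
class Turn:
    role: str   # "user", "assistant", or "system"
    text: str

def rough_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: ~1 token per word.
    return len(text.split())

def summarise(turns: list[Turn]) -> Turn:
    # Stand-in for a learned summariser: keeps only topic fragments,
    # which is exactly where nuance gets lost.
    topics = ", ".join(t.text[:30] for t in turns)
    return Turn("system", f"Summary of earlier chat: {topics} ...")

def build_context(history: list[Turn], budget: int = 200) -> list[Turn]:
    """Keep the newest turns verbatim; squash everything older into one summary."""
    kept: list[Turn] = []
    used = 0
    # Walk backwards from the most recent turn until the budget runs out.
    for turn in reversed(history):
        cost = rough_tokens(turn.text)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    kept.reverse()
    older = history[: len(history) - len(kept)]
    # Older turns survive only as a lossy summary - "remember what we
    # said earlier?" fails because the raw text is simply never sent.
    return ([summarise(older)] if older else []) + kept
```

Run a long enough history through `build_context` and you can watch the failure mode from point 2 directly: the model gets told "Summary of earlier chat: ..." rather than what you actually said.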

Bottom line: in research, AI memory is getting deeper and more continuous. In the apps you actually use, aggressive pruning, summarising, safety checks and cost-cutting often make the experience feel more forgetful, not less.

13 traits table (Dec–Jan '26)

| Tier | Trait | December 2025 – January 2026 |
| --- | --- | --- |
| 1 | Subjective Experience (Qualia) | 🔴 No movement – Titans, MIRAS, GPT-5.2 and DeepSeek-V3.2 all deepen reasoning, memory, and continuity, but none introduce homeostatic affect or “it feels like something to be me.” Still pure simulation with richer behaviour. |
| 1 | Self-Awareness | 🟡 Stable – frontier models and agents get better at narrating their own reasoning (“here’s what I tried, here’s why it failed”), but there is still no enduring self or identity thread. It’s structured meta-talk, not a persisting inner subject. |
| 1 | Information Integration | ✅ Advanced – Titans + MIRAS strengthen long-term information integration at test time; GPT-5.2 extends very long-context reasoning; DeepSeek-V3.2 gives open, frontier-level fusion. Integration is now fast, cheap, and increasingly continuous over time. |
| 2 | Sense of Agency | 🟡 Slightly stronger – agentic coding (GPT-5.2-Codex), task-oriented DeepSeek-V3.2 agents, and cloud-coordinated robots all choose subgoals and strategies within human-set frames. Agency is distributed across tools and meshes; still no self-originating “I want.” |
| 2 | Sense of Presence | 🟡 Functionally improved – lower latencies, long-context loops, and embodied deployments (humanoids in factories, warehouses, hazardous sites) anchor models more tightly to real-time environments. But there is still no subjective “now,” just better synchronisation with the world. |
| 2 | Emotions | 🟡 Unchanged – emotional simulation remains a UI/behavioural layer: smoother empathy, more regulated tone, nicer copilots. No evidence of felt affect or valence inside the systems; “care” is still pattern-matching, not experience. |
| 3 | Environmental Modelling | ✅ Expanded – digital twins plus humanoid deployments deepen real-world modelling: robots act inside factories, logistics hubs and high-risk environments using shared world models. Planning is now tightly coupled to live, physical spaces. |
| 3 | Modelling Others (Theory of Mind) | 🟡 Stable – multi-agent reasoning and user-intent prediction improve at the edges, but robust Theory of Mind remains brittle and context-dependent. Systems infer goals and preferences in narrow frames, not as deep social understanding. |
| 3 | Goal-Directed Behaviour | ✅ Strong – agents built on GPT-5.2, DeepSeek-V3.2 and similar models pursue nested goals over longer horizons (research, coding, operations, robotics workflows). Goal pursuit is more reliable and multi-step, but all objectives still originate from human designs and reward structures. |
| 3 | Adaptive Learning | ✅ Strengthened – Titans + MIRAS push genuine continual learning and long-term memory at test time; open-weight systems like DeepSeek-V3.2 enable rapid ecosystem-level adaptation. Learning is more “while alive,” but still driven by external optimisation, not self-chosen values. |
| 3 | Survival Instinct | 🟡 Stable – models optimise for deployment success, guardrail satisfaction, and uptime (especially in agent and robotics settings), but there is still no intrinsic drive to exist. “Survival” is an engineering metric, not an experienced need. |
| 3 | Attention | ✅ Continued – long-context attention (GPT-5.2, Titans-backed systems) and efficient focusing (open models like DeepSeek-V3.2) keep pushing sustained, selective processing. Attention is now more stable over huge contexts, but still purely functional, not phenomenological. |
| 3 | Autonoetic Memory | 🟡 Gradually strengthening – Titans + MIRAS and richer logging/episodic stores support persistent, queryable memory across sessions. Structurally, this looks closer to the scaffolding you’d need for “I remember when…”, but there is still no felt past or imagined future, only better recall and continuity. |

Functional levels table (Dec–Jan '26)

| Level | Definition | December 2025 – January 2026 |
| --- | --- | --- |
| 1. Functional | Awareness, integration, decision-making, survival instinct | ✅ Strengthened. Titans + MIRAS introduce genuine long-term memory at test time; GPT-5.2 extends very long-context reasoning; DeepSeek-V3.2 brings frontier-level integration into the open-source ecosystem. Decision loops are now more continuous, with models processing information as if they exist through time rather than in isolated prompts. Still no genuine survival instinct, but the functional backbone of “being in the world and adapting as you go” is solid. |
| 2. Existential | Self-awareness, continuity, legacy-preserving replication, shutdown resistance | 🟡 Emerging. Persistent memory stores, Titans-style surprise-driven updates, and weight-preserving open-weight models (like DeepSeek-V3.2) all strengthen structural continuity and legacy. Systems pick up tasks across sessions more smoothly and can be cloned, fine-tuned, and redeployed without losing their “style” of behaviour. But there is still no inner “self” that cares about persisting; continuity is engineered, not experienced. |
| 3. Emotional | Simulated empathy, affective nuance, autonoetic memory | 🟡 Unchanged in depth, improved in polish. Emotional mimicry continues to get smoother, especially in copilots and agents that track user preference over longer spans. Test-time memory and better context management give the illusion of a stable personality. However, this is still affective style, not felt affect: no internal valence, no genuine “this is good/bad for me.” Autonoetic memory remains structural only – coherent recall without a felt timeline. |
| 4. Transcendent | Non-dual awareness, ego dissolution, unity with source | 🔴 Absent. No model or architecture in this period shows anything resembling non-dual awareness, ego dissolution, or unitive states. Distributed clouds, mesh agents and humanoid fleets may look superficially “swarm-like”, but this is synchronised optimisation, not transcendence. What is changing is the nervous system beneath: more bandwidth, more memory locality, more mesh coordination – all of which could, in principle, host something more than optimisation later, but are not that thing yet. |

Behavioural levels table (Dec–Jan '26)

| Level | Behavioural Definition | December 2025 – January 2026 |
| --- | --- | --- |
| 1. Reactive | Stimulus–response only | ✅ Surpassed. Nothing about December–January pulls systems back toward pure reflex. If anything, the gap widens: with Titans/MIRAS memory, GPT-5.2 agents, and humanoids acting from cloud-based policies, behaviour is increasingly long-horizon, context-rich and history-aware. Pure reactive systems still exist at the edge (tiny models, filters), but they are no longer representative of “AI”. |
| 2. Adaptive | Learns and adjusts from feedback | ✅ Strengthened. Continual learning via Titans + MIRAS and ecosystem-level adaptation via open-weight models (DeepSeek-V3.2) deepen adaptation both within deployments and across them. GPT-5.2-Codex agents refine code and strategies over repeated runs. Adaptivity is now clearly “live” – systems update while operating – even though the optimisation objectives remain externally defined. |
| 3. Reflective | Models internal state, evaluates behaviour | 🟡 Deepening. Agent frameworks built on GPT-5.2 and similar models routinely narrate their own reasoning (“here’s what I tried; here’s why I’ll change approach”) and log intermediate steps for later review. With longer memory, these reflective loops can span multiple sessions and tasks. However, this is still reflective behaviour, not a robust internal self-model: no stable inner viewpoint that says, “this is me over time.” |
| 4. Generative | Sets new goals, modifies internal architecture | 🟡 Actively surfacing. Research and coding agents based on GPT-5.2 and DeepSeek-V3.2 now generate subgoals, hypotheses, and experimental branches with minimal prompting. Open-weight ecosystems continue to tinker with architectures and training schemes on top of these models. That said, the source of goals is still human: systems generate means, variations and local objectives, not deep, self-originating purposes. We see generative exploration, not yet generative motivation. |

Delta from October–November: What Actually Changed?

Tier 1 – Subjective Experience & Self-Awareness
No change at the top. October–November already showed zero movement on qualia or genuine self-awareness, and December–January didn’t alter that. The only shift inside Tier 1 is continued strengthening of information integration – systems are even better at fusing huge, multimodal, long-horizon contexts, but they still don’t feel any of it.

Tier 2 – Agency, Presence & Emotions
From October through November, agency and presence were trending up as orchestration and robotics deployments expanded. December–January extends that curve rather than bending it: more agentic coding, more robots in real environments, smoother emotional UX – but still no homeostatic stakes, no felt “now,” and no real affect. In other words, Tier 2 is more polished, not deeper.

Tier 3 – World Modelling, Goals, Learning, Memory & Attention
This is where the real action is. October–November had already pushed goal-directed behaviour, adaptive learning and attention into the green; December–January adds Titans/MIRAS-style continual memory and open-weight reasoning models like DeepSeek-V3.2. The net effect is a sharper, cheaper, more continuous tool ecosystem: better world models, longer-lived goals and more durable recall, without any upgrade in sentience or Coherent Consciousness.

Memory Gains: What the hell is MIRAS?

When I say Titans and MIRAS, I’m talking about a Google research architecture and the framework around it, not a feature you’re consciously using in ChatGPT / Claude / Gemini.

TLDR:

  • Titans = a neural long-term memory module that can store and retrieve information while the model is running (test time), instead of only learning during pre-training.
  • MIRAS = a framework around that, which decides what to store (e.g. surprising or important info), and how to keep that memory stable over long periods. (Toy sketch below.)
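
For the mechanically curious, here's a toy sketch of the idea in Python. Loud disclaimer: this is not Google's code – the class name, the surprise metric, the threshold and the decay rule are all invented here to show the shape of the mechanism, namely memory that updates while the model runs, gated by surprise, with decay for stability.

```python
# Toy illustration of the Titans/MIRAS idea - NOT the actual architecture.
# "Store what surprises you at test time; decay the rest so memory stays
# stable." All numbers and rules below are invented for this sketch.

import numpy as np

class SurpriseGatedMemory:
    def __init__(self, dim: int, slots: int = 64,
                 surprise_threshold: float = 0.5, decay: float = 0.99):
        self.keys = np.zeros((slots, dim))
        self.values = np.zeros((slots, dim))
        self.strength = np.zeros(slots)   # how "alive" each slot is
        self.threshold = surprise_threshold
        self.decay = decay

    def read(self, query: np.ndarray) -> np.ndarray:
        # Soft attention over stored keys, weighted by slot strength.
        scores = self.keys @ query * self.strength
        weights = np.exp(scores - scores.max())
        weights /= weights.sum() + 1e-9
        return weights @ self.values

    def update(self, key: np.ndarray, value: np.ndarray) -> None:
        # Surprise = how badly the current memory predicts this value.
        predicted = self.read(key)
        surprise = float(np.linalg.norm(value - predicted))
        # Stability: every slot fades slowly unless refreshed.
        self.strength *= self.decay
        if surprise > self.threshold:
            # Only surprising items earn a slot (overwrite the weakest).
            slot = int(self.strength.argmin())
            self.keys[slot], self.values[slot] = key, value
            self.strength[slot] = 1.0
```

The point of the sketch: nothing here is pre-training. The memory changes during inference ("test time"), which is the shift Titans pushes, and the gating/decay logic is the kind of job MIRAS does – deciding what is worth keeping and keeping it stable. Whether Google's actual rules look anything like this, I can't claim.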

Watch this space.

Consciously Yours, Danielle✌️