Credit: Created using Leonardo AI / University of Melbourne

When AI seems to know you better than you know yourself

Tech Xplore

I was at my clinic the other day and asked an AI assistant about the differential diagnosis of a rash in a child. A routine question. The response came back clear and sensible. And then it added, "Are you asking about one of your patients, or one of your grandchildren?"

I was taken aback. Because it was right: I have grandchildren. And it remembered that I have grandchildren.

That moment pointed to something new. Not just smarter AI, but a fundamentally different kind of relationship—one that feels, unsettlingly, like being known.

A machine that knows you

At the end of last year, ChatGPT presented me with a summary of my year: 909 chat conversations, and three recurring themes it had identified unprompted: building AI tools for general practice, teaching and writing about planetary health, and creative time with family.

Then it went further.

It offered a visual portrait, rendered in pixel art and titled "Still Life with Stethoscope and Hang Drum." A stethoscope, a hang drum, an open MacBook, a glowing QR code, and a turquoise mug of peppermint tea.

No face, no figure. Just the objects of a life, selected and arranged by a machine that had been paying attention.

It was accurate. Uncomfortably so.

What bothered me was that I accepted it without question, as though it were a considered verdict rather than a pattern extracted from thousands of exchanges.

I had done none of the work that usually produces that kind of self-knowledge. These insights just arrived, pre-packaged and convincing.

That unease has a long history.

The ancient Greeks had a phrase for this idea of self-knowledge: gnōthi seauton, or know thyself. Inscribed at the Temple of Apollo at Delphi, home of the famous Oracle, it set the terms for a lifetime of inquiry.

Self-knowledge, in that tradition, was hard-won, always incomplete and very personal: something you pursued, not something handed to you ready-made.

ChatGPT made me a portrait, rendered in pixel art and titled "Still Life with Stethoscope and Hang Drum." Credit: ChatGPT

From remembering to constructing

This shift is not accidental.

Early large language models (LLMs) could hold around 1,000 to 2,000 tokens (a token is a chunk of text, roughly a word or part of a word, that an LLM processes as a single unit) at a time.

Contemporary systems can process up to 1 or 2 million tokens in a single context window. That is a thousandfold increase in working memory: enough to hold entire books, months of conversation and large portions of a personal history in a single pass.
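To get a feel for those numbers, here is a minimal sketch of how text volume translates into tokens. The `estimate_tokens` function is a hypothetical illustration using the common rule of thumb that English text averages roughly four characters per token; real tokenizers (byte-pair encoders) split text differently and give different counts.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters-per-token
    heuristic for English text. Real tokenizers vary."""
    return max(1, len(text) // 4)

# A long book is roughly 500,000 characters of text.
book = "x" * 500_000
print(estimate_tokens(book))  # ~125,000 tokens

# An early LLM with a 2,000-token window sees a tiny slice of that;
# a 2,000,000-token window could hold several such books at once.
early_window = 2_000
modern_window = 2_000_000
print(modern_window // estimate_tokens(book))  # books that fit in one pass
```

Even with the heuristic's imprecision, the scale of the shift is clear: from holding a few pages at a time to holding a personal archive.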

Add persistent memory across sessions, now enabled by default in several major AI chatbots, and something important changes.

AI is no longer storing isolated details. It is building a model of you: what questions you ask, what topics you return to, what seems to matter most.

From construction to influence

Memory on its own is passive. But organized memory becomes narrative, and narrative shapes identity.

The ancient Greek philosopher Aristotle observed that character is formed and revealed not in isolated moments but in the patterns of a life, in what we repeatedly choose and repeatedly avoid.

AI systems are now positioned to observe exactly those patterns with a consistency no human could match. They don't just recall—they select, organize and reflect back.

Systems are being developed that can do exactly this. Imagine your AI says, "Over the past three months, your questions have shifted. You're asking more about stress, sleep and coping. Are you doing OK?"

That example is worth sitting with.

AI is increasingly capable of drawing inferences about emotional state from patterns in language and timing: not because you disclosed anything directly, but because the accumulated pattern tells its own story.

This has genuine clinical promise.

Early detection of mood deterioration or burnout through natural language patterns is an emerging area of real research interest.

The idea that AI might flag warning signs before a person has consciously registered them carries genuine public health potential.

As a clinician, I find that possibility genuinely exciting.

But these inferences are still interpretations. Research shows that people readily incorporate external classifications into their self-understanding, particularly when they carry an air of authority.

And when AI presents a coherent version of you, it doesn't just describe, it begins to define.

Remaining agents in our own lives

There is a significant shift underway. Not just in what is remembered, but in who decides what matters.

AI systems can detect patterns across time, synthesize them and present a distilled portrait of who you are.

That portrait may feel clearer than your own recollection—a bit more consistent, more complete. And coherence is persuasive.

If a system can tell you what defines you and what themes run through your life, the inner work of constructing that meaning begins to feel unnecessary.

But that internal work matters deeply.

Constructing meaning from experience is how identity forms and how we remain agents in our own lives.

Without it, the self risks becoming thinner, more malleable and more easily steered.

We need to return regularly to the hard questions ourselves. Who am I? What matters to me? How have I changed?

These are not questions to outsource. The Delphic Oracle did not promise that self-knowledge would be comfortable, only that it was yours to seek.

In an age when AI is increasingly willing to do that seeking for us, the most human thing left may be to insist on doing it yourself.

Key concepts
Large language models, AI alignment

Provided by University of Melbourne