
Who are you in the age of AI?

Claude knows me pretty well at this point. It knows I'm a PM at a fintech. It knows I want direct feedback, no preamble, prose over bullets. It even knows my workout routine. Months of conversations shaped that understanding, and it adds nuance to every prompt I write.

Then I open ChatGPT and it asks if I want formal or casual tone. Back to zero.

This shouldn't bother me as much as it does. But something about re-introducing myself, explaining what I do, listing my tech-stack preferences all over again, feels like a small death. All that accumulated understanding, gone.

I'd like that back, please.

Three things are true about AI context that weren't true about the data we gave up before.

It compounds. Every correction teaches the model something. Every preference shapes future responses. Every project discussed adds to its understanding of how you think. Photos, by contrast, don't get more valuable over time (sentimental value aside). AI context does.

You don't own it. The platform owns it, or the developer who built the app does. You're the subject of the memory, not the owner. When you leave, you leave empty-handed. I can't even sync my WHOOP data to Claude. Why can't I have this?

It's trapped. Even if you could export your Claude context, where would you put it? ChatGPT can't read it. Cursor doesn't know what to do with it. There's no standard format for who you are to an AI system. There isn't a concept of you out there.

These three facts create a perfect trap. Your context compounds, making it more valuable every day, but you don't own it, so that value accrues to the platform. And even if you did own it, there's nowhere to take it. The longer you stay, the harder it is to leave. That's lock-in, built on value you created.

I work in payments, but I've been deep in voice AI recently, building agents that remember context and personalize across sessions. Building them, I kept hitting the same gap.

Infrastructure for developers to add memory exists: Mem0, Zep, a dozen others. But they share an assumption: the developer owns the user's context. The app stores the memory. The user is just the subject of it; they can't take it with them, and no other app can see it.

This makes sense for customer service bots. But for personal AI tools, the ones that know your job, your goals, your communication style, the ownership question matters. If the platform owns your context, they own the compound value. If you own it, that value accrues to you, and other tools can build on it. Win-win.

The idea stayed in my head for days and genuinely started to bother me. Then I went to a hackathon to test whether it was real.

At AI Collective x Hello Hack Miami, I pitched portable AI context that follows you across models. The room was polite. Cool idea, good luck.

But one builder lit up. He'd felt the problem badly enough to hack together a solution: markdown files in Obsidian, synced to Dropbox, manually pasted into system prompts whenever he switched tools.

It was janky, but it was telling. The validation that matters isn't "cool idea, bro"; it's "I already tried to build this myself".

His solution was janky because he was only solving half the problem. AI tools need two things from you: identity and context. His Obsidian files covered identity: who he is, what he works on, his preferences. You can write that down. But context? What you're doing right now? The page you just read, the code you just wrote, the rabbit hole you went down before switching tools? Good luck keeping track of that in a markdown file.

To put it in a metaphor: identity is the riverbed in a river system. Carved over time. Stable. Defining the shape of everything that flows through it.

Context is the river itself. Constantly flowing. Always changing.

[Figure: river and riverbed visualization, context as the flowing river and identity as the carved riverbed]
Context flows constantly. Identity forms slowly underneath, shaped by what you repeatedly do.

The river shapes the riverbed. What you do repeatedly becomes who you are to the system. Browse enough job postings and it infers you're job hunting. Write enough TypeScript and it knows your stack.
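As a sketch of that dynamic, here's how repeated context signals could harden into identity traits. The event shapes, values, and the promotion threshold are all made up for illustration; none of this is a real Arete API.

```typescript
// Hedged sketch: repeated context "flow" graduating into stable identity.
// Event kinds, values, and the threshold are illustrative assumptions.
type ContextEvent = { kind: string; value: string };

function inferTraits(events: ContextEvent[], minCount = 3): string[] {
  const counts = new Map<string, number>();
  for (const e of events) {
    const key = `${e.kind}:${e.value}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  // Only signals seen repeatedly graduate from river to riverbed.
  return Array.from(counts.entries())
    .filter(([, n]) => n >= minCount)
    .map(([key]) => key);
}

const events: ContextEvent[] = [
  { kind: "language", value: "TypeScript" },
  { kind: "language", value: "TypeScript" },
  { kind: "language", value: "TypeScript" },
  { kind: "browse", value: "job-posting" },
];

console.log(inferTraits(events)); // ["language:TypeScript"]
```

One browse of a job posting stays in the river; three sessions of TypeScript become part of who you are to the system.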

Platforms are getting better at memory. ChatGPT summarizes your conversations and reinjects them. Claude learns facts about you across sessions. That's real progress, but it's locked inside each platform.

But memory isn't context. Memory is retrospective: what you told the AI, summarized and stored. Context is what you're doing right now, before you've said anything. The page you're reading. The code you're writing. The thing you were thinking about before you switched tools.

And even the best memory is still siloed. Your Claude memory doesn't help ChatGPT. Your ChatGPT memory doesn't help Cursor. Each platform builds its own picture of you, and none of them talk to each other.

Here's the moment I want to create: You browse a job posting in Chrome. Open Claude Desktop. Ask "would I be a good fit?" And it answers with the right nuance because it already knows what you were reading and who you are.

Not "it remembers what I told it" but "it knew what I just did and who I am".
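To make that moment concrete, here's roughly the payload a tool might receive in the job-posting scenario. The shapes and field names are hypothetical, not a published format.

```typescript
// Illustrative only: slow-changing identity plus live context handed to a
// tool at ask-time. Every name here is an assumption, not an Arete spec.
interface Snapshot {
  identity: { role: string; stack: string[] };  // who you are
  context: { source: string; summary: string }; // what you just did
}

function buildPrompt(s: Snapshot, question: string): string {
  return (
    `User: a ${s.identity.role} working mainly in ${s.identity.stack.join("/")}.\n` +
    `Right now: ${s.context.summary} (via ${s.context.source}).\n` +
    `Question: ${question}`
  );
}

const snap: Snapshot = {
  identity: { role: "PM at a fintech", stack: ["TypeScript"] },
  context: { source: "Chrome", summary: "reading a senior PM job posting" },
};

console.log(buildPrompt(snap, "Would I be a good fit?"));
```

The point isn't the prompt string; it's that the tool gets both halves without the user retyping either.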

So what would it take to fix this? Three things:

Ownership. Your AI context lives on your machine, in a format you control. Not on a platform's servers. Not in a developer's database. Yours.

A standard format. A schema for representing who you are to an AI system. So that Claude, ChatGPT, Cursor, and whatever ships next month can all read the same file and reach the same conclusions.

A connector layer. Something that captures your context as you work and makes it available to any tool that speaks the standard.
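A minimal sketch of what such a shared file could look like, and how any tool could read it. The schema, the field names, and the validation rule are all assumptions for illustration, not a published Arete standard.

```typescript
// Hypothetical portable identity format. Field names are illustrative.
interface PortableIdentity {
  version: string;                      // lets tools negotiate schema changes
  facts: Record<string, string>;        // stable things about you
  preferences: Record<string, string>;  // how you want models to respond
}

// Any tool that speaks the format parses the same file
// and reaches the same conclusions about the user.
function parseIdentity(json: string): PortableIdentity {
  const raw = JSON.parse(json);
  if (typeof raw.version !== "string") {
    throw new Error("identity file missing version");
  }
  return raw as PortableIdentity;
}

// The same file a Claude, ChatGPT, or Cursor integration could read:
const file = `{
  "version": "0.1",
  "facts": { "role": "PM at a fintech" },
  "preferences": { "tone": "direct", "format": "prose over bullets" }
}`;

console.log(parseIdentity(file).preferences.tone); // "direct"
```

Because the file is versioned, local, and plain JSON, ownership and portability fall out of the format itself rather than any one platform's goodwill.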

This is the Plaid model.

Remember connecting bank accounts before Plaid? Every fintech built its own integration. You'd enter credentials, pray it worked, watch it break when your bank changed something. Each app was an island.

Plaid didn't wait for banks to standardize. Banks never would. Instead, they built the connector layer. One integration that worked everywhere. Connect once, every app that speaks Plaid just works.

They became the standard not by being the best, but by being first and reasonable. They solved a coordination problem no individual company would solve on its own.

That's the model for what I'm building.

Every AI tool builds its own understanding of you. Claude has one version. ChatGPT has another. Cursor starts from scratch. What if there were a connector layer? Your identity, owned by you, readable by any tool that speaks the protocol. And not just tools: agents too. When AI agents start coordinating on your behalf, they'll need the same thing: identity they can trust, context they can share.

That's Arete. Plaid for AI identity and context.

Who are you in the age of AI?

Right now, the answer depends on which tool you ask. You are fragments: a Claude user here, a ChatGPT user there, unknown to the next thing you try. Your context compounds inside each silo, making you more locked in every day.

It doesn't have to be that way.

Plaid made this possible for financial identity. OAuth made it possible for authentication. MCP is creating the transport layer for AI tools to talk to each other.

The missing piece is the identity layer. A standard way to represent who a user is to an AI system. Or even what an agent is to other agents.

That's what I'm building with Arete.

Built at a hackathon. Becoming a protocol.

Arete on GitHub

Plaid for AI identity and context. Local-first, open-source.
