Taste is context, compressed
AI answers aren't wrong. They're un-nuanced. Something's off, but you can't always name it. What's missing is you.
What taste actually is
Taste isn't magic. It's really just pattern-matching on accumulated context. Every decision you've made contributes to it. Every reaction to what you've seen. Every time something felt right or wrong before you could explain why.
All of that, compressed into intuition. Herbert Simon, quoted in Kahneman's Thinking, Fast and Slow:
Intuition is nothing more and nothing less than recognition.
Situation provides a cue. Brain retrieves the answer from stored experience. No magic. It's all just reps, compressed.
When I prototype something based on qualitative feedback (no hard data, just a sense of what the client actually needs), that's taste. It's not genius. It's just pattern-matching against everything I've encountered before.
The reps build the library. Making things and seeing things accumulate the patterns your brain matches against. But the insight isn't that reps build taste. Everyone knows that.
The insight is what taste actually is: context, compressed into feeling.
The AI question, honestly
Can AI have taste? I believe so. If it has your patterns.
The problem isn't capability. GPT or Claude can pattern-match better than you across more domains than you'll ever touch. The problem is access. It doesn't have your accumulated context: your decisions, your reactions, your history of "that feels right" and "that's off." It can't feel and see and touch things like you can. It only knows what you give it.
This is why AI answers often feel un-nuanced. A straight answer isn't wrong. It's just not yours. It lacks the context that would let it land differently for you than for someone else.
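To make the access problem concrete, here's a minimal sketch in TypeScript. The names (TasteProfile, buildPrompt) are mine, invented for illustration, not any real API. It shows the same question going to a model two ways: bare, and wrapped in a personal context.

```typescript
// Hypothetical sketch: the model only sees what we send it.
// TasteProfile and buildPrompt are illustrative names, not a real API.

interface TasteProfile {
  decisions: string[]; // past calls you made, and why
  reactions: string[]; // things that felt right or off
}

function buildPrompt(question: string, profile?: TasteProfile): string {
  if (!profile) return question; // the generic case: no context in, generic answer out
  const context = [
    "Past decisions:", ...profile.decisions.map(d => `- ${d}`),
    "Past reactions:", ...profile.reactions.map(r => `- ${r}`),
  ].join("\n");
  // Same question, but now the model can weigh it the way you would.
  return `${context}\n\nQuestion: ${question}`;
}

const generic = buildPrompt("How should this landing page open?");
const yours = buildPrompt("How should this landing page open?", {
  decisions: ["Cut the hero video; clients skim, they don't watch"],
  reactions: ["Long intros felt off on the last three projects"],
});
console.log(generic.length < yours.length); // true: more context in, more nuance out
```

The capability is identical in both calls. The only variable is access.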
Why "off" happens
Nuance requires context. A lot of it.
When I review AI output and something feels wrong, it's almost always this: the answer is generic where it should be specific. It doesn't know what I know. It can't weigh things the way I would, because it doesn't have access to the weights I've built up over years of decisions.
The feeling of "off" is your taste detecting a context mismatch. We assume the AI shares our exposure to the topic. It doesn't. It's confined to what we provide, the files it can parse, and its training data.
The access problem
Your context is trapped. In your head, in scattered conversations, in models that don't talk to each other. The infrastructure to make taste portable doesn't exist.
But what if it could? Not by strapping a camera on you, but by trickling the information you're already consuming and producing into a shared context layer. Your browsing, your notes, your reactions to things. All the micro-decisions that build up the pattern library.
This is what I'm building with Arete: a way to make your context portable, so the AI you work with can pattern-match against your reps, not just its training data.
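As a sketch of what that shared layer could store (my illustration, assuming a simple append-only event log, not Arete's actual schema): each micro-decision becomes a small, timestamped event that any model you work with can read back.

```typescript
// Hypothetical sketch of a shared context layer. Not Arete's real schema.
// Each micro-decision is appended as a small, timestamped event.

type ContextEvent = {
  at: Date;
  kind: "decision" | "reaction" | "note";
  detail: string;              // what happened, in your own words
  verdict?: "right" | "off";   // the felt judgment, when there was one
};

class ContextLayer {
  private events: ContextEvent[] = [];

  record(kind: ContextEvent["kind"], detail: string, verdict?: ContextEvent["verdict"]) {
    this.events.push({ at: new Date(), kind, detail, verdict });
  }

  // What a model would pattern-match against: your reps, serialized.
  export(): string {
    return this.events
      .map(e => `[${e.at.toISOString()}] ${e.kind}: ${e.detail}${e.verdict ? ` (${e.verdict})` : ""}`)
      .join("\n");
  }
}

const layer = new ContextLayer();
layer.record("reaction", "AI draft opened with a generic stat", "off");
layer.record("decision", "Lead with the client's own words instead");
console.log(layer.export()); // portable context, ready to hand to any model
```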
What this means
Taste is earned through reps. Tinkering gives you the decisions. Consistency compounds them into intuition. That part's still true.
But the reps aren't building something AI can never have. They're building something AI doesn't have access to. Yet.
So you have two options:
1. Protect the moat. Keep your context locked up. Taste stays yours by default, not because it's uniquely human, but because the pipes aren't built. This works until someone else builds them.
2. Build the pipes yourself. Externalize your context on your terms. Make your taste portable. Use AI to accelerate your reps, not just execute your tasks.
The question isn't whether AI will have taste. It's whether it will have yours, and whether you choose to share it.
I'm betting on option two.