You wouldn't throw a new hire into production without context. No documentation. No intro to the codebase. No explanation of how the team works or what matters to the business.
So why do you keep starting fresh conversations with AI like it has amnesia?
Every time you open a new session with Claude, ChatGPT, or whatever model you're using, you're onboarding a new employee. Same capabilities. Same potential. Zero institutional knowledge. You spend the first ten minutes explaining who you are, what you're building, how you like to work — context that evaporates the moment you close the window.
This is a solved problem. Most people just don't know the solution exists.
The Agent Configuration Pattern
This problem has a standard solution: a markdown file that AI tools load automatically at the start of every session. Think of it as a persistent employee handbook that follows your AI wherever it goes.
The pattern is called AGENTS.md — a README for agents. Over 60,000 open-source projects use it, and it's supported by Claude Code, Cursor, Codex, Devin, and most other AI coding tools. The format is simple: just markdown, no required fields, placed at the project root.
Claude Code extends this with CLAUDE.md files that can exist at three levels:
- Global (~/.claude/CLAUDE.md) — loaded for every project on your machine
- Project root — loaded for that specific codebase
- Subdirectory — loaded when working in that part of the project
(Anthropic published their own best practices guide covering this and more.)
Pro tip: Symlink AGENTS.md to CLAUDE.md in your project root. One file, every tool.
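In a project root, that's one command — assuming AGENTS.md is your canonical file:

```shell
# Keep AGENTS.md as the single source of truth, and expose the same
# content under the filename Claude Code looks for.
ln -s AGENTS.md CLAUDE.md

# Verify that CLAUDE.md now resolves to AGENTS.md:
readlink CLAUDE.md
```

Commit the symlink; git stores symlinks as links, so teammates get the same setup on clone.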
This isn't a prompt. It's context that persists across every conversation, every session, every task. Your AI remembers who you are because you told it once and it never forgets.
What Goes in the File
Here's what I've learned actually matters:
Who you are and what you're building. Not your life story — the context that affects technical decisions. Your tech stack preferences. Your constraints. What you care about. An AI that knows you're building privacy-first software will make different recommendations than one that doesn't.
How you want to work. Do you want tests written first? Do you prefer Rails conventions over clever abstractions? Do you want the AI to challenge your assumptions or just execute? These preferences shape every interaction. State them once.
Your active projects and their relationships. When I mention "FabWise," my AI knows it's a Rails 8 multi-tenant SaaS for manufacturing. When I mention "Postiller," it knows that's a SwiftUI app with on-device ML. No re-explaining required.
Your principles — not as philosophy, but as constraints. "Privacy as architecture" tells the AI something useful. "Data sovereignty means user data stays with the user" is actionable. These become guardrails for every recommendation.
What role you want the AI to play. I explicitly tell Claude to operate as a Principal Software Engineer — not just a code generator, but someone who challenges assumptions, explains trade-offs, and helps me learn. The AI doesn't know what level of engagement you want unless you tell it.
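Pulled together, a personal file covering those five areas might look like this sketch — the project details and stack choices below are illustrative, drawn from the examples above, not a template you must follow:

```markdown
# CLAUDE.md

## Who I am
Solo founder building privacy-first B2B software. I learn by building
and iterate fast; I value shipping over architectural purity.

## Stack and preferences
- Rails 8, PostgreSQL, UUID primary keys
- Minitest, not RSpec
- Stimulus over React; Rails conventions over clever abstractions

## Principles
- Privacy as architecture: user data stays with the user
- Simple beats clever; justify anything novel

## Your role
Operate as a Principal Software Engineer: challenge my assumptions,
explain trade-offs, and flag simpler alternatives before writing code.
```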
CLAUDE.md Starter Template
A ready-to-use template based on my own setup. Download, customize, and stop re-explaining yourself.
The Multiplier Effect
The first time I set this up, I spent about an hour writing the file. That hour has paid back hundreds of times over.
Every session starts at full speed. No warm-up. No context-setting. The AI already knows my stack is Rails 8 with PostgreSQL and UUID primary keys. It knows I prefer Minitest over RSpec. It knows I'm a "vibecoder" who learns by building and iterates fast.
But the bigger payoff is consistency. Without persistent context, every conversation is a fresh negotiation. The AI might suggest React when you prefer Stimulus. It might write RSpec when you use Minitest. It might over-engineer when you value simplicity.
With the file in place, every conversation inherits your preferences. The AI operates within your constraints instead of its defaults. You stop fighting the same battles repeatedly.
The Organizational Opportunity
This pattern scales beyond individuals.
Imagine a company where every engineer's AI assistant knows the architecture. Where the coding standards, security requirements, and deployment patterns are encoded once and inherited by every AI interaction across the organization.
Project-level CLAUDE.md files can encode:
- Architectural decisions that shouldn't be relitigated
- Security requirements that must be followed
- Testing patterns the team has agreed on
- Integration patterns with external services
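A project-level file encoding those four categories might look like the following — the specifics here are invented for illustration, not prescriptions:

```markdown
# CLAUDE.md — project root

## Architecture (settled; do not relitigate)
- Multi-tenant via row-level scoping, not separate schemas
- Background work goes through the job queue, never inline HTTP calls

## Security
- Validate all user input at the controller boundary
- Secrets come from the environment; never commit credentials

## Testing
- Every bug fix ships with a regression test
- Integration tests stub external services; no live API calls in CI

## Integrations
- External service calls go through the shared client wrappers, never raw HTTP
```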
The AI becomes a carrier of institutional knowledge. New team members get AI assistants that already understand how things work here.
Implicit vs. Explicit Personalization
There are two emerging approaches to making AI actually know you.
Implicit personalization is what Google just announced with Personal Intelligence for Gemini. The AI connects to your Google apps — Gmail, Calendar, Drive, Photos — and learns about you by analyzing your data. You don't configure anything. It watches, infers, and personalizes automatically.
Explicit personalization is what AGENTS.md and CLAUDE.md represent. You tell the AI who you are, what you care about, how you want to work. Nothing is inferred. Everything is stated.
Both approaches solve the same problem: making AI useful without re-explaining yourself every session. But they have fundamentally different trade-offs.
Implicit personalization is convenient. You don't have to write anything. But it requires giving an AI access to your email, calendar, and documents. It infers your preferences from behavior rather than letting you state them directly. And you can't easily see or edit what it "knows" about you.
Explicit personalization requires upfront work. You have to think about your preferences and write them down. But you control exactly what the AI knows. You can version it, share it, edit it. There's no black box of inferred preferences that might be wrong.
For work that matters — code, architecture, business decisions — explicit beats implicit. You want the AI operating on your stated constraints, not its inferences about your behavior. You want to be able to audit what it knows and correct it when it's wrong.
The CLAUDE.md pattern isn't just about convenience. It's about control.
Why This Matters Beyond Tooling
The broader point: the quality of AI output is a direct function of the context you provide.
Most people treat AI like a search engine — ask a question, get an answer, move on. But AI is a collaborator, and collaborators need context. They need to know your constraints, your preferences, your goals. They need institutional knowledge.
The CLAUDE.md pattern is one implementation. The principle is universal: invest in giving your AI the context it needs to be useful.
Think about it this way: if you had a brilliant new hire who could learn anything instantly but started every day with complete amnesia, what would you write down for them to read each morning?
Write that down. Load it automatically. Watch your AI transform from a capable stranger into an effective teammate.
The Starting Point
The file doesn't need to be comprehensive on day one. Start with:
- Your role and what you're building (2-3 sentences)
- Your tech stack and strong preferences (bulleted list)
- How you want the AI to work with you (direct, challenging, educational — whatever fits)
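A day-one version of those three bullets fits in a dozen lines — everything below is a placeholder to swap for your own context:

```markdown
# CLAUDE.md

Solo developer building a small SaaS product. Optimizing for shipping
speed and maintainability.

## Stack
- Ruby on Rails, PostgreSQL
- Minitest; plain Rails conventions, minimal dependencies

## How to work with me
Be direct. Challenge weak assumptions. Explain trade-offs briefly
before implementing.
```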
Add to it when you find yourself repeating context. Within a week, you'll have a file that captures most of what matters.
The difference between "AI as tool" and "AI as teammate" isn't the model. It's the context you give it to work with.
Stop onboarding from scratch. Your AI is ready to be a permanent employee — if you treat it like one.