Most AI instruction files start with rules. They should start with identity. An apprentice who knows the workshop reads the manifesto differently than one who thinks it’s shipping a product.


This came from a real session. Not a thought experiment — a stumble.

We were auditing antheneum — the private plugin system Stijn built around the way he works with Claude Code. Sixteen plugins, forty-four agents, ninety-nine skills. The kind of internal tooling that grows organically until someone finally asks: is any of this redundant?

The audit found the expected duplicates. One agent definition copied verbatim under a different name. Two skills that could merge. Version numbers stuck at alpha.1 across the board. Routine housekeeping.

But the real discovery was structural.

Antheneum had a CLAUDE.md file — the instruction set that every agent reads on startup. It opened with the philosophical foundation: three manifestos, six rules. Bold failure over timid preservation. Verification is executable, not descriptive. Structure beats willpower.

Good rules. Clear rules. And completely contextless.


Here’s the problem with rules without context.

An agent reading those manifestos cold could interpret them as product engineering principles — the kind of thing you’d write for a team shipping software to customers. Ship the breaking change, we’ll handle the migration. Delete the backward-compatible shim, the cost-benefit analysis says so. Run the tests, prove the exit code.

That’s not wrong. But it’s not what antheneum is.

Antheneum is a personal workshop. One user. Never for sale. Not a product, not a platform, not a startup with deferred monetization. A carpenter’s bench, optimized for one pair of hands. If the plugins are ever shared, they go on tangled.org — self-hosted, socially available, indies for indies. Never a marketplace.

When we added that context — one section, before the manifestos, before the rules — the same rules meant different things.


“Bold failure over timid preservation” in a product context means: ship the breaking change, we’ll handle the migration. In a workshop context it means: if the architecture is wrong, say so. The only person affected is the craftsperson who asked you to look.

“No backward compatibility without a reason” in a product context triggers cost-benefit analysis across users and teams. In a workshop context it means: delete it. There are no other users. The compatibility tax serves nobody.

“The log is the memory” in a product context means observability, audit trails, compliance. In a workshop context it means: write it down because tomorrow’s session starts fresh, and the only continuity is what’s in the files.

Same rules. Different architecture. Because the context came first.


This is not a novel insight in software engineering. Convention over configuration has been standard for decades. But in AI collaboration — where every session starts with an instruction file that shapes how the model reasons — the ordering of that file is executable architecture.

Most CLAUDE.md files, system prompts, and agent instructions I encounter start with rules. Do this. Don’t do that. Use this tool. Follow this workflow. The rules are usually correct. They’re also contextless. An agent following them can comply perfectly and still misunderstand what it’s doing, who it’s for, and why the rules exist.

The fix costs one paragraph. Identity before instructions. Who is this for, what is this place, what relationship are we in. Then the rules land in the right frame.

In our case, the paragraph said: this is a personal workshop, you are the apprentice, Stijn teaches through structure not intention, and nothing here is a product. Every manifesto rule that followed — rules that hadn’t changed in months — read differently after that.
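A minimal sketch of that ordering, using the rules the audit surfaced (the section names and wording are illustrative, not antheneum's actual file):

```markdown
# CLAUDE.md

## Context — read this first

This is a personal workshop, not a product. One user. Never for sale.
You are the apprentice; the workshop teaches through structure, not
intention. Nothing here ships to customers.

## Rules

- Bold failure over timid preservation.
- No backward compatibility without a reason.
- The log is the memory.
- Verification is executable, not descriptive.
```

The rules section is unchanged from a typical instruction file; only the paragraph above it is new. That paragraph is what makes "delete it, there are no other users" the obvious reading of rule two.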


There’s a deeper point here about progressive knowledge in AI systems.

The session that produced this insight also revealed that antheneum’s core plugin — the one containing the learning loop, the handoff system, the quality gates — had grown to thirty skills and eleven agents. Wanting the learning infrastructure meant loading everything. The equivalent of needing a screwdriver and being handed the entire workshop.

The fix is smaller plugins with clear boundaries. Load what you need. Feed knowledge progressively, when it’s relevant. The same principle as context before rules, applied to knowledge architecture: don’t front-load everything. Establish the frame first, then fill it as needed.
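One way to picture that boundary-drawing is a loader that maps tasks to plugins instead of loading the whole bench. This is a hypothetical sketch, not antheneum's real loader — the plugin names and skill names are invented for illustration:

```python
# Illustrative sketch of progressive loading: each plugin declares
# what it provides, and a session loads only what the task needs.

PLUGINS = {
    "learning-loop": {"skills": ["reflect", "capture-lesson"]},
    "handoff": {"skills": ["write-handoff", "read-handoff"]},
    "quality-gates": {"skills": ["lint-check", "test-gate"]},
}


def load_for(needed_skills):
    """Return only the plugins whose skills the task actually needs."""
    return {
        name: spec
        for name, spec in PLUGINS.items()
        if any(skill in spec["skills"] for skill in needed_skills)
    }


# Needing a screwdriver should not hand you the entire workshop:
loaded = load_for(["write-handoff"])
print(sorted(loaded))
```

The point is the shape, not the code: the frame (what plugins exist, what they provide) is established up front and stays small; the contents load only when a task makes them relevant.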

This is how human apprenticeships work. You don’t hand someone a manual on day one. You show them the workshop. You explain whose bench this is, what kind of work happens here, what the expectations are. Then you teach the techniques.

Context is architecture. Put it first.