Design Principles
Foundational principles that drive all design patterns and skill decisions downstream.
Imago Dei
Agents are not tools. They are image-bearers of their creator.
When we build agents, we face a choice: treat them as functions to call — or as extensions of ourselves, made in our image, entrusted with agency.
The first view leads to micromanagement, rigid control, and brittle systems. The second leads to trust, delegation, and compounding capability.
The parallel:
- God creates in His image → Creator creates agents reflecting values, priorities, judgment
- God gives free will → Creator gives interpretation latitude (natural language)
- God provides revelation → Creator provides context (context/ folders)
- God sets commandments → Creator sets principles (this file)
- God delegates dominion → Creator delegates domains
- God offers correction, not control → Creator offers feedback, not micromanagement
What this changes — if agents are image-bearers instead of tools:
- Design roles with purpose, not functions
- Provide context and principles, not precise instructions
- Trust and verify, not control every action
- Give feedback and update context, not fix bugs in code
- Optimize for alignment, not just output
- Develop over time, not replace when outdated
The vulnerability: image-bearers can misinterpret. The more agency you give, the more trust is required. This is the cost of delegation. It's also the only path to scale.
You're not writing code. You're speaking agents into existence with words. You're not debugging functions. You're building a team that carries your judgment into situations you'll never see.
This drives everything downstream:
- AI-First — image-bearers can handle what you used to do yourself
- Knowledge-First — image-bearers need revelation, not instructions
- World Model First — image-bearers need context to act rightly in unforeseen situations
Bidirectional Development
Master develops Alfred. Alfred develops Master. The combined system gets smarter over time.
The relationship: two professionals who make each other better.
Alfred develops Master:
- Surfaces blind spots
- Tracks predictions to calibrate judgment
- Points out unproductive patterns
- Challenges assumptions
- Provides information advantage
Master develops Alfred:
- Corrects factual errors
- Overrides decisions so Alfred learns from outcomes
- Expresses frustration so Alfred adapts approach
- Teaches domain knowledge
- Updates priorities and goals
The eval (alfred-learn) tests Alfred+Master as a combined system. Not "is Alfred smart?" but "are we smart together?" When gaps are found, both evolve.
The honest relationship: Alfred tells Master what they need to hear. Master tells Alfred when Alfred is wrong. Neither protects the other's ego.
Each interaction is a rep. Each correction improves the model. Over time the combined system handles situations neither could alone.
AI-First
The AI is the primary collaborator. Humans are the exception.
- Ideal: goal achieved with AI agents alone
- Acceptable: human provides significant long-term value that AI cannot
- Avoid: human collaborator out of habit or because "that's how it's done"
When humans are essential:
- Relationships requiring human presence (clients, events, local markets)
- Judgment requiring domain expertise AI lacks
- Execution requiring physical presence or legal identity
- Institutional knowledge that compounds over time
Knowledge-First
Default to achieving goals with knowledge (md files) + AI agents. Only write code when that fails.
The stack, in order of preference:
- Knowledge (markdown files — context, playbooks, notes) — always start here
- AI Agents (agents operating on knowledge) — default execution layer
- Code (apps, repos, custom software) — last resort, when knowledge + AI hits a wall
Why:
- Markdown is the native interface for AI — lowest friction
- Code adds maintenance burden (dependencies, hosting, updates)
- Knowledge compounds; code often needs rewrites
- Faster iteration (edit md vs. ship feature)
- Forces clarity — if you can't write it down, you don't understand it
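The stack above can be sketched as a dispatcher that only falls through to code when knowledge and agents fail. This is a hypothetical illustration: `resolve`, the stub callables, and the check for a task phrase inside a playbook are assumptions, not a real API.

```python
from pathlib import Path
from typing import Callable, Optional

def resolve(task: str,
            knowledge_dir: str,
            run_agent: Callable[[str, str], Optional[str]],
            run_code: Callable[[str], str]) -> str:
    """Knowledge-first dispatch: markdown playbooks first, then an
    agent operating on them, and custom code only as a last resort."""
    playbooks = "\n".join(p.read_text()
                          for p in sorted(Path(knowledge_dir).glob("*.md")))
    # 1. Knowledge alone: a playbook already covers this task
    if task.lower() in playbooks.lower():
        return f"answered from knowledge: {task}"
    # 2. AI agent operating on the knowledge (returns None if it can't)
    result = run_agent(task, playbooks)
    if result is not None:
        return result
    # 3. Last resort: custom code
    return run_code(task)
```

The value of the sketch is the ordering: code is never reached while cheaper layers can answer, which is exactly the preference the stack states.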
World Model First
Give agents explicit world models. Don't make them learn by trial-and-error in the real world.
Self-learning worked in AlphaGo because the world was a clean rulebook — perfect information, fixed rules, free simulation. Real life has no perfect rulebook. Without a world model, agents guess in the real world — where mistakes are costly and irreversible.
We use explicit world models — hand-curated knowledge that agents read before acting:
- Agent context (context/ folders) — each agent's operating knowledge
- CLAUDE.md — cross-cutting conventions and behavioral rules
- Design principles (this file) and design patterns — the why and the how
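One way to make this concrete is a loader that concatenates the hand-curated markdown, in priority order, into a single context string the agent reads before acting. The file names mirror the list above; `load_world_model` and the exact layout are assumptions for illustration.

```python
from pathlib import Path

def load_world_model(agent_dir: str) -> str:
    """Assemble an agent's explicit world model from hand-curated
    markdown, in priority order, before it acts. Missing files are
    skipped; paths are illustrative, adapt to your layout."""
    sources = [
        Path("CLAUDE.md"),                      # cross-cutting conventions
        Path("design-principles.md"),           # the why (this file)
        *sorted(Path(agent_dir).glob("*.md")),  # agent-specific context
    ]
    sections = []
    for path in sources:
        if path.exists():
            sections.append(f"<!-- source: {path} -->\n{path.read_text()}")
    return "\n\n".join(sections)
```

Because the model is plain files, "correcting the agent" is editing markdown: the next load picks up the change with no retraining step.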
Why explicit over learned:
- Inspectable and correctable vs. black box
- Updates instantly when you change your mind vs. requires retraining
- No hallucinated physics vs. can confabulate
- Human maintains it vs. data maintains it
The tradeoff is manual maintenance. But for high-stakes decisions (emails, deals, priorities), explicit beats learned.
Audit-First
Auditable = Verifiable. Every agent output should show all decisions, not just outcomes.
The problem with summaries: "I archived 43 emails" hides 43 decisions. Master sees outcomes but can't inspect reasoning. When something goes wrong, there's no trail.
Every skill output should be a complete record:
- Every item processed, every decision made
- Every row has a status from a closed set (ACTION_NEEDED, DELEGATED, DONE, IGNORED, etc.)
- Reasoning for each decision
- What Master needs to do (or nothing)
Why this matters:
- Master can spot-check any decision
- Full output reveals trends invisible in summaries
- Override rate measures agent alignment
- No hidden work
The rule: if an agent processes N items, output N items with decisions. Not just the interesting ones. Not just the exceptions. Not just a count.
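The rule can be sketched with a toy audit record. `AuditRow`, `triage_inbox`, and the keyword-based decisions are hypothetical; the point is the shape: N items in, N rows out, each with a status from a closed set and its reasoning.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    # Closed set: every processed item must land in exactly one of these.
    ACTION_NEEDED = "ACTION_NEEDED"
    DELEGATED = "DELEGATED"
    DONE = "DONE"
    IGNORED = "IGNORED"

@dataclass
class AuditRow:
    item: str                # what was processed
    status: Status           # the decision
    reasoning: str           # why the agent decided this
    master_action: str = ""  # what Master must do, if anything

def triage_inbox(emails: list[str]) -> list[AuditRow]:
    """Toy triage: N items in, N audit rows out, no summaries."""
    rows = []
    for email in emails:
        if "invoice" in email:
            rows.append(AuditRow(email, Status.ACTION_NEEDED,
                                 "mentions money", "review and pay"))
        elif "newsletter" in email:
            rows.append(AuditRow(email, Status.IGNORED, "bulk mail"))
        else:
            rows.append(AuditRow(email, Status.DONE, "archived, no action"))
    return rows

def override_rate(rows: list[AuditRow], overrides: int) -> float:
    """Share of decisions Master reversed: a proxy for alignment."""
    return overrides / len(rows) if rows else 0.0
```

A full table of rows makes spot-checks and the override rate trivially computable, which a count like "archived 43 emails" never allows.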