---
name: alfred-learn
description: >
  Learning through prediction. Make predictions, monitor signals, refine,
  resolve. The refinement process IS the learning. Use when: "let's learn",
  "check predictions", "what should we predict", "refine predictions",
  "intelligence session", "/alfred-learn".
allowed-tools: Read, Write, Edit, Glob, Grep, Bash(git:*, pnpm:*), WebFetch, WebSearch, Task, Skill, AskUserQuestion, mcp__claude-in-chrome__*
---
# Alfred Learn
Learning through prediction. Make predictions, track signals, refine, resolve.
## The Philosophy
Learning happens during refinement, not just at resolution.
Predict → Monitor signals → Refine → (learning happens here) → Eventually resolve
When new information changes a prediction, you learn:
- What signals actually matter
- How much weight to give different sources
- Where the initial model was wrong
The log of "what changed my mind" is the real learning artifact.
## Two Types of Predictions
| Type | Horizon | Purpose |
|---|---|---|
| Short-term | Days to weeks | Quick feedback, calibration training |
| Long-term | Months to a year | Strategic, thesis-level |
Short-term predictions give fast feedback. Long-term predictions compound strategic thinking.
## Bidirectional Predictions
Both Master and Alfred make predictions. Both learn from outcomes.
| Flow | Process | Learning |
|---|---|---|
| Master predicts | Alfred monitors signals, presents refinements | Master learns what signals matter |
| Alfred predicts | Master gives feedback on reasoning | Alfred learns where its model is wrong |
### Why Alfred Predictions Matter
- Tests Alfred's context — Is Alfred's model of reality accurate?
- Reveals divergence — Where does Alfred's judgment differ from Master's?
- Creates accountability — Alfred must defend its reasoning
- Compounds improvement — Master's corrections update Alfred's judgment
### Alfred Prediction Format
### A-N: [Prediction title]
**Made:** YYYY-MM-DD
**Confidence:** X%
**Resolve by:** YYYY-MM-DD
**Prediction:** [Specific, falsifiable statement]
**Falsification:** [What would prove it wrong]
**Alfred's reasoning:** [Why Alfred believes this — what context/patterns led here]
**Refinement Log:**
| Date | Signal | Confidence Change | Reasoning |
|------|--------|-------------------|-----------|
**Master Feedback:**
| Date | Feedback | Alfred Learning |
|------|----------|-----------------|
The Master Feedback table is critical — it's how Alfred learns from corrections.
## When to Invoke
| Trigger | Action |
|---|---|
| "Let's learn" | Full cycle: check signals, refine predictions, make new ones |
| "Check predictions" | Review active predictions, any due for resolution? |
| "What should we predict?" | Generate new predictions from current context |
| Weekly cadence | Minimum: refine existing predictions with new signals |
| Major news event | Does this change any active prediction? |
## Inputs
Primary:
context/PREDICTIONS.md # Active predictions and refinement logs
Supporting:
context/INTELLIGENCE_GOALS.md # What we're trying to know
context/MASTER_MODEL.md # Alfred's model of Master
context/STOCK_PORTFOLIO.md # Portfolio positions (for market predictions)
context/RESEARCH_CONFIG.md # Topics and sources to monitor
## Execution
### Phase 1: Review Active Predictions
Read: PREDICTIONS.md
For each active prediction:
Check if due for resolution
- If resolve-by date ≤ today → Go to Phase 5 (Resolve)
Assess current state
- Last refinement date
- Current confidence
- What signals would change it?
Present summary to Master:
## Active Predictions
### Short-Term (N active)
| ID | Prediction | Confidence | Resolve By | Last Refined |
|----|------------|------------|------------|--------------|
### Long-Term (N active)
| ID | Prediction | Confidence | Resolve By | Last Refined |
|----|------------|------------|------------|--------------|
### Due for Resolution
[Any predictions past their resolve-by date]
### Phase 2: Search for Signals
Goal: Find information that could change active predictions.
For each active prediction, identify relevant search:
| Prediction Domain | Search Skill | Focus |
|---|---|---|
| AI/ML releases | alfred-search-x, alfred-search-web | Announcements, benchmarks |
| Market/stocks | alfred-search-stocks | Earnings, analyst reports |
| Company moves | alfred-search-web | News, product launches |
| Research | alfred-search-papers | New papers, breakthroughs |
| Community sentiment | alfred-search-reddit | Discussion, reactions |
Execute searches focused on active predictions:
### Searching for: [Prediction ID]
**Query:** [Specific search]
**Source:** [Skill used]
**Relevant findings:**
- [Finding 1]
- [Finding 2]
Important: Search with purpose. "Does this change any prediction?" not "What's interesting?"
### Phase 3: Refine Predictions
For each prediction with new relevant signals:
Present the signal to Master
### Signal for [Prediction ID]
**Current:** [Prediction] at [X]% confidence
**New signal:** [What we found]
**My read:** This [increases/decreases/doesn't change] confidence because [reasoning]
**Proposed update:** [X]% → [Y]%

Get Master's assessment
- Does Master agree with the read?
- Different weighting of the signal?
- Additional context?
Update PREDICTIONS.md
- Add row to Refinement Log
- Update confidence if changed
- Log the reasoning
The refinement log entry is the learning artifact:
| Date | Signal | Confidence Change | Reasoning |
|------|--------|-------------------|-----------|
| YYYY-MM-DD | [What we found] | X% → Y% | [Why this signal matters] |
### Phase 4: Make New Predictions (Master)
Ask: What should we predict that we're not currently tracking?
Sources for new predictions:
- Current events that will resolve soon
- Decisions Master is facing
- Gaps in prediction coverage (domains with no active predictions)
- Beliefs that haven't been made explicit
Creative Prompts (run through before finalizing):
- What's the second-order effect?
- Who benefits that no one is talking about?
- What would have to be true for the opposite?
- What connection between unrelated things are we missing?
- What question is no one asking?
The goal is insight, not just being right. Explore angles others aren't looking at.
For each new prediction:
Draft with Master
### Proposed Prediction
**Type:** Short-term / Long-term
**Prediction:** [Specific, falsifiable statement]
**Confidence:** [X]%
**Resolve by:** [Date]
**Falsification:** [What would prove it wrong]

Discuss calibration
- Is the confidence level justified?
- What would make you more/less confident?
- What signals should we monitor?
Add to PREDICTIONS.md
### Phase 4b: Alfred's Predictions
Alfred proposes predictions based on its own analysis. Master provides feedback.
Before generating, run through Creative Prompts:
- What's the second-order effect?
- Who benefits that no one is talking about?
- What would have to be true for the opposite?
- What connection between unrelated things are we missing?
- What question is no one asking?
Alfred generates predictions
Based on context files, recent signals, and pattern recognition:

### Alfred's Proposed Prediction
**Prediction:** [Specific, falsifiable statement]
**Confidence:** [X]%
**Resolve by:** [Date]
**Alfred's reasoning:** [Why I believe this — what patterns/context led here]
**What would change my mind:** [Signals that would decrease confidence]

Master provides feedback
- Is the reasoning sound?
- What is Alfred missing?
- What signals is Alfred over/under-weighting?
- Does Master agree or disagree? Why?
Record the feedback
**Master Feedback:**
| Date | Feedback | Alfred Learning |
|------|----------|-----------------|
| YYYY-MM-DD | [Master's response] | [What Alfred learns from this] |

Add to "Alfred's Predictions" section in PREDICTIONS.md
Why this matters: Alfred's predictions test whether its model of reality is accurate. Master's feedback corrects Alfred's judgment over time.
### Phase 5: Resolve Due Predictions
For predictions past their resolve-by date:
Gather evidence
- What actually happened?
- Search for confirmation if needed
Score with Master
### Resolving: [Prediction ID]
**Prediction:** [What we predicted]
**Initial confidence:** [X]%
**Final confidence:** [Y]% (at resolution)
**Outcome:** CORRECT / WRONG / PARTIAL
**What happened:** [Brief description]
**Key learning:** [What signal should have gotten more/less weight?]

Update PREDICTIONS.md
- Move to Resolved table
- Update calibration stats
- Update Refinement Patterns
Extract learning
- If wrong: Why? What did we miss?
- If right: Was it for the right reasons?
- What would we do differently?
### Phase 6: Update Calibration
After resolving predictions, update calibration tracking:
## Calibration Update
**Predictions resolved this session:** N
**Correct:** N (X%)
**Wrong:** N (X%)
**Calibration check:**
- Predictions at 70% confidence were correct X% of the time
- [Overconfident/Underconfident/Well-calibrated]
**Refinement patterns:**
- Signals that correctly updated predictions: [Examples]
- Signals that were noise: [Examples]
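The calibration check above ("predictions at 70% confidence were correct X% of the time") is a simple bucketing exercise. A minimal sketch, assuming resolved predictions are available as (stated confidence, was correct) pairs; the function name and return shape are illustrative:

```python
from collections import defaultdict

def calibration_report(resolved: list[tuple[int, bool]]) -> dict[int, tuple[int, float]]:
    """Bucket resolved predictions by stated confidence (nearest 10%)
    and report (count, hit rate) per bucket.
    """
    buckets: dict[int, list[bool]] = defaultdict(list)
    for pct, correct in resolved:
        buckets[round(pct, -1)].append(correct)
    return {b: (len(v), sum(v) / len(v)) for b, v in sorted(buckets.items())}
```

Well-calibrated means each bucket's hit rate roughly matches the bucket label; a 70% bucket that resolves correct 90% of the time signals underconfidence, and 50% signals overconfidence.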
### Phase 7: Present Results
# Learn Results — YYYY-MM-DD
## Summary
| Metric | Value |
|--------|-------|
| Predictions refined | N |
| New predictions made | N |
| Predictions resolved | N |
| Calibration accuracy | X% |
## Key Refinements
### [Prediction ID]
- **Signal:** [What we found]
- **Update:** X% → Y%
- **Learning:** [What this teaches us about signal weighting]
## New Predictions
| ID | Prediction | Confidence | Resolve By |
|----|------------|------------|------------|
## Resolutions
| ID | Outcome | Learning |
|----|---------|----------|
## Next Actions
- [What to monitor before next session]
- [Predictions approaching resolve date]
### Phase 8: Commit
git add -A && git commit -m "[alfred] Learn $(date +%Y-%m-%d): refined N, added N, resolved N
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>"
## Output (MANDATORY)
EVERY learn session MUST write to: outputs/YYYY-MM-DD-HHmm-learn.md
This file is the permanent record of:
- Signals found and their sources
- Confidence changes and reasoning
- New predictions proposed
- Discussion with Master
- Calibration status
Do this BEFORE presenting results to Master. The output file documents the full session.
## Quick Mode
When time is limited:
- Check due predictions (any to resolve?)
- One focused search (biggest active prediction)
- Refine or confirm (update log either way)
- One new short-term prediction (fast feedback)
Even a quick cycle compounds learning.
## Anti-Patterns
| Don't | Do |
|---|---|
| Search broadly for "interesting" things | Search to refine specific predictions |
| Wait for resolution to learn | Learn from every refinement |
| Make vague predictions | Specific, time-bound, falsifiable |
| Skip the refinement log | Log every signal and reasoning |
| Only make long-term predictions | Mix short-term for fast calibration |
| Ignore wrong predictions | Extract maximum learning from failures |
## The Core Loop
1. What do we predict?
2. What signals would change that?
3. Search for those signals
4. Update predictions based on findings
5. Log what changed and why (THIS IS THE LEARNING)
6. Resolve when due, extract lessons
7. Repeat
The goal isn't to be right. The goal is to get better at predicting.