---
name: alfred-learn
description: >
  Learning through prediction. Make predictions, monitor signals, refine,
  resolve. The refinement process IS the learning. Use when: "let's learn",
  "check predictions", "what should we predict", "refine predictions",
  "intelligence session", "/alfred-learn".
allowed-tools: Read, Write, Edit, Glob, Grep, Bash(git:*, pnpm:*), WebFetch, WebSearch, Task, Skill, AskUserQuestion, mcp__claude-in-chrome__*
---
# Alfred Learn
Learning through prediction. The log of "what changed my mind" is the real learning artifact.
Predict → Monitor signals → Refine → (learning happens here) → Resolve
## Required Context

**Primary:**

- `context/PREDICTIONS.md` — active predictions and refinement logs

**Supporting:**

- `context/INTELLIGENCE_GOALS.md` — what we're trying to know
- `context/WHO_IS_MASTER.md` — Master's identity, priorities, blind spots
- `context/STOCK_POSITIONS.md` — actual holdings (for market predictions)
- `context/RESEARCH_CONFIG.md` — topics and sources to monitor
## Process

### Phase 0: Create Session Log (MANDATORY FIRST STEP)
Before doing anything else, create `outputs/YYYY-MM-DD-HHmm-learn.md`:

```markdown
STATUS: IN PROGRESS
# Learn Session — YYYY-MM-DD HH:mm

## Session Log
- HH:mm — Session opened, reading predictions
```
All subsequent phases append to this file as they execute. If the session is interrupted, the file still contains everything up to that point.
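Phase 0 can be sketched as a small shell step; the `outputs/` path follows the convention above, and the filename timestamp is generated rather than typed:

```shell
# Create the session log before anything else; if the session is
# interrupted, this file preserves everything written so far.
mkdir -p outputs
log="outputs/$(date +%Y-%m-%d-%H%M)-learn.md"
cat > "$log" <<EOF
STATUS: IN PROGRESS
# Learn Session — $(date '+%Y-%m-%d %H:%M')

## Session Log
- $(date +%H:%M) — Session opened, reading predictions
EOF
echo "Session log created: $log"
```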
### Phase 1: Review Active Predictions
Read PREDICTIONS.md. For each active prediction:
- If resolve-by date ≤ today → go to Phase 5 (Resolve)
- Assess: last refinement date, current confidence, what signals would change it?
Present summary to Master:
- Short-term predictions (count, IDs, confidence, resolve dates)
- Long-term predictions (same)
- Any due for resolution
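The "resolve-by date ≤ today" check is trivial in shell because ISO-8601 dates compare lexicographically; a minimal sketch, with a placeholder date rather than a real prediction:

```shell
resolve_by="2020-01-01"            # placeholder resolve-by date
today="$(date +%Y-%m-%d)"
# ISO dates sort lexicographically, so plain string comparison suffices
if [ ! "$today" \< "$resolve_by" ]; then
  echo "DUE: send to Phase 5 (Resolve)"
else
  echo "ACTIVE: assess signals in Phase 2"
fi
```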
### Phase 2: Search for Signals
Always search X/Twitter first. X is where AI & tech leaders post in real time — breaking announcements, benchmarks, contrarian takes surface on X before traditional media.
High-signal X accounts: @karpathy, @sama, @gdb, @paulg, @naval, @balajis, @elonmusk, @daborito, @WesRoth, @deepaborito, @hlohaus
For each active prediction, search the relevant domain:
- AI/ML releases → X/Twitter first, then alfred-search-web
- Market/stocks → alfred-search-stocks
- Company moves → X/Twitter, then alfred-search-web
- Research → alfred-search-papers
- Community sentiment → X/Twitter, then alfred-search-reddit
- Developer pulse → X/Twitter
Search with purpose: "Does this change any prediction?" not "What's interesting?"
### Phase 3: Refine Predictions
For each prediction with new relevant signals:
- Present to Master: current state, new signal, your read (increases/decreases/no change), proposed update
- Get Master's assessment
- Update PREDICTIONS.md — add row to refinement log, update confidence
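Appending a refinement-log row can be a one-liner; the column layout and the signal text below are illustrative, so match whatever table PREDICTIONS.md actually uses:

```shell
# Append one refinement entry: date | signal | confidence change | reasoning
# (the signal text is a made-up example)
printf '| %s | %s | %s | %s |\n' \
  "$(date +%Y-%m-%d)" \
  "competitor benchmark posted on X" \
  "60% -> 70%" \
  "signal exceeds the threshold named in the prediction" \
  >> PREDICTIONS.md
```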
### Phase 4: New Predictions (Master)
Ask: What should we predict that we're not currently tracking?
Sources: current events, decisions Master is facing, gaps in coverage, implicit beliefs.
Creative prompts before finalizing:
- What's the second-order effect?
- Who benefits that no one is talking about?
- What would have to be true for the opposite?
- What connection between unrelated things are we missing?
- What question is no one asking?
For each new prediction: draft with Master (specific, falsifiable, time-bound), discuss calibration, add to PREDICTIONS.md.
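A minimal shape for a new entry, appended the same way; the ID, fields, and the claim itself are all hypothetical, since the skill only requires specific, falsifiable, and time-bound:

```shell
# Illustrative prediction entry — the schema is an assumption, not a rule
cat >> PREDICTIONS.md <<'EOF'
### P-042 (example ID)
- Claim: an open-weights model matches GPT-4-class benchmarks by 2026-06-30
- Confidence: 55%
- Resolve by: 2026-06-30
- Falsifier: no open-weights release crosses the benchmark bar by the date
EOF
```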
### Phase 4b: Alfred's Predictions
Alfred proposes predictions based on its own analysis. Master provides feedback.
Run creative prompts first, then:
- Generate predictions with reasoning
- Get Master's feedback (is reasoning sound? what's missing? agree/disagree?)
- Record feedback in Master Feedback table — this is how Alfred learns
- Add to "Alfred's Predictions" section in PREDICTIONS.md
### Phase 5: Resolve Due Predictions
For predictions past resolve-by date:
- Gather evidence (search if needed)
- Score with Master: CORRECT / WRONG / PARTIAL
- Extract learning: if wrong, why? If right, was it for the right reasons?
- Update PREDICTIONS.md — move to Resolved, update calibration
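Recording a resolution is another append; the score and the learning note come from the discussion with Master, and the values below are illustrative:

```shell
score="WRONG"   # CORRECT / WRONG / PARTIAL, agreed with Master
learning="Overweighted a single X rumor; require corroboration next time."
printf -- '- %s | %s | %s\n' "$(date +%Y-%m-%d)" "$score" "$learning" \
  >> PREDICTIONS.md
```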
### Phase 6: Update Calibration
After resolving, update calibration tracking:
- Predictions resolved, correct %, wrong %
- Calibration check (predictions at X% confidence correct Y% of the time)
- Signals that correctly updated vs. signals that were noise
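The calibration update is simple arithmetic; a sketch with made-up tallies (scoring PARTIAL as half credit is an assumption here, not the skill's rule):

```shell
correct=7; wrong=2; partial=1            # example tallies, not real data
resolved=$((correct + wrong + partial))
# Integer percent, with PARTIAL counted as 0.5 of a correct resolution
accuracy=$(( (correct * 100 + partial * 50) / resolved ))
echo "Resolved: $resolved | Accuracy: ${accuracy}%"
# → Resolved: 10 | Accuracy: 75%
```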
### Phase 7: Commit

```bash
git add -A && git commit -m "[alfred] Learn $(date +%Y-%m-%d): refined N, added N, resolved N

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>"
```
### Phase 8: Append to Daily Log

Append to `memory/YYYY-MM-DD.md`:

```
HHmm learn: refined N predictions, added N new, resolved N. Key signals: [brief]. Calibration: X% accuracy on resolved.
```
Update session log STATUS from "IN PROGRESS" to "COMPLETE".
## Quick Mode
When time is limited:
- Check due predictions (any to resolve?)
- One focused search (biggest active prediction)
- Refine or confirm (update log either way)
- One new short-term prediction (fast feedback)
## Anti-Patterns
- Don't search broadly for "interesting" things → search to refine specific predictions
- Don't wait for resolution to learn → learn from every refinement
- Don't make vague predictions → specific, time-bound, falsifiable
- Don't skip the refinement log → log every signal and reasoning
- Don't only make long-term predictions → mix short-term for fast calibration
- Don't ignore wrong predictions → extract maximum learning from failures