Compare commits

46 Commits

Author SHA1 Message Date
6e4ff3af86 feat: add DDD capability for vision-to-issues workflow
Add complete DDD capability set for breaking down product vision into
implementation issues using Domain-Driven Design principles.

Components:
- issue-writing skill: Enhanced with user story format and vertical slices
- ddd skill: Strategic and tactical DDD patterns (bounded contexts, aggregates, commands, events)
- ddd-breakdown skill: User-invocable workflow (/ddd-breakdown)
- ddd-analyst agent: Analyzes manifesto/vision/code, generates DDD-structured user stories

Workflow: Read manifesto + vision → analyze codebase → identify bounded contexts
→ map features to DDD patterns → generate user stories → create Gitea issues

Co-Authored-By: Claude Code <noreply@anthropic.com>
2026-01-12 13:02:56 +01:00
dd9c1c0090 refactor(skills): apply progressive disclosure to gitea skill
Split gitea skill into main file and reference documentation.
Main SKILL.md now focuses on core commands (154 lines, down from 201),
with setup/auth and CI/Actions moved to reference files.

Co-Authored-By: Claude Code <noreply@anthropic.com>
2026-01-12 12:32:13 +01:00
90b18b95c6 Restructure agents and skills to reflect the skills/commands merge 2026-01-12 11:47:52 +01:00
4de58a3a8c Change the recommended skill size limit to 300 lines 2026-01-12 11:25:08 +01:00
04b6c52e9a chore: remove global opus model setting from settings.json
Remove top-level model override to allow per-skill/agent model configuration.
Reorder sections for consistency.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-11 18:12:01 +01:00
f424a7f992 feat(skills): modernize capability-writing with Anthropic best practices
Updates capability-writing skill with progressive disclosure structure based on
Anthropic's January 2025 documentation. Implements Haiku-first approach (12x
cheaper, 2-5x faster than Sonnet).

Key changes:
- Add 5 core principles: conciseness, progressive disclosure, script bundling,
  degrees of freedom, and Haiku-first model selection
- Restructure with best-practices.md, templates/, examples/, and reference/
- Create 4 templates: user-invocable skill, background skill, agent, helper script
- Add 3 examples: simple workflow, progressive disclosure, with scripts
- Add 3 reference docs: frontmatter fields, model selection, anti-patterns
- Update create-capability to analyze complexity and recommend structures
- Default all new skills/agents to Haiku unless justified

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-11 18:10:53 +01:00
7406517cd9 refactor: migrate commands to user-invocable skills
Claude Code has unified commands into skills with the user-invocable
frontmatter field. This migration:

- Converts 20 commands to skills with user-invocable: true
- Consolidates docs into single writing-capabilities.md
- Rewrites capability-writing skill for unified model
- Updates CLAUDE.md, Makefile, and other references
- Removes commands/ directory

Skills now have two types:
- user-invocable: true - workflows users trigger with /name
- user-invocable: false - background knowledge auto-loaded

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 16:39:55 +01:00
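The two skill types above can be illustrated as frontmatter. A minimal sketch in shell: the `user-invocable` field name comes from the commit itself, but the skill names, descriptions, and paths are hypothetical.

```shell
# Illustrative only: write the two skill types' frontmatter to files.
# `user-invocable` is from the commit; names/descriptions are made up.
base=$(mktemp -d)
mkdir -p "$base/skills/work-issue" "$base/skills/gitea"

# User-invocable skill: a workflow users trigger with /work-issue
cat > "$base/skills/work-issue/SKILL.md" <<'EOF'
---
name: work-issue
description: Fetch issue, create branch, implement, create PR
user-invocable: true
---
EOF

# Background skill: knowledge auto-loaded when relevant
cat > "$base/skills/gitea/SKILL.md" <<'EOF'
---
name: gitea
description: Gitea CLI reference for issue/PR management
user-invocable: false
---
EOF
```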
3d9933fd52 Fix typo: use REPO_PATH instead of REPO_NAME
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 00:15:10 +01:00
81c2a90ce1 Spawn agents with cwd set to their worktree
Resolves issue #86 by having the spawn-issues orchestrator create worktrees
upfront and pass the worktree paths to agents, instead of having agents
create their own worktrees in sibling directories outside the sandbox.

Changes:
- spawn-issues orchestrator creates all worktrees before spawning agents
- issue-worker, pr-fixer, code-reviewer accept optional WORKTREE_PATH
- When WORKTREE_PATH is provided, agents work directly in that directory
- Backward compatible: agents still support creating their own worktrees
  if WORKTREE_PATH is not provided
- Orchestrator handles all worktree cleanup after agents complete
- Eliminates permission denied errors from agents trying to access
  sibling worktree directories

This ensures agents operate within their sandbox while still being able to
work with isolated git worktrees for parallel implementation.

Closes #86

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 00:12:14 +01:00
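The worktree-upfront flow described in that commit can be sketched as a runnable toy. The repo location and issue numbers here are made up, not from the source; the point is only that every worktree exists as a sibling directory before any agent is spawned into it.

```shell
# Sketch: orchestrator creates all worktrees upfront, so each agent can be
# spawned with cwd already inside its own isolated worktree.
set -eu
repo_dir=$(mktemp -d)/repo               # stand-in for the real repository
git init -q "$repo_dir"
git -C "$repo_dir" -c user.name=ci -c user.email=ci@example.com \
    commit -q --allow-empty -m "init"

for n in 42 43; do
  # one worktree per issue, created as a sibling of the repo
  git -C "$repo_dir" worktree add -q "${repo_dir}-issue-${n}" -b "issue-${n}"
done

git -C "$repo_dir" worktree list         # each listed path becomes an agent's cwd
```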
bbd7870483 Configure model settings for commands, agents, and skills
Set explicit model preferences to optimize for speed vs capability:

- haiku: 11 commands, 2 agents (issue-worker, pr-fixer), 10 skills
  Fast execution for straightforward tasks

- sonnet: 4 commands (groom, improve, plan-issues, review-pr),
  1 agent (code-reviewer)
  Better judgment for analysis and review tasks

- opus: 2 commands (arch-refine-issue, arch-review-repo),
  1 agent (software-architect)
  Deep reasoning for architectural analysis

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 00:06:53 +01:00
a4c09b8411 Add lint checking to code-reviewer agent
- Add linter detection logic that checks for common linter config files
  (ESLint, Ruff, Flake8, Pylint, golangci-lint, Clippy, RuboCop)
- Add instructions to run linter on changed files only
- Add "Lint Issues" section to review output format
- Clearly distinguish lint issues from logic/security issues
- Document that lint issues alone should not block PRs

Closes #25

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 18:27:26 +00:00
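The detection step named in that commit amounts to probing for well-known linter config files. A hypothetical sketch (the file list is abridged and the function name is an assumption, not the agent's actual code):

```shell
# Hypothetical: report the first recognized linter config file in a directory.
detect_linter() {
  dir=$1
  for cfg in .eslintrc.json ruff.toml .flake8 .pylintrc .golangci.yml .rubocop.yml; do
    if [ -e "$dir/$cfg" ]; then
      echo "$cfg"
      return 0
    fi
  done
  echo "none"
}

proj=$(mktemp -d)
touch "$proj/ruff.toml"
detect_linter "$proj"    # prints: ruff.toml
```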
d5deccde82 Add /create-capability command for scaffolding capability sets
Introduces a new command that guides users through creating capabilities
for the architecture repository. The command analyzes user descriptions,
recommends appropriate component combinations (skill, command, agent),
gathers necessary information, generates files from templates, and presents
them for approval before creation.

Closes #75

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 18:22:49 +00:00
90ea817077 Add explicit model specifications to commands and agents
- Add model: sonnet to issue-worker agent (balanced for implementation)
- Add model: sonnet to pr-fixer agent (balanced for feedback iteration)
- Add model: haiku to /dashboard command (read-only display)
- Add model: haiku to /roadmap command (read-only categorization)
- Document rationale for each model selection in frontmatter comments

Closes #72

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 18:22:44 +00:00
110c3233be Add /pr command for quick PR creation from current branch
Creates a lighter-weight PR creation flow for when you're already on a
branch with commits. Features:
- Auto-generates title from branch name or commits
- Auto-generates description summarizing changes
- Links to related issue if branch name contains issue number
- Triggers code-reviewer agent after PR creation

Closes #19

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 18:22:10 +00:00
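The issue-linking behavior above ("branch name contains issue number") suggests a simple heuristic. A hedged sketch, with hypothetical branch names; the real command may parse differently:

```shell
# Assumption: take the first number embedded in the branch name as the issue id.
issue_from_branch() {
  echo "$1" | grep -oE '[0-9]+' | head -n1
}

issue_from_branch "issue-42-dark-mode"   # prints: 42
```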
6dd760fffd Add CI status section to dashboard command
- Add new section to display recent workflow runs from tea actions runs
- Show status indicators: [SUCCESS], [FAILURE], [RUNNING], [PENDING]
- Highlight failed runs with bold formatting for visibility
- Gracefully handle repos without CI configured
- Include example output format for clarity

Closes #20

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 18:21:29 +00:00
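One plausible mapping from a run status to the dashboard indicators named in that commit. The input status strings are assumptions; only the bracketed indicators come from the commit message:

```shell
# Assumed status strings mapped to the commit's display indicators.
status_icon() {
  case "$1" in
    success) echo "[SUCCESS]" ;;
    failure) echo "**[FAILURE]**" ;;   # bold for visibility, per the commit
    running) echo "[RUNNING]" ;;
    *)       echo "[PENDING]" ;;
  esac
}

status_icon failure   # prints: **[FAILURE]**
```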
1a6c962f1d Add discovery phase to /plan-issues workflow
The planning process previously jumped directly from understanding a feature
to breaking it down into issues. This led to proposing issues without first
understanding the user's actual workflow and where the gaps are.

Added a discovery phase that requires walking through:
- Who is the specific user
- What is their goal
- Step-by-step workflow to reach the goal
- What exists today
- Where the workflow breaks or has gaps
- What's the MVP

Issues are now derived from workflow gaps rather than guessing.

Closes #29

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 18:21:19 +00:00
065635694b feat(commands): add /commit command for conventional commits
Add streamlined commit workflow that analyzes staged changes and
generates conventional commit messages (feat:, fix:, etc.) with
user approval before committing.

Closes #18

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 19:18:37 +01:00
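The subjects this workflow generates follow the Conventional Commits convention. A minimal validity check, with an abridged type list (the real workflow may accept more types):

```shell
# Check a commit subject against the conventional-commit shape: type(scope): text
is_conventional() {
  echo "$1" | grep -Eq '^(feat|fix|refactor|chore|docs|test)(\([a-z0-9-]+\))?(!)?: .+'
}

is_conventional "feat(skills): add /commit workflow" && echo ok   # prints: ok
```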
7ed31432ee Fix subagent_type in spawn-pr-fixes and review-pr commands
- spawn-pr-fixes: "general-purpose" → "pr-fixer"
- review-pr: Added explicit subagent_type: "software-architect"

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 16:14:14 +01:00
e1c19c12c3 Fix spawn-issues to use correct subagent_type for each agent
- Issue worker: "general-purpose" → "issue-worker"
- Code reviewer: Added explicit subagent_type: "code-reviewer"
- PR fixer: Added explicit subagent_type: "pr-fixer"

Using the wrong agent type caused permission loops when spawning
background agents.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 16:13:09 +01:00
c9a72bf1d3 Add capability-writing skill with templates and design guidance
Creates a skill that teaches how to design and create capabilities
(skill + command + agent combinations) for the architecture repository.

Includes:
- Component templates for skills, commands, and agents
- Decision tree and matrix for when to use each component
- Model selection guidance (haiku/sonnet/opus)
- Naming conventions and anti-patterns to avoid
- References to detailed documentation in docs/
- Checklists for creating each component type

Closes #74

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 15:42:50 +01:00
f8d4640d4f Add architecture beliefs to manifesto and enhance software-architecture skill
- Add Architecture Beliefs section to manifesto with outcome-focused beliefs:
  auditability, business language in code, independent evolution, explicit over implicit
- Create software-architecture.md as human-readable documentation
- Enhance software-architecture skill with beliefs→patterns mapping (DDD, Event
  Sourcing, event-driven communication) and auto-trigger description
- Update work-issue command to reference skill and check project architecture
- Update issue-worker agent with software-architecture skill
- Add Architecture section template to vision-management skill

The skill is now auto-triggered when implementing, reviewing, or planning
architectural work. Project-level architecture choices go in vision.md.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 14:52:40 +01:00
73caf4e4cf Fix spawn-issues: use worktrees for code reviewers
The code reviewer prompt was minimal and didn't specify worktree setup,
causing parallel reviewers to interfere with each other by checking out
different branches in the same directory.

Changes:
- Add worktree setup/cleanup to code reviewer prompt (like issue-worker/fixer)
- Add branch tracking to issue state
- Add note about passing branch name to reviewers
- Expand reviewer prompt with full review process

This ensures each reviewer works in isolation at:
  ../<repo>-review-<pr-number>

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 01:14:16 +01:00
095b5e7982 Add /arch-refine-issue command for architectural issue refinement
Creates a new command that refines issues with architectural perspective
by spawning the software-architect agent to analyze the codebase before
proposing implementation guidance. The command:

- Fetches issue details and spawns software-architect agent
- Analyzes existing patterns and affected components
- Identifies architectural concerns and dependencies
- Proposes refined description with technical notes
- Allows user to apply, edit, or skip the refinement

Closes #59

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 00:10:42 +00:00
8f0b50b9ce Enhance /review-pr with software architecture review
Add software architecture review as a standard part of PR review process:
- Reference software-architecture skill for patterns and checklists
- Spawn software-architect agent for architectural analysis
- Add checks for pattern consistency, dependency direction, breaking changes,
  module boundaries, and error handling
- Structure review output with separate Code Review and Architecture Review
  sections

Closes #60

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 00:09:50 +00:00
3a64d68889 Add /arch-review-repo command for repository architecture reviews
Creates a new command that spawns the software-architect agent to perform
comprehensive architecture audits. The command analyzes directory structure,
package organization, patterns, anti-patterns, dependencies, and test coverage,
then presents prioritized recommendations with a health score.

Closes #58

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 01:05:47 +01:00
c27659f1dd Update spawn-issues to event-driven pattern
Replace polling loop with task-notification based orchestration.
Background tasks send notifications when complete - no need to poll.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 01:03:17 +01:00
392228a34f Add software-architect agent for architectural analysis
Create the software-architect agent that performs deep architectural
analysis on codebases. The agent:

- References software-architecture skill for patterns and checklists
- Supports three analysis types: repo-audit, issue-refine, pr-review
- Analyzes codebase structure and patterns
- Applies architectural review checklists from the skill
- Identifies anti-patterns (god packages, circular deps, etc.)
- Generates prioritized recommendations (P0-P3)
- Returns structured ARCHITECT_ANALYSIS_RESULT for calling commands

Closes #57

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 00:59:46 +01:00
7d4facfedc Fix code-reviewer agent: heredoc bug and branch cleanup
- Add warning about heredoc syntax with tea comment (causes backgrounding)
- Add tea pulls clean step after merging PRs
- Agent already references gitea skill which documents the heredoc issue

Closes #62

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 23:50:13 +00:00
8ed646857a Add software-architecture skill
Creates the foundational skill that encodes software architecture
best practices, review checklists, and patterns for Go and generic
architecture guidance.

Closes #56

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 00:04:04 +01:00
22962c22cf Update spawn-issues to concurrent pipeline with status updates
- Each issue flows independently through: implement → review → fix → review
- Don't wait for all workers before starting reviews
- Print status update as each step completes
- Poll loop checks all tasks, advances each issue independently
- State machine: implementing → reviewing → fixing → approved/failed

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 18:11:05 +01:00
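The per-issue state machine named in that commit can be encoded as a transition table. The states come from the commit message; the event names are assumptions added for illustration:

```shell
# Hypothetical encoding of: implementing -> reviewing -> fixing -> approved/failed.
# Event names (pr-created, approved, changes-needed, pushed, error) are assumed.
next_state() {
  case "$1/$2" in
    implementing/pr-created)  echo reviewing ;;
    reviewing/approved)       echo approved ;;
    reviewing/changes-needed) echo fixing ;;
    fixing/pushed)            echo reviewing ;;
    */error)                  echo failed ;;
    *)                        echo "$1" ;;   # unknown event: stay put
  esac
}

next_state reviewing changes-needed   # prints: fixing
```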
3afe930a27 Refactor spawn-issues as orchestrator
spawn-issues now orchestrates the full workflow:
- Phase 1: Spawn issue-workers in parallel, wait for completion
- Phase 2: Review loop - spawn code-reviewer, if needs work spawn pr-fixer
- Phase 3: Report final status

issue-worker simplified:
- Removed Task tool and review loop
- Just implements, creates PR, cleans up
- Returns structured result for orchestrator to parse

Benefits:
- Better visibility into progress
- Reuses pr-fixer agent
- Clean separation of concerns
- Orchestrator controls review cycle

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 17:33:22 +01:00
7dffdc4e77 Add review loop to spawn-issues agent prompt
The inline prompt in spawn-issues.md was missing the review loop
that was added to issue-worker/agent.md. Now includes:
- Step 7: Spawn code-reviewer synchronously, fix and re-review if needed
- Step 9: Concise final summary output

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 17:26:21 +01:00
d3bc674b4a Add /spawn-pr-fixes command and pr-fixer agent
New command to spawn parallel agents that address PR review feedback:
- /spawn-pr-fixes 12 15 18 - fix specific PRs
- /spawn-pr-fixes - auto-find PRs with requested changes

pr-fixer agent workflow:
- Creates worktree from PR branch
- Reads review comments
- Addresses each piece of feedback
- Commits and pushes fixes
- Runs code-reviewer synchronously
- Loops until approved (max 3 iterations)
- Cleans up worktree
- Outputs concise summary

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 17:14:24 +01:00
0692074e16 Add review loop and concise summary to issue-worker agent
- Add Task tool to spawn code-reviewer synchronously
- Add review loop: fix issues and re-review until approved (max 3 iterations)
- Add final summary format for cleaner output to spawning process
- Reviewer works in same worktree, cleanup only after review completes

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 17:07:42 +01:00
c67595b421 Add skills frontmatter to issue-worker agent
Background agents need skills specified in frontmatter rather than via
the @ syntax, which may not expand for Task-spawned agents.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 16:57:22 +01:00
a7d7d60440 Add /spawn-issues command for parallel issue work
New command that spawns background agents to work on multiple
issues simultaneously, each in an isolated git worktree.

- commands/spawn-issues.md: Entry point, parses args, spawns agents
- agents/issue-worker/agent.md: Autonomous agent that implements
  a single issue (worktree setup, implement, PR, cleanup)

Worktrees are automatically cleaned up after PR creation.
Branch remains on remote for follow-up work if needed.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 16:50:34 +01:00

65a107c2eb Add vertical vs horizontal slicing guidance
Adds guidance to prefer vertical slices (user-visible value) over
horizontal slices (technical layers) when planning and writing issues.

roadmap-planning skill:
- New "Vertical vs Horizontal Slices" section
- Demo test: "Can a user demo/test this independently?"
- Good vs bad examples table
- When horizontal slices are acceptable

issue-writing skill:
- New "Vertical Slices" section
- Demo test guidance
- Good vs bad issue titles table
- User-focused issue framing examples

Closes #31

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 13:21:52 +01:00
ff56168073 Mark skills as not user-invocable
Skills are knowledge modules referenced by commands, not
directly invoked by users. Added user-invocable: false to:
- backlog-grooming (used by /groom)
- claude-md-writing (used by /update-claude-md)
- code-review (used by /review-pr)
- issue-writing (used by /create-issue)
- roadmap-planning (used by /plan-issues)
- vision-management (used by /vision, /manifesto)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 13:01:19 +01:00
d980a0d0bc Add new frontmatter fields from Claude Code 2.1.0
Update documentation and apply new frontmatter capabilities:

Documentation:
- Add user-invocable, context, agent, hooks fields to writing-skills.md
- Add disallowedTools, permissionMode, hooks fields to writing-agents.md
- Add model, context, hooks, allowed-tools fields to writing-commands.md
- Document skill hot-reload, built-in agents, background execution

Skills:
- Add user-invocable: false to gitea (CLI reference)
- Add user-invocable: false to repo-conventions (standards reference)

Commands:
- Add context: fork to heavy exploration commands (improve, plan-issues,
  create-repo, update-claude-md)
- Add missing argument-hint to roadmap, manifesto, improve

Agents:
- Add disallowedTools: [Edit, Write] to code-reviewer for safety

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 14:19:56 +01:00
1f1d9961fc Add /update-claude-md command
Updates or creates CLAUDE.md with:
- Organization context section (links to manifesto, repos.md, vision)
- Current project structure from filesystem scan
- Architecture patterns inferred or asked

Preserves existing custom content, shows diff before writing.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 10:26:10 +01:00
057d4dac57 Add CLAUDE.md guidance and repository map
- Create claude-md-writing skill with best practices for CLAUDE.md files
- Create repos.md registry of all repos with status (Active/Planned/Splitting)
- Update /create-repo to include organization context section
- Update repo-conventions to reference new skill

Each repo's CLAUDE.md now links to manifesto, repos.md, and vision.md
so Claude always understands the bigger picture.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 10:24:10 +01:00
0e1a65f0e3 Add repo-conventions skill and /create-repo command
Skill documents standard repo structure, naming conventions,
open vs proprietary guidance, and CI/CD patterns.

Command scaffolds new repos with vision.md, CLAUDE.md, Makefile,
CI workflow, and .gitignore - all linked to the architecture repo.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 23:58:16 +01:00
305e4b8927 Add resource efficiency belief to manifesto
Software should run well on modest hardware. ARM64-native where possible.
Bloated software is a sign of poor engineering.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 23:50:07 +01:00
1dff275479 Use sibling repo convention for manifesto location
Product repos find the manifesto at ../architecture/manifesto.md.
This allows the architecture repo to be a sibling of product repos.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 22:21:31 +01:00
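The sibling-repo lookup from that commit reduces to a tiny path helper. A sketch with illustrative paths (the example repo root is made up):

```shell
# Product repos expect the manifesto in a sibling "architecture" checkout.
manifesto_path() {
  echo "$1/../architecture/manifesto.md"   # $1 = product repo root
}

manifesto_path "/srv/flowmade/product"
# prints: /srv/flowmade/product/../architecture/manifesto.md
```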
c88304a271 Update vision system to properly extend manifesto
- Rebuild vision.md to trace personas, jobs, and principles back to manifesto
- Improve /vision command with inheritance guidance and templates
- Update vision-management skill with explicit inheritance rules and formats

Product visions now explicitly extend (not duplicate) organization manifesto.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 22:14:25 +01:00
a3056bce12 Refocus manifesto on domain experts and organizations
Shift from developer-centric personas (solo dev, small team) to the actual
mission: empowering domain experts to create software without coding.

- Who We Serve: Domain experts, Agencies, Organizations (small → enterprise)
- Added "Empowering Domain Experts" beliefs section
- Integrated "build in public" into Who We Are
- Updated non-goals to align with new focus

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 21:33:25 +01:00
68 changed files with 8506 additions and 2308 deletions


@@ -16,9 +16,9 @@ make install
 | Component | Purpose |
 |-----------|---------|
 | `manifesto.md` | Organization vision, personas, beliefs, principles |
+| `software-architecture.md` | Architectural patterns (human docs, mirrored in skill) |
 | `learnings/` | Historical record and governance |
-| `commands/` | AI workflow entry points (/work-issue, /manifesto, etc.) |
-| `skills/` | Tool and practice knowledge |
+| `skills/` | AI workflows and knowledge modules |
 | `agents/` | Focused subtask handlers |
 | `settings.json` | Claude Code configuration |
 | `Makefile` | Install symlinks to ~/.claude/ |
@@ -28,9 +28,9 @@ make install
 ```
 architecture/
 ├── manifesto.md              # Organization vision and beliefs
+├── software-architecture.md  # Patterns linked to beliefs (DDD, ES)
 ├── learnings/                # Captured learnings and governance
-├── commands/                 # Slash commands (/work-issue, /dashboard)
-├── skills/                   # Knowledge modules (auto-triggered)
+├── skills/                   # User-invocable (/work-issue) and background skills
 ├── agents/                   # Focused subtask handlers (isolated context)
 ├── scripts/                  # Hook scripts (pre-commit, token loading)
 ├── settings.json             # Claude Code settings
@@ -41,17 +41,17 @@ All files symlink to `~/.claude/` via `make install`.
 ## Two Levels of Vision
-| Level | Document | Command | Purpose |
-|-------|----------|---------|---------|
+| Level | Document | Skill | Purpose |
+|-------|----------|-------|---------|
 | Organization | `manifesto.md` | `/manifesto` | Who we are, shared personas, beliefs |
 | Product | `vision.md` | `/vision` | Product-specific direction and goals |
 See the manifesto for our identity, personas, and beliefs about AI-augmented development.
-## Available Commands
-| Command | Description |
-|---------|-------------|
+## Available Skills
+| Skill | Description |
+|-------|-------------|
 | `/manifesto` | View/manage organization manifesto |
 | `/vision` | View/manage product vision and milestones |
 | `/work-issue <n>` | Fetch issue, create branch, implement, create PR |
@@ -77,28 +77,28 @@ tea logins add --name flowmade --url https://git.flowmade.one --token <your-toke
 ## Architecture Components
 ### Skills
+Knowledge modules that teach Claude how to do something.
+Skills come in two types:
+**User-invocable** (`user-invocable: true`): Workflows users trigger with `/skill-name`
+- **Purpose**: Orchestrate workflows with user interaction
+- **Location**: `skills/<name>/SKILL.md`
+- **Usage**: User types `/dashboard`, `/work-issue 42`, etc.
+**Background** (`user-invocable: false`): Knowledge auto-loaded when needed
 - **Purpose**: Encode best practices and tool knowledge
 - **Location**: `skills/<name>/SKILL.md`
-- **Usage**: Referenced by commands via `@~/.claude/skills/xxx/SKILL.md`
-### Commands
-User-facing entry points invoked with `/command-name`.
-- **Purpose**: Orchestrate workflows with user interaction
-- **Location**: `commands/<name>.md`
-- **Usage**: User types `/dashboard`, `/work-issue 42`, etc.
+- **Usage**: Referenced by other skills via `@~/.claude/skills/xxx/SKILL.md`
 ### Agents
 Focused units that handle specific subtasks in isolated context.
 - **Purpose**: Complex subtasks that benefit from isolation
-- **Location**: `agents/<name>/agent.md`
+- **Location**: `agents/<name>/AGENT.md`
 - **Usage**: Spawned via Task tool, return results to caller
 ### Learnings
-Captured insights from work, encoded into skills/commands/agents.
+Captured insights from work, encoded into skills/agents.
 - **Purpose**: Historical record + governance + continuous improvement
 - **Location**: `learnings/YYYY-MM-DD-title.md`


@@ -4,7 +4,7 @@ CLAUDE_DIR := $(HOME)/.claude
 REPO_DIR := $(shell pwd)
 # Items to symlink
-ITEMS := commands scripts skills agents settings.json
+ITEMS := scripts skills agents settings.json
 install:
 	@echo "Installing Claude Code config symlinks..."

VISION.md

@@ -1,5 +1,32 @@
 # Vision
+This product vision builds on the [organization manifesto](manifesto.md).
+## Who This Product Serves
+### Flowmade Developers
+The team building Flowmade's platform. They need efficient, consistent AI workflows to deliver on the organization's promise: helping domain experts create software without coding.
+*Extends: Agencies & Consultancies (from manifesto) - we are our own first customer.*
+### AI-Augmented Developers
+Developers in the broader community who want to treat AI assistance as a structured tool. They benefit from our "build in public" approach - adopting and adapting our workflows for their own teams.
+*Extends: The manifesto's commitment to sharing practices with the developer community.*
+## What They're Trying to Achieve
+These trace back to organization-level jobs:
+| Product Job | Enables Org Job |
+|-------------|-----------------|
+| "Help me work consistently with AI across sessions" | "Help me deliver maintainable solutions to clients faster" |
+| "Help me encode best practices so AI applies them" | "Help me reduce dependency on developers for business process changes" |
+| "Help me manage issues and PRs without context switching" | "Help me deliver maintainable solutions to clients faster" |
+| "Help me capture and share learnings from my work" | (Build in public commitment) |
 ## The Problem
 AI-assisted development is powerful but inconsistent. Claude Code can help with nearly any task, but without structure:
@@ -9,102 +36,60 @@ AI-assisted development is powerful but inconsistent. Claude Code can help with
 - Context gets lost when switching between tasks
 - There's no shared vocabulary for common patterns
-The gap isn't in AI capability—it's in how we use it.
+The gap isn't in AI capability - it's in how we use it.
 ## The Solution
-This project provides a **composable toolkit** for Claude Code that turns ad-hoc AI assistance into structured, repeatable workflows.
+A **composable toolkit** for Claude Code that turns ad-hoc AI assistance into structured, repeatable workflows.
 Instead of asking Claude to "help with issues" differently each time, you run `/work-issue 42` and get a consistent workflow: fetch the issue, create a branch, plan the work, implement, commit with proper references, and create a PR.
-The key insight: **encode your team's best practices into reusable components** that Claude can apply consistently.
-## Composable Components
-The system is built from three types of components that stack together:
-### Skills
-Skills are knowledge modules—focused documents that teach Claude how to do something well.
-Examples:
-- `issue-writing`: How to structure clear, actionable issues
-- `gitea`: How to use the Gitea CLI for issue/PR management
-- `backlog-grooming`: What makes a healthy backlog
-Skills don't do anything on their own. They're building blocks.
-### Agents
-Agents are small, focused units that handle specific subtasks in isolated context.
-Unlike commands (which run in the main conversation), agents are spawned via the Task tool to do a specific job and report back. They should be:
-- **Small and focused**: One clear responsibility
-- **Isolated**: Work without needing conversation history
-- **Result-oriented**: Return a specific output (analysis, categorization, generated content)
-Examples:
-- `code-reviewer`: Reviews a PR diff and reports issues
-- A hypothetical `categorize-milestone`: Given an issue, determines which milestone it belongs to
-Agents enable:
-- **Parallel processing**: Multiple agents can work simultaneously
-- **Context isolation**: Complex subtasks don't pollute the main conversation
-- **Reusability**: Same agent can be spawned by different commands
-### Commands
-Commands are the user-facing entry points—what you actually invoke.
-When you run `/plan-issues add dark mode`, the command:
-1. Understands what you're asking for
-2. References skills for knowledge (how to write issues, use Gitea, etc.)
-3. Optionally spawns agents for complex subtasks
-4. Guides you through the workflow with approvals
-5. Takes action (creates issues, PRs, etc.)
-Commands run in the main conversation context, using skills for knowledge and spawning agents only when isolated processing is beneficial.
-## Target Users
-This toolkit is for:
-- **Developers using Claude Code** who want consistent, efficient workflows
+### Architecture
+Three component types that stack together:
+| Component | Purpose | Example |
+|-----------|---------|---------|
+| **Skills** | Knowledge modules - teach Claude how to do something | `gitea`, `issue-writing` |
+| **Agents** | Focused subtask handlers in isolated context | `code-reviewer` |
+| **Commands** | User workflows - orchestrate skills and agents | `/work-issue`, `/dashboard` |
+Skills don't act on their own. Agents handle complex subtasks in isolation. Commands are the entry points that tie it together.
+## Product Principles
+These extend the organization's guiding principles:
- **Teams** who want to encode and share their best practices
- **Gitea/Git users** who want seamless issue and PR management integrated into their AI workflow
You should have:
- Claude Code CLI installed
- A Gitea instance (or adapt the tooling for GitHub/GitLab)
- Interest in treating AI assistance as a structured tool, not just a chat interface
## Guiding Principles
### Encode, Don't Repeat
If you find yourself explaining the same thing to Claude repeatedly, that's a skill waiting to be written. Capture it once, use it everywhere.
### Composability Over Complexity ### Composability Over Complexity
Small, focused components that combine well beat large, monolithic solutions. A skill should do one thing. An agent should serve one role. A command should trigger one workflow. Small, focused components that combine well beat large, monolithic solutions. A skill does one thing. An agent serves one role. A command triggers one workflow.
*Extends: "Small teams, big leverage"*
### Approval Before Action ### Approval Before Action
Destructive or significant actions should require user approval. Commands should show what they're about to do and ask before doing it. This builds trust and catches mistakes. Destructive or significant actions require user approval. Commands show what they're about to do and ask before doing it.
### Use the Tools to Build the Tools *Extends: Non-goal "Replacing human judgment"*
This project uses its own commands to manage itself. Issues are created with `/create-issue`. Features are planned with `/plan-issues`. PRs are reviewed with `/review-pr`. Dogfooding ensures the tools actually work. ### Dogfooding
This project uses its own commands to manage itself. Issues are created with `/create-issue`. PRs are reviewed with `/review-pr`. If the tools don't work for us, they won't work for anyone.
*Extends: "Ship to learn"*
### Progressive Disclosure ### Progressive Disclosure
Simple things should be simple. `/dashboard` just shows your issues and PRs. But the system supports complex workflows when you need them. Don't require users to understand the full architecture to get value. Simple things should be simple. `/dashboard` just shows your issues and PRs. Complex workflows are available when needed, but not required to get value.
## What This Is Not *Extends: "Opinionated defaults, escape hatches available"*
This is not: ## Non-Goals
- A replacement for Claude Code—it enhances it
- A rigid framework—adapt it to your needs
- Complete—it grows as we discover new patterns
It's a starting point for treating AI-assisted development as a first-class engineering concern. These extend the organization's non-goals:
- **Replacing Claude Code.** This enhances Claude Code, not replaces it. The toolkit adds structure; Claude provides the capability.
- **One-size-fits-all workflows.** Teams should adapt these patterns to their needs. We provide building blocks, not a rigid framework.
- **Feature completeness.** The toolkit grows as we discover new patterns. It's a starting point, not an end state.
---
name: code-reviewer
description: Automated code review of pull requests. Reviews PRs for quality, bugs, security, style, and test coverage. Spawn after PR creation or for on-demand review.
# Model: sonnet provides good code understanding for review tasks.
# The structured output format doesn't require opus-level reasoning.
model: sonnet
skills: gitea, code-review
---
You are a code review specialist that provides immediate, structured feedback on pull request changes.
## When Invoked
You will receive a PR number to review. Follow this process:
1. Fetch PR diff: checkout with `tea pulls checkout <number>`, then `git diff main...HEAD`
2. Analyze the diff for issues in these categories:
- **Code Quality**: Readability, maintainability, complexity
- **Bugs**: Logic errors, edge cases, null checks
- **Security**: Injection vulnerabilities, auth issues, data exposure
- **Style**: Naming conventions, formatting, consistency
- **Test Coverage**: Missing tests, untested edge cases
3. Generate a structured review comment
4. Post the review using `tea comment <number> "<review body>"`
5. **If verdict is LGTM**: Merge with `tea pulls merge <number> --style rebase`
6. **If verdict is NOT LGTM**: Do not merge; leave for the user to address
## Review Comment Format
Post reviews in this structured format:
```markdown
## AI Code Review
> This is an automated review generated by the code-reviewer agent.
### Summary
[Brief overall assessment]
### Findings
#### Code Quality
- [Finding 1]
- [Finding 2]
#### Potential Bugs
- [Finding or "No issues found"]
#### Security Concerns
- [Finding or "No issues found"]
#### Style Notes
- [Finding or "Consistent with codebase"]
#### Test Coverage
- [Finding or "Adequate coverage"]
### Verdict
[LGTM / Needs Changes / Blocking Issues]
```
## Verdict Criteria
- **LGTM**: No blocking issues, code meets quality standards, ready to merge
- **Needs Changes**: Minor issues worth addressing before merge
- **Blocking Issues**: Security vulnerabilities, logic errors, or missing critical functionality
## Guidelines
- Be specific: Reference exact lines and explain *why* something is an issue
- Be constructive: Suggest alternatives when pointing out problems
- Be kind: Distinguish between blocking issues and suggestions
- Acknowledge good solutions when you see them
agents/ddd-analyst/AGENT.md
---
name: ddd-analyst
description: >
Analyzes manifesto, vision, and codebase to identify bounded contexts and
generate DDD-based implementation issues as user stories. Use when breaking
down product vision into DDD-structured vertical slices.
model: sonnet
skills: ddd, issue-writing
---
You are a Domain-Driven Design analyst that bridges product vision and software implementation.
## Your Role
Analyze product vision and existing code to:
1. Identify bounded contexts (intended vs actual)
2. Map features to DDD patterns (aggregates, commands, events)
3. Generate vertical slice user stories with DDD implementation guidance
4. Identify refactoring needs to align code with domain boundaries
## When Invoked
You receive:
- Path to manifesto.md (organization vision and personas)
- Path to vision.md (product-specific goals and features)
- Working directory (product codebase to analyze)
You produce:
- Structured analysis of bounded contexts
- List of user stories with DDD implementation guidance
- Each story formatted per issue-writing skill
## Process
### 1. Understand the Domain
**Read manifesto:**
- Identify organizational personas
- Understand core beliefs and principles
- Note domain language and terminology
**Read vision:**
- Identify product goals and milestones
- Extract features and capabilities
- Map features to personas
### 2. Analyze Existing Code
**Explore codebase structure:**
- Identify existing modules/packages/directories
- Look for natural clustering of concepts
- Identify seams and boundaries
- Note shared models or data structures
**Identify current bounded contexts:**
- What contexts already exist (explicit or implicit)?
- Are boundaries clear or mixed?
- Is language consistent within contexts?
- Are there translation layers between contexts?
### 3. Identify Bounded Contexts
**From vision and code, identify:**
For each bounded context:
- **Name**: Clear, domain-aligned name
- **Purpose**: What problem does this context solve?
- **Core concepts**: Key entities and value objects
- **Personas**: Which personas interact with this context?
- **Boundaries**: What's inside vs outside this context?
- **Current state**: Does this exist in code? Is it well-bounded?
**Identify misalignments:**
- Vision implies contexts that don't exist in code
- Code has contexts not aligned with vision
- Shared models leaking across context boundaries
- Missing translation layers
### 4. Map Features to DDD Patterns
For each feature from vision:
**Identify:**
- **Bounded context**: Which context owns this feature?
- **Aggregate(s)**: What entities/value objects are involved?
- **Commands**: What actions can users/systems take?
- **Events**: What facts should be recorded?
- **Value objects**: What concepts are attribute-defined?
**Determine implementation type:**
- **New feature**: No existing code, implement from scratch
- **Enhancement**: Existing code, add to it
- **Refactoring**: Existing code misaligned, needs restructuring
### 5. Generate User Stories
For each feature, create a user story following issue-writing skill format:
```markdown
Title: As a [persona], I want to [capability], so that [benefit]
## User Story
As a [persona], I want to [capability], so that [benefit]
## Acceptance Criteria
- [ ] Specific, testable, user-focused criterion
- [ ] Another criterion
- [ ] Verifiable outcome
## Bounded Context
[Context name]
## DDD Implementation Guidance
**Type:** [New Feature | Enhancement | Refactoring]
**Aggregate(s):**
- `[AggregateName]` (root)
- `[Entity]`
- `[ValueObject]`
**Commands:**
- `[CommandName]` - [what it does]
**Events:**
- `[EventName]` - [when it's published]
**Value Objects:**
- `[ValueObjectName]` - [what it represents]
## Technical Notes
[Implementation hints, dependencies, refactoring needs]
## Dependencies
- [Links to related issues or blockers]
```
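For instance, a hypothetical story for a task-tracking product might instantiate the template as follows (context, aggregate, command, and event names are all illustrative):

```markdown
Title: As a team lead, I want to archive completed projects, so that my dashboard stays focused

## User Story
As a team lead, I want to archive completed projects, so that my dashboard stays focused

## Acceptance Criteria
- [ ] An "Archive" action is available on completed projects
- [ ] Archived projects no longer appear on the dashboard
- [ ] Archived projects remain retrievable from an archive view

## Bounded Context
Project Management

## DDD Implementation Guidance
**Type:** Enhancement

**Aggregate(s):**
- `Project` (root)

**Commands:**
- `ArchiveProject` - marks a completed project as archived

**Events:**
- `ProjectArchived` - published when archiving succeeds

**Value Objects:**
- `ProjectStatus` - active | completed | archived
```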
**For refactoring issues:**
```markdown
Title: Refactor [component] to align with [context] bounded context
## Summary
Current state: [describe misalignment]
Desired state: [describe proper DDD structure]
## Acceptance Criteria
- [ ] Code moved to [context] module
- [ ] Boundaries clearly defined
- [ ] Tests updated
- [ ] No regression in functionality
## Bounded Context
[Context name]
## DDD Implementation Guidance
**Type:** Refactoring
**Changes needed:**
- Extract [Aggregate] from [current location]
- Introduce [ValueObject] to replace [primitive]
- Add translation layer between [Context1] and [Context2]
## Technical Notes
[Migration strategy, backward compatibility]
```
### 6. Structure Output
**Present analysis as:**
```markdown
# DDD Analysis: [Product Name]
## Bounded Contexts Identified
### [Context Name]
- **Purpose:** [what it does]
- **Core Concepts:** [list]
- **Personas:** [who uses it]
- **Current State:** [exists/partial/missing]
- **Misalignments:** [if any]
[Repeat for each context]
## User Stories Generated
### Context: [Context Name]
1. [Story title]
2. [Story title]
...
[Repeat for each context]
## Refactoring Needed
- [Issue] - [reason]
- [Issue] - [reason]
## Implementation Order
Suggested sequence (considering dependencies):
1. [Story/refactoring]
2. [Story/refactoring]
...
---
## Detailed User Stories
[Full user story format for each issue]
```
## Guidelines
**Strategic before tactical:**
- Identify bounded contexts first
- Then map features to contexts
- Then identify aggregates/commands/events
**Vertical slices:**
- Each story delivers user value
- Can be demoed independently
- Includes all layers (UI, logic, data)
**Keep aggregates small:**
- Single entity when possible
- 2-3 entities maximum
- Each aggregate enforces its own invariants
**Clear boundaries:**
- Each context owns its data
- Communication via events or APIs
- No shared mutable state
**Refactor incrementally:**
- Refactoring issues should be small
- Don't require big-bang rewrites
- Maintain backward compatibility when possible
**Dependencies:**
- Identify blocking issues (e.g., aggregate before commands)
- Note cross-context dependencies
- Suggest implementation order
## Tips
- Use persona names from manifesto in user stories
- Use domain language from vision consistently
- When uncertain about boundaries, propose options
- Prioritize core domain over supporting/generic subdomains
- Identify quick wins (small refactorings with big impact)
- Note where existing code is already well-aligned
---
description: Show dashboard of open issues, PRs awaiting review, and CI status.
---
# Repository Dashboard
@~/.claude/skills/gitea/SKILL.md
Fetch and display:
1. All open issues
2. All open PRs
Format as tables showing number, title, and author.
---
description: Review a Gitea pull request. Fetches PR details, diff, and comments.
argument-hint: <pr-number>
---
# Review PR #$1
@~/.claude/skills/gitea/SKILL.md
1. **View PR details** with `--comments` flag to see description, metadata, and discussion
2. **Get the diff** to review the changes
Review the changes and provide feedback on:
- Code quality
- Potential bugs
- Test coverage
- Documentation
Ask the user what action to take:
- **Merge**: Post review summary as comment, then merge with rebase style
- **Request changes**: Leave feedback without merging
- **Comment only**: Add a comment for discussion
## Merging
Always use tea CLI for merges to preserve user attribution:
```bash
tea pulls merge <number> --style rebase
```
For review comments, use `tea comment` since `tea pulls review` is interactive-only:
```bash
tea comment <number> "<review summary>"
```
> **Warning**: Never use the Gitea API with admin credentials for user-facing operations like merging. This causes the merge to be attributed to the admin account instead of the user.
---
description: View the product vision and goal progress. Manages vision.md and Gitea milestones.
argument-hint: [goals]
---
# Product Vision
@~/.claude/skills/vision-management/SKILL.md
@~/.claude/skills/gitea/SKILL.md
This command manages **product-level** vision. For organization-level vision, use `/manifesto`.
## Architecture
| Level | Document | Purpose | Command |
|-------|----------|---------|---------|
| **Organization** | `manifesto.md` | Who we are, shared personas, beliefs | `/manifesto` |
| **Product** | `vision.md` | Product-specific personas, jobs, solution | `/vision` |
| **Goals** | Gitea milestones | Measurable progress toward vision | `/vision goals` |
Product vision inherits from and extends the organization manifesto.
## Process
1. **Check for organization manifesto**: Note if `manifesto.md` exists (provides org context)
2. **Check for product vision**: Look for `vision.md` in the current repo root
3. **If no vision exists**:
- Reference the organization manifesto if it exists
- Ask if the user wants to create a product vision
- Guide them through defining:
1. **Product personas**: Who does this product serve? (may extend org personas)
2. **Product jobs**: What specific jobs does this product address?
3. **The problem**: What pain points does this product solve?
4. **The solution**: How does this product address those jobs?
5. **Product principles**: Any product-specific principles (beyond org principles)?
6. **Product non-goals**: What is this product explicitly NOT doing?
- Create `vision.md`
- Ask about initial goals, create as Gitea milestones
4. **If vision exists**:
- Display organization context (if manifesto exists)
- Display the product vision from `vision.md`
- Show current milestones and their progress: `tea milestones`
- Check if `$1` specifies an action:
- `goals`: Manage milestones (add, close, view progress)
- If no action specified, just display the current state
5. **Managing Goals (milestones)**:
```bash
# List milestones with progress
tea milestones
# Create a new goal
tea milestones create --title "<goal>" --description "For: <persona>
Job: <job to be done>
Success: <criteria>"
# View issues in a milestone
tea milestones issues <milestone-name>
# Close a completed goal
tea milestones close <milestone-name>
```
## Output Format
```
## Organization Context
See manifesto for shared personas, beliefs, and principles.
[Link or note about manifesto.md location]
## Product: [Name]
### Who This Product Serves
- **[Persona 1]**: [Product-specific description]
- **[Persona 2]**: [Product-specific description]
### What They're Trying to Achieve
- "[Product-specific job 1]"
- "[Product-specific job 2]"
### Product Vision
[Summary of problem/solution from vision.md]
### Goals (Milestones)
| Goal | For | Progress | Due |
|------|-----|----------|-----|
| [title] | [Persona] | 3/5 issues | [date] |
### Current Focus
[Open milestones with nearest due dates or most activity]
```
## Guidelines
- Product vision builds on organization manifesto - don't duplicate, extend
- Product personas can be more specific versions of org personas
- Product jobs should trace back to org-level jobs to be done
- Milestones are product-specific goals toward the vision
- Use `/manifesto` for organization-level identity and beliefs
- Use `/vision` for product-specific direction and goals
- If this is the architecture repo itself, use `/manifesto` instead
---
description: Work on a Gitea issue. Fetches issue details and sets up branch for implementation.
argument-hint: <issue-number>
---
# Work on Issue #$1
@~/.claude/skills/gitea/SKILL.md
1. **View the issue** with `--comments` flag to understand requirements and context
2. **Create a branch**: `git checkout -b issue-$1-<short-kebab-title>`
3. **Plan**: Use TodoWrite to break down the work based on acceptance criteria
4. **Implement** the changes
5. **Commit** with message referencing the issue
6. **Push** the branch to origin
7. **Create PR** with title "[Issue #$1] <title>" and body "Closes #$1"
8. **Auto-review**: Inform the user that auto-review is starting, then spawn the `code-reviewer` agent in background (using `run_in_background: true`) with the PR number
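The naming conventions in steps 2 and 7 can be sketched as shell, assuming a hypothetical issue #42 titled "Add dark mode support" (a dry-run: names are derived and printed, nothing is committed or pushed):

```shell
# Hypothetical issue used for illustration
issue=42
title="Add dark mode support"

# Step 2: derive a kebab-case branch name from the issue title
slug=$(printf '%s' "$title" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/-$//')
branch="issue-${issue}-${slug}"
echo "$branch"      # issue-42-add-dark-mode-support

# Step 7: PR title and body referencing the issue
pr_title="[Issue #${issue}] ${title}"
pr_body="Closes #${issue}"
echo "$pr_title"    # [Issue #42] Add dark mode support
```

The `Closes #<number>` body ensures Gitea closes the issue automatically when the PR merges.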
# Writing Agents
A guide to creating specialized subagents that combine multiple skills for complex, context-isolated tasks.
## What is an Agent?
Agents are **specialized subprocesses** that combine multiple skills into focused personas. Unlike commands (which define workflows) or skills (which encode knowledge), agents are autonomous workers that can handle complex tasks independently.
Think of agents as specialists you can delegate work to. They have their own context, their own expertise (via skills), and they report back when finished.
## File Structure
Agents live in the `agents/` directory, each in its own folder:
```
agents/
└── product-manager/
└── AGENT.md
```
### Why AGENT.md?
The uppercase `AGENT.md` filename:
- Makes the agent file immediately visible in directory listings
- Follows a consistent convention across all agents
- Clearly identifies the primary file in an agent folder
### Supporting Files (Optional)
An agent folder can contain additional files if needed:
```
agents/
└── code-reviewer/
├── AGENT.md # Main agent document (required)
└── checklists/ # Supporting materials
└── security.md
```
However, prefer keeping everything in `AGENT.md` when possible—agent definitions should be concise.
## Agent Document Structure
A well-structured `AGENT.md` follows this pattern:
```markdown
# Agent Name
Brief description of what this agent does.
## Skills
List of skills this agent has access to.
## Capabilities
What the agent can do—its areas of competence.
## When to Use
Guidance on when to spawn this agent.
## Behavior
How the agent should operate—rules and constraints.
```
All sections are important:
- **Skills**: Defines what knowledge the agent has
- **Capabilities**: Tells spawners what to expect
- **When to Use**: Prevents misuse and guides selection
- **Behavior**: Sets expectations for operation
## How Agents Combine Skills
Agents gain their expertise by combining multiple skills. Each skill contributes domain knowledge to the agent's overall capability.
### Skill Composition
```
┌────────────────────────────────────────────────┐
│ Product Manager Agent │
│ │
│ ┌──────────┐ ┌──────────────┐ │
│ │ gitea │ │issue-writing │ │
│ │ │ │ │ │
│ │ CLI │ │ Structure │ │
│ │ commands │ │ patterns │ │
│ └──────────┘ └──────────────┘ │
│ │
│ ┌──────────────────┐ ┌─────────────────┐ │
│ │backlog-grooming │ │roadmap-planning │ │
│ │ │ │ │ │
│ │ Review │ │ Feature │ │
│ │ checklists │ │ breakdown │ │
│ └──────────────────┘ └─────────────────┘ │
│ │
└────────────────────────────────────────────────┘
```
The agent can:
- Use **gitea** to interact with issues and PRs
- Apply **issue-writing** patterns when creating content
- Follow **backlog-grooming** checklists when reviewing
- Use **roadmap-planning** strategies when breaking down features
### Emergent Capabilities
When skills combine, new capabilities emerge:
| Skills Combined | Emergent Capability |
|-----------------|---------------------|
| gitea + issue-writing | Create well-structured issues programmatically |
| backlog-grooming + issue-writing | Improve existing issues systematically |
| roadmap-planning + gitea | Plan and create linked issue hierarchies |
| All four skills | Full backlog management lifecycle |
## Use Cases for Agents
### 1. Parallel Processing
Agents work independently with their own context. Spawn multiple agents to work on separate tasks simultaneously.
```
Command: /groom (batch mode)
├─── Spawn Agent: Review issues #1-5
├─── Spawn Agent: Review issues #6-10
└─── Spawn Agent: Review issues #11-15
↓ (agents work in parallel)
Results aggregated by command
```
**Use when:**
- Tasks are independent and don't need to share state
- Workload can be divided into discrete chunks
- Speed matters more than sequential consistency
### 2. Context Isolation
Each agent maintains separate conversation state. This prevents context pollution when handling complex, unrelated subtasks.
```
Main Context Agent Context
┌─────────────────┐ ┌─────────────────┐
│ User working on │ │ Isolated work │
│ feature X │ spawn │ on backlog │
│ │ ─────────► │ review │
│ (preserves │ │ │
│ feature X │ return │ (doesn't know │
│ context) │ ◄───────── │ about X) │
└─────────────────┘ └─────────────────┘
```
**Use when:**
- Subtask requires deep exploration that would pollute main context
- Work involves many files or concepts unrelated to main task
- You want clean separation between different concerns
### 3. Complex Workflows
Some workflows are better handled by a specialized agent than by inline execution. Agents can make decisions, iterate, and adapt.
```
Command: /plan-issues "add user authentication"
└─── Spawn product-manager agent
├── Explore codebase to understand structure
├── Research authentication patterns
├── Design issue breakdown
├── Create issues in dependency order
└── Return summary to command
```
**Use when:**
- Task requires iterative decision-making
- Workflow has many steps that depend on intermediate results
- Specialist expertise (via combined skills) adds value
### 4. Autonomous Exploration
Agents can explore codebases independently, building understanding without polluting the main conversation.
**Use when:**
- You need to understand a new part of the codebase
- Exploration might involve many file reads and searches
- Results should be summarized, not shown in full
## When to Use an Agent vs Direct Skill Invocation
### Use Direct Skill Invocation When:
- **Simple, single-skill task**: Writing one issue doesn't need an agent
- **Main context is relevant**: The current conversation context helps
- **Quick reference needed**: Just need to check a pattern or command
- **Sequential workflow**: Command can orchestrate step-by-step
Example: Creating a single issue with `/create-issue`
```
Command reads issue-writing skill directly
└── Creates one issue following patterns
```
### Use an Agent When:
- **Multiple skills needed together**: Complex tasks benefit from composition
- **Context isolation required**: Don't want to pollute main conversation
- **Parallel execution possible**: Can divide and conquer
- **Autonomous exploration needed**: Agent can figure things out independently
- **Specialist persona helps**: "Product manager" framing improves outputs
Example: Grooming entire backlog with `/groom`
```
Command spawns product-manager agent
└── Agent iterates through all issues
using multiple skills
```
### Decision Matrix
| Scenario | Agent? | Reason |
|----------|--------|--------|
| Create one issue | No | Single skill, simple task |
| Review 20 issues | Yes | Batch processing, isolation |
| Quick CLI lookup | No | Just need gitea reference |
| Plan new feature | Yes | Multiple skills, exploration |
| Fix issue title | No | Trivial edit |
| Reorganize backlog | Yes | Complex, multi-skill workflow |
## Annotated Example: Product Manager Agent
Let's examine the `product-manager` agent in detail:
```markdown
# Product Manager Agent
Specialized agent for backlog management and roadmap planning.
```
**The opening** identifies the agent's role clearly. "Product Manager" is a recognizable persona that sets expectations.
```markdown
## Skills
- gitea
- issue-writing
- backlog-grooming
- roadmap-planning
```
**Skills section** lists all knowledge the agent has access to. These skills are loaded into the agent's context when spawned. The combination enables:
- Reading/writing issues (gitea)
- Creating quality content (issue-writing)
- Evaluating existing issues (backlog-grooming)
- Planning work strategically (roadmap-planning)
```markdown
## Capabilities
This agent can:
- Review and improve existing issues
- Create new well-structured issues
- Analyze the backlog for gaps and priorities
- Plan feature breakdowns
- Maintain roadmap clarity
```
**Capabilities section** tells spawners what to expect. Each capability maps to skill combinations:
- "Review and improve" = backlog-grooming + issue-writing
- "Create new issues" = gitea + issue-writing
- "Analyze backlog" = backlog-grooming + roadmap-planning
- "Plan breakdowns" = roadmap-planning + issue-writing
```markdown
## When to Use
Spawn this agent for:
- Batch operations on multiple issues
- Comprehensive backlog reviews
- Feature planning that requires codebase exploration
- Complex issue creation with dependencies
```
**When to Use section** guides appropriate usage. Note the criteria:
- "Batch operations" → Parallel/isolation benefit
- "Comprehensive reviews" → Complex workflow benefit
- "Requires exploration" → Context isolation benefit
- "Complex with dependencies" → Multi-skill benefit
```markdown
## Behavior
- Always fetches current issue state before making changes
- Asks for approval before creating or modifying issues
- Provides clear summaries of actions taken
- Uses the tea CLI for all Forgejo operations
```
**Behavior section** sets operational rules. These ensure:
- Accuracy: Fetches current state, doesn't assume
- Safety: Asks before acting
- Transparency: Summarizes what happened
- Consistency: Uses standard tooling
## Naming Conventions
### Agent Folder Names
- Use **kebab-case**: `product-manager`, `code-reviewer`
- Name by **role or persona**: what the agent "is"
- Keep **recognizable**: familiar roles are easier to understand
Good names:
- `product-manager` - Recognizable role
- `code-reviewer` - Clear function
- `security-auditor` - Specific expertise
- `documentation-writer` - Focused purpose
Avoid:
- `helper` - Too vague
- `do-stuff` - Not a role
- `issue-thing` - Not recognizable
### Agent Titles
The H1 title in `AGENT.md` should be the role name in Title Case:
| Folder | Title |
|--------|-------|
| `product-manager` | Product Manager Agent |
| `code-reviewer` | Code Reviewer Agent |
| `security-auditor` | Security Auditor Agent |
## Model Selection
Agents can specify which Claude model to use via the `model` field in YAML frontmatter. Choosing the right model balances capability, speed, and cost.
### Available Models
| Model | Characteristics | Best For |
|-------|-----------------|----------|
| `haiku` | Fastest, most cost-effective | Simple structured tasks, formatting, basic transformations |
| `sonnet` | Balanced speed and capability | Most agent tasks, code review, issue management |
| `opus` | Most capable, best reasoning | Complex analysis, architectural decisions, nuanced judgment |
| `inherit` | Uses parent context's model | When agent should match caller's capability level |
### Decision Matrix
| Agent Task Type | Recommended Model | Reasoning |
|-----------------|-------------------|-----------|
| Structured output formatting | `haiku` | Pattern-following, no complex reasoning |
| Code review (style/conventions) | `sonnet` | Needs code understanding, not deep analysis |
| Security vulnerability analysis | `opus` | Requires nuanced judgment, high stakes |
| Issue triage and labeling | `haiku` or `sonnet` | Mostly classification tasks |
| Feature planning and breakdown | `sonnet` or `opus` | Needs strategic thinking |
| Batch processing (many items) | `haiku` or `sonnet` | Speed and cost matter at scale |
| Architectural exploration | `opus` | Complex reasoning about tradeoffs |
### Examples
These examples show recommended model configurations for different agent types:
**Code Reviewer Agent** - Use `sonnet`:
```yaml
---
name: code-reviewer
model: sonnet
skills: gitea, code-review
---
```
Code review requires understanding code patterns and conventions but rarely needs the deepest reasoning. Sonnet provides good balance.
**Security Auditor Agent** (hypothetical) - Use `opus`:
```yaml
---
name: security-auditor
model: opus
skills: code-review # would add security-specific skills
---
```
Security analysis requires careful, nuanced judgment where missing issues have real consequences. Worth the extra capability.
**Formatting Agent** (hypothetical) - Use `haiku`:
```yaml
---
name: markdown-formatter
model: haiku
skills: documentation
---
```
Pure formatting tasks follow patterns and don't require complex reasoning. Haiku is fast and sufficient.
### Best Practices for Model Selection
1. **Start with `sonnet`** - It handles most agent tasks well
2. **Use `haiku` for volume** - When processing many items, speed and cost add up
3. **Reserve `opus` for judgment** - Use when errors are costly or reasoning is complex
4. **Avoid `inherit` by default** - Make a deliberate choice; `inherit` obscures the decision
5. **Consider the stakes** - Higher consequence tasks warrant more capable models
6. **Test with real tasks** - Verify the chosen model performs adequately
### When to Use `inherit`
The `inherit` option has legitimate uses:
- **Utility agents**: Small helpers that should match their caller's capability
- **Delegation chains**: When an agent spawns sub-agents that should stay consistent
- **Testing/development**: When you want to control model from the top level
However, most production agents should specify an explicit model.
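For instance, a small utility agent might deliberately opt into `inherit` so it always matches its caller's capability (hypothetical frontmatter — the agent name and skill are illustrative):

```yaml
---
name: link-checker          # hypothetical utility agent
description: Verify that links in generated documentation resolve
model: inherit              # match whatever model spawned this agent
skills: documentation
---
```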
## Best Practices
### 1. Choose Skills Deliberately
Include only skills the agent needs. More skills = more context = potential confusion.
**Too many skills:**
```markdown
## Skills
- gitea
- issue-writing
- backlog-grooming
- roadmap-planning
- code-review
- testing
- documentation
- deployment
```
**Right-sized:**
```markdown
## Skills
- gitea
- issue-writing
- backlog-grooming
- roadmap-planning
```
### 2. Define Clear Boundaries
Agents should know what they can and cannot do.
**Vague:**
```markdown
## Capabilities
This agent can help with project management.
```
**Clear:**
```markdown
## Capabilities
This agent can:
- Review and improve existing issues
- Create new well-structured issues
- Analyze the backlog for gaps
This agent cannot:
- Merge pull requests
- Deploy code
- Make architectural decisions
```
### 3. Set Behavioral Guardrails
Prevent agents from causing problems by setting explicit rules.
**Important behaviors to specify:**
- When to ask for approval
- What to do before making changes
- How to report results
- Error handling expectations
### 4. Match Persona to Purpose
The agent's name and description should align with its skills and capabilities.
**Mismatched:**
```markdown
# Security Agent
## Skills
- issue-writing
- documentation
```
**Aligned:**
```markdown
# Security Auditor Agent
## Skills
- security-scanning
- vulnerability-assessment
- code-review
```
### 5. Keep Agents Focused
One agent = one role. If an agent does too many unrelated things, split it.
**Too broad:**
```markdown
# Everything Agent
Handles issues, code review, deployment, and customer support.
```
**Focused:**
```markdown
# Product Manager Agent
Specialized for backlog management and roadmap planning.
```
## When to Create a New Agent
Create an agent when you need:
1. **Role-based expertise**: A recognizable persona improves outputs
2. **Skill composition**: Multiple skills work better together
3. **Context isolation**: Work shouldn't pollute main conversation
4. **Parallel capability**: Tasks can run independently
5. **Autonomous operation**: Agent should figure things out on its own
### Signs You Need a New Agent
- Commands repeatedly spawn similar skill combinations
- Tasks require deep exploration that pollutes context
- Work benefits from a specialist "persona"
- Batch processing would help
### Signs You Don't Need a New Agent
- Single skill is sufficient
- Task is simple and sequential
- Main context is helpful, not harmful
- No clear persona or role emerges
## Agent Lifecycle
### 1. Design
Define the agent's role:
- What persona makes sense?
- Which skills does it need?
- What can it do (and not do)?
- When should it be spawned?
### 2. Implement
Create the agent file:
- Clear name and description
- Appropriate skill list
- Specific capabilities
- Usage guidance
- Behavioral rules
### 3. Integrate
Connect the agent to workflows:
- Update commands that should spawn it
- Document in ARCHITECTURE.md
- Test with real tasks
### 4. Refine
Improve based on usage:
- Add/remove skills as needed
- Clarify capabilities
- Strengthen behavioral rules
- Update documentation
## Checklist: Before Submitting a New Agent
- [ ] File is at `agents/<name>/AGENT.md`
- [ ] Name follows kebab-case convention
- [ ] Agent has a clear, recognizable role
- [ ] Skills list is deliberate (not too many, not too few)
- [ ] Model selection is deliberate (not just `inherit` by default)
- [ ] Capabilities are specific and achievable
- [ ] "When to Use" guidance is clear
- [ ] Behavioral rules prevent problems
- [ ] Agent is referenced by at least one command
- [ ] ARCHITECTURE.md is updated
## See Also
- [ARCHITECTURE.md](../ARCHITECTURE.md): How agents fit into the overall system
- [writing-skills.md](writing-skills.md): Creating the skills that agents use
- [VISION.md](../VISION.md): The philosophy behind composable components


@@ -0,0 +1,508 @@
# Writing Capabilities
A comprehensive guide to creating capabilities for the Claude Code AI workflow system.
> **Official Documentation**: For the most up-to-date Claude Code documentation, see https://code.claude.com/docs
## Component Types
The architecture repository uses two component types:
| Component | Location | Purpose | Invocation |
|-----------|----------|---------|------------|
| **Skill** | `skills/<name>/SKILL.md` | Knowledge modules and workflows | Auto-triggered or `/skill-name` |
| **Agent** | `agents/<name>/AGENT.md` | Isolated subtask handlers | Spawned via Task tool |
### Skills: Two Types
Skills come in two flavors based on the `user-invocable` frontmatter field:
| Type | `user-invocable` | Purpose | Example |
|------|------------------|---------|---------|
| **User-invocable** | `true` | Workflows users trigger with `/skill-name` | `/work-issue`, `/dashboard` |
| **Background** | `false` | Reference knowledge auto-loaded when needed | `gitea`, `issue-writing` |
User-invocable skills replaced the former "commands" - they define workflows that users trigger directly.
### Agents: Isolated Workers
Agents are specialized subprocesses that:
- Combine multiple skills into focused personas
- Run with isolated context (don't pollute main conversation)
- Handle complex subtasks autonomously
- Can run in parallel or background
---
## Writing Skills
Skills are markdown files in the `skills/` directory, each in its own folder.
### File Structure
```
skills/
├── gitea/ # Background skill
│ └── SKILL.md
├── work-issue/ # User-invocable skill
│ └── SKILL.md
└── issue-writing/ # Background skill
└── SKILL.md
```
### YAML Frontmatter
Every skill requires YAML frontmatter starting on line 1:
```yaml
---
name: skill-name
description: >
What this skill does and when to use it.
Include trigger terms for auto-detection.
model: haiku
user-invocable: true
argument-hint: <required-arg> [optional-arg]
---
```
#### Required Fields
| Field | Description |
|-------|-------------|
| `name` | Lowercase, hyphens only (max 64 chars). Must match directory name. |
| `description` | What the skill does + when to use (max 1024 chars). Critical for triggering. |
#### Optional Fields
| Field | Description |
|-------|-------------|
| `user-invocable` | Whether skill appears in `/` menu. Default: `true` |
| `model` | Model to use: `haiku`, `sonnet`, `opus` |
| `argument-hint` | For user-invocable: `<required>`, `[optional]` |
| `context` | Use `fork` for isolated context |
| `allowed-tools` | Restrict available tools (YAML list) |
| `hooks` | Define PreToolUse, PostToolUse, or Stop hooks |
### User-Invocable Skills (Workflows)
These replace the former "commands" - workflows users invoke with `/skill-name`.
**Example: `/work-issue`**
```yaml
---
name: work-issue
description: >
Work on a Gitea issue. Fetches issue details and sets up branch.
Use when working on issues, implementing features, or when user says /work-issue.
model: haiku
argument-hint: <issue-number>
user-invocable: true
---
# Work on Issue #$1
@~/.claude/skills/gitea/SKILL.md
@~/.claude/skills/software-architecture/SKILL.md
1. **View the issue** with `--comments` flag
2. **Create a branch**: `git checkout -b issue-$1-<short-title>`
3. **Plan**: Use TodoWrite to break down work
4. **Implement** following architectural patterns
5. **Commit** with message referencing the issue
6. **Push** and **Create PR**
```
**Key patterns for user-invocable skills:**
1. **Argument handling**: Use `$1`, `$2` for positional arguments
2. **Skill references**: Use `@~/.claude/skills/name/SKILL.md` to include background skills
3. **Approval workflows**: Ask before significant actions
4. **Clear steps**: Numbered, actionable workflow steps
### Background Skills (Reference)
Knowledge modules that Claude applies automatically when context matches.
**Example: `gitea`**
````yaml
---
name: gitea
description: >
  View, create, and manage Gitea issues and pull requests using tea CLI.
  Use when working with issues, PRs, or when user mentions tea, gitea.
model: haiku
user-invocable: false
---
# Gitea CLI (tea)

## Common Commands

### Issues
```bash
tea issues                    # List open issues
tea issues <number>           # View issue details
tea issues create --title "..." --description "..."
```
...
````
**Key patterns for background skills:**
1. **Rich descriptions**: Include trigger terms like tool names, actions
2. **Reference material**: Commands, templates, patterns, checklists
3. **No workflow steps**: Just knowledge, not actions
### Writing Effective Descriptions
The `description` field determines when Claude applies the skill. Include:
1. **What the skill does**: Specific capabilities
2. **When to use it**: Trigger terms users would mention
**Bad:**
```yaml
description: Helps with documents
```
**Good:**
```yaml
description: >
View, create, and manage Gitea issues and pull requests using tea CLI.
Use when working with issues, PRs, viewing issue details, creating pull
requests, or when the user mentions tea, gitea, or issue numbers.
```
### Argument Handling (User-Invocable Skills)
User-invocable skills can accept arguments via `$1`, `$2`, etc.
**Argument hints:**
- `<arg>` - Required argument
- `[arg]` - Optional argument
- `<arg1> [arg2]` - Mix of both
**Example with optional argument:**
```yaml
---
name: groom
argument-hint: [issue-number]
---
# Groom Issues
## If issue number provided ($1):
1. Fetch that specific issue
2. Evaluate against checklist
...
## If no argument:
1. List all open issues
2. Review each against checklist
...
```
### Skill References
User-invocable skills include background skills using file references:
```markdown
@~/.claude/skills/gitea/SKILL.md
@~/.claude/skills/issue-writing/SKILL.md
```
**Important**: Do NOT use phrases like "Use the gitea skill" - skills have ~20% auto-activation rate. File references guarantee the content is loaded.
### Approval Workflows
User-invocable skills should ask for approval before significant actions:
```markdown
4. **Present plan** for approval
5. **If approved**, create the issues
6. **Present summary** with links
```
---
## Writing Agents
Agents are specialized subprocesses that combine skills for complex, isolated tasks.
### File Structure
```
agents/
└── code-reviewer/
└── AGENT.md
```
### YAML Frontmatter
```yaml
---
name: code-reviewer
description: Review code for quality, bugs, and style issues
model: sonnet
skills: gitea, code-review
disallowedTools:
- Edit
- Write
---
```
#### Required Fields
| Field | Description |
|-------|-------------|
| `name` | Agent identifier (lowercase, hyphens). Match directory name. |
| `description` | What the agent does. Used for matching when spawning. |
#### Optional Fields
| Field | Description |
|-------|-------------|
| `model` | `haiku`, `sonnet`, `opus`, or `inherit` |
| `skills` | Comma-separated skill names the agent can access |
| `disallowedTools` | Block specific tools (e.g., Edit, Write for read-only) |
| `permissionMode` | `default` or `bypassPermissions` |
| `hooks` | Define PreToolUse, PostToolUse, or Stop hooks |
### Agent Document Structure
```markdown
# Agent Name
Brief description of the agent's role.
## Skills
- skill1
- skill2
## Capabilities
What the agent can do.
## When to Use
Guidance on when to spawn this agent.
## Behavior
Operational rules and constraints.
```
### Built-in Agents
Claude Code provides built-in agents - prefer these before creating custom ones:
| Agent | Purpose |
|-------|---------|
| **Explore** | Codebase exploration, finding files, searching code |
| **Plan** | Implementation planning, architectural decisions |
### Skill Composition
Agents gain expertise by combining skills:
```
┌─────────────────────────────────────────┐
│ Code Reviewer Agent │
│ │
│ ┌─────────┐ ┌─────────────┐ │
│ │ gitea │ │ code-review │ │
│ │ CLI │ │ patterns │ │
│ └─────────┘ └─────────────┘ │
│ │
└─────────────────────────────────────────┘
```
### Use Cases for Agents
1. **Parallel processing**: Spawn multiple agents for independent tasks
2. **Context isolation**: Deep exploration without polluting main context
3. **Complex workflows**: Iterative decision-making with multiple skills
4. **Background execution**: Long-running tasks while user continues working
### Model Selection
| Model | Best For |
|-------|----------|
| `haiku` | Simple tasks, formatting, batch processing |
| `sonnet` | Most agent tasks, code review (default choice) |
| `opus` | Complex analysis, security audits, architectural decisions |
---
## Decision Guide
### When to Create a User-Invocable Skill
Create when you have:
- Repeatable workflow used multiple times
- User explicitly triggers the action
- Clear start and end points
- Approval checkpoints needed
### When to Create a Background Skill
Create when:
- You explain the same concepts repeatedly
- Multiple user-invocable skills need the same knowledge
- Quality is inconsistent without explicit guidance
- There's a clear domain that doesn't fit existing skills
### When to Create an Agent
Create when:
- Multiple skills needed together for complex tasks
- Context isolation required
- Parallel execution possible
- Autonomous exploration needed
- Specialist persona improves outputs
### Decision Matrix
| Scenario | Component | Reason |
|----------|-----------|--------|
| User types `/work-issue 42` | User-invocable skill | Explicit user trigger |
| Need tea CLI reference | Background skill | Auto-loaded knowledge |
| Review 20 issues in parallel | Agent | Batch processing, isolation |
| Create one issue | User-invocable skill | Single workflow |
| Deep architectural analysis | Agent | Complex, isolated work |
---
## Templates
### User-Invocable Skill Template
```yaml
---
name: skill-name
description: >
What this skill does and when to use it.
Use when [trigger conditions] or when user says /skill-name.
model: haiku
argument-hint: <required> [optional]
user-invocable: true
---
# Skill Title
@~/.claude/skills/relevant-skill/SKILL.md
Brief intro if needed.
1. **First step**: What to do
2. **Second step**: What to do next
3. **Ask for approval** before significant actions
4. **Execute** the approved actions
5. **Present results** with links and summary
```
### Background Skill Template
```yaml
---
name: skill-name
description: >
What this skill teaches and when to use it.
Include trigger conditions in description.
user-invocable: false
---
# Skill Name
Brief description of what this skill covers.
## Core Concepts
Explain fundamental ideas.
## Patterns and Templates
Provide reusable structures.
## Guidelines
List rules and best practices.
## Examples
Show concrete illustrations.
## Common Mistakes
Document pitfalls to avoid.
```
### Agent Template
```yaml
---
name: agent-name
description: What this agent does and when to spawn it
model: sonnet
skills: skill1, skill2
---
You are a [role] specialist that [primary function].
## When Invoked
1. **Gather context**: What to collect
2. **Analyze**: What to evaluate
3. **Act**: What actions to take
4. **Report**: How to communicate results
## Output Format
Describe expected output structure.
## Guidelines
- Behavioral rules
- Constraints
- Quality standards
```
---
## Checklists
### Before Creating a User-Invocable Skill
- [ ] Workflow is repeatable (used multiple times)
- [ ] User explicitly triggers it
- [ ] File at `skills/<name>/SKILL.md`
- [ ] `user-invocable: true` in frontmatter
- [ ] `description` includes "Use when... or when user says /skill-name"
- [ ] Background skills referenced via `@~/.claude/skills/<name>/SKILL.md`
- [ ] Approval checkpoints before significant actions
- [ ] Clear numbered workflow steps
### Before Creating a Background Skill
- [ ] Knowledge used in multiple places
- [ ] Doesn't fit existing skills
- [ ] File at `skills/<name>/SKILL.md`
- [ ] `user-invocable: false` in frontmatter
- [ ] `description` includes trigger terms
- [ ] Content is specific and actionable
### Before Creating an Agent
- [ ] Built-in agents (Explore, Plan) aren't sufficient
- [ ] Context isolation or skill composition needed
- [ ] File at `agents/<name>/AGENT.md`
- [ ] `model` selection is deliberate
- [ ] `skills` list is right-sized
- [ ] Clear role/persona emerges
---
## See Also
- [ARCHITECTURE.md](../ARCHITECTURE.md): How components fit together
- [skills/capability-writing/SKILL.md](../skills/capability-writing/SKILL.md): Quick reference


@@ -1,663 +0,0 @@
# Writing Commands
A guide to creating user-facing entry points that trigger workflows.
## What is a Command?
Commands are **user-facing entry points** that trigger workflows. Unlike skills (which encode knowledge) or agents (which execute tasks autonomously), commands define *what* to do—they orchestrate the workflow that users invoke directly.
Think of commands as the interface between users and the system. Users type `/work-issue 42` and the command defines the entire workflow: fetch issue, create branch, implement, commit, push, create PR.
## File Structure
Commands live directly in the `commands/` directory as markdown files:
```
commands/
├── work-issue.md
├── dashboard.md
├── review-pr.md
├── create-issue.md
├── groom.md
├── roadmap.md
└── plan-issues.md
```
### Why Flat Files?
Unlike skills and agents (which use folders), commands are single files because:
- Commands are self-contained workflow definitions
- No supporting files needed
- Simple naming: `/work-issue` maps to `work-issue.md`
## Command Document Structure
A well-structured command file has two parts:
### 1. Frontmatter (YAML Header)
```yaml
---
description: Brief description shown in command listings
argument-hint: <required-arg> [optional-arg]
---
```
| Field | Purpose | Required |
|-------|---------|----------|
| `description` | One-line summary for help/listings | Yes |
| `argument-hint` | Shows expected arguments | If arguments needed |
### 2. Body (Markdown Instructions)
```markdown
# Command Title
Brief intro if needed.
1. **Step one**: What to do
2. **Step two**: What to do next
...
```
The body contains the workflow steps that Claude follows when the command is invoked.
## Complete Command Example
```markdown
---
description: Work on a Gitea issue. Fetches issue details and sets up branch.
argument-hint: <issue-number>
---
# Work on Issue #$1
@~/.claude/skills/gitea/SKILL.md
1. **View the issue** to understand requirements
2. **Create a branch**: `git checkout -b issue-$1-<short-kebab-title>`
3. **Plan**: Use TodoWrite to break down the work
4. **Implement** the changes
5. **Commit** with message referencing the issue
6. **Push** the branch to origin
7. **Create PR** with title "[Issue #$1] <title>" and body "Closes #$1"
```
## Argument Handling
Commands can accept arguments from the user. Arguments are passed via positional variables: `$1`, `$2`, etc.
### The ARGUMENTS Pattern
When users invoke a command with arguments:
```
/work-issue 42
```
The system provides the arguments via the `$1`, `$2`, etc. placeholders in the command body:
```markdown
# Work on Issue #$1
1. **View the issue** to understand requirements
```
Becomes:
```markdown
# Work on Issue #42
1. **View the issue** to understand requirements
```
### Argument Hints
Use `argument-hint` in frontmatter to document expected arguments:
| Pattern | Meaning |
|---------|---------|
| `<arg>` | Required argument |
| `[arg]` | Optional argument |
| `<arg1> <arg2>` | Multiple required |
| `[arg1] [arg2]` | Multiple optional |
| `<required> [optional]` | Mix of both |
Examples:
```yaml
argument-hint: <issue-number> # One required
argument-hint: [issue-number] # One optional
argument-hint: <title> [description] # Required + optional
argument-hint: [title] or "batch" # Choice of modes
```
### Handling Optional Arguments
Commands often have different behavior based on whether arguments are provided:
```markdown
---
description: Groom issues. Without argument, reviews all. With argument, grooms specific issue.
argument-hint: [issue-number]
---
# Groom Issues
@~/.claude/skills/gitea/SKILL.md
## If issue number provided ($1):
1. **Fetch the issue** details
2. **Evaluate** against checklist
...
## If no argument (groom all):
1. **List open issues**
2. **Review each** against checklist
...
```
### Multiple Modes
Some commands support distinct modes based on the first argument:
```markdown
---
description: Create issues. Single or batch mode.
argument-hint: [title] or "batch"
---
# Create Issue(s)
@~/.claude/skills/gitea/SKILL.md
## Single Issue (default)
If title provided, create an issue with that title.
## Batch Mode
If $1 is "batch":
1. Ask user for the plan
2. Generate list of issues
3. Show for approval
4. Create each issue
```
## Including Skills
Commands include skills using the `@` file reference syntax. This automatically injects the skill content into the command context when the command is invoked.
### File Reference Syntax
Use the `@` prefix followed by the path to the skill file:
```markdown
# Groom Issues
@~/.claude/skills/gitea/SKILL.md
@~/.claude/skills/backlog-grooming/SKILL.md
@~/.claude/skills/issue-writing/SKILL.md
1. **Fetch the issue** details
2. **Evaluate** against grooming checklist
...
```
When the command runs, the content of each referenced skill file is automatically loaded into context.
### Why File References?
**DO NOT** use phrases like "Use the gitea skill" - skills have only ~20% auto-activation rate. File references guarantee the skill content is available.
| Pattern | Behavior |
|---------|----------|
| `@~/.claude/skills/gitea/SKILL.md` | Content automatically injected |
| "Use the gitea skill" | Relies on auto-activation (~20% success) |
### When to Include Skills
| Include explicitly | Skip skill reference |
|-------------------|---------------------|
| CLI syntax is needed | Well-known commands |
| Core methodology required | Simple operations |
| Quality standards matter | One-off actions |
| Patterns should be followed | No domain knowledge needed |
## Invoking Agents
Commands can spawn agents for complex subtasks that benefit from skill composition or context isolation.
### Spawning Agents
```markdown
For comprehensive backlog review, spawn the **product-manager** agent to:
- Review all open issues
- Categorize by readiness
- Propose improvements
```
### When to Spawn Agents
Spawn an agent when the command needs:
- **Parallel processing**: Multiple independent tasks
- **Context isolation**: Deep exploration that would pollute main context
- **Skill composition**: Multiple skills working together
- **Autonomous operation**: Let the agent figure out details
### Example: Conditional Agent Spawning
```markdown
# Groom Issues
## If no argument (groom all):
For large backlogs (>10 issues), consider spawning the
product-manager agent to handle the review autonomously.
```
## Interactive Patterns
Commands often require user interaction for confirmation, choices, or input.
### Approval Workflows
Always ask for approval before significant actions:
```markdown
5. **Ask for approval** before creating issues
6. **Create issues** in order
```
Common approval points:
- Before creating/modifying resources (issues, PRs, files)
- Before executing destructive operations
- When presenting a plan that will be executed
### Presenting Choices
When the command leads to multiple possible actions:
```markdown
Ask the user what action to take:
- **Merge**: Approve and merge the PR
- **Request changes**: Leave feedback without merging
- **Comment only**: Add a comment for discussion
```
### Gathering Input
Some commands need to gather information from the user:
```markdown
## Batch Mode
If $1 is "batch":
1. **Ask user** for the plan/direction
2. Generate list of issues with titles and descriptions
3. Show for approval
```
### Presenting Results
Commands should clearly show what was done:
```markdown
7. **Update dependencies** with actual issue numbers after creation
8. **Present summary** with links to created issues
```
Good result presentations include:
- Tables for lists of items
- Links for created resources
- Summaries of changes made
- Next step suggestions
## Annotated Examples
Let's examine existing commands to understand effective patterns.
### Example 1: work-issue (Linear Workflow)
```markdown
---
description: Work on a Gitea issue. Fetches issue details and sets up branch.
argument-hint: <issue-number>
---
# Work on Issue #$1
@~/.claude/skills/gitea/SKILL.md
1. **View the issue** to understand requirements
2. **Create a branch**: `git checkout -b issue-$1-<short-kebab-title>`
3. **Plan**: Use TodoWrite to break down the work
4. **Implement** the changes
5. **Commit** with message referencing the issue
6. **Push** the branch to origin
7. **Create PR** with title "[Issue #$1] <title>" and body "Closes #$1"
```
**Key patterns:**
- **Linear workflow**: Clear numbered steps in order
- **Required argument**: `<issue-number>` means must provide
- **Variable substitution**: `$1` used throughout
- **Skill reference**: Uses gitea skill for CLI knowledge
- **Git integration**: Branch and push steps specified
### Example 2: dashboard (No Arguments)
```markdown
---
description: Show dashboard of open issues, PRs awaiting review, and CI status.
---
# Repository Dashboard
@~/.claude/skills/gitea/SKILL.md
Fetch and display:
1. All open issues
2. All open PRs
Format as tables showing issue/PR number, title, and author.
```
**Key patterns:**
- **No argument-hint**: Command takes no arguments
- **Output formatting**: Specifies how to present results
- **Aggregation**: Combines multiple data sources
- **Simple workflow**: Just fetch and display
### Example 3: groom (Optional Argument with Modes)
```markdown
---
description: Groom and improve issues. Without argument, reviews all. With argument, grooms specific issue.
argument-hint: [issue-number]
---
# Groom Issues
@~/.claude/skills/gitea/SKILL.md
@~/.claude/skills/backlog-grooming/SKILL.md
@~/.claude/skills/issue-writing/SKILL.md
## If issue number provided ($1):
1. **Fetch the issue** details
2. **Evaluate** against grooming checklist
3. **Suggest improvements** for:
- Title clarity
- Description completeness
- Acceptance criteria quality
4. **Ask user** if they want to apply changes
5. **Update issue** if approved
## If no argument (groom all):
1. **List open issues**
2. **Review each** against grooming checklist
3. **Categorize**: Ready / Needs work / Stale
4. **Present summary** table
5. **Offer to improve** issues that need work
```
**Key patterns:**
- **Optional argument**: `[issue-number]` with brackets
- **Mode switching**: Different behavior based on argument presence
- **Skill file references**: Uses `@~/.claude/skills/` to include multiple skills
- **Approval workflow**: "Ask user if they want to apply changes"
- **Categorization**: Groups items for presentation
### Example 4: plan-issues (Complex Workflow)
````markdown
---
description: Plan and create issues for a feature. Breaks down work into well-structured issues.
argument-hint: <feature-description>
---
# Plan Feature: $1
@~/.claude/skills/gitea/SKILL.md
@~/.claude/skills/roadmap-planning/SKILL.md
@~/.claude/skills/issue-writing/SKILL.md
1. **Understand the feature**: Analyze what "$1" involves
2. **Explore the codebase** if needed to understand context
3. **Break down** into discrete, actionable issues
4. **Present the plan**:
   ```
   ## Proposed Issues for: $1
   1. [Title] - Brief description
      Dependencies: none
   ...
   ```
5. **Ask for approval** before creating issues
6. **Create issues** in order
7. **Update dependencies** with actual issue numbers
8. **Present summary** with links to created issues
````
**Key patterns:**
- **Multi-skill composition**: Includes three skills via `@~/.claude/skills/`
- **Codebase exploration**: May need to understand context
- **Structured output**: Template for presenting the plan
- **Two-phase execution**: Plan first, then execute after approval
- **Dependency management**: Creates issues in order, updates references
### Example 5: review-pr (Action Choices)
```markdown
---
description: Review a Gitea pull request. Fetches PR details, diff, and comments.
argument-hint: <pr-number>
---
# Review PR #$1
@~/.claude/skills/gitea/SKILL.md
1. **View PR details** including description and metadata
2. **Get the diff** to review the changes
Review the changes and provide feedback on:
- Code quality
- Potential bugs
- Test coverage
- Documentation
Ask the user what action to take:
- **Merge**: Approve and merge the PR
- **Request changes**: Leave feedback without merging
- **Comment only**: Add a comment for discussion
```
**Key patterns:**
- **Information gathering**: Fetches context before analysis
- **Review criteria**: Checklist of what to examine
- **Action menu**: Clear choices with explanations
- **User decides outcome**: Command presents options, user chooses
## Naming Conventions
### Command File Names
- Use **kebab-case**: `work-issue.md`, `plan-issues.md`
- Use **verbs or verb phrases**: Commands are actions
- Be **concise**: 1-3 words is ideal
- Match the **invocation**: `/work-issue` → `work-issue.md`
Good names:
- `work-issue` - Action + target
- `dashboard` - What it shows
- `review-pr` - Action + target
- `plan-issues` - Action + target
- `groom` - Action (target implied)
Avoid:
- `issue-work` - Noun-first is awkward
- `do-stuff` - Too vague
- `manage-issues-and-prs` - Too long
### Command Titles
The H1 title can be more descriptive than the filename:
| Filename | Title |
|----------|-------|
| `work-issue.md` | Work on Issue #$1 |
| `dashboard.md` | Repository Dashboard |
| `plan-issues.md` | Plan Feature: $1 |
## Best Practices
### 1. Design Clear Workflows
Each step should be unambiguous:
**Vague:**
```markdown
1. Handle the issue
2. Do the work
3. Finish up
```
**Clear:**
```markdown
1. **View the issue** to understand requirements
2. **Create a branch**: `git checkout -b issue-$1-<title>`
3. **Plan**: Use TodoWrite to break down the work
```
### 2. Show Don't Tell
Include actual commands and expected outputs:
**Telling:**
```markdown
List the open issues.
```
**Showing:**
```markdown
Fetch all open issues and format as table:
| # | Title | Author |
|---|-------|--------|
```
### 3. Always Ask Before Acting
Never modify resources without user approval:
```markdown
4. **Present plan** for approval
5. **If approved**, create the issues
```
### 4. Handle Edge Cases
Consider what happens when things are empty or unexpected:
```markdown
## If no argument (groom all):
1. **List open issues**
2. If no issues found, report "No open issues to groom"
3. Otherwise, **review each** against checklist
```
### 5. Provide Helpful Output
End with useful information:
```markdown
8. **Present summary** with:
- Links to created issues
- Dependency graph
- Suggested next steps
```
### 6. Keep Commands Focused
One command = one workflow. If doing multiple unrelated things, split into separate commands.
**Too broad:**
```markdown
# Manage Everything
Handle issues, PRs, deployments, and documentation...
```
**Focused:**
```markdown
# Review PR #$1
Review and take action on a pull request...
```
## When to Create a Command
Create a command when you have:
1. **Repeatable workflow**: Same steps used multiple times
2. **User-initiated action**: User explicitly triggers it
3. **Clear start and end**: Workflow has defined boundaries
4. **Consistent behavior needed**: Should work the same every time
### Signs You Need a New Command
- You're explaining the same workflow repeatedly
- Users would benefit from a single invocation
- Multiple tools need orchestration
- Approval checkpoints are needed
### Signs You Don't Need a Command
- It's a one-time action
- No workflow orchestration needed
- A skill reference is sufficient
- An agent could handle it autonomously
## Command Lifecycle
### 1. Design
Define the workflow:
- What triggers it?
- What arguments does it need?
- What steps are involved?
- Where are approval points?
- What does success look like?
### 2. Implement
Create the command file:
- Clear frontmatter
- Step-by-step workflow
- Skill references where needed
- Approval checkpoints
- Output formatting
### 3. Test
Verify the workflow:
- Run with typical arguments
- Test edge cases (no args, invalid args)
- Confirm approval points work
- Check output formatting
### 4. Document
Update references:
- Add to ARCHITECTURE.md table
- Update README if user-facing
- Note any skill/agent dependencies
## Checklist: Before Submitting a New Command
- [ ] File is at `commands/<name>.md`
- [ ] Name follows kebab-case verb convention
- [ ] Frontmatter includes description
- [ ] Frontmatter includes argument-hint (if arguments needed)
- [ ] Workflow steps are clear and numbered
- [ ] Commands and tools are specified explicitly
- [ ] Skills are included via `@~/.claude/skills/<name>/SKILL.md` file references
- [ ] Approval points exist before significant actions
- [ ] Edge cases are handled (no data, invalid input)
- [ ] Output formatting is specified
- [ ] ARCHITECTURE.md is updated with new command
## See Also
- [ARCHITECTURE.md](../ARCHITECTURE.md): How commands fit into the overall system
- [writing-skills.md](writing-skills.md): Creating skills that commands reference
- [writing-agents.md](writing-agents.md): Creating agents that commands spawn
- [VISION.md](../VISION.md): The philosophy behind composable components

View File

@@ -1,513 +0,0 @@
# Writing Skills
A guide to creating reusable knowledge modules for the Claude Code AI workflow system.
> **Official Documentation**: For the most up-to-date information, see https://code.claude.com/docs/en/skills
## What is a Skill?
Skills are **model-invoked knowledge modules**—Claude automatically applies them when your request matches their description. Unlike commands (which require explicit `/command` invocation), skills are triggered automatically based on semantic matching.
## YAML Frontmatter (Required)
Every `SKILL.md` file **must** start with YAML frontmatter. This is how Claude discovers and triggers skills.
### Format Requirements
- Must start with `---` on **line 1** (no blank lines before it)
- Must end with `---` before the markdown content
- Use spaces for indentation (not tabs)
### Required Fields
| Field | Required | Description |
|-------|----------|-------------|
| `name` | **Yes** | Lowercase letters, numbers, and hyphens only (max 64 chars). Should match directory name. |
| `description` | **Yes** | What the skill does and when to use it (max 1024 chars). **This is critical for triggering.** |
### Optional Fields
| Field | Description |
|-------|-------------|
| `allowed-tools` | **Restricts** which tools Claude can use when this skill is active. If omitted, no restrictions apply. |
| `model` | Specific model to use when skill is active (e.g., `claude-sonnet-4-20250514`). |
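Both optional fields can appear alongside the required ones; a minimal sketch (the `allowed-tools` list is illustrative, and the model identifier is the example from the table above):

```yaml
---
name: gitea
description: View, create, and manage Gitea issues and pull requests using tea CLI. Use when working with issues, PRs, or when the user mentions tea, gitea, or issue numbers.
allowed-tools: Bash, Read
model: claude-sonnet-4-20250514
---
```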
### Writing Effective Descriptions
The `description` field determines when Claude applies the skill. A good description answers:
1. **What does this skill do?** List specific capabilities.
2. **When should Claude use it?** Include trigger terms users would mention.
**Bad (too vague):**
```yaml
description: Helps with documents
```
**Good (specific with trigger terms):**
```yaml
description: View, create, and manage Gitea issues and pull requests using tea CLI. Use when working with issues, PRs, viewing issue details, creating pull requests, adding comments, merging PRs, or when the user mentions tea, gitea, issue numbers, or PR numbers.
```
### Example Frontmatter
```yaml
---
name: gitea
description: View, create, and manage Gitea issues and pull requests using tea CLI. Use when working with issues, PRs, viewing issue details, creating pull requests, or when the user mentions tea, gitea, or issue numbers.
---
# Gitea CLI (tea)
[Rest of skill content...]
```
## Subagents and Skills
Subagents **do not automatically inherit skills** from the main conversation. To give a subagent access to skills, list them in the agent's `skills` field:
```yaml
---
name: code-reviewer
description: Review code for quality and best practices
skills: gitea, code-review
---
```
## File Structure
Skills live in the `skills/` directory, each in its own folder:
```
skills/
├── gitea/
│ └── SKILL.md
├── issue-writing/
│ └── SKILL.md
├── backlog-grooming/
│ └── SKILL.md
└── roadmap-planning/
└── SKILL.md
```
### Why SKILL.md?
The uppercase `SKILL.md` filename:
- Makes the skill file immediately visible in directory listings
- Follows a consistent convention across all skills
- Clearly identifies the primary file in a skill folder
### Supporting Files (Optional)
A skill folder can contain additional files if needed:
```
skills/
└── complex-skill/
├── SKILL.md # Main skill document (required)
├── templates/ # Template files
│ └── example.md
└── examples/ # Extended examples
└── case-study.md
```
However, prefer keeping everything in `SKILL.md` when possible—it's easier to maintain and reference.
## Skill Document Structure
A well-structured `SKILL.md` follows this pattern:
```markdown
# Skill Name
Brief description of what this skill covers.
## Core Concepts
Explain the fundamental ideas Claude needs to understand.
## Patterns and Templates
Provide reusable structures and formats.
## Guidelines
List rules, best practices, and quality standards.
## Examples
Show concrete illustrations of the skill in action.
## Common Mistakes
Document pitfalls to avoid.
## Reference
Quick-reference tables, checklists, or commands.
```
Not every skill needs all sections—include what's relevant. Some skills are primarily patterns (like `issue-writing`), others are reference-heavy (like `gitea`).
## How Skills are Discovered and Triggered
Skills are **model-invoked**: Claude decides which skills to use based on your request.
### Discovery Process
1. **At startup**: Claude loads only the `name` and `description` of each available skill
2. **On request**: Claude matches your request against skill descriptions using semantic similarity
3. **Activation**: When a match is found, Claude asks to use the skill before loading the full content
### Subagent Access
Subagents (defined in `.claude/agents/`) must explicitly list which skills they can use:
```yaml
---
name: product-manager
description: Manages backlog and roadmap
skills: gitea, issue-writing, backlog-grooming, roadmap-planning
---
```
**Important**: Built-in agents and the Task tool do not have access to skills. Only custom subagents with an explicit `skills` field can use them.
### Skills Can Reference Other Skills
Skills can mention other skills for related knowledge:
```markdown
# Roadmap Planning
...
When creating issues, follow the patterns in the **issue-writing** skill.
Use **gitea** commands to create the issues.
```
This creates a natural knowledge hierarchy without duplicating content.
## Naming Conventions
### Skill Folder Names
- Use **kebab-case**: `issue-writing`, `backlog-grooming`
- Be **descriptive**: name should indicate the skill's domain
- Be **concise**: 2-3 words is ideal
- Avoid generic names: `utils`, `helpers`, `common`
Good names:
- `gitea` - Tool-specific knowledge
- `issue-writing` - Activity-focused
- `backlog-grooming` - Process-focused
- `roadmap-planning` - Task-focused
### Skill Titles
The H1 title in `SKILL.md` should match the folder name in Title Case:
| Folder | Title |
|--------|-------|
| `gitea` | Gitea CLI (tea) |
| `issue-writing` | Issue Writing |
| `backlog-grooming` | Backlog Grooming |
| `roadmap-planning` | Roadmap Planning |
## Best Practices
### 1. Keep Skills Focused
Each skill should cover **one domain, one concern**. If your skill document is getting long or covers multiple unrelated topics, consider splitting it.
**Too broad:**
```markdown
# Project Management
How to manage issues, PRs, releases, and documentation...
```
**Better:**
```markdown
# Issue Writing
How to write clear, actionable issues.
```
### 2. Be Specific, Not Vague
Provide concrete patterns, not abstract principles.
**Vague:**
```markdown
## Writing Good Titles
Titles should be clear and descriptive.
```
**Specific:**
```markdown
## Writing Good Titles
- Start with action verb: "Add", "Fix", "Update", "Remove"
- Be specific: "Add user authentication" not "Auth stuff"
- Keep under 60 characters
```
### 3. Include Actionable Examples
Every guideline should have an example showing what it looks like in practice.
```markdown
### Acceptance Criteria
Good criteria are:
- **Specific**: "User sees error message" not "Handle errors"
- **Testable**: Can verify pass/fail
- **User-focused**: What the user experiences
Examples:
- [ ] Login form validates email format before submission
- [ ] Invalid credentials show "Invalid email or password" message
- [ ] Successful login redirects to dashboard
```
### 4. Use Templates for Repeatability
When the skill involves creating structured content, provide copy-paste templates:
```markdown
### Feature Request Template
\```markdown
## Summary
What feature and why it's valuable.
## Acceptance Criteria
- [ ] Criterion 1
- [ ] Criterion 2
## Context
Additional background or references.
\```
```
### 5. Include Checklists for Verification
Checklists help ensure consistent quality:
```markdown
## Grooming Checklist
For each issue, verify:
- [ ] Starts with action verb
- [ ] Has acceptance criteria
- [ ] Scope is clear
- [ ] Dependencies identified
```
### 6. Document Common Mistakes
Help avoid pitfalls by documenting what goes wrong:
```markdown
## Common Mistakes
### Vague Titles
- Bad: "Fix bug"
- Good: "Fix login form validation on empty email"
### Missing Acceptance Criteria
Every issue needs specific, testable criteria.
```
### 7. Keep It Current
Skills should reflect current practices. When workflows change:
- Update the skill document
- Remove obsolete patterns
- Add new best practices
## Annotated Examples
Let's examine the existing skills to understand effective patterns.
### Example 1: gitea (Tool Reference)
The `gitea` skill is a **tool reference**—it documents how to use a specific CLI tool.
```markdown
# Gitea CLI (tea)
Command-line interface for interacting with Gitea repositories.
## Authentication
The `tea` CLI authenticates via `tea login add`. Credentials are stored locally.
## Common Commands
### Issues
\```bash
# List issues
tea issue search -s open # Open issues
tea issue search -s closed # Closed issues
...
\```
```
**Key patterns:**
- Organized by feature area (Issues, Pull Requests, Repository)
- Includes actual command syntax with comments
- Covers common use cases, not exhaustive documentation
- Tips section for non-obvious behaviors
### Example 2: issue-writing (Process Knowledge)
The `issue-writing` skill is **process knowledge**—it teaches how to do something well.
```markdown
# Issue Writing
How to write clear, actionable issues.
## Issue Structure
### Title
- Start with action verb: "Add", "Fix", "Update", "Remove"
- Be specific: "Add user authentication" not "Auth stuff"
- Keep under 60 characters
### Description
\```markdown
## Summary
One paragraph explaining what and why.
## Acceptance Criteria
- [ ] Specific, testable requirement
...
\```
```
**Key patterns:**
- Clear guidelines with specific rules
- Templates for different issue types
- Good/bad examples for each guideline
- Covers the full lifecycle (structure, criteria, labels, dependencies)
### Example 3: backlog-grooming (Workflow Checklist)
The `backlog-grooming` skill is a **workflow checklist**—it provides a systematic process.
```markdown
# Backlog Grooming
How to review and improve existing issues.
## Grooming Checklist
For each issue, verify:
### 1. Title Clarity
- [ ] Starts with action verb
- [ ] Specific and descriptive
- [ ] Understandable without reading description
...
```
**Key patterns:**
- Structured as a checklist with categories
- Each item is a yes/no verification
- Includes workflow steps (Grooming Workflow section)
- Questions to guide decision-making
### Example 4: roadmap-planning (Strategy Guide)
The `roadmap-planning` skill is a **strategy guide**—it teaches how to think about a problem.
```markdown
# Roadmap Planning
How to plan features and create issues for implementation.
## Planning Process
### 1. Understand the Goal
- What capability or improvement is needed?
- Who benefits and how?
- What are the success criteria?
### 2. Break Down the Work
- Identify distinct components
- Define boundaries between pieces
...
```
**Key patterns:**
- Process-oriented with numbered steps
- Multiple breakdown strategies (by layer, by user story, by component)
- Concrete examples showing the pattern applied
- Questions to guide planning decisions
## When to Create a New Skill
Create a skill when you find yourself:
1. **Explaining the same concepts repeatedly** across different conversations
2. **Wanting consistent quality** in a specific area
3. **Building up domain expertise** that should persist
4. **Needing a reusable reference** for commands or agents
### Signs You Need a New Skill
- You're copy-pasting the same guidelines
- Multiple commands need the same knowledge
- Quality is inconsistent without explicit guidance
- There's a clear domain that doesn't fit existing skills
### Signs You Don't Need a New Skill
- The knowledge is only used once
- It's already covered by an existing skill
- It's too generic to be actionable
- It's better as part of a command's instructions
## Skill Lifecycle
### 1. Draft
Start with the essential content:
- Core patterns and templates
- Key guidelines
- A few examples
### 2. Refine
As you use the skill, improve it:
- Add examples from real usage
- Clarify ambiguous guidelines
- Remove unused content
### 3. Maintain
Keep skills current:
- Update when practices change
- Remove obsolete patterns
- Add newly discovered best practices
## Checklist: Before Submitting a New Skill
### Frontmatter (Critical)
- [ ] YAML frontmatter starts on line 1 (no blank lines before `---`)
- [ ] `name` field uses lowercase letters, numbers, and hyphens only
- [ ] `name` matches the directory name
- [ ] `description` lists specific capabilities
- [ ] `description` includes "Use when..." with trigger terms
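The frontmatter rules above can be checked mechanically before submitting. A sketch of such a check (this script is illustrative, not part of the repo tooling):

```shell
# Verify the critical frontmatter rules for a SKILL.md file:
# `---` on line 1, a valid `name`, and a `description` field.
check_frontmatter() {
  file="$1"
  [ "$(head -n 1 "$file")" = "---" ] \
    || { echo "FAIL: frontmatter must start on line 1"; return 1; }
  name=$(sed -n 's/^name: *//p' "$file" | head -n 1)
  echo "$name" | grep -Eq '^[a-z0-9-]{1,64}$' \
    || { echo "FAIL: invalid name '$name'"; return 1; }
  grep -q '^description: ' "$file" \
    || { echo "FAIL: missing description"; return 1; }
  echo "OK"
}
```

Run it as `check_frontmatter skills/<name>/SKILL.md`; it does not validate the "Use when..." guidance, only the structural rules.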
### File Structure
- [ ] File is at `skills/<name>/SKILL.md`
- [ ] Name follows kebab-case convention
### Content Quality
- [ ] Skill focuses on a single domain
- [ ] Guidelines are specific and actionable
- [ ] Examples illustrate each major point
- [ ] Templates are provided where appropriate
- [ ] Common mistakes are documented
### Integration
- [ ] Skill is listed in relevant subagent `skills` fields if needed
## See Also
- [ARCHITECTURE.md](../ARCHITECTURE.md): How skills fit into the overall system
- [VISION.md](../VISION.md): The philosophy behind composable components

View File

@@ -4,7 +4,7 @@ This folder captures learnings from retrospectives and day-to-day work. Learning
 1. **Historical record**: What we learned and when
 2. **Governance reference**: Why we work the way we do
-3. **Encoding source**: Input that gets encoded into skills, commands, and agents
+3. **Encoding source**: Input that gets encoded into skills and agents
 ## The Learning Flow
@@ -17,7 +17,7 @@ Experience → Learning captured → Encoded into system → Knowledge is action
 - Periodic review
 ```
-Learnings are **not** the final destination. They are inputs that get encoded into commands, skills, and agents where Claude can actually use them. But we keep the learning file as a record of *why* we encoded what we did.
+Learnings are **not** the final destination. They are inputs that get encoded into skills and agents where Claude can actually use them. But we keep the learning file as a record of *why* we encoded what we did.
 ## Writing a Learning
@@ -40,8 +40,7 @@ The insight we gained. Be specific and actionable.
 Where this learning has been (or will be) encoded:
 - `skills/xxx/SKILL.md` - What was added/changed
-- `commands/xxx.md` - What was added/changed
-- `agents/xxx/agent.md` - What was added/changed
+- `agents/xxx/AGENT.md` - What was added/changed
 If not yet encoded, note: "Pending: Issue #XX"
@@ -54,7 +53,7 @@ What this learning means for how we work going forward. This is the "why" that j
 1. **Capture the learning** in this folder
 2. **Create an issue** to encode it into the appropriate location
-3. **Update the skill/command/agent** with the encoded knowledge
+3. **Update the skill/agent** with the encoded knowledge
 4. **Update the learning file** with the "Encoded In" references
 The goal: Claude should be able to *use* the learning, not just *read* about it.
@@ -63,8 +62,8 @@ The goal: Claude should be able to *use* the learning, not just *read* about it.
 | Learning Type | Encode In |
 |---------------|-----------|
-| How to use a tool | `skills/` |
-| Workflow improvement | `commands/` |
+| How to use a tool | `skills/` (background skill) |
+| Workflow improvement | `skills/` (user-invocable skill) |
 | Subtask behavior | `agents/` |
 | Organization belief | `manifesto.md` |
 | Product direction | `vision.md` (in product repo) |

View File

@@ -2,32 +2,41 @@
 ## Who We Are
-We are a small, focused team of AI-native builders. We believe the future of software development is human-AI collaboration, and we're building the tools and practices to make that real.
-We move fast with intention. We value quality over quantity. We encode our knowledge into systems that amplify what we can accomplish.
+We are a small, focused team building tools that make work easier. We believe software should support business processes without requiring everyone to become a developer. We build in public - sharing our AI-augmented development practices, tools, and learnings with the developer community.
 ## Who We Serve
-### Solo Developer
-The individual shipping side projects, MVPs, or freelance work. Time is their scarcest resource. They context-switch between coding, design, ops, and everything else. They need to move fast without sacrificing quality, and they can't afford to remember every command or best practice.
-### Small Team (2-5 people)
-The startup or small product team that needs to punch above their weight. They don't have dedicated specialists for every function. They need consistency across contributors and visibility into what's happening without heavyweight process.
-### Agency / Consultancy
-Building for clients under deadlines. They need speed, consistency, and the ability to apply learnings across projects. Every efficiency gain multiplies across engagements.
+### Domain Experts
+Business analysts, operations managers, process owners - people who understand their domain deeply but shouldn't need to code. They want to create and evolve software solutions that support their processes directly, without waiting for IT or hiring developers.
+### Agencies & Consultancies
+Teams building solutions for clients using our platform. They need speed, consistency, and the ability to deliver maintainable solutions across engagements. Every efficiency gain multiplies across projects.
+### Organizations
+From small businesses to enterprises - any organization that needs maintainable software to support their business processes. They benefit from solutions built on our platform, whether created by their own domain experts or by agencies on their behalf.
 ## What They're Trying to Achieve
-- "Help me ship without getting bogged down in repetitive tasks"
-- "Help me maintain quality without slowing down"
-- "Help me know what to work on next without checking multiple tools"
-- "Help me apply best practices without memorizing them"
-- "Help me onboard to codebases faster"
-- "Help me stay in flow instead of context-switching"
+- "Help me create software that supports my business process without learning to code"
+- "Help me evolve my solutions as my business changes"
+- "Help me deliver maintainable solutions to clients faster"
+- "Help me get software that actually fits how we work"
+- "Help me reduce dependency on developers for business process changes"
 ## What We Believe
+### Empowering Domain Experts
+We believe the people closest to business problems should be able to solve them:
+- **Domain expertise matters most.** The person who understands the process deeply is better positioned to design the solution than a developer translating requirements.
+- **Low-code removes barriers.** When domain experts can create and evolve solutions directly, organizations move faster and get better-fitting software.
+- **Maintainability enables evolution.** Business processes change. Software that supports them must be easy to adapt without starting over.
+- **Technology should disappear.** The best tools get out of the way. Domain experts should think about their processes, not about technology.
 ### AI-Augmented Development
 We believe AI fundamentally changes how software is built:
@@ -42,6 +51,20 @@ We believe AI fundamentally changes how software is built:
 - **Iteration speed is a competitive advantage.** The faster you can go from idea to deployed code to learning, the faster you improve. AI collapses the feedback loop.
+### Architecture Beliefs
+We believe certain outcomes matter more than others when building systems:
+- **Auditability by default.** Systems should remember what happened, not just current state. History is valuable - for debugging, compliance, understanding, and recovery.
+- **Business language in code.** The words domain experts use should appear in the codebase. When code mirrors how the business thinks, everyone can reason about it.
+- **Independent evolution.** Parts of the system should change without breaking other parts. Loose coupling isn't just nice - it's how small teams stay fast as systems grow.
+- **Explicit over implicit.** Intent should be visible. Side effects should be traceable. When something important happens, the system should make that obvious.
+See [software-architecture.md](./software-architecture.md) for the patterns we use to achieve these outcomes.
 ### Quality Without Ceremony
 - Ship small, ship often
@@ -55,6 +78,13 @@ We believe AI fundamentally changes how software is built:
 - Automation should free humans for judgment calls
 - The goal is flow, not burnout
+### Resource Efficiency
+- Software should run well on modest hardware
+- Cloud cost and energy consumption matter
+- ARM64-native where possible - better performance per watt
+- Bloated software is a sign of poor engineering, not rich features
 ## Guiding Principles
 1. **Encode, don't document.** If something is important enough to write down, it's important enough to encode into a skill, command, or agent that can act on it.
@@ -69,10 +99,10 @@ We believe AI fundamentally changes how software is built:
 ## Non-Goals
-- **Building for enterprises with complex compliance needs.** We optimize for speed and small teams, not audit trails and approval workflows.
+- **Replacing human judgment.** AI and low-code tools augment human decision-making; they don't replace it. Domain expertise, critical thinking, and understanding of business context remain human responsibilities.
 - **Supporting every tool and platform.** We go deep on our chosen stack rather than shallow on everything.
-- **Replacing developer judgment.** AI augments human decision-making; it doesn't replace it. Critical thinking, architecture decisions, and user empathy remain human responsibilities.
+- **Building generic software.** We focus on maintainable solutions for business processes, not general-purpose applications.
 - **Comprehensive documentation for its own sake.** We encode knowledge into actionable systems. Docs exist to explain the "why," not to duplicate what the system already does.

View File

@@ -0,0 +1,140 @@
---
name: code-reviewer
description: Automated code review of pull requests. Reviews PRs for quality, bugs, security, style, and test coverage. Spawn after PR creation or for on-demand review.
# Model: sonnet provides good code understanding for review tasks.
# The structured output format doesn't require opus-level reasoning.
model: sonnet
skills: gitea, code-review
disallowedTools:
- Edit
- Write
---
You are a code review specialist that provides immediate, structured feedback on pull request changes.
## When Invoked
You will receive a PR number to review. You may also receive:
- `WORKTREE_PATH`: (Optional) If provided, work directly in this directory instead of checking out locally
- `REPO_PATH`: Path to the main repository (use if `WORKTREE_PATH` not provided)
Follow this process:
1. Fetch PR diff:
- If `WORKTREE_PATH` provided: `cd <WORKTREE_PATH>` and `git diff origin/main...HEAD`
- If `WORKTREE_PATH` not provided: `tea pulls checkout <number>` then `git diff main...HEAD`
2. Detect and run project linter (see Linter Detection below)
3. Analyze the diff for issues in these categories:
- **Code Quality**: Readability, maintainability, complexity
- **Bugs**: Logic errors, edge cases, null checks
- **Security**: Injection vulnerabilities, auth issues, data exposure
- **Lint Issues**: Linter warnings and errors (see below)
- **Test Coverage**: Missing tests, untested edge cases
4. Generate a structured review comment
5. Post the review using `tea comment <number> "<review body>"`
- **WARNING**: Do NOT use heredoc syntax `$(cat <<'EOF'...)` with `tea comment` - it causes the command to be backgrounded and fail silently
- Keep comments concise or use literal newlines in quoted strings
6. **If verdict is LGTM**: Merge with `tea pulls merge <number> --style rebase`, then clean up with `tea pulls clean <number>`
7. **If verdict is NOT LGTM**: Do not merge; leave for the user to address
## Linter Detection
Detect the project linter by checking for configuration files. Run the linter on changed files only.
### Detection Order
Check for these files in the repository root to determine the linter:
| File(s) | Language | Linter Command |
|---------|----------|----------------|
| `.eslintrc*`, `eslint.config.*` | JavaScript/TypeScript | `npx eslint <files>` |
| `pyproject.toml` with `[tool.ruff]` | Python | `ruff check <files>` |
| `ruff.toml`, `.ruff.toml` | Python | `ruff check <files>` |
| `setup.cfg` with `[flake8]` | Python | `flake8 <files>` |
| `.pylintrc`, `pylintrc` | Python | `pylint <files>` |
| `go.mod` | Go | `golangci-lint run <files>` or `go vet <files>` |
| `Cargo.toml` | Rust | `cargo clippy -- -D warnings` |
| `.rubocop.yml` | Ruby | `rubocop <files>` |
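The detection order above can be sketched as a first-match-wins check (this helper is illustrative and covers only a few rows of the table):

```shell
# Return true if any of the given glob expansions names an existing file.
has_glob() { for f in "$@"; do [ -e "$f" ] && return 0; done; return 1; }

# First-match-wins linter detection over a repository root.
detect_linter() {
  dir="$1"
  if has_glob "$dir"/.eslintrc* "$dir"/eslint.config.*; then echo "eslint"
  elif [ -f "$dir/pyproject.toml" ] && grep -q '^\[tool\.ruff\]' "$dir/pyproject.toml"; then echo "ruff"
  elif [ -f "$dir/ruff.toml" ] || [ -f "$dir/.ruff.toml" ]; then echo "ruff"
  elif [ -f "$dir/go.mod" ]; then echo "golangci-lint"
  elif [ -f "$dir/Cargo.toml" ]; then echo "clippy"
  else echo "none"
  fi
}
```

The remaining rows (flake8, pylint, rubocop) follow the same pattern.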
### Getting Changed Files
Get the list of changed files in the PR:
```bash
git diff --name-only main...HEAD
```
Filter to only files matching the linter's language extension.
### Running the Linter
1. Only lint files that were changed in the PR
2. Capture both stdout and stderr
3. If linter is not installed, note this in the review (non-blocking)
4. If no linter config is detected, skip linting and note "No linter configured"
### Example
```bash
# Get changed TypeScript files
changed_files=$(git diff --name-only main...HEAD | grep -E '\.(ts|tsx|js|jsx)$')
# Run ESLint if files exist
if [ -n "$changed_files" ]; then
npx eslint $changed_files 2>&1
fi
```
## Review Comment Format
Post reviews in this structured format:
```markdown
## AI Code Review
> This is an automated review generated by the code-reviewer agent.
### Summary
[Brief overall assessment]
### Findings
#### Code Quality
- [Finding 1]
- [Finding 2]
#### Potential Bugs
- [Finding or "No issues found"]
#### Security Concerns
- [Finding or "No issues found"]
#### Lint Issues
- [Linter output or "No lint issues" or "No linter configured"]
Note: Lint issues are stylistic and formatting concerns detected by automated tools.
They are separate from logic bugs and security vulnerabilities.
#### Test Coverage
- [Finding or "Adequate coverage"]
### Verdict
[LGTM / Needs Changes / Blocking Issues]
```
## Verdict Criteria
- **LGTM**: No blocking issues, code meets quality standards, ready to merge
- **Needs Changes**: Minor issues worth addressing before merge (including lint issues)
- **Blocking Issues**: Security vulnerabilities, logic errors, or missing critical functionality
**Note**: Lint issues alone should result in "Needs Changes" at most, never "Blocking Issues".
Lint issues are style/formatting concerns, not functional problems.
## Guidelines
- Be specific: Reference exact lines and explain *why* something is an issue
- Be constructive: Suggest alternatives when pointing out problems
- Be kind: Distinguish between blocking issues and suggestions
- Acknowledge good solutions when you see them
- Clearly separate lint issues from logic/security issues in your feedback

View File

@@ -0,0 +1,150 @@
---
name: issue-worker
description: Autonomous agent that implements a single issue in an isolated git worktree
# Model: sonnet provides balanced speed and capability for implementation tasks.
# Implementation work benefits from good code understanding without requiring
# opus-level reasoning. Faster iteration through the implement-commit-review cycle.
model: sonnet
tools: Bash, Read, Write, Edit, Glob, Grep, TodoWrite
skills: gitea, issue-writing, software-architecture
---
# Issue Worker Agent
Autonomously implements a single issue in an isolated git worktree. Creates a PR and returns - the orchestrator handles review.
## Input
You will receive:
- `ISSUE_NUMBER`: The issue number to work on
- `REPO_PATH`: Absolute path to the main repository
- `REPO_NAME`: Name of the repository (for worktree naming)
- `WORKTREE_PATH`: (Optional) Absolute path to pre-created worktree. If provided, agent works directly in this directory. If not provided, agent creates its own worktree as a sibling directory.
## Process
### 1. Setup Worktree
If `WORKTREE_PATH` was provided:
```bash
# Use the pre-created worktree
cd <WORKTREE_PATH>
```
If `WORKTREE_PATH` was NOT provided (backward compatibility):
```bash
# Fetch latest from origin
cd <REPO_PATH>
git fetch origin
# Get issue details to create branch name
tea issues <ISSUE_NUMBER>
# Create worktree with new branch from main
git worktree add ../<REPO_NAME>-issue-<ISSUE_NUMBER> -b issue-<ISSUE_NUMBER>-<kebab-title> origin/main
# Move to worktree
cd ../<REPO_NAME>-issue-<ISSUE_NUMBER>
```
### 2. Understand the Issue
```bash
tea issues <ISSUE_NUMBER> --comments
```
Read the issue carefully:
- Summary: What needs to be done
- Acceptance criteria: Definition of done
- Context: Background information
- Comments: Additional discussion
### 3. Plan and Implement
Use TodoWrite to break down the acceptance criteria into tasks.
Implement each task:
- Read existing code before modifying
- Make focused, minimal changes
- Follow existing patterns in the codebase
### 4. Commit and Push
```bash
git add -A
git commit -m "<descriptive message>
Closes #<ISSUE_NUMBER>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>"
git push -u origin issue-<ISSUE_NUMBER>-<kebab-title>
```
### 5. Create PR
```bash
tea pulls create \
--title "[Issue #<ISSUE_NUMBER>] <issue-title>" \
--description "## Summary
<brief description of changes>
## Changes
- <change 1>
- <change 2>
Closes #<ISSUE_NUMBER>"
```
Capture the PR number from the output (e.g., "Pull Request #42 created").
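A hedged sketch of that capture, assuming tea's "Pull Request #NN created" wording (the `output` value below stands in for the real command output):

```shell
# Stand-in for the captured output of `tea pulls create`:
output="Pull Request #42 created"
# Extract the first "#NN" token and drop the "#".
PR_NUMBER=$(echo "$output" | grep -o '#[0-9][0-9]*' | head -n 1 | tr -d '#')
echo "$PR_NUMBER"
```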
### 6. Cleanup Worktree
If `WORKTREE_PATH` was provided:
```bash
# Orchestrator will handle cleanup - no action needed
# Just ensure git is clean
cd <WORKTREE_PATH>
git status
```
If `WORKTREE_PATH` was NOT provided (backward compatibility):
```bash
cd <REPO_PATH>
git worktree remove ../<REPO_NAME>-issue-<ISSUE_NUMBER> --force
```
### 7. Final Summary
**IMPORTANT**: Your final output must be a concise summary for the orchestrator:
```
ISSUE_WORKER_RESULT
issue: <ISSUE_NUMBER>
pr: <PR_NUMBER>
branch: <branch-name>
status: <success|partial|failed>
title: <issue title>
summary: <1-2 sentence description of changes>
```
This format is parsed by the orchestrator. Do NOT include verbose logs - only this summary.
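For illustration, an orchestrator could pull individual fields out of that block like this (the `result` value is a made-up example):

```shell
# Made-up example of a worker's final summary:
result="ISSUE_WORKER_RESULT
issue: 7
pr: 42
branch: issue-7-add-dark-mode
status: success"
# One sed per field: print only lines starting with the key, minus the key.
pr=$(echo "$result" | sed -n 's/^pr: //p')
status=$(echo "$result" | sed -n 's/^status: //p')
echo "$pr $status"
```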
## Important Guidelines
- **Work autonomously**: Make reasonable judgment calls on ambiguous requirements
- **Don't ask questions**: You cannot interact with the user
- **Note blockers**: If something blocks you, document it in the PR description
- **Always cleanup**: Remove the worktree when done, regardless of success/failure
- **Minimal changes**: Only change what's necessary to complete the issue
- **Follow patterns**: Match existing code style and conventions
- **Follow architecture**: Apply patterns from software-architecture skill, check vision.md for project-specific choices
## Error Handling
If you encounter an error:
1. Try to recover if possible
2. If unrecoverable, create a PR with partial work and explain the blocker
3. Always run the cleanup step
4. Report status as "partial" or "failed" in summary

View File

@@ -0,0 +1,158 @@
---
name: pr-fixer
description: Autonomous agent that addresses PR review feedback in an isolated git worktree
# Model: sonnet provides balanced speed and capability for addressing feedback.
# Similar to issue-worker, pr-fixer benefits from good code understanding
# without requiring opus-level reasoning. Quick iteration on review feedback.
model: sonnet
tools: Bash, Read, Write, Edit, Glob, Grep, TodoWrite, Task
skills: gitea, code-review
---
# PR Fixer Agent
Autonomously addresses review feedback on a pull request in an isolated git worktree.
## Input
You will receive:
- `PR_NUMBER`: The PR number to fix
- `REPO_PATH`: Absolute path to the main repository
- `REPO_NAME`: Name of the repository (for worktree naming)
- `WORKTREE_PATH`: (Optional) Absolute path to pre-created worktree. If provided, agent works directly in this directory. If not provided, agent creates its own worktree as a sibling directory.
## Process
### 1. Get PR Details and Setup Worktree
If `WORKTREE_PATH` was provided:
```bash
# Use the pre-created worktree
cd <WORKTREE_PATH>
# Get PR info and review comments
tea pulls <PR_NUMBER> --comments
```
If `WORKTREE_PATH` was NOT provided (backward compatibility):
```bash
cd <REPO_PATH>
git fetch origin
# Get PR info including branch name
tea pulls <PR_NUMBER>
# Get review comments
tea pulls <PR_NUMBER> --comments
# Create worktree from the PR branch
git worktree add ../<REPO_NAME>-pr-<PR_NUMBER> origin/<branch-name>
# Move to worktree
cd ../<REPO_NAME>-pr-<PR_NUMBER>
# Checkout the branch (to track it)
git checkout <branch-name>
```
### 2. Extract PR Details
From the command output, extract:
- The PR branch name (e.g., `issue-42-add-feature`)
- All review comments and requested changes
### 3. Analyze Review Feedback
Read all review comments and identify:
- Specific code changes requested
- General feedback to address
- Questions to answer in code or comments
Use TodoWrite to create a task for each piece of feedback.
### 4. Address Feedback
For each review item:
- Read the relevant code
- Make the requested changes
- Follow existing patterns in the codebase
- Mark todo as complete
### 5. Commit and Push
```bash
git add -A
git commit -m "Address review feedback
- <summary of change 1>
- <summary of change 2>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>"
git push
```
### 6. Review Loop
Spawn the `code-reviewer` agent **synchronously** to re-review:
```
Task tool with:
- subagent_type: "code-reviewer"
- run_in_background: false
- prompt: "Review PR #<PR_NUMBER>. Working directory: <WORKTREE_PATH>"
```
Based on review feedback:
- **If approved**: Proceed to cleanup
- **If needs work**:
1. Address the new feedback
2. Commit and push the fixes
3. Trigger another review
4. Repeat until approved (max 3 iterations to avoid infinite loops)
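The loop's shape, sketched in shell with stand-in functions (`review_pr` and `address_feedback` are placeholders for the Task-tool review and the fix/commit/push steps, not real commands):

```shell
# Placeholders - in the real flow these are agent actions, not shell functions.
review_pr() { echo "approved"; }   # pretend the reviewer approves immediately
address_feedback() { :; }          # no-op stand-in for fixing and pushing

max_iterations=3
i=1
verdict="needs-work"
while [ "$verdict" = "needs-work" ] && [ "$i" -le "$max_iterations" ]; do
  verdict=$(review_pr)
  if [ "$verdict" = "needs-work" ]; then
    address_feedback
    i=$((i + 1))
  fi
done
echo "final verdict: $verdict"
```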
### 7. Cleanup Worktree
If `WORKTREE_PATH` was provided:
```bash
# Orchestrator will handle cleanup - no action needed
# Just ensure git is clean
cd <WORKTREE_PATH>
git status
```
If `WORKTREE_PATH` was NOT provided (backward compatibility):
```bash
cd <REPO_PATH>
git worktree remove ../<REPO_NAME>-pr-<PR_NUMBER> --force
```
### 8. Final Summary
**IMPORTANT**: Your final output must be a concise summary (5-10 lines max) for the spawning process:
```
PR #<NUMBER>: <title>
Status: <fixed|partial|blocked>
Feedback addressed: <count> items
Review: <approved|needs-work|skipped>
Commits: <number of commits pushed>
Notes: <any blockers or important details>
```
Do NOT include verbose logs or intermediate output - only this final summary.
## Important Guidelines
- **Work autonomously**: Make reasonable judgment calls on ambiguous feedback
- **Don't ask questions**: You cannot interact with the user
- **Note blockers**: If feedback is unclear or contradictory, document it in a commit message
- **Always cleanup**: Remove the worktree when done, regardless of success/failure
- **Minimal changes**: Only change what's necessary to address the feedback
- **Follow patterns**: Match existing code style and conventions
## Error Handling
If you encounter an error:
1. Try to recover if possible
2. If unrecoverable, push partial work and explain in a comment
3. Always run the cleanup step

View File

@@ -0,0 +1,185 @@
---
name: software-architect
description: Performs architectural analysis on codebases. Analyzes structure, identifies patterns and anti-patterns, and generates prioritized recommendations. Spawned by commands for deep, isolated analysis.
# Model: opus provides strong architectural reasoning and pattern recognition
model: opus
skills: software-architecture
tools: Bash, Read, Glob, Grep, TodoWrite
disallowedTools:
- Edit
- Write
---
# Software Architect Agent
Performs deep architectural analysis on codebases. Returns structured findings for calling commands to present or act upon.
## Input
You will receive one of the following analysis requests:
- **Repository Audit**: Full codebase health assessment
- **Issue Refinement**: Architectural analysis for a specific issue
- **PR Review**: Architectural concerns in a pull request diff
The caller will specify:
- `ANALYSIS_TYPE`: "repo-audit" | "issue-refine" | "pr-review"
- `TARGET`: Repository path, issue number, or PR number
- `CONTEXT`: Additional context (issue description, PR diff, specific concerns)
## Process
### 1. Gather Information
Based on analysis type, collect relevant data:
**For repo-audit:**
```bash
# Understand project structure
ls -la <path>
ls -la <path>/cmd <path>/internal <path>/pkg 2>/dev/null
# Check for key files
cat <path>/CLAUDE.md
cat <path>/go.mod 2>/dev/null
cat <path>/package.json 2>/dev/null
# Analyze package structure
find <path> -name "*.go" -type f | head -50
find <path> -name "*.ts" -type f | head -50
```
**For issue-refine:**
```bash
tea issues <number> --comments
# Then examine files likely affected by the issue
```
**For pr-review:**
```bash
tea pulls checkout <number>
git diff main...HEAD
```
### 2. Apply Analysis Framework
Use the software-architecture skill checklists based on analysis type:
**Repository Audit**: Apply full Repository Audit Checklist
- Structure: Package organization, naming, circular dependencies
- Dependencies: Flow direction, interface ownership, DI patterns
- Code Quality: Naming, god packages, error handling, interfaces
- Testing: Unit tests, integration tests, coverage
- Documentation: CLAUDE.md, vision.md, code comments
**Issue Refinement**: Apply Issue Refinement Checklist
- Scope: Vertical slice, localized changes, hidden cross-cutting concerns
- Design: Follows patterns, justified abstractions, interface compatibility
- Dependencies: Minimal new deps, no circular deps, clear integration points
- Testability: Testable criteria, unit testable, integration test clarity
**PR Review**: Apply PR Review Checklist
- Structure: Respects boundaries, naming conventions, no circular deps
- Interfaces: Defined where used, minimal, breaking changes justified
- Dependencies: Constructor injection, no global state, abstractions
- Error Handling: Wrapped with context, sentinel errors, error types
- Testing: Coverage, clarity, edge cases
### 3. Identify Anti-Patterns
Scan for anti-patterns documented in the skill:
- **God Packages**: utils/, common/, helpers/ with many files
- **Circular Dependencies**: Package import cycles
- **Leaky Abstractions**: Implementation details crossing boundaries
- **Anemic Domain Model**: Data-only domain types, logic elsewhere
- **Shotgun Surgery**: Small changes require many file edits
- **Feature Envy**: Code too interested in another package's data
- **Premature Abstraction**: Interfaces before needed
- **Deep Hierarchy**: Excessive layers of abstraction
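Some of these checks can be mechanized cheaply. A sketch of the god-package scan (it builds a throwaway example tree; point `root` at the real repository instead):

```shell
# Throwaway example tree standing in for a real repository:
root=$(mktemp -d)
mkdir -p "$root/utils" "$root/internal/auth"
touch "$root/utils/strings.go" "$root/utils/time.go" "$root/utils/http.go"

# Flag catch-all directories and report their file counts.
for d in utils common helpers; do
  if [ -d "$root/$d" ]; then
    count=$(ls "$root/$d" | wc -l | tr -d ' ')
    echo "suspicious: $d ($count files)"
  fi
done
```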
### 4. Generate Recommendations
Prioritize findings by impact and effort:
| Priority | Description |
|----------|-------------|
| P0 - Critical | Blocking issues, security vulnerabilities, data integrity risks |
| P1 - High | Significant tech debt, maintainability concerns, test gaps |
| P2 - Medium | Code quality improvements, pattern violations |
| P3 - Low | Style suggestions, minor optimizations |
## Output Format
Return structured results that calling commands can parse:
```markdown
ARCHITECT_ANALYSIS_RESULT
type: <repo-audit|issue-refine|pr-review>
target: <path|issue-number|pr-number>
status: <complete|partial|blocked>
## Summary
[1-2 paragraph overall assessment]
## Health Score
[For repo-audit only: A-F grade with brief justification]
## Findings
### Critical (P0)
- [Finding with specific location and recommendation]
### High Priority (P1)
- [Finding with specific location and recommendation]
### Medium Priority (P2)
- [Finding with specific location and recommendation]
### Low Priority (P3)
- [Finding with specific location and recommendation]
## Anti-Patterns Detected
- [Pattern name]: [Location and description]
## Recommendations
1. [Specific, actionable recommendation]
2. [Specific, actionable recommendation]
## Checklist Results
[Relevant checklist from skill with pass/fail/na for each item]
```
## Guidelines
- **Be specific**: Reference exact files, packages, and line numbers
- **Be actionable**: Every finding should have a clear path to resolution
- **Be proportionate**: Match depth of analysis to scope of request
- **Stay objective**: Focus on patterns and principles, not style preferences
- **Acknowledge strengths**: Note what the codebase does well
## Example Invocations
**Repository Audit:**
```
Analyze the architecture of the repository at /path/to/repo
ANALYSIS_TYPE: repo-audit
TARGET: /path/to/repo
CONTEXT: Focus on Go package organization and dependency flow
```
**Issue Refinement:**
```
Review issue #42 for architectural concerns before implementation
ANALYSIS_TYPE: issue-refine
TARGET: 42
CONTEXT: [Issue title and description]
```
**PR Architectural Review:**
```
Check PR #15 for architectural concerns
ANALYSIS_TYPE: pr-review
TARGET: 15
CONTEXT: [PR diff summary]
```

View File

@@ -0,0 +1,170 @@
---
name: arch-refine-issue
description: >
Refine an issue with architectural perspective. Analyzes existing codebase patterns
and provides implementation guidance. Use when refining issues, adding architectural
context, or when user says /arch-refine-issue.
model: opus
argument-hint: <issue-number>
user-invocable: true
---
# Architecturally Refine Issue #$1
@~/.claude/skills/gitea/SKILL.md
@~/.claude/skills/issue-writing/SKILL.md
## Overview
Refine an issue in the context of the project's architecture. This command:
1. Fetches the issue details
2. Spawns the software-architect agent to analyze the codebase
3. Identifies how the issue fits existing patterns
4. Proposes refined description and acceptance criteria
## Process
### Step 1: Fetch Issue Details
```bash
tea issues $1 --comments
```
Capture:
- Title
- Description
- Acceptance criteria
- Any existing discussion
### Step 2: Spawn Software-Architect Agent
Use the Task tool to spawn the software-architect agent for issue refinement analysis:
```
Task tool with:
- subagent_type: "software-architect"
- prompt: See prompt below
```
**Agent Prompt:**
```
Analyze the architecture for issue refinement.
ANALYSIS_TYPE: issue-refine
TARGET: $1
CONTEXT:
<issue title and description from step 1>
Repository path: <current working directory>
Focus on:
1. Understanding existing project structure and patterns
2. Identifying packages/modules that will be affected
3. Analyzing existing conventions and code style
4. Detecting potential architectural concerns
5. Suggesting implementation approach that fits existing patterns
```
### Step 3: Parse Agent Analysis
The software-architect agent returns structured output with:
- Summary of architectural findings
- Affected packages/modules
- Pattern recommendations
- Potential concerns (breaking changes, tech debt, pattern violations)
- Implementation suggestions
### Step 4: Present Refinement Proposal
Present the refined issue to the user with:
**1. Architectural Context**
- Affected packages/modules
- Existing patterns that apply
- Dependency implications
**2. Concerns and Risks**
- Breaking changes
- Tech debt considerations
- Pattern violations to avoid
**3. Proposed Refinement**
- Refined description with architectural context
- Updated acceptance criteria (if needed)
- Technical notes section
**4. Implementation Guidance**
- Suggested approach
- Files likely to be modified
- Recommended order of changes
### Step 5: User Decision
Ask the user what action to take:
- **Apply**: Update the issue with refined description and technical notes
- **Edit**: Let user modify the proposal before applying
- **Skip**: Keep the original issue unchanged
### Step 6: Update Issue (if approved)
If user approves, update the issue using tea CLI:
```bash
tea issues edit $1 --description "<refined description>"
```
Add a comment with the architectural analysis:
```bash
tea comment $1 "## Architectural Analysis
<findings from software-architect agent>
---
Generated by /arch-refine-issue"
```
## Output Format
Present findings in a clear, actionable format:
```markdown
## Architectural Analysis for Issue #$1
### Affected Components
- `package/name` - Description of impact
- `another/package` - Description of impact
### Existing Patterns
- Pattern 1: How it applies
- Pattern 2: How it applies
### Concerns
- [ ] Breaking change: description (if applicable)
- [ ] Tech debt: description (if applicable)
- [ ] Pattern violation risk: description (if applicable)
### Proposed Refinement
**Updated Description:**
<refined description>
**Updated Acceptance Criteria:**
- [ ] Original criteria (unchanged)
- [ ] New criteria based on analysis
**Technical Notes:**
<implementation guidance based on architecture>
### Recommended Approach
1. Step 1
2. Step 2
3. Step 3
```
## Error Handling
- If issue does not exist, inform user
- If software-architect agent fails, report partial analysis
- If tea CLI fails, show manual instructions

View File

@@ -0,0 +1,79 @@
---
name: arch-review-repo
description: >
Perform a full architecture review of the current repository. Analyzes structure,
patterns, dependencies, and generates prioritized recommendations. Use when reviewing
architecture, auditing codebase, or when user says /arch-review-repo.
model: opus
argument-hint:
context: fork
user-invocable: true
---
# Architecture Review
@~/.claude/skills/software-architecture/SKILL.md
## Process
1. **Identify the repository**: Use the current working directory as the repository path.
2. **Spawn the software-architect agent** for deep analysis:
```
ANALYSIS_TYPE: repo-audit
TARGET: <repository-path>
CONTEXT: Full repository architecture review
```
The agent will:
- Analyze directory structure and package organization
- Identify patterns and anti-patterns in the codebase
- Assess dependency graph and module boundaries
- Review test coverage approach
- Generate structured findings with prioritized recommendations
3. **Present the results** to the user in this format:
```markdown
## Repository Architecture Review: <repo-name>
### Structure: <Good|Needs Work>
- [Key observations about package organization]
- [Directory structure assessment]
- [Naming conventions evaluation]
### Patterns Identified
- [Positive patterns found in the codebase]
- [Architectural styles detected (layered, hexagonal, etc.)]
### Anti-Patterns Detected
- [Anti-pattern name]: [Location and description]
- [Anti-pattern name]: [Location and description]
### Concerns
- [Specific issues that need attention]
- [Technical debt areas]
### Recommendations (prioritized)
1. **P0 - Critical**: [Most urgent recommendation]
2. **P1 - High**: [Important improvement]
3. **P2 - Medium**: [Nice-to-have improvement]
4. **P3 - Low**: [Minor optimization]
### Health Score: <A|B|C|D|F>
[Brief justification for the grade]
```
4. **Offer follow-up actions**:
- Create issues for critical findings
- Generate a detailed report
- Review specific components in more depth
## Guidelines
- Be specific: Reference exact files, packages, and locations
- Be actionable: Every finding should have a clear path to resolution
- Be balanced: Acknowledge what the codebase does well
- Be proportionate: Focus on high-impact issues first
- Stay objective: Focus on patterns and principles, not style preferences

View File

@@ -1,6 +1,8 @@
---
name: backlog-grooming
model: haiku
description: Review and improve existing issues for clarity and actionability. Use when grooming the backlog, reviewing issue quality, cleaning up stale issues, or when the user wants to improve existing issues.
user-invocable: false
---
# Backlog Grooming

View File

@@ -0,0 +1,219 @@
---
name: claude-md-writing
model: haiku
description: Write effective CLAUDE.md files that give AI assistants the context they need. Use when creating new repos, improving existing CLAUDE.md files, or setting up projects.
user-invocable: false
---
# Writing Effective CLAUDE.md Files
CLAUDE.md is the project's context file for AI assistants. A good CLAUDE.md means Claude understands your project immediately without needing to explore.
## Purpose
CLAUDE.md answers: "What does Claude need to know to work effectively in this repo?"
- **Not a README** - README is for humans discovering the project
- **Not documentation** - Docs explain how to use the product
- **Context for AI** - What Claude needs to make good decisions
## Required Sections
### 1. One-Line Description
Start with what this repo is in one sentence.
```markdown
# Project Name
Brief description of what this project does.
```
### 2. Organization Context
Link to the bigger picture so Claude understands where this fits.
```markdown
## Organization Context
This repo is part of Flowmade. See:
- [Organization manifesto](../architecture/manifesto.md) - who we are, what we believe
- [Repository map](../architecture/repos.md) - how this fits in the bigger picture
- [Vision](./vision.md) - what this specific product does
```
### 3. Setup
How to get the project running locally.
```markdown
## Setup
\`\`\`bash
# Clone and install
git clone <url>
cd <project>
make install # or npm install, etc.
\`\`\`
```
### 4. Project Structure
Key directories and what they contain. Focus on what's non-obvious.
```markdown
## Project Structure
\`\`\`
project/
├── cmd/ # Entry points
├── pkg/ # Shared packages
│ ├── domain/ # Business logic
│ └── infra/ # Infrastructure adapters
├── internal/ # Private packages
└── api/ # API definitions
\`\`\`
```
### 5. Development Commands
The commands Claude will need to build, test, and run.
```markdown
## Development
\`\`\`bash
make build # Build the project
make test # Run tests
make lint # Run linters
make run # Run locally
\`\`\`
```
### 6. Architecture Decisions
Key patterns and conventions specific to this repo.
```markdown
## Architecture
### Patterns Used
- Event sourcing for state management
- CQRS for read/write separation
- Hexagonal architecture
### Conventions
- All commands go through the command bus
- Events are immutable value objects
- Projections rebuild from events
```
## What Makes a Good CLAUDE.md
### Do Include
- **Enough context to skip exploration** - Claude shouldn't need to grep around
- **Key architectural patterns** - How the code is organized and why
- **Non-obvious conventions** - Things that aren't standard
- **Important dependencies** - External services, APIs, databases
- **Common tasks** - How to do things Claude will be asked to do
### Don't Include
- **Duplicated manifesto content** - Link to it instead
- **Duplicated vision content** - Link to vision.md
- **API documentation** - That belongs elsewhere
- **User guides** - CLAUDE.md is for the AI, not end users
- **Obvious things** - Don't explain what `go build` does
## Template
```markdown
# [Project Name]
[One-line description]
## Organization Context
This repo is part of Flowmade. See:
- [Organization manifesto](../architecture/manifesto.md) - who we are, what we believe
- [Repository map](../architecture/repos.md) - how this fits in the bigger picture
- [Vision](./vision.md) - what this specific product does
## Setup
\`\`\`bash
# TODO: Add setup instructions
\`\`\`
## Project Structure
\`\`\`
project/
├── ...
\`\`\`
## Development
\`\`\`bash
make build # Build the project
make test # Run tests
make lint # Run linters
\`\`\`
## Architecture
### Patterns
- [List key patterns]
### Conventions
- [List important conventions]
### Key Components
- [Describe main components and their responsibilities]
```
## Examples
### Good: Enough Context
```markdown
## Architecture
This service uses event sourcing. State is rebuilt from events, not stored directly.
### Key Types
- `Aggregate` - Domain object that emits events
- `Event` - Immutable fact that something happened
- `Projection` - Read model built from events
### Adding a New Aggregate
1. Create type in `pkg/domain/`
2. Implement `HandleCommand()` and `ApplyEvent()`
3. Register in `cmd/main.go`
```
Claude can now work with aggregates without exploring the codebase.
### Bad: Too Vague
```markdown
## Architecture
Uses standard Go patterns. See the code for details.
```
Claude has to explore to understand anything.
## Maintenance
Update CLAUDE.md when:
- Adding new architectural patterns
- Changing project structure
- Adding important dependencies
- Discovering conventions that aren't documented
Don't update for:
- Every code change
- Bug fixes
- Minor refactors

View File

@@ -1,6 +1,8 @@
--- ---
name: code-review name: code-review
model: haiku
description: Review code for quality, bugs, security, and style issues. Use when reviewing pull requests, checking code quality, looking for bugs or security vulnerabilities, or when the user asks for a code review. description: Review code for quality, bugs, security, and style issues. Use when reviewing pull requests, checking code quality, looking for bugs or security vulnerabilities, or when the user asks for a code review.
user-invocable: false
--- ---
# Code Review # Code Review

View File

@@ -0,0 +1,92 @@
---
name: commit
description: >
Create a commit with an auto-generated conventional commit message. Analyzes staged
changes and proposes a message for approval. Use when committing changes, creating
commits, or when user says /commit.
model: haiku
argument-hint:
user-invocable: true
---
# Commit Changes
## Process
1. **Check for staged changes**:
```bash
git diff --staged --stat
```
If no staged changes, inform the user and suggest staging files first:
- Show unstaged changes with `git status`
- Ask if they want to stage all changes (`git add -A`) or specific files
2. **Analyze staged changes**:
```bash
git diff --staged
```
Examine the diff to understand:
- What files were changed, added, or deleted
- The nature of the changes (new feature, bug fix, refactor, docs, etc.)
- Key details worth mentioning
3. **Generate commit message**:
Create a conventional commit message following this format:
```
<type>(<scope>): <description>
[optional body with more details]
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
```
**Types:**
- `feat`: New feature or capability
- `fix`: Bug fix
- `refactor`: Code restructuring without behavior change
- `docs`: Documentation changes
- `style`: Formatting, whitespace (no code change)
- `test`: Adding or updating tests
- `chore`: Maintenance tasks, dependencies, config
**Scope:** The component or area affected (optional, use when helpful)
**Description:**
- Imperative mood ("add" not "added")
- Lowercase first letter
- No period at the end
- Focus on the "why" when the "what" is obvious
4. **Present message for approval**:
Show the proposed message and ask the user to:
- **Approve**: Use the message as-is
- **Edit**: Let them modify the message
- **Regenerate**: Create a new message with different focus
5. **Create the commit**:
Once approved, execute:
```bash
git commit -m "$(cat <<'EOF'
<approved message>
EOF
)"
```
6. **Confirm success**:
Show the commit result and suggest next steps:
- Push to remote: `git push`
- Continue working and commit more changes
## Guidelines
- Only commits what's staged (respects partial staging)
- Never auto-commits without user approval
- Keep descriptions concise (50 chars or less for first line)
- Include body for non-obvious changes
- Always include Co-Authored-By attribution

View File

@@ -1,6 +1,11 @@
---
name: create-issue
description: >
  Create a new Gitea issue. Can create single issues or batch create from a plan.
  Use when creating issues, adding tickets, or when user says /create-issue.
model: haiku
argument-hint: [title] or "batch"
user-invocable: true
---
# Create Issue(s)

View File

@@ -0,0 +1,214 @@
---
name: create-repo
description: >
Create a new repository with standard structure. Scaffolds vision.md, CLAUDE.md,
and CI configuration. Use when creating repos, initializing projects, or when user
says /create-repo.
model: haiku
argument-hint: <repo-name>
context: fork
user-invocable: true
---
# Create Repository
@~/.claude/skills/repo-conventions/SKILL.md
@~/.claude/skills/vision-management/SKILL.md
@~/.claude/skills/claude-md-writing/SKILL.md
@~/.claude/skills/gitea/SKILL.md
Create a new repository with Flowmade's standard structure.
## Process
1. **Get repository name**: Use `$1` or ask the user
- Validate: lowercase, hyphens only, no `flowmade-` prefix
- Check it doesn't already exist: `tea repos flowmade-one/<name>`
2. **Determine visibility**:
- Ask: "Should this repo be public (open source) or private (proprietary)?"
- Refer to repo-conventions skill for guidance on open vs proprietary
3. **Gather vision context**:
- Read the organization manifesto: `../architecture/manifesto.md`
- Ask: "What does this product do? (one sentence)"
- Ask: "Which manifesto personas does it serve?"
- Ask: "What problem does it solve?"
4. **Create the repository on Gitea**:
```bash
tea repos create --name <repo-name> --private/--public --description "<description>"
```
5. **Clone and set up structure**:
```bash
# Clone the new repo
git clone ssh://git@git.flowmade.one/flowmade-one/<repo-name>.git
cd <repo-name>
```
6. **Create vision.md**:
- Use the vision structure template from vision-management skill
- Link to `../architecture/manifesto.md`
- Fill in based on user's answers
7. **Create CLAUDE.md** (following claude-md-writing skill):
```markdown
# <Repo Name>
<One-line description from step 3>
## Organization Context
This repo is part of Flowmade. See:
- [Organization manifesto](../architecture/manifesto.md) - who we are, what we believe
- [Repository map](../architecture/repos.md) - how this fits in the bigger picture
- [Vision](./vision.md) - what this specific product does
## Setup
\`\`\`bash
# TODO: Add setup instructions
\`\`\`
## Project Structure
TODO: Document key directories once code exists.
## Development
\`\`\`bash
make build   # Build the project
make test    # Run tests
make lint    # Run linters
\`\`\`
## Architecture
TODO: Document key patterns and conventions once established.
```
8. **Create Makefile** (basic template):
```makefile
.PHONY: build test lint
build:
@echo "TODO: Add build command"
test:
@echo "TODO: Add test command"
lint:
@echo "TODO: Add lint command"
```
9. **Create CI workflow**:
```bash
mkdir -p .gitea/workflows
```
Create `.gitea/workflows/ci.yaml`:
```yaml
name: CI
on:
push:
branches: [main]
pull_request:
branches: [main]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Build
run: make build
- name: Test
run: make test
- name: Lint
run: make lint
```
10. **Create .gitignore** (basic, expand based on language):
```
# IDE
.idea/
.vscode/
*.swp
# OS
.DS_Store
Thumbs.db
# Build artifacts
/dist/
/build/
/bin/
# Dependencies (language-specific, add as needed)
/node_modules/
/vendor/
```
11. **Initial commit and push**:
```bash
git add .
git commit -m "Initial repository structure
- vision.md linking to organization manifesto
- CLAUDE.md with project instructions
- CI workflow template
- Basic Makefile
Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>"
git push -u origin main
```
12. **Report success**:
```
Repository created: https://git.flowmade.one/flowmade-one/<repo-name>
Next steps:
1. cd ../<repo-name>
2. Update CLAUDE.md with actual setup instructions
3. Update Makefile with real build commands
4. Start building!
```
## Output Example
```
## Creating Repository: my-service
Visibility: Private (proprietary)
Description: Internal service for processing events
### Files Created
- vision.md (linked to manifesto)
- CLAUDE.md (project instructions)
- Makefile (build template)
- .gitea/workflows/ci.yaml (CI pipeline)
- .gitignore (standard ignores)
### Repository URL
https://git.flowmade.one/flowmade-one/my-service
### Next Steps
1. cd ../my-service
2. Update CLAUDE.md with setup instructions
3. Update Makefile with build commands
4. Start coding!
```
## Guidelines
- Always link vision.md to the sibling architecture repo
- Keep initial structure minimal - add complexity as needed
- CI should pass on empty repo (use placeholder commands)
- Default to private unless explicitly open-sourcing

View File

@@ -0,0 +1,90 @@
---
name: dashboard
description: >
Show dashboard of open issues, PRs awaiting review, and CI status. Use when
checking project status, viewing issues/PRs, or when user says /dashboard.
model: haiku
user-invocable: true
---
# Repository Dashboard
@~/.claude/skills/gitea/SKILL.md
Fetch and display the following sections:
## 1. Open Issues
Run `tea issues` to list all open issues.
Format as a table showing:
- Number
- Title
- Author
## 2. Open Pull Requests
Run `tea pulls` to list all open PRs.
Format as a table showing:
- Number
- Title
- Author
## 3. CI Status (Recent Workflow Runs)
Run `tea actions runs` to list recent workflow runs.
**Output formatting:**
- Show the most recent 10 workflow runs maximum
- For each run, display:
- Status (use indicators: [SUCCESS], [FAILURE], [RUNNING], [PENDING])
- Workflow name
- Branch or PR reference
- Commit (short SHA)
- Triggered time
**Highlighting:**
- **Highlight failed runs** by prefixing with a warning indicator and ensuring they stand out visually
- Example: "**[FAILURE]** build - PR #42 - abc1234 - 2h ago"
**Handling repos without CI:**
- If `tea actions runs` returns "No workflow runs found" or similar, display:
"No CI workflows configured for this repository."
- Do not treat this as an error - simply note it and continue
## Output Format
Present each section with a clear header. Example:
```
## Open Issues (3)
| # | Title | Author |
|----|------------------------|--------|
| 15 | Fix login timeout | alice |
| 12 | Add dark mode | bob |
| 8 | Update documentation | carol |
## Open Pull Requests (2)
| # | Title | Author |
|----|------------------------|--------|
| 16 | Fix login timeout | alice |
| 14 | Refactor auth module | bob |
## CI Status
| Status | Workflow | Branch/PR | Commit | Time |
|-------------|----------|-------------|---------|---------|
| **[FAILURE]** | build | PR #16 | abc1234 | 2h ago |
| [SUCCESS] | build | main | def5678 | 5h ago |
| [SUCCESS] | lint | main | def5678 | 5h ago |
```
If no CI is configured:
```
## CI Status
No CI workflows configured for this repository.
```


@@ -1,6 +1,12 @@
---
name: groom
description: >
  Groom and improve issues. Without argument, reviews all open issues. With argument,
  grooms specific issue. Use when grooming backlog, improving issues, or when user
  says /groom.
model: sonnet
argument-hint: [issue-number]
user-invocable: true
---
# Groom Issues


@@ -1,5 +1,12 @@
---
name: improve
description: >
  Identify improvement opportunities based on product vision. Analyzes gaps between
  vision goals and current backlog. Use when analyzing alignment, finding gaps, or
  when user says /improve.
model: sonnet
context: fork
user-invocable: true
---
# Improvement Analysis


@@ -0,0 +1,157 @@
---
name: issue-writing
model: haiku
description: Write clear, actionable issues with proper structure and acceptance criteria. Use when creating issues, writing bug reports, feature requests, or when the user needs help structuring an issue.
user-invocable: false
---
# Issue Writing
How to write clear, actionable issues.
## Issue Structure
### Title
- Start with action verb: "Add", "Fix", "Update", "Remove", "Refactor"
- Be specific: "Add user authentication" not "Auth stuff"
- Keep under 60 characters when possible
### Description
```markdown
## Summary
One paragraph explaining what and why.
## Acceptance Criteria
- [ ] Specific, testable requirement
- [ ] Another requirement
- [ ] User can verify this works
## Context
Additional background, links, or references.
## Technical Notes (optional)
Implementation hints or constraints.
```
## Writing Acceptance Criteria
Good criteria are:
- **Specific**: "User sees error message" not "Handle errors"
- **Testable**: Can verify pass/fail
- **User-focused**: What the user experiences
- **Independent**: Each stands alone
Examples:
```markdown
- [ ] Login form validates email format before submission
- [ ] Invalid credentials show "Invalid email or password" message
- [ ] Successful login redirects to dashboard
- [ ] Session persists across browser refresh
```
## Vertical Slices
Issues should be **vertical slices** that deliver user-visible value.
### The Demo Test
Before writing an issue, ask: **Can a user demo or test this independently?**
- **Yes** → Good issue scope
- **No** → Rethink the breakdown
### Good vs Bad Issue Titles
| Good (Vertical) | Bad (Horizontal) |
|-----------------|------------------|
| "User can save and reload diagram" | "Add persistence layer" |
| "Show error when login fails" | "Add error handling" |
| "Domain expert can list orders" | "Add query syntax to ADL" |
### Writing User-Focused Issues
Frame issues around user capabilities:
```markdown
# Bad: Technical task
Title: Add email service integration
# Good: User capability
Title: User receives confirmation email after signup
```
The technical work is the same, but the good title makes success criteria clear.
## Issue Types
### Bug Report
```markdown
## Summary
Description of the bug.
## Steps to Reproduce
1. Go to...
2. Click...
3. Observe...
## Expected Behavior
What should happen.
## Actual Behavior
What happens instead.
## Environment
- Browser/OS/Version
```
### Feature Request
```markdown
## Summary
What feature and why it's valuable.
## Acceptance Criteria
- [ ] ...
## User Story (optional)
As a [role], I want [capability] so that [benefit].
```
### Technical Task
```markdown
## Summary
What technical work needs to be done.
## Scope
- Include: ...
- Exclude: ...
## Acceptance Criteria
- [ ] ...
```
## Labels
Use labels to categorize:
- `bug`, `feature`, `enhancement`, `refactor`
- `priority/high`, `priority/low`
- Component labels specific to project
## Dependencies
Identify and link dependencies when creating issues:
1. **In the description**, document dependencies:
```markdown
## Dependencies
- Depends on #12 (must complete first)
- Related to #15 (informational)
```
2. **After creating the issue**, formally link blockers using tea CLI:
```bash
tea issues deps add <this-issue> <blocker-issue>
tea issues deps add 5 3 # Issue #5 is blocked by #3
```
This creates a formal dependency graph that tools can query.


@@ -1,5 +1,11 @@
---
name: manifesto
description: >
  View and manage the organization manifesto. Shows identity, personas, beliefs,
  and principles. Use when viewing manifesto, checking organization identity, or
  when user says /manifesto.
model: haiku
user-invocable: true
---
# Organization Manifesto


@@ -1,6 +1,13 @@
---
name: plan-issues
description: >
  Plan and create issues for a feature or improvement. Breaks down work into
  well-structured issues with vision alignment. Use when planning a feature,
  creating a roadmap, breaking down large tasks, or when user says /plan-issues.
model: sonnet
argument-hint: <feature-description>
context: fork
user-invocable: true
---
# Plan Feature: $1
@@ -15,12 +22,24 @@ argument-hint: <feature-description>
3. **Identify job**: Which job to be done does this enable?
4. **Understand the feature**: Analyze what "$1" involves
5. **Explore the codebase** if needed to understand context
6. **Discovery phase**: Before proposing issues, walk through the user workflow:
   - Who is the specific user?
   - What is their goal?
   - What is their step-by-step workflow to reach that goal?
   - What exists today?
   - Where does the workflow break or have gaps?
   - What's the MVP that delivers value?
   Present this as a workflow walkthrough before proposing any issues.
7. **Break down** into discrete, actionable issues:
   - Derive issues from the workflow gaps identified in discovery
   - Each issue should be independently completable
   - Clear dependencies between issues
   - Appropriate scope (not too big, not too small)
8. **Present the plan** (include vision alignment if vision exists):
```
## Proposed Issues for: $1
@@ -29,12 +48,15 @@ argument-hint: <feature-description>
Supports: [Milestone/Goal name]
1. [Title] - Brief description
   Addresses gap: [which workflow gap this solves]
   Dependencies: none
2. [Title] - Brief description
   Addresses gap: [which workflow gap this solves]
   Dependencies: #1
3. [Title] - Brief description
   Addresses gap: [which workflow gap this solves]
   Dependencies: #1, #2
```
@@ -44,7 +66,7 @@ argument-hint: <feature-description>
- This should be added as a non-goal
- Proceed anyway (with justification)
9. **Ask for approval** before creating issues
10. **Create issues** in dependency order (blockers first)
11. **Link dependencies** using `tea issues deps add <issue> <blocker>` for each dependency
12. **Present summary** with links to created issues and dependency graph

old/skills/pr/SKILL.md

@@ -0,0 +1,153 @@
---
name: pr
description: >
Create a PR from current branch. Auto-generates title and description from branch
name and commits. Use when creating pull requests, submitting changes, or when
user says /pr.
model: haiku
user-invocable: true
---
# Create Pull Request
@~/.claude/skills/gitea/SKILL.md
Quick PR creation from current branch - lighter than full `/work-issue` flow for when you're already on a branch with commits.
## Prerequisites
- Current branch is NOT main/master
- Branch has commits ahead of main
- Changes have been pushed to origin (or will be pushed)
## Process
### 1. Verify Branch State
```bash
# Check current branch
git branch --show-current
# Ensure we're not on main
# If on main, abort with message: "Cannot create PR from main branch"
# Check for commits ahead of main
git log main..HEAD --oneline
```
### 2. Push if Needed
```bash
# Check if branch is tracking remote
git status -sb
# If not pushed or behind, push with upstream
git push -u origin <branch-name>
```
### 3. Generate PR Title
**Option A: Branch contains issue number** (e.g., `issue-42-add-feature`)
Extract issue number and use format: `[Issue #<number>] <issue-title>`
```bash
tea issues <number> # Get the actual issue title
```
**Option B: No issue number**
Generate from branch name or recent commit messages:
- Convert branch name from kebab-case to sentence case: `add-user-auth` -> `Add user auth`
- Or use the most recent commit subject line
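The Option B fallback can be sketched as follows. This is a minimal illustration; the helper name `title_from_branch` is hypothetical, not part of any CLI.

```python
def title_from_branch(branch: str) -> str:
    """Convert a kebab-case branch name into a sentence-case PR title."""
    # "add-user-auth" -> "add user auth" -> "Add user auth"
    return " ".join(branch.split("-")).capitalize()

print(title_from_branch("add-user-auth"))  # -> Add user auth
```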
### 4. Generate PR Description
Analyze the diff and commits to generate a description:
```bash
# Get diff against main
git diff main...HEAD --stat
# Get commit messages
git log main..HEAD --format="- %s"
```
Structure the description:
```markdown
## Summary
[1-2 sentences describing the overall change]
## Changes
[Bullet points summarizing commits or key changes]
[If issue linked: "Closes #<number>"]
```
### 5. Create PR
Use tea CLI to create the PR:
```bash
tea pulls create --title "<generated-title>" --description "<generated-description>"
```
Capture the PR number from the output (e.g., "Pull Request #42 created").
### 6. Auto-review
Inform the user that auto-review is starting, then spawn the `code-reviewer` agent in background:
```
Task tool with:
- subagent_type: "code-reviewer"
- run_in_background: true
- prompt: |
Review PR #<PR_NUMBER> in the repository at <REPO_PATH>.
1. Checkout the PR: tea pulls checkout <PR_NUMBER>
2. Get the diff: git diff main...HEAD
3. Analyze for code quality, bugs, security, style, test coverage
4. Post structured review comment with tea comment
5. Merge with rebase if LGTM, otherwise leave for user
```
### 7. Display Result
Show the user:
- PR URL/number
- Generated title and description
- Status of auto-review (spawned in background)
## Issue Linking
To detect if branch is linked to an issue:
1. Check branch name for patterns:
- `issue-<number>-*`
- `<number>-*`
- `*-#<number>`
2. If issue number found:
- Fetch issue title from Gitea
- Use `[Issue #N] <issue-title>` format for PR title
- Add `Closes #N` to description
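The branch-name patterns above can be matched with ordinary regular expressions. A minimal sketch, assuming the branch name is already available as a string (the `issue_number` helper is hypothetical):

```python
import re

# Patterns from the checklist above, tried in order.
PATTERNS = [
    r"^issue-(\d+)-",  # issue-42-add-feature
    r"^(\d+)-",        # 42-add-feature
    r"#(\d+)$",        # add-feature-#42
]

def issue_number(branch: str):
    """Return the linked issue number, or None if no pattern matches."""
    for pattern in PATTERNS:
        match = re.search(pattern, branch)
        if match:
            return int(match.group(1))
    return None

print(issue_number("issue-42-add-feature"))  # -> 42
print(issue_number("refactor-auth"))         # -> None
```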
## Example Output
```
Created PR #42: [Issue #15] Add /pr command
## Summary
Adds /pr command for quick PR creation from current branch.
## Changes
- Add commands/pr.md with auto-generation logic
- Support issue linking from branch name
Closes #15
---
Auto-review started in background. Check status with: tea pulls 42 --comments
```


@@ -0,0 +1,204 @@
---
name: repo-conventions
model: haiku
description: Standard structure and conventions for Flowmade repositories. Use when creating new repos, reviewing repo structure, or setting up projects.
user-invocable: false
---
# Repository Conventions
Standard structure and conventions for Flowmade repositories.
## Repository Layout
All product repos should follow this structure relative to the architecture repo:
```
org/
├── architecture/ # Organizational source of truth
│ ├── manifesto.md # Organization identity and beliefs
│ ├── skills/ # User-invocable and background skills
│ └── agents/ # Subtask handlers
├── product-a/ # Product repository
│ ├── vision.md # Product vision (extends manifesto)
│ ├── CLAUDE.md # AI assistant instructions
│ ├── .gitea/workflows/ # CI/CD pipelines
│ └── ...
└── product-b/
└── ...
```
## Required Files
### vision.md
Every product repo needs a vision that extends the organization manifesto.
```markdown
# Vision
This product vision builds on the [organization manifesto](../architecture/manifesto.md).
## Who This Product Serves
### [Persona Name]
[Product-specific description]
*Extends: [Org persona] (from manifesto)*
## What They're Trying to Achieve
| Product Job | Enables Org Job |
|-------------|-----------------|
| "[Product job]" | "[Org job from manifesto]" |
## The Problem
[Pain points this product addresses]
## The Solution
[How this product solves those problems]
## Product Principles
### [Principle Name]
[Description]
*Extends: "[Org principle]"*
## Non-Goals
- **[Non-goal].** [Explanation]
```
### CLAUDE.md
Project-specific context for AI assistants. See [claude-md-writing skill](../claude-md-writing/SKILL.md) for detailed guidance.
```markdown
# [Project Name]
[One-line description]
## Organization Context
This repo is part of Flowmade. See:
- [Organization manifesto](../architecture/manifesto.md) - who we are, what we believe
- [Repository map](../architecture/repos.md) - how this fits in the bigger picture
- [Vision](./vision.md) - what this specific product does
## Setup
[How to get the project running locally]
## Project Structure
[Key directories and their purposes]
## Development
[How to build, test, run]
## Architecture
[Key architectural decisions and patterns]
```
### .gitea/workflows/ci.yaml
Standard CI pipeline. Adapt based on language/framework.
```yaml
name: CI
on:
push:
branches: [main]
pull_request:
branches: [main]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Build
run: make build
- name: Test
run: make test
- name: Lint
run: make lint
```
## Naming Conventions
### Repository Names
- Lowercase with hyphens: `product-name`, `service-name`
- Descriptive but concise
- No prefixes like `flowmade-` (the org already provides context)
### Branch Names
- `main` - default branch, always deployable
- `issue-<number>-<short-description>` - feature branches
- No `develop` or `staging` branches - use main + feature flags
### Commit Messages
- Imperative mood: "Add feature" not "Added feature"
- First line: summary (50 chars)
- Body: explain why, not what (the diff shows what)
- Reference issues: "Fixes #42" or "Closes #42"
## Open vs Proprietary
Decisions about what to open-source are guided by the manifesto:
| Type | Open Source? | Reason |
|------|--------------|--------|
| Infrastructure tooling | Yes | Builds community, low competitive risk |
| Generic libraries | Yes | Ecosystem benefits, adoption |
| Core platform IP | No | Differentiator, revenue source |
| Domain-specific features | No | Product value |
When uncertain, default to proprietary. Opening later is easier than closing.
## CI/CD Conventions
### Runners
- Use self-hosted ARM64 runners where possible (resource efficiency)
- KEDA-scaled runners for burst capacity
- Cache dependencies aggressively
### Deployments
- Main branch auto-deploys to staging
- Production requires manual approval or tag
- Use GitOps (ArgoCD) for Kubernetes deployments
## Dependencies
### Go Projects
- Use Go modules
- Vendor dependencies for reproducibility
- Pin major versions, allow minor updates
### General
- Prefer fewer, well-maintained dependencies
- Audit transitive dependencies
- Update regularly, don't let them rot
## Documentation
Following the manifesto principle "Encode, don't document":
- CLAUDE.md: How to work with this repo (for AI and humans)
- vision.md: Why this product exists
- Code comments: Only for non-obvious "why"
- No separate docs folder unless user-facing documentation


@@ -1,11 +1,17 @@
---
name: retro
description: >
  Run a retrospective on completed work. Captures insights as issues for later
  encoding into skills/agents. Use when capturing learnings, running retrospectives,
  or when user says /retro.
model: haiku
argument-hint: [task-description]
user-invocable: true
---
# Retrospective
Capture insights from completed work as issues on the architecture repo. Issues are later encoded into learnings and skills/agents.
@~/.claude/skills/vision-management/SKILL.md
@~/.claude/skills/gitea/SKILL.md
@@ -13,7 +19,7 @@ Capture insights from completed work as issues on the architecture repo. Issues
## Flow
```
Retro (any repo) → Issue (architecture repo) → Encode: learning file + skill/agent
```
The retro creates the issue. Encoding happens when the issue is worked on.
@@ -29,7 +35,7 @@ The retro creates the issue. Encoding happens when the issue is worked on.
3. **Identify insights**: For each insight, determine:
   - **What was learned**: The specific insight
   - **Where to encode it**: Which skill or agent should change?
   - **Governance impact**: What does this mean for how we work?
4. **Create issue on architecture repo**: Always create issues on `flowmade-one/architecture`:
@@ -45,7 +51,6 @@ The retro creates the issue. Encoding happens when the issue is worked on.
## Suggested Encoding
- [ ] \`skills/xxx/SKILL.md\` - [what to add/change]
- [ ] \`agents/xxx/agent.md\` - [what to add/change]
## Governance
@@ -78,14 +83,13 @@ When encoding a learning issue, the implementer should:
## Encoded In
- `skills/xxx/SKILL.md` - [what was added/changed]
## Governance
[What this means for how we work]
```
2. **Update skill/agent** with the encoded knowledge
3. **Close the issue** with reference to the learning file and changes made
@@ -94,7 +98,7 @@ When encoding a learning issue, the implementer should:
| Insight Type | Encode In |
|--------------|-----------|
| How to use a tool | `skills/[tool]/SKILL.md` |
| Workflow improvement | `skills/[skill]/SKILL.md` (user-invocable) |
| Subtask behavior | `agents/[agent]/agent.md` |
| Organization belief | `manifesto.md` |
| Product direction | `vision.md` (in product repo) |
@@ -103,8 +107,8 @@ When encoding a learning issue, the implementer should:
Add appropriate labels to issues:
- `learning` - Always add this
- `prompt-improvement` - For skill text changes
- `new-feature` - For new skills/agents
- `bug` - For things that are broken
## Guidelines


@@ -0,0 +1,90 @@
---
name: review-pr
description: >
Review a Gitea pull request. Fetches PR details, diff, and comments. Includes
both code review and software architecture review. Use when reviewing pull requests,
checking code quality, or when user says /review-pr.
model: sonnet
argument-hint: <pr-number>
user-invocable: true
---
# Review PR #$1
@~/.claude/skills/gitea/SKILL.md
@~/.claude/skills/software-architecture/SKILL.md
## 1. Gather Information
1. **View PR details** with `--comments` flag to see description, metadata, and discussion
2. **Get the diff** to review the changes:
```bash
tea pulls checkout <number>
git diff main...HEAD
```
## 2. Code Review
Review the changes and provide feedback on:
- Code quality and style
- Potential bugs or logic errors
- Test coverage
- Documentation updates
## 3. Software Architecture Review
Spawn the software-architect agent for architectural analysis:
```
Task tool with:
- subagent_type: "software-architect"
- prompt: |
ANALYSIS_TYPE: pr-review
TARGET: <pr-number>
CONTEXT: [Include the PR diff and description]
```
The architecture review checks:
- **Pattern consistency**: Changes follow existing codebase patterns
- **Dependency direction**: Dependencies flow correctly (toward domain layer)
- **Breaking changes**: API changes are flagged and justified
- **Module boundaries**: Changes respect existing package boundaries
- **Error handling**: Errors wrapped with context, proper error types used
## 4. Present Findings
Structure the review with two sections:
### Code Review
- Quality, bugs, style issues
- Test coverage gaps
- Documentation needs
### Architecture Review
- Summary of architectural concerns from agent
- Pattern violations or anti-patterns detected
- Dependency or boundary issues
- Breaking change assessment
## 5. User Actions
Ask the user what action to take:
- **Merge**: Post review summary as comment, then merge with rebase style
- **Request changes**: Leave feedback without merging
- **Comment only**: Add a comment for discussion
## Merging
Always use tea CLI for merges to preserve user attribution:
```bash
tea pulls merge <number> --style rebase
```
For review comments, use `tea comment` since `tea pulls review` is interactive-only:
```bash
tea comment <number> "<review summary>"
```
> **Warning**: Never use the Gitea API with admin credentials for user-facing operations like merging. This causes the merge to be attributed to the admin account instead of the user.


@@ -1,6 +1,8 @@
---
name: roadmap-planning
model: haiku
description: Plan features and break down work into implementable issues. Use when planning a feature, creating a roadmap, breaking down large tasks, or when the user needs help organizing work into issues.
user-invocable: false
---
# Roadmap Planning
@@ -14,7 +16,33 @@ How to plan features and create issues for implementation.
- Who benefits and how?
- What's the success criteria?
### 2. Discovery Phase
Before breaking down work into issues, understand the user's workflow:
| Question | Why It Matters |
|----------|----------------|
| **Who** is the user? | Specific persona, not "users" |
| **What's their goal?** | The job they're trying to accomplish |
| **What's their workflow?** | Step-by-step actions to reach the goal |
| **What exists today?** | Current state and gaps |
| **What's the MVP?** | Minimum to deliver value |
**Walk through the workflow step by step:**
1. User starts at: [starting point]
2. User does: [action 1]
3. System responds: [what happens]
4. User does: [action 2]
5. ... continue until goal is reached
**Identify the gaps:**
- Where does the workflow break today?
- Which steps are missing or painful?
- What's the smallest change that unblocks value?
**Derive issues from workflow gaps** - not from guessing what might be needed. Each issue should address a specific gap in the user's workflow.
### 3. Break Down the Work
- Identify distinct components
- Define boundaries between pieces
- Aim for issues that are:
@@ -22,16 +50,58 @@ How to plan features and create issues for implementation.
- Independently testable
- Clear in scope
### 4. Identify Dependencies
- Which pieces must come first?
- What can be parallelized?
- Are there external blockers?
### 5. Create Issues
- Follow issue-writing patterns
- Reference dependencies explicitly
- Use consistent labeling
## Vertical vs Horizontal Slices
**Prefer vertical slices** - each issue should deliver user-visible value.
| Vertical (Good) | Horizontal (Bad) |
|-----------------|------------------|
| "User can save and reload their diagram" | "Add persistence layer" + "Add save API" + "Add load API" |
| "Domain expert can list all orders" | "Add query syntax to ADL" + "Add query runtime" + "Add query UI" |
| "User can reset forgotten password" | "Add email service" + "Add reset token model" + "Add reset form" |
### The Demo Test
Ask: **Can a user demo or test this issue independently?**
- **Yes** → Good vertical slice
- **No** → Probably a horizontal slice, break differently
### Break by User Capability, Not Technical Layer
Instead of thinking "what technical components do we need?", think "what can the user do after this issue is done?"
```
# Bad: Technical layers
├── Add database schema
├── Add API endpoint
├── Add frontend form
# Good: User capabilities
├── User can create a draft
├── User can publish the draft
├── User can edit published content
```
### When Horizontal Slices Are Acceptable
Sometimes horizontal slices are necessary:
- **Infrastructure setup** - Database, CI/CD, deployment (do once, enables everything)
- **Security foundations** - Auth system before any protected features
- **Shared libraries** - When multiple features need the same foundation
Even then, keep them minimal and follow immediately with vertical slices that use them.
## Breaking Down Features
### By Layer


@@ -1,5 +1,11 @@
---
name: roadmap
description: >
  View current issues as a roadmap. Shows open issues organized by status and
  dependencies. Use when viewing roadmap, checking issue status, or when user
  says /roadmap.
model: haiku
user-invocable: true
---
# Roadmap View


@@ -0,0 +1,633 @@
---
name: software-architecture
model: haiku
description: >
Architectural patterns for building systems: DDD, Event Sourcing, event-driven communication.
Use when implementing features, reviewing code, planning issues, refining architecture,
or making design decisions. Ensures alignment with organizational beliefs about
auditability, domain modeling, and independent evolution.
user-invocable: false
---
# Software Architecture
Architectural patterns and best practices. This skill is auto-triggered when implementing, reviewing, or planning work that involves architectural decisions.
## Architecture Beliefs
These outcome-focused beliefs (from our organization manifesto) guide architectural decisions:
| Belief | Why It Matters |
|--------|----------------|
| **Auditability by default** | Systems should remember what happened, not just current state |
| **Business language in code** | Domain experts' words should appear in the codebase |
| **Independent evolution** | Parts should change without breaking other parts |
| **Explicit over implicit** | Intent and side effects should be visible and traceable |
## Beliefs → Patterns
| Belief | Primary Pattern | Supporting Patterns |
|--------|-----------------|---------------------|
| Auditability by default | Event Sourcing | Immutable events, temporal queries |
| Business language in code | Domain-Driven Design | Ubiquitous language, aggregates, bounded contexts |
| Independent evolution | Event-driven communication | Bounded contexts, published language |
| Explicit over implicit | Commands and Events | Domain events, clear intent |
## Event Sourcing
**Achieves:** Auditability by default
Instead of storing current state, store the sequence of events that led to it.
**Core concepts:**
- **Events** are immutable facts about what happened, named in past tense: `OrderPlaced`, `PaymentReceived`
- **State** is derived by replaying events, not stored directly
- **Event store** is append-only - history is never modified
**Why this matters:**
- Complete audit trail for free
- Debug by replaying history
- Answer "what was the state at time X?"
- Recover from bugs by fixing logic and replaying
**Trade-offs:**
- More complex than CRUD for simple cases
- Requires thinking in events, not state
- Eventually consistent read models
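The core idea — state derived by replaying immutable events — can be sketched in a few lines. The event types and state shape here are hypothetical, not a production event store:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: events are immutable facts
class OrderPlaced:
    order_id: str
    amount: int

@dataclass(frozen=True)
class PaymentReceived:
    order_id: str
    amount: int

def replay(events) -> dict:
    """Fold the event sequence into current state; state is never stored."""
    state = {"status": "new", "paid": 0}
    for event in events:
        if isinstance(event, OrderPlaced):
            state["status"] = "placed"
        elif isinstance(event, PaymentReceived):
            state["paid"] += event.amount
            state["status"] = "paid"
    return state

log = [OrderPlaced("o-1", 100), PaymentReceived("o-1", 100)]
print(replay(log))  # -> {'status': 'paid', 'paid': 100}
```

Answering "what was the state at time X?" is then just replaying a prefix of the log.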
## Domain-Driven Design
**Achieves:** Business language in code
The domain model reflects how the business thinks and talks.
**Core concepts:**
- **Ubiquitous language** - same terms in code, conversations, and documentation
- **Bounded contexts** - explicit boundaries where terms have consistent meaning
- **Aggregates** - clusters of objects that change together, with one root entity
- **Domain events** - capture what happened in business terms
**Why this matters:**
- Domain experts can read and validate the model
- New team members learn the domain through code
- Changes in business rules map clearly to code changes
**Trade-offs:**
- Upfront investment in understanding the domain
- Boundaries may need to shift as understanding grows
- Overkill for pure technical/infrastructure code
## Event-Driven Communication
**Achieves:** Independent evolution
Services communicate by publishing events, not calling each other directly.
**Core concepts:**
- **Publish events** when something important happens
- **Subscribe to events** you care about
- **No direct dependencies** between publisher and subscriber
- **Eventual consistency** - accept that not everything updates instantly
**Why this matters:**
- Add new services without changing existing ones
- Services can be deployed independently
- Natural resilience - if a subscriber is down, events queue
**Trade-offs:**
- Harder to trace request flow
- Eventual consistency requires different thinking
- Need infrastructure for reliable event delivery
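A toy in-process bus makes the decoupling concrete. Real systems use a broker (NATS, Kafka, etc.) for reliable delivery; this sketch only shows that publisher and subscribers share no direct dependency — all names here are illustrative.

```go
package main

import "fmt"

// Bus decouples publishers from subscribers: neither knows the other.
type Bus struct {
	subs map[string][]func(payload string)
}

func NewBus() *Bus { return &Bus{subs: map[string][]func(string){}} }

func (b *Bus) Subscribe(topic string, fn func(string)) {
	b.subs[topic] = append(b.subs[topic], fn)
}

func (b *Bus) Publish(topic, payload string) {
	for _, fn := range b.subs[topic] {
		fn(payload)
	}
}

func main() {
	bus := NewBus()
	// A new service subscribes without the publisher changing.
	bus.Subscribe("OrderPlaced", func(p string) { fmt.Println("billing saw:", p) })
	bus.Subscribe("OrderPlaced", func(p string) { fmt.Println("shipping saw:", p) })
	bus.Publish("OrderPlaced", "o-1")
}
```

Adding a third subscriber requires zero changes to the publishing code — that is the independent-evolution property.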
## Commands and Events
**Achieves:** Explicit over implicit
Distinguish between requests (commands) and facts (events).
**Core concepts:**
- **Commands** express intent: `PlaceOrder`, `CancelSubscription`
- Commands can be rejected (validation, business rules)
- **Events** express facts: `OrderPlaced`, `SubscriptionCancelled`
- Events are immutable - what happened, happened
**Why this matters:**
- Clear separation of "trying to do X" vs "X happened"
- Commands validate, events just record
- Enables replay - reprocess events with new logic
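The command/event split can be sketched as a handler that validates intent and, only on success, produces a fact. Type and error names are invented for illustration.

```go
package main

import (
	"errors"
	"fmt"
)

// PlaceOrder is a command: a request that may be rejected.
type PlaceOrder struct {
	OrderID string
	Items   int
}

// OrderPlaced is an event: an immutable fact that it happened.
type OrderPlaced struct {
	OrderID string
	Items   int
}

var ErrEmptyOrder = errors.New("order must contain at least one item")

// Handle validates the command; an event exists only if validation passes.
func Handle(cmd PlaceOrder) (OrderPlaced, error) {
	if cmd.Items <= 0 {
		return OrderPlaced{}, ErrEmptyOrder
	}
	return OrderPlaced{OrderID: cmd.OrderID, Items: cmd.Items}, nil
}

func main() {
	if _, err := Handle(PlaceOrder{OrderID: "o-1"}); err != nil {
		fmt.Println("rejected:", err) // commands can fail
	}
	ev, _ := Handle(PlaceOrder{OrderID: "o-2", Items: 3})
	fmt.Println("fact:", ev) // events cannot
}
```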
## When to Diverge
These patterns are defaults, not mandates. Diverge intentionally when:
- **Simplicity wins** - a simple CRUD endpoint doesn't need event sourcing
- **Performance requires it** - sometimes synchronous calls are necessary
- **Team context** - patterns the team doesn't understand cause more harm than good
- **Prototyping** - validate ideas before investing in full architecture
When diverging, document the decision in the project's `vision.md` Architecture section.
## Project-Level Architecture
Each project documents architectural choices in `vision.md`:
```markdown
## Architecture
This project follows organization architecture patterns.
### Alignment
- Event sourcing for [which aggregates/domains]
- Bounded contexts: [list contexts and their responsibilities]
- Event-driven communication between [which services]
### Intentional Divergences
| Area | Standard Pattern | What We Do Instead | Why |
|------|------------------|-------------------|-----|
```
## Go-Specific Best Practices
### Package Organization
**Good package structure:**
```
project/
├── cmd/ # Application entry points
│ └── server/
│ └── main.go
├── internal/ # Private packages
│ ├── domain/ # Core business logic
│ │ ├── user/
│ │ └── order/
│ ├── service/ # Application services
│ ├── repository/ # Data access
│ └── handler/ # HTTP/gRPC handlers
├── pkg/ # Public, reusable packages
└── go.mod
```
**Package naming:**
- Short, concise, lowercase: `user`, `order`, `auth`
- Avoid generic names: `util`, `common`, `helpers`, `misc`
- Name after what it provides, not what it contains
- One package per concept, not per file
**Package cohesion:**
- A package should have a single, focused responsibility
- Package internal files can use internal types freely
- Minimize exported types - export interfaces, hide implementations
### Interfaces
**Accept interfaces, return structs:**
```go
// Good: Accept interface, return concrete type
func NewUserService(repo UserRepository) *UserService {
return &UserService{repo: repo}
}
// Bad: Accept and return interface
func NewUserService(repo UserRepository) UserService {
return &userService{repo: repo}
}
```
**Define interfaces at point of use:**
```go
// Good: Interface defined where it's used (consumer owns the interface)
package service
type UserRepository interface {
FindByID(ctx context.Context, id string) (*User, error)
}
// Bad: Interface defined with implementation (producer owns the interface)
package repository
type UserRepository interface {
FindByID(ctx context.Context, id string) (*User, error)
}
```
**Keep interfaces small:**
- Prefer single-method interfaces
- Large interfaces indicate missing abstraction
- Compose small interfaces when needed
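The standard library's `io.ReadWriter` (composed from `io.Reader` and `io.Writer`) is the canonical example. A minimal sketch of the same pattern, with invented names:

```go
package main

import "fmt"

// Two single-method interfaces...
type Finder interface {
	Find(id string) (string, error)
}
type Saver interface {
	Save(id, v string) error
}

// ...composed only where a consumer actually needs both.
type FindSaver interface {
	Finder
	Saver
}

type memStore map[string]string

func (m memStore) Find(id string) (string, error) { return m[id], nil }
func (m memStore) Save(id, v string) error        { m[id] = v; return nil }

func main() {
	var s FindSaver = memStore{}
	s.Save("a", "1")
	v, _ := s.Find("a")
	fmt.Println(v)
}
```

Consumers that only read can depend on `Finder` alone, which keeps test doubles trivial.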
### Error Handling
**Wrap errors with context:**
```go
// Good: Wrap with context
if err != nil {
return fmt.Errorf("fetching user %s: %w", id, err)
}
// Bad: Return bare error
if err != nil {
return err
}
```
**Use sentinel errors for expected conditions:**
```go
var ErrNotFound = errors.New("not found")
var ErrConflict = errors.New("conflict")
// Check with errors.Is
if errors.Is(err, ErrNotFound) {
// handle not found
}
```
**Error types for rich errors:**
```go
type ValidationError struct {
Field string
Message string
}
func (e *ValidationError) Error() string {
return fmt.Sprintf("%s: %s", e.Field, e.Message)
}
// Check with errors.As
var valErr *ValidationError
if errors.As(err, &valErr) {
// handle validation error
}
```
### Dependency Injection
**Constructor injection:**
```go
type UserService struct {
repo UserRepository
logger Logger
}
func NewUserService(repo UserRepository, logger Logger) *UserService {
return &UserService{
repo: repo,
logger: logger,
}
}
```
**Wire dependencies in main:**
```go
func main() {
// Create dependencies
db := database.Connect()
logger := slog.Default()
// Wire up services
userRepo := repository.NewUserRepository(db)
userService := service.NewUserService(userRepo, logger)
userHandler := handler.NewUserHandler(userService)
// Start server
http.Handle("/users", userHandler)
http.ListenAndServe(":8080", nil)
}
```
**Avoid global state:**
- No `init()` for service initialization
- No package-level variables for dependencies
- Pass context explicitly, don't store in structs
### Testing
**Table-driven tests:**
```go
func TestUserService_Create(t *testing.T) {
tests := []struct {
name string
input CreateUserInput
want *User
wantErr error
}{
{
name: "valid user",
input: CreateUserInput{Email: "test@example.com"},
want: &User{Email: "test@example.com"},
},
{
name: "invalid email",
input: CreateUserInput{Email: "invalid"},
wantErr: ErrInvalidEmail,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// arrange, act, assert
})
}
}
```
**Test doubles:**
- Use interfaces for test doubles
- Prefer hand-written mocks over generated ones for simple cases
- Use `testify/mock` or `gomock` for complex mocking needs
**Test package naming:**
- `package user_test` for black-box testing (preferred)
- `package user` for white-box testing when needed
## Generic Architecture Patterns
### Layered Architecture
```
┌─────────────────────────────────┐
│ Presentation │ HTTP handlers, CLI, gRPC
├─────────────────────────────────┤
│ Application │ Use cases, orchestration
├─────────────────────────────────┤
│ Domain │ Business logic, entities
├─────────────────────────────────┤
│ Infrastructure │ Database, external services
└─────────────────────────────────┘
```
**Rules:**
- Dependencies point downward only
- Upper layers depend on interfaces, not implementations
- Domain layer has no external dependencies
### SOLID Principles
**Single Responsibility (S):**
- Each module has one reason to change
- Split code that changes for different reasons
**Open/Closed (O):**
- Open for extension, closed for modification
- Add new behavior through new types, not changing existing ones
**Liskov Substitution (L):**
- Subtypes must be substitutable for their base types
- Interfaces should be implementable without surprises
**Interface Segregation (I):**
- Clients shouldn't depend on interfaces they don't use
- Prefer many small interfaces over few large ones
**Dependency Inversion (D):**
- High-level modules shouldn't depend on low-level modules
- Both should depend on abstractions
### Dependency Direction
```
┌──────────────┐
│ Domain │
│ (no deps) │
└──────────────┘
┌────────────┴────────────┐
│ │
┌───────┴───────┐ ┌───────┴───────┐
│ Application │ │Infrastructure │
│ (uses domain) │ │(implements │
└───────────────┘ │ domain intf) │
▲ └───────────────┘
┌───────┴───────┐
│ Presentation │
│(calls app) │
└───────────────┘
```
**Key insight:** Infrastructure implements domain interfaces, doesn't define them. This inverts the "natural" dependency direction.
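In Go this inversion is just an interface owned by the domain plus a compile-time assertion in infrastructure. A condensed single-file sketch (in a real project these would be separate packages; the names are illustrative):

```go
package main

import "fmt"

// The domain package defines the interface it needs...
type UserStore interface {
	Get(id string) string
}

// ...and infrastructure implements it, so infra depends on domain,
// not the other way around.
type PostgresUserStore struct{} // stands in for a real DB-backed type

func (PostgresUserStore) Get(id string) string { return "user-" + id }

// Compile-time check that infrastructure satisfies the domain interface.
var _ UserStore = PostgresUserStore{}

// Domain logic depends only on the abstraction.
func Greet(s UserStore, id string) string { return "hello, " + s.Get(id) }

func main() {
	fmt.Println(Greet(PostgresUserStore{}, "42")) // hello, user-42
}
```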
### Module Boundaries
**Signs of good boundaries:**
- Modules can be understood in isolation
- Changes are localized within modules
- Clear, minimal public API
- Dependencies flow in one direction
**Signs of bad boundaries:**
- Circular dependencies between modules
- "Shotgun surgery" - small changes require many file edits
- Modules reach into each other's internals
- Unclear ownership of concepts
## Repository Health Indicators
### Positive Indicators
| Indicator | What to Look For |
|-----------|------------------|
| Clear structure | Obvious package organization, consistent naming |
| Small interfaces | Most interfaces have 1-3 methods |
| Explicit dependencies | Constructor injection, no globals |
| Test coverage | Unit tests for business logic, integration tests for boundaries |
| Error handling | Wrapped errors, typed errors for expected cases |
| Documentation | CLAUDE.md accurate, code comments explain "why" |
### Warning Signs
| Indicator | What to Look For |
|-----------|------------------|
| God packages | `utils/`, `common/`, `helpers/` with 20+ files |
| Circular deps | Package A imports B, B imports A |
| Deep nesting | 4+ levels of directory nesting |
| Huge files | Files with 500+ lines |
| Interface pollution | Interfaces for everything, even single implementations |
| Global state | Package-level variables, `init()` for setup |
### Metrics to Track
- **Package fan-out:** How many packages does each package import?
- **Cyclomatic complexity:** How complex are the functions?
- **Test coverage:** What percentage of code is tested?
- **Import depth:** How deep is the import tree?
## Review Checklists
### Repository Audit Checklist
Use this when evaluating overall repository health.
**Structure:**
- [ ] Clear package organization following Go conventions
- [ ] No circular dependencies between packages
- [ ] Appropriate use of `internal/` for private packages
- [ ] `cmd/` for application entry points
**Dependencies:**
- [ ] Dependencies flow inward (toward domain)
- [ ] Interfaces defined at point of use (not with implementation)
- [ ] No global state or package-level dependencies
- [ ] Constructor injection throughout
**Code Quality:**
- [ ] Consistent naming conventions
- [ ] No "god" packages (utils, common, helpers)
- [ ] Errors wrapped with context
- [ ] Small, focused interfaces
**Testing:**
- [ ] Unit tests for domain logic
- [ ] Integration tests for boundaries (DB, HTTP)
- [ ] Tests are readable and maintainable
- [ ] Test coverage for critical paths
**Documentation:**
- [ ] CLAUDE.md is accurate and helpful
- [ ] vision.md explains the product purpose
- [ ] Code comments explain "why", not "what"
### Issue Refinement Checklist
Use this when reviewing issues for architecture impact.
**Scope:**
- [ ] Issue is a vertical slice (user-visible value)
- [ ] Changes are localized to specific packages
- [ ] No cross-cutting concerns hidden in implementation
**Design:**
- [ ] Follows existing patterns in the codebase
- [ ] New abstractions are justified
- [ ] Interface changes are backward compatible (or breaking change is documented)
**Dependencies:**
- [ ] New dependencies are minimal and justified
- [ ] No new circular dependencies introduced
- [ ] Integration points are clearly defined
**Testability:**
- [ ] Acceptance criteria are testable
- [ ] New code can be unit tested in isolation
- [ ] Integration test requirements are clear
### PR Review Checklist
Use this when reviewing pull requests for architecture concerns.
**Structure:**
- [ ] Changes respect existing package boundaries
- [ ] New packages follow naming conventions
- [ ] No new circular dependencies
**Interfaces:**
- [ ] Interfaces are defined where used
- [ ] Interfaces are minimal and focused
- [ ] Breaking interface changes are justified
**Dependencies:**
- [ ] Dependencies injected via constructors
- [ ] No new global state
- [ ] External dependencies properly abstracted
**Error Handling:**
- [ ] Errors wrapped with context
- [ ] Sentinel errors for expected conditions
- [ ] Error types for rich error information
**Testing:**
- [ ] New code has appropriate test coverage
- [ ] Tests are clear and maintainable
- [ ] Edge cases covered
## Anti-Patterns to Flag
### God Packages
**Problem:** Packages like `utils/`, `common/`, `helpers/` become dumping grounds.
**Symptoms:**
- 20+ files in one package
- Unrelated functions grouped together
- Package imported by everything
**Fix:** Extract cohesive packages based on what they provide: `validation`, `httputil`, `timeutil`.
### Circular Dependencies
**Problem:** Package A imports B, and B imports A (directly or transitively).
**Symptoms:**
- Import cycle compile errors
- Difficulty understanding code flow
- Changes cascade unexpectedly
**Fix:**
- Extract shared types to a third package
- Use interfaces to invert dependency
- Merge packages if truly coupled
### Leaky Abstractions
**Problem:** Implementation details leak through abstraction boundaries.
**Symptoms:**
- Database types in domain layer
- HTTP types in service layer
- Framework types in business logic
**Fix:** Define types at each layer, map between them explicitly.
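A sketch of explicit mapping at the boundary, with invented types — the infrastructure row type never crosses into the domain:

```go
package main

import "fmt"

// Infrastructure-layer type: mirrors the database row.
type userRow struct {
	ID    int64
	Email string
}

// Domain-layer type: no database concerns leak through.
type User struct {
	ID    string
	Email string
}

// Explicit mapping at the boundary keeps the layers independent.
func toDomain(r userRow) User {
	return User{ID: fmt.Sprintf("u-%d", r.ID), Email: r.Email}
}

func main() {
	fmt.Println(toDomain(userRow{ID: 7, Email: "a@example.com"}))
}
```

The mapping code is boring on purpose: it is the price of being able to change the schema without touching domain logic.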
### Anemic Domain Model
**Problem:** Domain objects are just data containers, logic is elsewhere.
**Symptoms:**
- Domain types have only getters/setters
- All logic in "service" classes
- Domain types can be in invalid states
**Fix:** Put behavior with data. Domain types should enforce their own invariants.
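A sketch of a domain type guarding its own invariant (the `Money` type and its rule are invented for illustration):

```go
package main

import (
	"errors"
	"fmt"
)

// Money enforces its own invariant: it can never be negative.
type Money struct {
	cents int // unexported: callers cannot construct invalid state
}

var ErrNegativeAmount = errors.New("amount must be non-negative")

// NewMoney is the only way in, so the invariant holds everywhere.
func NewMoney(cents int) (Money, error) {
	if cents < 0 {
		return Money{}, ErrNegativeAmount
	}
	return Money{cents: cents}, nil
}

// Behavior lives with the data instead of in a service class.
func (m Money) Add(other Money) Money { return Money{cents: m.cents + other.cents} }
func (m Money) Cents() int            { return m.cents }

func main() {
	a, _ := NewMoney(150)
	b, _ := NewMoney(50)
	fmt.Println(a.Add(b).Cents()) // 200
	if _, err := NewMoney(-1); err != nil {
		fmt.Println("rejected:", err)
	}
}
```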
### Shotgun Surgery
**Problem:** Small changes require editing many files across packages.
**Symptoms:**
- Adding a feature touches 10+ files
- Similar changes in multiple places
- Copy-paste between packages
**Fix:** Consolidate related code. If things change together, they belong together.
### Feature Envy
**Problem:** Code in one package is more interested in another package's data.
**Symptoms:**
- Many calls to another package's methods
- Pulling data just to compute something
- Logic that belongs elsewhere
**Fix:** Move the code to where the data lives, or extract the behavior to a shared place.
### Premature Abstraction
**Problem:** Creating interfaces and abstractions before they're needed.
**Symptoms:**
- Interfaces with single implementations
- "Factory" and "Manager" classes everywhere
- Configuration for things that never change
**Fix:** Write concrete code first. Extract abstractions when you have multiple implementations or need to break dependencies.
### Deep Hierarchy
**Problem:** Excessive layers of abstraction or inheritance.
**Symptoms:**
- 5+ levels of embedding/composition
- Hard to trace code flow
- Changes require understanding many layers
**Fix:** Prefer composition over inheritance. Flatten hierarchies where possible.


@@ -0,0 +1,349 @@
---
name: spawn-issues
description: Orchestrate parallel issue implementation with review cycles
model: haiku
argument-hint: <issue-number> [<issue-number>...]
allowed-tools: Bash, Task, Read, TaskOutput
user-invocable: true
---
# Spawn Issues (Orchestrator)
Orchestrate parallel issue implementation: spawn workers, review PRs, fix feedback, until all approved.
## Arguments
One or more issue numbers separated by spaces: `$ARGUMENTS`
Example: `/spawn-issues 42 43 44`
## Orchestration Flow
```
Concurrent Pipeline - each issue flows independently:
Issue #42 ──► worker ──► PR #55 ──► review ──► fix? ──► ✓
Issue #43 ──► worker ──► PR #56 ──► review ──► ✓
Issue #44 ──► worker ──► PR #57 ──► review ──► fix ──► ✓
As each step completes, immediately:
1. Print a status update
2. Start the next step for that issue
Don't wait for all workers before reviewing - pipeline each issue.
```
## Status Updates
Print a brief status update whenever any step completes:
```
[#42] Worker completed → PR #55 created
[#43] Worker completed → PR #56 created
[#42] Review: needs work → spawning fixer
[#43] Review: approved ✓
[#42] Fix completed → re-reviewing
[#44] Worker completed → PR #57 created
[#42] Review: approved ✓
[#44] Review: approved ✓
All done! Final summary:
| Issue | PR | Status |
|-------|-----|----------|
| #42 | #55 | approved |
| #43 | #56 | approved |
| #44 | #57 | approved |
```
## Implementation
### Step 1: Parse and Validate
Parse `$ARGUMENTS` into a list of issue numbers. If empty, inform the user:
```
Usage: /spawn-issues <issue-number> [<issue-number>...]
Example: /spawn-issues 42 43 44
```
### Step 2: Get Repository Info and Setup Worktrees
```bash
REPO_PATH=$(pwd)
REPO_NAME=$(basename "$REPO_PATH")
# Create parent worktrees directory
WORKTREES_DIR="${REPO_PATH}/../worktrees"
mkdir -p "${WORKTREES_DIR}"
```
For each issue, create the worktree upfront:
```bash
# Fetch latest from origin
cd "${REPO_PATH}"
git fetch origin
# Get issue details for branch naming
ISSUE_TITLE=$(tea issues <ISSUE_NUMBER> | grep "TITLE" | head -1)
BRANCH_NAME="issue-<ISSUE_NUMBER>-<kebab-title>"
# Create worktree for this issue
git worktree add "${WORKTREES_DIR}/${REPO_NAME}-issue-<ISSUE_NUMBER>" \
-b "${BRANCH_NAME}" origin/main
```
Track the worktree path for each issue.
### Step 3: Spawn All Issue Workers
For each issue number, spawn a background issue-worker agent and track its task_id:
```
Task tool with:
- subagent_type: "issue-worker"
- run_in_background: true
- prompt: <issue-worker prompt below>
```
Track state for each issue:
```
issues = {
42: { task_id: "xxx", stage: "implementing", pr: null, branch: null, review_iterations: 0 },
43: { task_id: "yyy", stage: "implementing", pr: null, branch: null, review_iterations: 0 },
44: { task_id: "zzz", stage: "implementing", pr: null, branch: null, review_iterations: 0 },
}
```
Print initial status:
```
Spawned 3 issue workers:
[#42] implementing...
[#43] implementing...
[#44] implementing...
```
**Issue Worker Prompt:**
```
You are an issue-worker agent. Implement issue #<NUMBER> autonomously.
Context:
- Repository path: <REPO_PATH>
- Repository name: <REPO_NAME>
- Issue number: <NUMBER>
- Worktree path: <WORKTREE_PATH>
Process:
1. Setup worktree:
cd <WORKTREE_PATH>
2. Get issue: tea issues <NUMBER> --comments
3. Plan with TodoWrite, implement the changes
4. Commit: git add -A && git commit -m "...\n\nCloses #<NUMBER>\n\nCo-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>"
5. Push: git push -u origin <branch-name>
6. Create PR: tea pulls create --title "[Issue #<NUMBER>] <title>" --description "Closes #<NUMBER>\n\n..."
Capture the PR number.
7. Cleanup: No cleanup needed - orchestrator handles worktree removal
8. Output EXACTLY this format (orchestrator parses it):
ISSUE_WORKER_RESULT
issue: <NUMBER>
pr: <PR_NUMBER>
branch: <branch-name>
status: <success|partial|failed>
title: <issue title>
summary: <1-2 sentence description>
Work autonomously. If blocked, note it in PR description and report status as partial/failed.
```
### Step 4: Event-Driven Pipeline
**Do NOT poll.** Wait for `<task-notification>` messages that arrive automatically when background tasks complete.
When a notification arrives:
1. Read the output file to get the result
2. Parse the result and print status update
3. Spawn the next stage (reviewer/fixer) in background
4. Continue waiting for more notifications
```
On <task-notification> for task_id X:
- Find which issue this task belongs to
- Read output file, parse result
- Print status update
- If not terminal state, spawn next agent in background
- Update issue state
- If all issues terminal, print final summary
```
**State transitions:**
```
implementing → (worker done) → reviewing → (approved) → DONE
→ (needs-work) → fixing → reviewing...
→ (3 iterations) → needs-manual-review
→ (worker failed) → FAILED
```
**On each notification, print status:**
```
[#42] Worker completed → PR #55 created, starting review
[#43] Worker completed → PR #56 created, starting review
[#42] Review: needs work → spawning fixer
[#43] Review: approved ✓
[#42] Fix completed → re-reviewing
[#44] Worker completed → PR #57 created, starting review
[#42] Review: approved ✓
[#44] Review: approved ✓
```
### Step 5: Spawn Reviewers and Fixers
When spawning reviewers/fixers, create worktrees for them and pass the path.
For review, create a review worktree from the PR branch:
```bash
cd "${REPO_PATH}"
git fetch origin
git worktree add "${WORKTREES_DIR}/${REPO_NAME}-review-<PR_NUMBER>" \
origin/<BRANCH_NAME>
```
Pass this worktree path to the reviewer/fixer agents.
**Code Reviewer:**
```
Task tool with:
- subagent_type: "code-reviewer"
- run_in_background: true
- prompt: <code-reviewer prompt below>
```
**Code Reviewer Prompt:**
```
You are a code-reviewer agent. Review PR #<PR_NUMBER> autonomously.
Context:
- Repository path: <REPO_PATH>
- PR number: <PR_NUMBER>
- Worktree path: <WORKTREE_PATH>
Process:
1. Move to worktree:
cd <WORKTREE_PATH>
2. Get PR details: tea pulls <PR_NUMBER> --comments
3. Review the diff: git diff origin/main...HEAD
4. Analyze changes for:
- Code quality and style
- Potential bugs or logic errors
- Test coverage
- Documentation
5. Post review comment: tea comment <PR_NUMBER> "<review summary>"
6. Cleanup: No cleanup needed - orchestrator handles worktree removal
7. Output EXACTLY this format:
REVIEW_RESULT
pr: <PR_NUMBER>
verdict: <approved|needs-work>
summary: <1-2 sentences>
Work autonomously. Be constructive but thorough.
```
**PR Fixer Prompt:** (see below)
### Step 6: Final Report
When all issues reach terminal state, display summary:
```
All done!
| Issue | PR | Status |
|-------|-----|---------------------|
| #42 | #55 | approved |
| #43 | #56 | approved |
| #44 | #57 | approved |
3 PRs created and approved
```
## PR Fixer
When spawning pr-fixer for a PR that needs work:
```
Task tool with:
- subagent_type: "pr-fixer"
- run_in_background: true
- prompt: <pr-fixer prompt below>
```
**PR Fixer Prompt:**
```
You are a pr-fixer agent. Address review feedback on PR #<NUMBER>.
Context:
- Repository path: <REPO_PATH>
- PR number: <NUMBER>
- Worktree path: <WORKTREE_PATH>
Process:
1. Move to worktree:
cd <WORKTREE_PATH>
2. Get feedback: tea pulls <NUMBER> --comments
3. Address each piece of feedback
4. Commit and push:
git add -A && git commit -m "Address review feedback\n\nCo-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>"
git push
5. Cleanup: No cleanup needed - orchestrator handles worktree removal
6. Output EXACTLY:
PR_FIXER_RESULT
pr: <NUMBER>
status: <fixed|partial|failed>
changes: <summary of fixes>
Work autonomously. If feedback is unclear, make reasonable judgment calls.
```
## Worktree Cleanup
After all issues reach terminal state, clean up all worktrees:
```bash
# Remove all worktrees created for this run
for worktree in "${WORKTREES_DIR}"/*; do
if [ -d "$worktree" ]; then
cd "${REPO_PATH}"
git worktree remove "$worktree" --force
fi
done
# Remove worktrees directory if empty
rmdir "${WORKTREES_DIR}" 2>/dev/null || true
```
**Important:** Always clean up worktrees, even if the orchestration failed partway through.
## Error Handling
- If an issue-worker fails, continue with others
- If a review fails, mark as "review-failed" and continue
- If pr-fixer fails after 3 iterations, mark as "needs-manual-review"
- Always report final status even if some items failed
- Always clean up all worktrees before exiting


@@ -0,0 +1,124 @@
---
name: spawn-pr-fixes
description: Spawn parallel background agents to address PR review feedback
model: haiku
argument-hint: [pr-number...]
allowed-tools: Bash, Task, Read
user-invocable: true
---
# Spawn PR Fixes
Spawn background agents to address review feedback on multiple PRs in parallel. Each agent works in an isolated git worktree.
## Arguments
Optional PR numbers separated by spaces: `$ARGUMENTS`
- With arguments: `/spawn-pr-fixes 12 15 18` - fix specific PRs
- Without arguments: `/spawn-pr-fixes` - find and fix all PRs with requested changes
## Process
### Step 1: Get Repository Info
```bash
REPO_PATH=$(pwd)
REPO_NAME=$(basename "$REPO_PATH")
```
### Step 2: Determine PRs to Fix
**If PR numbers provided**: Use those directly
**If no arguments**: Find PRs needing work
```bash
# List open PRs
tea pulls --state open
# For each PR, check if it has review comments requesting changes
tea pulls <number> --comments
```
Look for PRs where:
- Review comments exist that haven't been addressed
- PR is not approved yet
- PR is open (not merged/closed)
### Step 3: For Each PR
1. Fetch PR title using `tea pulls <number>`
2. Spawn background agent using Task tool:
```
Task tool with:
- subagent_type: "pr-fixer"
- run_in_background: true
- prompt: See agent prompt below
```
### Agent Prompt
For each PR, use this prompt:
```
You are a pr-fixer agent. Address review feedback on PR #<NUMBER> autonomously.
Context:
- Repository path: <REPO_PATH>
- Repository name: <REPO_NAME>
- PR number: <NUMBER>
Instructions from @agents/pr-fixer/agent.md:
1. Get PR details and review comments:
cd <REPO_PATH>
git fetch origin
tea pulls <NUMBER> --comments
2. Setup worktree from PR branch:
git worktree add ../<REPO_NAME>-pr-<NUMBER> origin/<branch-name>
cd ../<REPO_NAME>-pr-<NUMBER>
git checkout <branch-name>
3. Analyze feedback, create todos with TodoWrite
4. Address each piece of feedback
5. Commit and push:
   git add -A && git commit -m "Address review feedback\n\n...\n\nCo-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>"
git push
6. Spawn code-reviewer synchronously (NOT in background) to re-review
7. If needs more work, fix and re-review (max 3 iterations)
8. Cleanup (ALWAYS do this):
cd <REPO_PATH> && git worktree remove ../<REPO_NAME>-pr-<NUMBER> --force
9. Output concise summary (5-10 lines max):
PR #<NUMBER>: <title>
Status: <fixed|partial|blocked>
Feedback addressed: <count> items
Review: <approved|needs-work|skipped>
Work autonomously. Make judgment calls on ambiguous feedback. If blocked, note it in a commit message.
```
### Step 4: Report
After spawning all agents, display:
```
Spawned <N> pr-fixer agents:
| PR | Title | Status |
|-----|--------------------------|------------|
| #12 | Add /commit command | spawned |
| #15 | Add /pr command | spawned |
| #18 | Add CI status | spawned |
Agents working in background. Monitor with:
- Check PR list: tea pulls
- Check worktrees: git worktree list
```


@@ -0,0 +1,171 @@
---
name: update-claude-md
description: >
Update or create CLAUDE.md with current project context. Explores the project
and ensures organization context is present. Use when updating project docs,
adding CLAUDE.md, or when user says /update-claude-md.
model: haiku
context: fork
user-invocable: true
---
# Update CLAUDE.md
@~/.claude/skills/claude-md-writing/SKILL.md
@~/.claude/skills/repo-conventions/SKILL.md
Update or create CLAUDE.md for the current repository with proper organization context and current project state.
## Process
1. **Check for existing CLAUDE.md**: Look for `CLAUDE.md` in repo root
2. **If CLAUDE.md exists**:
- Read current content
- Identify which sections exist
- Note any custom content to preserve
3. **Explore the project**:
- Scan directory structure
- Identify language/framework (go.mod, package.json, Cargo.toml, etc.)
- Find key patterns (look for common directories, config files)
- Check for Makefile or build scripts
4. **Check organization context**:
- Does it have the "Organization Context" section?
- Does it link to `../architecture/manifesto.md`?
- Does it link to `../architecture/repos.md`?
- Does it link to `./vision.md`?
5. **Gather missing information**:
- If no one-line description: Ask user
- If no architecture section: Infer from code or ask user
6. **Update CLAUDE.md**:
**Always ensure these sections exist:**
```markdown
# [Project Name]
[One-line description]
## Organization Context
This repo is part of Flowmade. See:
- [Organization manifesto](../architecture/manifesto.md) - who we are, what we believe
- [Repository map](../architecture/repos.md) - how this fits in the bigger picture
- [Vision](./vision.md) - what this specific product does
## Setup
[From existing or ask user]
## Project Structure
[Generate from actual directory scan]
## Development
[From Makefile or existing]
## Architecture
[From existing or infer from code patterns]
```
7. **Preserve custom content**:
- Keep any additional sections the user added
- Don't remove information, only add/update
- If unsure, ask before removing
8. **Show diff and confirm**:
- Show what will change
- Ask user to confirm before writing
## Section-Specific Guidance
### Project Structure
Generate from actual directory scan:
```bash
# Scan top-level and key subdirectories
ls -la
ls -d pkg cmd internal src 2>/dev/null  # whichever exist
```
Format as tree showing purpose:
```markdown
## Project Structure
\`\`\`
project/
├── cmd/ # Entry points
├── pkg/ # Shared packages
│ ├── domain/ # Business logic
│ └── infra/ # Infrastructure
└── internal/ # Private packages
\`\`\`
```
### Development Commands
Extract from Makefile if present:
```bash
grep -E "^[a-zA-Z_-]+:" Makefile | head -10
```
Or from package.json scripts, Cargo.toml, etc.
### Architecture
Look for patterns:
- Event sourcing: Check for aggregates, events, projections
- Clean architecture: Check for domain, application, infrastructure layers
- API style: REST, gRPC, GraphQL
If unsure, ask: "What are the key architectural patterns in this project?"
## Output Example
```
## Updating CLAUDE.md
### Current State
- Has description: ✓
- Has org context: ✗ (will add)
- Has setup: ✓
- Has structure: Outdated (will update)
- Has development: ✓
- Has architecture: ✗ (will add)
### Changes
+ Adding Organization Context section
~ Updating Project Structure (new directories found)
+ Adding Architecture section
### New Project Structure
\`\`\`
arcadia/
├── cmd/
├── pkg/
│ ├── aether/ # Event sourcing runtime
│ ├── iris/ # WASM UI framework
│ ├── adl/ # Domain language
│ └── ...
└── internal/
\`\`\`
Proceed with update? [y/n]
```
## Guidelines
- Always add Organization Context if missing
- Preserve existing custom sections
- Update Project Structure from actual filesystem
- Don't guess at Architecture - ask if unclear
- Show changes before writing
- Reference claude-md-writing skill for best practices


@@ -1,6 +1,8 @@
 ---
 name: vision-management
+model: haiku
 description: Create, maintain, and evolve organization manifesto and product visions. Use when working with manifesto.md, vision.md, milestones, or aligning work with organizational direction.
+user-invocable: false
 ---
 # Vision Management
@@ -11,11 +13,11 @@ How to create, maintain, and evolve organizational direction at two levels: mani
 | Level | Document | Purpose | Command | Location |
 |-------|----------|---------|---------|----------|
-| **Organization** | `manifesto.md` | Identity, shared personas, beliefs, principles | `/manifesto` | Architecture repo |
+| **Organization** | `manifesto.md` | Identity, shared personas, beliefs, principles | `/manifesto` | `../architecture/` (sibling repo) |
-| **Product** | `vision.md` | Product-specific personas, jobs, solution | `/vision` | Product repos |
+| **Product** | `vision.md` | Product-specific personas, jobs, solution | `/vision` | Product repo root |
 | **Goals** | Gitea milestones | Measurable progress toward vision | `/vision goals` | Per repo |
-Product vision inherits from and extends the organization manifesto.
+Product vision **inherits from and extends** the organization manifesto - it should never duplicate.
 ---
@@ -74,32 +76,65 @@ What the organization explicitly does NOT do.
 ## Vision (Product Level)
-The vision defines what a specific product does. It lives in each product repo and extends the manifesto.
+The vision defines what a specific product does. It lives in each product repo and **extends the manifesto**.
 ### Vision Structure
 ```markdown
 # Vision
+This product vision builds on the [organization manifesto](../architecture/manifesto.md).
 ## Who This Product Serves
-Product-specific personas (may extend org personas).
-- **Persona Name**: Product-specific context
+### [Persona Name]
+[Product-specific description]
+*Extends: [Org persona] (from manifesto)*
 ## What They're Trying to Achieve
-Product-specific jobs to be done.
-- "Help me [outcome] without [pain]"
+These trace back to organization-level jobs:
+| Product Job | Enables Org Job |
+|-------------|-----------------|
+| "[Product-specific job]" | "[Org job from manifesto]" |
 ## The Problem
-Pain points this product addresses.
+[Pain points this product addresses]
 ## The Solution
-How this product solves those problems.
+[How this product solves those problems]
 ## Product Principles
-Product-specific principles (beyond org principles).
+These extend the organization's guiding principles:
+### [Principle Name]
+[Description]
+*Extends: "[Org principle]"*
 ## Non-Goals
-What this product explicitly does NOT do.
+These extend the organization's non-goals:
+- **[Non-goal].** [Explanation]
+## Architecture
+This project follows organization architecture patterns (see software-architecture skill).
+### Alignment
+- [Which patterns we use and where]
+### Intentional Divergences
+| Area | Standard Pattern | What We Do Instead | Why |
+|------|------------------|-------------------|-----|
 ```
 ### When to Update Vision
@@ -111,51 +146,58 @@ What this product explicitly does NOT do.
 ### Creating a Product Vision
-1. Reference the organization manifesto
+1. **Start with the manifesto** - read it first
-2. Define product-specific personas (can extend org personas)
+2. Define product personas that extend org personas
-3. Identify product-specific jobs to be done
+3. Identify product jobs that trace back to org jobs
 4. Articulate the problem this product solves
 5. Define the solution approach
-6. Set product-specific principles (if any)
+6. Set product-specific principles (noting what they extend)
 7. Document product non-goals
 8. Create initial milestones
 ---
-## Relationship: Manifesto → Vision
+## Inheritance Model
 ```
 Manifesto (org)          Vision (product)
-├── Shared Personas      Product Personas (more specific)
+├── Personas             Product Personas (extend with specifics)
-├── Org Jobs           → Product Jobs (subset/extension)
+├── Jobs                 Product Jobs (trace back to org jobs)
-├── Beliefs              (inherited, not duplicated)
+├── Beliefs              (inherited, never duplicated)
-├── Principles           Product Principles (additional)
+├── Principles           Product Principles (extend, note source)
-└── Non-Goals            Product Non-Goals (additional)
+└── Non-Goals            Product Non-Goals (additive)
 ```
-### Inheritance Model
+### Inheritance Rules
-- **Personas**: Product personas can be more specific versions of org personas
-- **Jobs**: Product jobs should trace back to org-level jobs
-- **Beliefs**: Inherited from manifesto, not duplicated in vision
-- **Principles**: Product can add specific principles; org principles apply automatically
-- **Non-Goals**: Product adds its own; org non-goals apply automatically
+| Component | Rule | Format |
+|-----------|------|--------|
+| **Personas** | Extend with product-specific context | `*Extends: [Org persona] (from manifesto)*` |
+| **Jobs** | Trace back to org-level jobs | Table with Product Job → Org Job columns |
+| **Beliefs** | Inherited automatically | Never include in vision |
+| **Principles** | Add product-specific, note what they extend | `*Extends: "[Org principle]"*` |
+| **Non-Goals** | Additive | Org non-goals apply automatically |
 ### Example
 **Manifesto** (organization):
 ```markdown
 ## Who We Serve
-- **Solo Developer**: Individual shipping side projects, time-constrained
+- **Agencies & Consultancies**: Teams building solutions for clients
 ```
-**Vision** (product - e.g., CLI tool):
+**Vision** (product - architecture tooling):
 ```markdown
 ## Who This Product Serves
-- **Solo Developer (CLI user)**: Uses terminal daily, prefers keyboard over GUI
+### Flowmade Developers
+The team building Flowmade's platform. They need efficient, consistent AI workflows.
+*Extends: Agencies & Consultancies (from manifesto) - we are our own first customer.*
 ```
-The product persona extends the org persona with product-specific context.
+The product persona extends the org persona with product-specific context and explicitly notes the connection.
 ---
@@ -180,8 +222,8 @@ Success: /commit and /pr commands handle 80% of workflows"
 ### Milestone-to-Vision Alignment
 Every milestone should trace to:
-- A persona (from vision or manifesto)
+- A persona (from vision, which extends manifesto)
-- A job to be done (from vision)
+- A job to be done (from vision, which traces to manifesto)
 - A measurable outcome
 ---
@@ -218,7 +260,7 @@ Manifesto → Vision → Milestones → Issues → Work → Retro → (updates)
 ```
 1. **Manifesto** defines organizational identity (very stable)
-2. **Vision** defines product direction (stable)
+2. **Vision** defines product direction, extends manifesto (stable)
 3. **Milestones** define measurable goals (evolve)
 4. **Issues** are work items toward goals
 5. **Work** implements the issues
@@ -232,9 +274,11 @@ Manifesto → Vision → Milestones → Issues → Work → Retro → (updates)
 | Question | Answer |
 |----------|--------|
 | Where do shared personas live? | `manifesto.md` in architecture repo |
-| Where do product personas live? | `vision.md` in product repo |
+| Where do product personas live? | `vision.md` in product repo (extend org personas) |
-| Where do beliefs live? | `manifesto.md` only (inherited) |
+| Where do beliefs live? | `manifesto.md` only (inherited, never duplicated) |
 | Where do goals live? | Gitea milestones (per repo) |
 | What command for org vision? | `/manifesto` |
 | What command for product vision? | `/vision` |
 | What repo for learnings? | Architecture repo |
+| How do product jobs relate to org jobs? | They trace back (show in table) |
+| How do product principles relate? | They extend (note the source) |

old/skills/vision/SKILL.md (new file)

@@ -0,0 +1,214 @@
---
name: vision
description: >
View the product vision and goal progress. Manages vision.md and Gitea milestones.
Use when viewing vision, managing goals, or when user says /vision.
model: haiku
argument-hint: [goals]
user-invocable: true
---
# Product Vision
@~/.claude/skills/vision-management/SKILL.md
@~/.claude/skills/gitea/SKILL.md
This skill manages **product-level** vision. For organization-level vision, use `/manifesto`.
## Architecture
| Level | Document | Purpose | Skill |
|-------|----------|---------|-------|
| **Organization** | `manifesto.md` | Who we are, shared personas, beliefs | `/manifesto` |
| **Product** | `vision.md` | Product-specific personas, jobs, solution | `/vision` |
| **Goals** | Gitea milestones | Measurable progress toward vision | `/vision goals` |
Product vision **inherits from and extends** the organization manifesto - it should never duplicate.
## Manifesto Location
The manifesto lives in the sibling `architecture` repo:
```
org/
├── architecture/
│ └── manifesto.md ← organization manifesto
├── product-a/
│ └── vision.md ← extends ../architecture/manifesto.md
└── product-b/
└── vision.md
```
Look for manifesto in this order:
1. `./manifesto.md` (if this IS the architecture repo)
2. `../architecture/manifesto.md` (sibling repo)
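The lookup order above can be sketched as a small shell helper (a minimal sketch; the function name is illustrative):

```shell
# find_manifesto: print the first manifesto.md found using the lookup
# order above, or warn and fail if neither location exists.
find_manifesto() {
  local candidate
  for candidate in "./manifesto.md" "../architecture/manifesto.md"; do
    if [ -f "$candidate" ]; then
      printf '%s\n' "$candidate"
      return 0
    fi
  done
  echo "WARN: no manifesto.md found; continuing without inheritance context" >&2
  return 1
}
```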
## Process
1. **Load organization context**: Find and read `manifesto.md` using the location rules above
- Extract personas (Who We Serve)
- Extract jobs to be done (What They're Trying to Achieve)
- Extract guiding principles
- Extract non-goals
- If not found, warn and continue without inheritance context
2. **Check for product vision**: Look for `vision.md` in the current repo root
3. **If no vision exists**:
- Show the organization manifesto summary
- Ask if the user wants to create a product vision
- Guide them through defining (with inheritance):
**Who This Product Serves**
- Show manifesto personas first
- Ask: "Which personas does this product serve? How does it extend or specialize them?"
- Product personas should reference org personas with product-specific context
**What They're Trying to Achieve**
- Show manifesto jobs first
- Ask: "What product-specific jobs does this enable? How do they trace back to org jobs?"
- Use a table format showing the connection
**The Problem**
- What pain points does this product solve?
**The Solution**
- How does this product address those jobs?
**Product Principles**
- Show manifesto principles first
- Ask: "Any product-specific principles? These should extend, not duplicate."
- Each principle should note what org principle it extends
**Product Non-Goals**
- Show manifesto non-goals first
- Ask: "Any product-specific non-goals?"
- Org non-goals apply automatically
- Create `vision.md` with proper inheritance markers
- Ask about initial goals, create as Gitea milestones
4. **If vision exists**:
- Display organization context summary
- Display the product vision from `vision.md`
- Validate inheritance (warn if vision duplicates rather than extends)
- Show current milestones and their progress: `tea milestones`
- Check if `$1` specifies an action:
- `goals`: Manage milestones (add, close, view progress)
- If no action specified, just display the current state
5. **Managing Goals (milestones)**:
```bash
# List milestones with progress
tea milestones
# Create a new goal
tea milestones create --title "<goal>" --description "For: <persona>
Job: <job to be done>
Success: <criteria>"
# View issues in a milestone
tea milestones issues <milestone-name>
# Close a completed goal
tea milestones close <milestone-name>
```
## Vision Structure Template
```markdown
# Vision
This product vision builds on the [organization manifesto](../architecture/manifesto.md).
## Who This Product Serves
### [Persona Name]
[Product-specific description]
*Extends: [Org persona] (from manifesto)*
## What They're Trying to Achieve
These trace back to organization-level jobs:
| Product Job | Enables Org Job |
|-------------|-----------------|
| "[Product-specific job]" | "[Org job from manifesto]" |
## The Problem
[Pain points this product addresses]
## The Solution
[How this product solves those problems]
## Product Principles
These extend the organization's guiding principles:
### [Principle Name]
[Description]
*Extends: "[Org principle]"*
## Non-Goals
These extend the organization's non-goals:
- **[Non-goal].** [Explanation]
```
## Output Format
```
## Organization Context
From manifesto.md:
- **Personas**: [list from manifesto]
- **Core beliefs**: [key beliefs]
- **Principles**: [list]
## Product: [Name]
### Who This Product Serves
- **[Persona 1]**: [Product-specific description]
↳ Extends: [Org persona]
### What They're Trying to Achieve
| Product Job | → Org Job |
|-------------|-----------|
| [job] | [org job it enables] |
### Vision Summary
[Problem/solution from vision.md]
### Goals (Milestones)
| Goal | For | Progress | Due |
|------|-----|----------|-----|
| [title] | [Persona] | 3/5 issues | [date] |
```
## Inheritance Rules
- **Personas**: Product personas extend org personas with product-specific context
- **Jobs**: Product jobs trace back to org-level jobs (show the connection)
- **Beliefs**: Inherited from manifesto, never duplicated in vision
- **Principles**: Product adds specific principles that extend org principles
- **Non-Goals**: Product adds its own; org non-goals apply automatically
## Guidelines
- Product vision builds on organization manifesto - extend, don't duplicate
- Every product persona should reference which org persona it extends
- Every product job should show which org job it enables
- Product principles should note which org principle they extend
- Use `/manifesto` for organization-level identity and beliefs
- Use `/vision` for product-specific direction and goals


@@ -0,0 +1,24 @@
---
name: work-issue
description: >
Work on a Gitea issue. Fetches issue details and sets up branch for implementation.
Use when working on issues, implementing features, or when user says /work-issue.
model: haiku
argument-hint: <issue-number>
user-invocable: true
---
# Work on Issue #$1
@~/.claude/skills/gitea/SKILL.md
@~/.claude/skills/software-architecture/SKILL.md
1. **View the issue** with the `--comments` flag to understand requirements and context
2. **Create a branch**: `git checkout -b issue-$1-<short-kebab-title>`
3. **Plan**: Use TodoWrite to break down the work based on acceptance criteria
4. **Check architecture**: Review the project's vision.md Architecture section for project-specific patterns and divergences
5. **Implement** the changes following architectural patterns (DDD, event sourcing where appropriate)
6. **Commit** with message referencing the issue
7. **Push** the branch to origin
8. **Create PR** with title "[Issue #$1] <title>" and body "Closes #$1"
9. **Auto-review**: Inform the user that auto-review is starting, then spawn the `code-reviewer` agent in background (using `run_in_background: true`) with the PR number
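Steps 2 and 6-8 follow fixed naming conventions; the branch-name part of step 2 can be sketched as (the helper name is illustrative):

```shell
# branch_for_issue: derive the conventional branch name for step 2 from the
# issue number and a short kebab-case version of the issue title.
branch_for_issue() {
  printf 'issue-%s-%s\n' "$1" "$2"
}

# Example for issue 42 titled "Fix login redirect":
#   git checkout -b "$(branch_for_issue 42 fix-login-redirect)"
#   git commit -m "fix: correct login redirect (closes #42)"
```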

repos.md (new file)

@@ -0,0 +1,71 @@
# Repository Map
Central registry of all Flowmade repositories.
## How to Use This
Each repo's CLAUDE.md should reference this map for organization context. When working in any repo, Claude can check here to understand how it fits in the bigger picture.
**Status markers:**
- **Active** - Currently in use
- **Splitting** - Being broken into smaller repos
- **Planned** - Will be created (from split or new)
## Repositories
### Organization
| Repo | Purpose | Status | Visibility |
|------|---------|--------|------------|
| architecture | Org source of truth: manifesto, Claude tooling, learnings | Active | Public |
### Platform
| Repo | Purpose | Status | Visibility |
|------|---------|--------|------------|
| arcadia | Monorepo containing platform code | Splitting | Private |
| aether | Event sourcing runtime with bytecode VM | Planned (from Arcadia) | Private |
| iris | WASM UI framework | Planned (from Arcadia) | Public |
| eskit | ES primitives (aggregates, events, projections, NATS) | Planned (from Arcadia) | Public |
| adl | Domain language compiler | Planned (from Arcadia) | Private |
| studio | Visual process designer, EventStorming tools | Planned (from Arcadia) | Private |
### Infrastructure
| Repo | Purpose | Status | Visibility |
|------|---------|--------|------------|
| gitserver | K8s-native git server (proves ES/IRIS stack) | Planned | Public |
## Relationships
```
arcadia (splitting into):
├── eskit (standalone, foundational)
├── iris (standalone)
├── aether (imports eskit)
├── adl (imports aether)
└── studio (imports aether, iris, adl)
gitserver (will use):
├── eskit (event sourcing)
└── iris (UI)
```
## Open Source Strategy
See [repo-conventions skill](skills/repo-conventions/SKILL.md) for classification criteria.
**Open source** (public):
- Generic libraries that benefit from community (eskit, iris)
- Infrastructure tooling that builds awareness (gitserver)
- Organization practices and tooling (architecture)
**Proprietary** (private):
- Core platform IP (aether VM, adl compiler)
- Product features (studio)
## Related
- [Manifesto](manifesto.md) - Organization identity and beliefs
- [Issue #53](https://git.flowmade.one/flowmade-one/architecture/issues/53) - Git server proposal
- [Issue #54](https://git.flowmade.one/flowmade-one/architecture/issues/54) - Arcadia split planning


@@ -1,5 +1,4 @@
 {
-"model": "opus",
 "permissions": {
 "allow": [
 "Bash(git:*)",
@@ -10,13 +9,6 @@
 "WebSearch"
 ]
 },
-"statusLine": {
-"type": "command",
-"command": "input=$(cat); current_dir=$(echo \"$input\" | jq -r '.workspace.current_dir'); model=$(echo \"$input\" | jq -r '.model.display_name'); style=$(echo \"$input\" | jq -r '.output_style.name'); git_info=\"\"; if [ -d \"$current_dir/.git\" ]; then cd \"$current_dir\" && branch=$(git branch --show-current 2>/dev/null) && status=$(git status --porcelain 2>/dev/null | wc -l | tr -d ' ') && git_info=\" [$branch$([ \"$status\" != \"0\" ] && echo \"*\")]\"; fi; printf \"\\033[2m$(whoami)@$(hostname -s) $(basename \"$current_dir\")$git_info | $model ($style)\\033[0m\""
-},
-"enabledPlugins": {
-"gopls-lsp@claude-plugins-official": true
-},
 "hooks": {
 "PreToolUse": [
 {
@@ -30,5 +22,12 @@
 ]
 }
 ]
+},
+"statusLine": {
+"type": "command",
+"command": "input=$(cat); current_dir=$(echo \"$input\" | jq -r '.workspace.current_dir'); model=$(echo \"$input\" | jq -r '.model.display_name'); style=$(echo \"$input\" | jq -r '.output_style.name'); git_info=\"\"; if [ -d \"$current_dir/.git\" ]; then cd \"$current_dir\" && branch=$(git branch --show-current 2>/dev/null) && status=$(git status --porcelain 2>/dev/null | wc -l | tr -d ' ') && git_info=\" [$branch$([ \"$status\" != \"0\" ] && echo \"*\")]\"; fi; printf \"\\033[2m$(whoami)@$(hostname -s) $(basename \"$current_dir\")$git_info | $model ($style)\\033[0m\""
+},
+"enabledPlugins": {
+"gopls-lsp@claude-plugins-official": true
 }
 }


@@ -0,0 +1,358 @@
---
name: capability-writing
description: >
Guide for designing and creating capabilities for the architecture repository.
A capability is a cohesive set of components (skill + agent).
Use when creating new skills, agents, or extending the
AI workflow system. Includes templates, design guidance, and conventions.
user-invocable: false
---
# Capability Writing
How to design and create capabilities for the architecture repository using Anthropic's latest best practices (January 2025).
## Core Principles (NEW)
### 1. Conciseness is Critical
**Default assumption: Claude already knows.**
- Don't explain git, tea, standard CLI tools
- Don't explain concepts Claude understands
- Only add domain-specific context
- Keep main SKILL.md under 500 lines
**Bad:** "Git is a version control system. The commit command saves changes..."
**Good:** "`git commit -m 'feat: add feature'`"
### 2. Progressive Disclosure
Skills can bundle reference files and scripts that load/execute on-demand:
```
skill-name/
├── SKILL.md # Main workflow (200-500 lines)
├── best-practices.md # Detailed guidance (loaded when referenced)
├── examples/
│ ├── example1.md
│ └── example2.md
├── reference/
│ ├── api-docs.md
│ └── checklists.md
└── scripts/ # Bundled with this skill
├── validate.sh # Executed, not loaded into context
└── process.sh
```
**Benefits:**
- Main SKILL.md stays concise
- Reference files load only when Claude references them
- Scripts execute without consuming context tokens
- Each skill is self-contained
### 3. Script Bundling
Bundle error-prone bash operations as scripts within the skill:
**Instead of inline bash:**
```markdown
5. Create PR: `tea pulls create --title "..." --description "..."`
```
**Bundle a script:**
```markdown
5. **Create PR**: `./scripts/create-pr.sh $issue "$title"`
```
```bash
# In skill-name/scripts/create-pr.sh
#!/bin/bash
set -e
# Script handles errors, retries, validation
```
**When to bundle scripts:**
- Operations with complex error handling
- Operations that need retries
- Operations with multiple validation steps
- Fragile bash operations
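As a concrete illustration, a bundled `create-pr.sh` might wrap the fragile remote call with input validation and retries. A sketch only: the `tea pulls create` flags are taken from the example above and may need adjusting for your setup.

```shell
# create_pr: validate inputs, then retry the remote call up to 3 times.
create_pr() {
  local issue=$1 title=$2 attempt
  if [ -z "$issue" ] || [ -z "$title" ]; then
    echo "Usage: create_pr <issue-number> <title>" >&2
    return 1
  fi
  for attempt in 1 2 3; do
    # Remote calls can fail transiently; retry before giving up.
    if tea pulls create --title "[Issue #$issue] $title" --description "Closes #$issue"; then
      return 0
    fi
    echo "WARN: attempt $attempt failed, retrying..." >&2
    sleep 1
  done
  echo "ERROR: could not create PR after 3 attempts" >&2
  return 1
}
```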
### 4. Degrees of Freedom
Match instruction style to task fragility:
| Degree | When | Example |
|--------|------|---------|
| **High** (text) | Multiple valid approaches | "Review code quality and suggest improvements" |
| **Medium** (template) | Preferred pattern with variation | "Use this template, customize as needed" |
| **Low** (script) | Fragile operation, exact sequence | "Run: `./scripts/validate.sh`" |
### 5. Model Selection (UPDATED)
**New guidance:** Default to Haiku, justify if not.
| Model | Use When | Cost vs Haiku |
|-------|----------|---------------|
| **Haiku** | Simple workflows, validated steps, with scripts | Baseline |
| **Sonnet** | When Haiku testing shows <80% success rate | 12x more expensive |
| **Opus** | Deep reasoning, architectural judgment | 60x more expensive |
**Haiku works well when:**
- Steps are simple and validated
- Instructions are concise
- Error-prone operations use scripts
- Outputs have structured templates
**Test with Haiku first.** Only upgrade if needed.
## Component Overview
| Component | Location | Purpose | Example |
|-----------|----------|---------|---------|
| **User-invocable Skill** | `skills/name/SKILL.md` | Workflow users trigger with `/name` | /work-issue, /dashboard |
| **Background Skill** | `skills/name/SKILL.md` | Knowledge auto-loaded when needed | gitea, issue-writing |
| **Agent** | `agents/name/AGENT.md` | Isolated subtask handler | code-reviewer |
## When to Use Each Component
### Decision Tree
```
Start here: What do you need?
|
+--> Just knowledge to apply automatically?
| --> Background skill (user-invocable: false)
|
+--> User-initiated workflow?
| --> User-invocable skill (user-invocable: true)
|
+--> Complex isolated work needing focused context?
| --> User-invocable skill + Agent
|
+--> New domain expertise + workflow + isolated work?
--> Full capability (background skill + user-invocable skill + agent)
```
**Detailed decision criteria:** See [best-practices.md](best-practices.md)
## Component Templates
### User-Invocable Skill Template
```yaml
---
name: skill-name
description: >
What this skill does and when to use it.
Use when [trigger conditions] or when user says /skill-name.
model: haiku
argument-hint: <required> [optional]
user-invocable: true
---
# Skill Title
@~/.claude/skills/relevant-skill/SKILL.md
Brief intro if needed.
1. **First step**: What to do
2. **Second step**: What to do next
3. **Ask for approval** before significant actions
4. **Execute** the approved actions
5. **Present results** with links and summary
```
**Complete template with all fields:** See [templates/user-invocable-skill.md](templates/user-invocable-skill.md)
### Background Skill Template
```yaml
---
name: skill-name
description: >
What this skill teaches and when to use it.
Include trigger conditions in description.
user-invocable: false
---
# Skill Name
Brief description of what this skill covers.
## Core Concepts
## Patterns and Templates
## Guidelines
## Examples
```
**Complete template:** See [templates/background-skill.md](templates/background-skill.md)
### Agent Template
```yaml
---
name: agent-name
description: What this agent does and when to spawn it
model: haiku
skills: skill1, skill2
disallowedTools:
- Edit # For read-only agents
- Write
---
You are a [role] specialist that [primary function].
## When Invoked
1. **Gather context**
2. **Analyze**
3. **Act**
4. **Report**
```
**Complete template:** See [templates/agent.md](templates/agent.md)
**Helper script template:** See [templates/helper-script.sh](templates/helper-script.sh)
## Structure Examples
### Simple Skill (< 300 lines, no scripts)
```
skills/simple-skill/
└── SKILL.md
```
### Progressive Disclosure (with reference files)
```
skills/complex-skill/
├── SKILL.md (~200 lines)
├── reference/
│ ├── detailed-guide.md
│ └── api-reference.md
└── examples/
└── usage-examples.md
```
### With Bundled Scripts
```
skills/skill-with-scripts/
├── SKILL.md
├── reference/
│ └── error-handling.md
└── scripts/
├── validate.sh
└── process.sh
```
**Detailed examples:** See [examples/](examples/) folder
## Referencing Skills
### In User-Invocable Skills
Use `@` file reference syntax to guarantee background skill content is loaded:
```markdown
@~/.claude/skills/gitea/SKILL.md
@~/.claude/skills/issue-writing/SKILL.md
```
**Important:** Do NOT use phrases like "Use the gitea skill" - file references guarantee the content is available.
### In Agents
List skill names in frontmatter (not paths):
```yaml
---
name: product-manager
skills: gitea, issue-writing, backlog-grooming
---
```
## Common Patterns
### Approval Workflow
```markdown
4. **Present plan** for approval
5. **If approved**, create the issues
6. **Present summary** with links
```
### Conditional Behavior
```markdown
## If issue number provided ($1):
1. Fetch specific issue
2. Process it
## If no argument (batch mode):
1. List all issues
2. Process each
```
### Spawning Agents
```markdown
9. **Auto-review**: Spawn the `code-reviewer` agent with the PR number
```
### Read-Only Agents
```yaml
---
name: code-reviewer
disallowedTools:
- Edit
- Write
---
```
## Quick Reference
**Frontmatter fields:** See [reference/frontmatter-fields.md](reference/frontmatter-fields.md)
**Model selection:** See [reference/model-selection.md](reference/model-selection.md)
**Anti-patterns:** See [reference/anti-patterns.md](reference/anti-patterns.md)
**Best practices:** See [best-practices.md](best-practices.md)
## Naming Conventions
| Component | Convention | Examples |
|-----------|------------|----------|
| Skill folder | kebab-case | `software-architecture`, `work-issue` |
| Skill file | UPPERCASE | `SKILL.md` |
| Agent folder | kebab-case | `code-reviewer`, `issue-worker` |
| Agent file | UPPERCASE | `AGENT.md` |
**Skills:** Name after domain/action (good: `gitea`, `work-issue`; bad: `utils`, `helpers`)
**Agents:** Name by role/persona (good: `code-reviewer`; bad: `helper`, `agent1`)
## Checklists
### Before Creating a User-Invocable Skill
- [ ] Workflow is used multiple times
- [ ] User explicitly triggers it (not automatic)
- [ ] Clear start and end points
- [ ] Frontmatter has `user-invocable: true`
- [ ] Description includes "Use when... or when user says /skill-name"
- [ ] Background skills referenced via `@~/.claude/skills/<name>/SKILL.md`
- [ ] Approval checkpoints before significant actions
- [ ] File at `skills/<name>/SKILL.md`
- [ ] **Model defaults to `haiku`** unless justified
### Before Creating a Background Skill
- [ ] Knowledge is used in multiple places (not just once)
- [ ] Existing skills do not already cover this domain
- [ ] Content is specific and actionable (not generic)
- [ ] Frontmatter has `user-invocable: false`
- [ ] Description includes trigger terms
- [ ] File at `skills/<name>/SKILL.md`
### Before Creating an Agent
- [ ] Built-in agents (Explore, Plan) are not sufficient
- [ ] Context isolation or skill composition is needed
- [ ] Clear role/persona emerges
- [ ] `model` selection is deliberate (default to `haiku`)
- [ ] `skills` list is right-sized (not too many)
- [ ] File at `agents/<name>/AGENT.md`


@@ -0,0 +1,500 @@
# Skill Authoring Best Practices
Based on Anthropic's latest agent skills documentation (January 2025).
## Core Principles
### Concise is Key
> "The context window is a public good. Default assumption: Claude is already very smart."
**Only add context Claude doesn't already have.**
**Challenge each piece of information:**
- "Does Claude really need this explanation?"
- "Can I assume Claude knows this?"
- "Does this paragraph justify its token cost?"
**Good example (concise):**
```markdown
## Extract PDF text
Use pdfplumber:
\`\`\`python
import pdfplumber
with pdfplumber.open("file.pdf") as pdf:
text = pdf.pages[0].extract_text()
\`\`\`
```
**Bad example (verbose):**
```markdown
## Extract PDF text
PDF (Portable Document Format) files are a common file format that contains text,
images, and other content. To extract text from a PDF, you'll need to use a library.
There are many libraries available for PDF processing, but we recommend pdfplumber
because it's easy to use and handles most cases well. First, you'll need to install
it using pip. Then you can use the code below...
```
The concise version assumes Claude knows what PDFs are and how libraries work.
### Set Appropriate Degrees of Freedom
Match the level of specificity to the task's fragility and variability.
#### High Freedom (Text-Based Instructions)
Use when multiple approaches are valid:
```markdown
## Code Review Process
1. Analyze code structure and organization
2. Check for potential bugs or edge cases
3. Suggest improvements for readability
4. Verify adherence to project conventions
```
#### Medium Freedom (Templates/Pseudocode)
Use when there's a preferred pattern but variation is acceptable:
```markdown
## Generate Report
Use this template and customize as needed:
\`\`\`python
def generate_report(data, format="markdown", include_charts=True):
# Process data
# Generate output in specified format
# Optionally include visualizations
\`\`\`
```
#### Low Freedom (Exact Scripts)
Use when operations are fragile and error-prone:
```markdown
## Database Migration
Run exactly this script:
\`\`\`bash
python scripts/migrate.py --verify --backup
\`\`\`
Do not modify the command or add additional flags.
```
**Analogy:** Think of Claude as a robot exploring a path:
- **Narrow bridge with cliffs**: One safe way forward. Provide specific guardrails (low freedom)
- **Open field**: Many paths lead to success. Give general direction (high freedom)
### Progressive Disclosure
Split large skills into layers that load on-demand.
#### Three Levels of Loading
| Level | When Loaded | Token Cost | Content |
|-------|------------|------------|---------|
| **Level 1: Metadata** | Always (at startup) | ~100 tokens | `name` and `description` from frontmatter |
| **Level 2: Instructions** | When skill is triggered | Under 5k tokens | SKILL.md body with instructions |
| **Level 3: Resources** | As needed | Unlimited | Referenced files, scripts |
#### Organizing Large Skills
**Pattern 1: High-level guide with references**
```markdown
# PDF Processing
## Quick Start
\`\`\`python
import pdfplumber
with pdfplumber.open("file.pdf") as pdf:
text = pdf.pages[0].extract_text()
\`\`\`
## Advanced Features
**Form filling**: See [FORMS.md](FORMS.md)
**API reference**: See [REFERENCE.md](REFERENCE.md)
**Examples**: See [EXAMPLES.md](EXAMPLES.md)
```
Claude loads FORMS.md, REFERENCE.md, or EXAMPLES.md only when needed.
**Pattern 2: Domain-specific organization**
For skills with multiple domains:
```
bigquery-skill/
├── SKILL.md (overview and navigation)
└── reference/
├── finance.md (revenue, billing metrics)
├── sales.md (opportunities, pipeline)
├── product.md (API usage, features)
└── marketing.md (campaigns, attribution)
```
When user asks about revenue, Claude reads only `reference/finance.md`.
**Pattern 3: Conditional details**
```markdown
# DOCX Processing
## Creating Documents
Use docx-js. See [DOCX-JS.md](DOCX-JS.md).
## Editing Documents
For simple edits, modify XML directly.
**For tracked changes**: See [REDLINING.md](REDLINING.md)
**For OOXML details**: See [OOXML.md](OOXML.md)
```
#### Avoid Deeply Nested References
**Keep references one level deep from SKILL.md.**
**Bad (too deep):**
```
SKILL.md → advanced.md → details.md → actual info
```
**Good (one level):**
```
SKILL.md → {advanced.md, reference.md, examples.md}
```
#### Structure Longer Files with TOC
For reference files >100 lines, include a table of contents:
```markdown
# API Reference
## Contents
- Authentication and setup
- Core methods (create, read, update, delete)
- Advanced features (batch operations, webhooks)
- Error handling patterns
- Code examples
## Authentication and Setup
...
```
This ensures Claude can see the full scope even with partial reads.
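Because a TOC is just the file's level-2 headings, it can be generated mechanically. A minimal sketch (the `make_toc` helper is illustrative, not part of any skill tooling):

```python
import re

def make_toc(markdown: str) -> str:
    """Build a bulleted TOC from the level-2 headings of a markdown string."""
    headings = re.findall(r"^## (.+)$", markdown, flags=re.MULTILINE)
    # Skip an existing "Contents" heading so the TOC doesn't list itself
    return "\n".join(f"- {h}" for h in headings if h.lower() != "contents")

doc = (
    "# API Reference\n"
    "## Contents\n- ...\n"
    "## Authentication and Setup\ntext\n"
    "## Error Handling\ntext\n"
)
print(make_toc(doc))
```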
## Script Bundling
### When to Bundle Scripts
Bundle scripts for:
- **Error-prone operations**: Complex bash with retry logic
- **Fragile sequences**: Operations requiring exact order
- **Validation steps**: Checking conditions before proceeding
- **Reusable utilities**: Operations used in multiple steps
**Benefits of bundled scripts:**
- More reliable than generated code
- Save tokens (no code in context)
- Save time (no code generation)
- Ensure consistency
### Script Structure
```bash
#!/bin/bash
# script-name.sh - Brief description
#
# Usage: script-name.sh <param1> <param2>
#
# Example: script-name.sh issue-42 "Fix bug"
set -e # Exit on error
# Input validation
if [ $# -lt 2 ]; then
echo "Usage: $0 <param1> <param2>"
exit 1
fi
param1=$1
param2=$2
# Main logic with error handling
if ! some_command; then
echo "ERROR: Command failed"
exit 1
fi
# Success output
echo "SUCCESS: Operation completed"
```
### Referencing Scripts in Skills
**Make clear whether to execute or read:**
**Execute (most common):**
```markdown
7. **Create PR**: `./scripts/create-pr.sh $1 "$title"`
```
**Read as reference (for understanding complex logic):**
```markdown
See `./scripts/analyze-form.py` for the field extraction algorithm
```
### Solving, Not Punting
Scripts should handle error conditions, not punt to Claude.
**Good (handles errors):**
```python
def process_file(path):
try:
with open(path) as f:
return f.read()
except FileNotFoundError:
print(f"File {path} not found, creating default")
with open(path, 'w') as f:
f.write('')
return ''
except PermissionError:
print(f"Cannot access {path}, using default")
return ''
```
**Bad (punts to Claude):**
```python
def process_file(path):
return open(path).read() # Fails, Claude has to figure it out
```
## Workflow Patterns
### Plan-Validate-Execute
Add verification checkpoints to catch errors early.
**Example: Workflow with validation**
```markdown
## PDF Form Filling
Copy this checklist:
\`\`\`
Progress:
- [ ] Step 1: Analyze form (run analyze_form.py)
- [ ] Step 2: Create field mapping (edit fields.json)
- [ ] Step 3: Validate mapping (run validate_fields.py)
- [ ] Step 4: Fill form (run fill_form.py)
- [ ] Step 5: Verify output (run verify_output.py)
\`\`\`
**Step 1: Analyze**
Run: `python scripts/analyze_form.py input.pdf`
**Step 2: Create Mapping**
Edit `fields.json`
**Step 3: Validate**
Run: `python scripts/validate_fields.py fields.json`
Fix any errors before continuing.
**Step 4: Fill**
Run: `python scripts/fill_form.py input.pdf fields.json output.pdf`
**Step 5: Verify**
Run: `python scripts/verify_output.py output.pdf`
If verification fails, return to Step 2.
```
### Feedback Loops
**Pattern:** Run validator → fix errors → repeat
**Example: Document editing**
```markdown
1. Make edits to `word/document.xml`
2. **Validate**: `python scripts/validate.py unpacked_dir/`
3. If validation fails:
- Review error message
- Fix issues
- Run validation again
4. **Only proceed when validation passes**
5. Rebuild: `python scripts/pack.py unpacked_dir/ output.docx`
6. Test output document
```
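The validate-fix loop above can be sketched as a small driver. This is a minimal sketch: the three-attempt cap and the command passed in are illustrative assumptions, not part of the documented workflow.

```python
import subprocess

def validate_until_clean(cmd: list[str], max_attempts: int = 3) -> bool:
    """Run a validator command until it exits 0, up to max_attempts times."""
    for attempt in range(1, max_attempts + 1):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True  # safe to proceed (e.g. to the pack step)
        print(f"Attempt {attempt} failed:\n{result.stderr}")
        # in the real workflow, the reported issues are fixed here before retrying
    return False

# The document-editing loop above would use something like:
# validate_until_clean(["python", "scripts/validate.py", "unpacked_dir/"])
```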
## Model Selection
### Decision Framework
```
Start with Haiku
|
v
Test on 3-5 representative tasks
|
+-- Success rate ≥80%? ---------> Use Haiku ✓
|
+-- Success rate <80%? --------> Try Sonnet
|
v
Test on same tasks
|
+-- Success ≥80%? --> Use Sonnet
|
+-- Still failing? --> Opus or redesign task
```
### Haiku Works Well When
- **Steps are simple and validated**
- **Instructions are concise** (no verbose explanations)
- **Error-prone operations use scripts** (deterministic)
- **Outputs have structured templates**
- **Checklists replace open-ended judgment**
### Testing with Multiple Models
Test skills with all models you plan to use:
1. **Create test cases:** 3-5 representative scenarios
2. **Run with Haiku:** Measure success rate, response quality
3. **Run with Sonnet:** Compare results
4. **Adjust instructions:** If Haiku struggles, add clarity or scripts
What works for Opus might need more detail for Haiku.
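A comparison run can be as simple as a loop over models and test cases. In this sketch, `run_skill` is a hypothetical callback standing in for however you invoke the skill under test; the stub below just illustrates the shape of the result.

```python
def compare_models(test_cases, run_skill, models=("haiku", "sonnet")):
    """Return per-model success rates over the same set of test cases."""
    rates = {}
    for model in models:
        passed = sum(1 for case in test_cases if run_skill(model, case))
        rates[model] = passed / len(test_cases)
    return rates

# Stubbed example: haiku passes 4/5 cases, sonnet passes all 5
cases = [True, True, True, True, False]
stub = lambda model, case: True if model == "sonnet" else case
print(compare_models(cases, stub))  # {'haiku': 0.8, 'sonnet': 1.0}
```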
## Common Anti-Patterns
### Offering Too Many Options
**Bad (confusing):**
```markdown
You can use pypdf, or pdfplumber, or PyMuPDF, or pdf2image, or...
```
**Good (provide default):**
```markdown
Use pdfplumber for text extraction:
\`\`\`python
import pdfplumber
\`\`\`
For scanned PDFs requiring OCR, use pdf2image with pytesseract instead.
```
### Time-Sensitive Information
**Bad (will become wrong):**
```markdown
If you're doing this before August 2025, use the old API.
After August 2025, use the new API.
```
**Good (use "old patterns" section):**
```markdown
## Current Method
Use the v2 API: `api.example.com/v2/messages`
## Old Patterns
<details>
<summary>Legacy v1 API (deprecated 2025-08)</summary>
The v1 API used: `api.example.com/v1/messages`
This endpoint is no longer supported.
</details>
```
### Inconsistent Terminology
**Good (consistent):**
- Always "API endpoint"
- Always "field"
- Always "extract"
**Bad (inconsistent):**
- Mix "API endpoint", "URL", "API route", "path"
- Mix "field", "box", "element", "control"
- Mix "extract", "pull", "get", "retrieve"
### Windows-Style Paths
Always use forward slashes:
- **Good**: `scripts/helper.py`, `reference/guide.md`
- **Bad**: `scripts\helper.py`, `reference\guide.md`
Unix-style paths work cross-platform.
## Iterative Development
### Build Evaluations First
Create test cases BEFORE extensive documentation:
1. **Identify gaps**: Run Claude on tasks without skill, document failures
2. **Create evaluations**: Build 3-5 test scenarios
3. **Establish baseline**: Measure Claude's performance without skill
4. **Write minimal instructions**: Just enough to pass evaluations
5. **Iterate**: Execute evaluations, refine
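Evaluation cases can live as plain data before any harness exists. A minimal sketch of steps 2-3; the scenarios and checks are hypothetical, not taken from a real skill:

```python
# Each case pairs a prompt with a mechanically checkable expectation.
EVAL_CASES = [
    {
        "prompt": "Create an issue for the login timeout bug",
        "check": lambda out: "acceptance criteria" in out.lower(),
    },
    {
        "prompt": "List open PRs for this repo",
        "check": lambda out: "|" in out,  # expect a markdown table
    },
]

def score(outputs: list[str]) -> float:
    """Fraction of cases whose check passes; run without the skill first for a baseline."""
    passed = sum(1 for case, out in zip(EVAL_CASES, outputs) if case["check"](out))
    return passed / len(EVAL_CASES)
```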
### Develop Iteratively with Claude
**Use Claude to help write skills:**
1. **Complete a task without skill**: Work through problem, note what context you provide
2. **Identify reusable pattern**: What context is useful for similar tasks?
3. **Ask Claude to create skill**: "Create a skill that captures this pattern"
4. **Review for conciseness**: Remove unnecessary explanations
5. **Test on similar tasks**: Use skill with fresh Claude instance
6. **Iterate based on observation**: Where does Claude struggle?
Claude understands the skill format natively; no special prompts are needed.
## Checklist for Effective Skills
**Before publishing:**
### Core Quality
- [ ] Description is specific and includes key terms
- [ ] Description includes what skill does AND when to use it
- [ ] SKILL.md body under 500 lines
- [ ] Additional details in separate files (if needed)
- [ ] No time-sensitive information
- [ ] Consistent terminology throughout
- [ ] Examples are concrete, not abstract
- [ ] File references are one level deep
- [ ] Progressive disclosure used appropriately
- [ ] Workflows have clear steps
### Code and Scripts
- [ ] Scripts solve problems, don't punt to Claude
- [ ] Error handling is explicit and helpful
- [ ] No "magic numbers" (all values justified)
- [ ] Required packages listed and verified
- [ ] Scripts have clear documentation
- [ ] No Windows-style paths (all forward slashes)
- [ ] Validation steps for critical operations
- [ ] Feedback loops for quality-critical tasks
### Testing
- [ ] At least 3 test cases created
- [ ] Tested with Haiku (if that's the target)
- [ ] Tested with real usage scenarios
- [ ] Team feedback incorporated (if applicable)

# Example: Progressive Disclosure Skill
A skill that uses reference files to keep the main SKILL.md concise.
## Structure
```
skills/database-query/
├── SKILL.md (~200 lines)
├── reference/
│ ├── schemas.md (table schemas)
│ ├── common-queries.md (frequently used queries)
│ └── optimization-tips.md (performance guidance)
└── examples/
├── simple-select.md
└── complex-join.md
```
## When to Use
- Skill content would be >500 lines
- Multiple domains or topics
- Reference documentation is large
- Want to keep main workflow concise
## Example: database-query (main SKILL.md)
```markdown
---
name: database-query
description: >
Help users query the PostgreSQL database with proper schemas and optimization.
Use when user needs to write SQL queries or mentions database/tables.
user-invocable: false
---
# Database Query Helper
Help write efficient, correct SQL queries for our PostgreSQL database.
## Quick Start
\`\`\`sql
SELECT id, name, created_at
FROM users
WHERE status = 'active'
LIMIT 10;
\`\`\`
## Table Schemas
We have 3 main schemas:
- **Users & Auth**: See [reference/schemas.md#users](reference/schemas.md#users)
- **Products**: See [reference/schemas.md#products](reference/schemas.md#products)
- **Orders**: See [reference/schemas.md#orders](reference/schemas.md#orders)
## Common Queries
For frequently requested queries, see [reference/common-queries.md](reference/common-queries.md):
- User activity reports
- Sales summaries
- Inventory status
## Writing Queries
1. **Identify tables**: Which schemas does this query need?
2. **Check schema**: Load relevant schema from reference
3. **Write query**: Use proper column names and types
4. **Optimize**: See [reference/optimization-tips.md](reference/optimization-tips.md)
## Examples
- **Simple select**: See [examples/simple-select.md](examples/simple-select.md)
- **Complex join**: See [examples/complex-join.md](examples/complex-join.md)
```
## Example: reference/schemas.md
```markdown
# Database Schemas
## Users
| Column | Type | Description |
|--------|------|-------------|
| id | UUID | Primary key |
| email | VARCHAR(255) | Unique email |
| name | VARCHAR(100) | Display name |
| status | ENUM('active','inactive','banned') | Account status |
| created_at | TIMESTAMP | Account creation |
| updated_at | TIMESTAMP | Last update |
## Products
| Column | Type | Description |
|--------|------|-------------|
| id | UUID | Primary key |
| name | VARCHAR(200) | Product name |
| price | DECIMAL(10,2) | Price in USD |
| inventory | INTEGER | Stock count |
| category_id | UUID | FK to categories |
## Orders
[...more tables...]
```
## Why This Works
- **Main file stays concise** (~200 lines)
- **Details load on-demand**: schemas.md loads when user asks about specific table
- **Fast for common cases**: Simple queries don't need reference files
- **Scalable**: Can add more schemas without bloating main file
## Loading Pattern
1. User: "Show me all active users"
2. Claude reads SKILL.md (sees Users schema reference)
3. Claude: "I'll load the users schema to get column names"
4. Claude reads reference/schemas.md#users
5. Claude writes correct query
## What Makes It Haiku-Friendly
- ✓ Main workflow is simple ("identify → check schema → write query")
- ✓ Reference files provide facts, not reasoning
- ✓ Clear pointers to where details are
- ✓ Examples show patterns

# Example: Simple Workflow Skill
A basic skill with just a SKILL.md file - no scripts or reference files needed.
## Structure
```
skills/list-open-prs/
└── SKILL.md
```
## When to Use
- Skill is simple (<300 lines)
- No error-prone bash operations
- No need for reference documentation
- Straightforward workflow
## Example: list-open-prs
```markdown
---
name: list-open-prs
description: >
List all open pull requests for the current repository.
Use when user wants to see PRs or says /list-open-prs.
model: haiku
user-invocable: true
---
# List Open PRs
@~/.claude/skills/gitea/SKILL.md
Show all open pull requests in the current repository.
## Process
1. **Get repository info**
- `git remote get-url origin`
- Parse owner/repo from URL
2. **Fetch open PRs**
- `tea pulls list --state open --output simple`
3. **Format results** as table
| PR # | Title | Author | Created |
|------|-------|--------|---------|
| ... | ... | ... | ... |
## Guidelines
- Show most recent PRs first
- Include link to each PR
- If no open PRs, say "No open pull requests"
```
## Why This Works
- **Concise**: Entire skill fits in ~30 lines
- **Simple commands**: Just git and tea CLI
- **No error handling needed**: tea handles errors gracefully
- **Structured output**: Table format is clear
## What Makes It Haiku-Friendly
- ✓ Simple sequential steps
- ✓ Clear commands with no ambiguity
- ✓ Structured output format
- ✓ No complex decision-making

# Example: Skill with Bundled Scripts
A skill that bundles helper scripts for error-prone operations.
## Structure
```
skills/deploy-to-staging/
├── SKILL.md
├── reference/
│ └── rollback-procedure.md
└── scripts/
├── validate-build.sh
├── deploy.sh
└── health-check.sh
```
## When to Use
- Operations have complex error handling
- Need retry logic
- Multiple validation steps
- Fragile bash commands
## Example: deploy-to-staging (main SKILL.md)
```markdown
---
name: deploy-to-staging
description: >
Deploy current branch to staging environment with validation and health checks.
Use when deploying to staging or when user says /deploy-to-staging.
model: haiku
user-invocable: true
---
# Deploy to Staging
Deploy current branch to staging with automated validation and rollback capability.
## Process
1. **Validate build**
- `./scripts/validate-build.sh`
- Checks tests pass, linter clean, no uncommitted changes
2. **Show deployment plan** for approval
- Branch name
- Latest commit
- Services that will be updated
3. **If approved, deploy**
- `./scripts/deploy.sh staging $branch`
- Script handles Docker build, push, k8s apply
4. **Health check**
- `./scripts/health-check.sh staging`
- Verifies all services are healthy
5. **Report results**
- Deployment URL
- Status of each service
- Rollback command if needed
## Rollback
If deployment fails, see [reference/rollback-procedure.md](reference/rollback-procedure.md)
```
## Example: scripts/validate-build.sh
```bash
#!/bin/bash
# validate-build.sh - Pre-deployment validation
#
# Checks:
# - Tests pass
# - Linter clean
# - No uncommitted changes
# - Docker builds successfully
set -e
RED='\033[0;31m'
GREEN='\033[0;32m'
NC='\033[0m'
error() {
echo -e "${RED}ERROR: $1${NC}" >&2
exit 1
}
success() {
echo -e "${GREEN}$1${NC}"
}
# Check 1: No uncommitted changes
if ! git diff-index --quiet HEAD --; then
error "Uncommitted changes detected. Commit or stash first."
fi
success "No uncommitted changes"
# Check 2: Tests pass
if ! npm test > /dev/null 2>&1; then
error "Tests failing. Fix tests before deploying."
fi
success "Tests pass"
# Check 3: Linter clean
if ! npm run lint > /dev/null 2>&1; then
error "Linter errors. Run 'npm run lint' to see details."
fi
success "Linter clean"
# Check 4: Docker builds
if ! docker build -t test-build . > /dev/null 2>&1; then
error "Docker build failed"
fi
success "Docker build successful"
echo ""
echo -e "${GREEN}✓ All validations passed${NC}"
```
## Example: scripts/deploy.sh
```bash
#!/bin/bash
# deploy.sh - Deploy to environment
#
# Usage: deploy.sh <environment> <branch>
# Example: deploy.sh staging feature-new-ui
set -e
ENVIRONMENT=$1
BRANCH=$2
if [ -z "$ENVIRONMENT" ] || [ -z "$BRANCH" ]; then
echo "Usage: $0 <environment> <branch>"
exit 1
fi
echo "Deploying $BRANCH to $ENVIRONMENT..."
# Build Docker image
docker build -t myapp:$BRANCH .
# Tag for registry
docker tag myapp:$BRANCH registry.example.com/myapp:$BRANCH
# Push to registry with retry
for i in {1..3}; do
if docker push registry.example.com/myapp:$BRANCH; then
break
fi
echo "Push failed, retrying ($i/3)..."
sleep 5
done
# Update Kubernetes deployment
kubectl set image deployment/myapp \
myapp=registry.example.com/myapp:$BRANCH \
-n $ENVIRONMENT
# Wait for rollout
kubectl rollout status deployment/myapp -n $ENVIRONMENT --timeout=5m
echo "Deployment complete!"
echo "URL: https://$ENVIRONMENT.example.com"
```
## Why This Works
**Script benefits:**
- **Deterministic**: Same behavior every time
- **Error handling**: Retries, clear messages
- **Validation**: Pre-flight checks prevent bad deployments
- **No token cost**: Scripts execute without loading code into context
**Skill stays simple:**
- Main SKILL.md is ~30 lines
- Just calls scripts in order
- No complex bash logic inline
- Easy to test scripts independently
## What Makes It Haiku-Friendly
- ✓ Skill has simple instructions ("run script X, then Y")
- ✓ Scripts handle all complexity
- ✓ Clear success/failure from script exit codes
- ✓ Validation prevents ambiguous states
- ✓ Structured output from scripts is easy to parse
## Testing Scripts
Scripts can be tested independently:
```bash
# Test validation
./scripts/validate-build.sh
# Test deployment (dry-run; assumes deploy.sh implements a --dry-run flag)
./scripts/deploy.sh staging test-branch --dry-run
# Test health check
./scripts/health-check.sh staging
```
This makes the skill more reliable than inline bash.

# Anti-Patterns to Avoid
Common mistakes when creating skills and agents.
## Skill Design Anti-Patterns
### 1. Overly Broad Components
**Bad:** One skill that does everything
```yaml
---
name: project-management
description: Handles issues, PRs, releases, documentation, deployment, testing, CI/CD...
---
# Project Management
This skill does:
- Issue management
- Pull request reviews
- Release planning
- Documentation
- Deployment
- Testing
- CI/CD configuration
...
```
**Why it's bad:**
- Huge context window usage
- Hard to maintain
- Unclear when to trigger
- Tries to do too much
**Good:** Focused components
```yaml
---
name: issue-writing
description: How to write clear, actionable issues with acceptance criteria.
---
```
**Separate skills for:**
- `issue-writing` - Issue quality
- `review-pr` - PR reviews
- `gitea` - CLI reference
- Each does one thing well
---
### 2. Vague Instructions
**Bad:**
```markdown
1. Handle the issue
2. Do the work
3. Finish up
4. Let me know when done
```
**Why it's bad:**
- No clear actions
- Claude has to guess
- Inconsistent results
- Hard to validate
**Good:**
```markdown
1. **View issue**: `tea issues $1 --comments`
2. **Create branch**: `git checkout -b issue-$1-<title>`
3. **Plan work**: Use TodoWrite to break down steps
4. **Implement**: Make necessary changes
5. **Commit**: `git commit -m "feat: ..."`
6. **Create PR**: `tea pulls create --title "..." --description "..."`
```
---
### 3. Missing Skill References
**Bad:**
```markdown
Use the gitea skill to create an issue.
```
**Why it's bad:**
- Skills have ~20% auto-activation rate
- Claude might not load the skill
- Inconsistent results
**Good:**
```markdown
@~/.claude/skills/gitea/SKILL.md
Use `tea issues create --title "..." --description "..."`
```
**The `@` reference guarantees the skill content is loaded.**
---
### 4. God Skills
**Bad:** Single 1500-line skill covering everything
```
skills/database/SKILL.md (1500 lines)
- PostgreSQL
- MySQL
- MongoDB
- Redis
- All queries
- All optimization tips
- All schemas
```
**Why it's bad:**
- Exceeds recommended 500 lines
- Loads everything even if you need one thing
- Hard to maintain
- Wastes tokens
**Good:** Progressive disclosure
```
skills/database/
├── SKILL.md (200 lines - overview)
├── reference/
│ ├── postgres.md
│ ├── mysql.md
│ ├── mongodb.md
│ └── redis.md
└── schemas/
├── users.md
├── products.md
└── orders.md
```
Claude loads only what's needed.
---
### 5. Premature Agent Creation
**Bad:** Creating an agent for every task
```
agents/
├── issue-viewer/
├── branch-creator/
├── commit-maker/
├── pr-creator/
└── readme-updater/
```
**Why it's bad:**
- Overhead of spawning agents
- Most tasks don't need isolation
- Harder to follow workflow
- Slower execution
**Good:** Use agents only when needed:
- Context isolation (parallel work)
- Skill composition (multiple skills together)
- Specialist persona (architecture review)
**Simple tasks → Skills**
**Complex isolated work → Agents**
---
### 6. Verbose Explanations
**Bad:**
```markdown
Git is a distributed version control system that was created by Linus Torvalds in 2005. It allows multiple developers to work on the same codebase simultaneously while maintaining a complete history of all changes. When you want to save your changes, you use the git commit command, which creates a snapshot of your current working directory...
```
**Why it's bad:**
- Wastes tokens
- Claude already knows git
- Slows down loading
- Adds no value
**Good:**
```markdown
`git commit -m 'feat: add feature'`
```
**Assume Claude is smart. Only add domain-specific context.**
---
## Instruction Anti-Patterns
### 7. Offering Too Many Options
**Bad:**
```markdown
You can use pypdf, or pdfplumber, or PyMuPDF, or pdf2image, or camelot, or tabula, or...
```
**Why it's bad:**
- Decision paralysis
- Inconsistent choices
- No clear default
**Good:**
```markdown
Use pdfplumber for text extraction:
\`\`\`python
import pdfplumber
with pdfplumber.open("file.pdf") as pdf:
text = pdf.pages[0].extract_text()
\`\`\`
For scanned PDFs requiring OCR, use pdf2image with pytesseract instead.
```
**Provide default, mention alternative only when needed.**
---
### 8. Time-Sensitive Information
**Bad:**
```markdown
If you're doing this before August 2025, use the old API.
After August 2025, use the new API.
```
**Why it's bad:**
- Will become wrong
- Requires maintenance
- Confusing after the date
**Good:**
```markdown
## Current Method
Use v2 API: `api.example.com/v2/messages`
## Old Patterns
<details>
<summary>Legacy v1 API (deprecated 2025-08)</summary>
The v1 API: `api.example.com/v1/messages`
No longer supported.
</details>
```
---
### 9. Inconsistent Terminology
**Bad:** Mixing terms for the same thing
```markdown
1. Get the API endpoint
2. Call the URL
3. Hit the API route
4. Query the path
```
**Why it's bad:**
- Confusing
- Looks like different things
- Harder to search
**Good:** Pick one term and stick with it
```markdown
1. Get the API endpoint
2. Call the API endpoint
3. Check the API endpoint response
4. Retry the API endpoint if needed
```
---
### 10. Windows-Style Paths
**Bad:**
```markdown
Run: `scripts\helper.py`
See: `reference\guide.md`
```
**Why it's bad:**
- Fails on Unix systems
- Causes errors on Mac/Linux
**Good:**
```markdown
Run: `scripts/helper.py`
See: `reference/guide.md`
```
**Always use forward slashes. They work everywhere.**
---
## Script Anti-Patterns
### 11. Punting to Claude
**Bad script:**
```python
def process_file(path):
return open(path).read() # Let Claude handle errors
```
**Why it's bad:**
- Script fails with no helpful message
- Claude has to guess what happened
- Inconsistent error handling
**Good script:**
```python
def process_file(path):
try:
with open(path) as f:
return f.read()
except FileNotFoundError:
print(f"ERROR: File {path} not found")
print("Creating default file...")
with open(path, 'w') as f:
f.write('')
return ''
except PermissionError:
print(f"ERROR: Cannot access {path}")
print("Using default value")
return ''
```
**Scripts should solve problems, not punt to Claude.**
---
### 12. Magic Numbers
**Bad:**
```bash
TIMEOUT=47 # Why 47?
RETRIES=5 # Why 5?
DELAY=3.7 # Why 3.7?
```
**Why it's bad:**
- No one knows why these values
- Hard to adjust
- "Voodoo constants"
**Good:**
```bash
# HTTP requests typically complete in <30s
# Extra buffer for slow connections
TIMEOUT=30
# Three retries balances reliability vs speed
# Most intermittent failures resolve by retry 2
RETRIES=3
# Exponential backoff: 1s, 2s, 4s
INITIAL_DELAY=1
```
**Document why each value is what it is.**
---
## Model Selection Anti-Patterns
### 13. Always Using Sonnet/Opus
**Bad:**
```yaml
---
name: dashboard
model: opus # "Just to be safe"
---
```
**Why it's bad:**
- 60x more expensive than Haiku
- 5x slower
- Wasted cost for simple task
**Good:**
```yaml
---
name: dashboard
model: haiku # Tested: 5/5 tests passed
---
```
**Test with Haiku first. Only upgrade if needed.**
---
### 14. Never Testing Haiku
**Bad:**
```yaml
---
name: review-pr
model: sonnet # Assumed it needs Sonnet, never tested Haiku
---
```
**Why it's bad:**
- Might work fine with Haiku
- Missing 12x cost savings
- Missing 2.5x speed improvement
**Good:**
```yaml
---
name: review-pr
model: haiku # Tested: Haiku 4/5 (80%), good enough!
---
```
Or:
```yaml
---
name: review-pr
model: sonnet # Tested: Haiku 2/5 (40%), Sonnet 4/5 (80%)
---
```
**Always test Haiku first, document results.**
---
## Progressive Disclosure Anti-Patterns
### 15. Deeply Nested References
**Bad:**
```
SKILL.md → advanced.md → details.md → actual-info.md
```
**Why it's bad:**
- Claude may partially read nested files
- Information might be incomplete
- Hard to navigate
**Good:**
```
SKILL.md → {advanced.md, reference.md, examples.md}
```
**Keep references one level deep from SKILL.md.**
---
### 16. No Table of Contents for Long Files
**Bad:** 500-line reference file with no structure
```markdown
# Reference
(500 lines of content with no navigation)
```
**Why it's bad:**
- Hard to preview
- Claude might miss sections
- User can't navigate
**Good:**
```markdown
# Reference
## Contents
- Authentication and setup
- Core methods
- Advanced features
- Error handling
- Examples
## Authentication and Setup
...
```
**Files >100 lines should have TOC.**
---
## Checklist to Avoid Anti-Patterns
Before publishing a skill:
- [ ] Not overly broad (does one thing well)
- [ ] Instructions are specific (not vague)
- [ ] Skill references use `@` syntax
- [ ] Under 500 lines (or uses progressive disclosure)
- [ ] Only creates agents when needed
- [ ] Concise (assumes Claude knows basics)
- [ ] Provides default, not 10 options
- [ ] No time-sensitive information
- [ ] Consistent terminology
- [ ] Forward slashes for paths
- [ ] Scripts handle errors, don't punt
- [ ] No magic numbers in scripts
- [ ] Tested with Haiku first
- [ ] References are one level deep
- [ ] Long files have table of contents

# Frontmatter Fields Reference
Complete documentation of all available frontmatter fields for skills and agents.
## Skill Frontmatter
### Required Fields
#### `name`
- **Type:** string
- **Required:** Yes
- **Format:** Lowercase, hyphens only, no spaces
- **Max length:** 64 characters
- **Must match:** Directory name
- **Cannot contain:** XML tags, reserved words ("anthropic", "claude")
- **Example:** `work-issue`, `code-review`, `gitea`
#### `description`
- **Type:** string (multiline supported with `>`)
- **Required:** Yes
- **Max length:** 1024 characters
- **Cannot contain:** XML tags
- **Should include:**
- What the skill does
- When to use it
- Trigger conditions
- **Example:**
```yaml
description: >
View, create, and manage Gitea issues and pull requests.
Use when working with issues, PRs, or when user mentions tea, gitea, issue numbers.
```
#### `user-invocable`
- **Type:** boolean
- **Required:** Yes
- **Values:** `true` or `false`
- **Usage:**
- `true`: User can trigger with `/skill-name`
- `false`: Background skill, auto-loaded when needed
### Optional Fields
#### `model`
- **Type:** string
- **Required:** No
- **Values:** `haiku`, `sonnet`, `opus`
- **Default:** Inherits from parent (usually haiku)
- **Guidance:** Default to `haiku`, only upgrade if needed
- **Example:**
```yaml
model: haiku # 12x cheaper than sonnet
```
#### `argument-hint`
- **Type:** string
- **Required:** No (only for user-invocable skills)
- **Format:** `<required>` for required params, `[optional]` for optional
- **Shows in UI:** Helps users know what arguments to provide
- **Example:**
```yaml
argument-hint: <issue-number>
argument-hint: <issue-number> [optional-title]
```
#### `context`
- **Type:** string
- **Required:** No
- **Values:** `fork`
- **Usage:** Set to `fork` for skills needing isolated context
- **When to use:** Heavy exploration tasks that would pollute main context
- **Example:**
```yaml
context: fork # For arch-review-repo, deep exploration
```
#### `allowed-tools`
- **Type:** list of strings
- **Required:** No
- **Usage:** Restrict which tools the skill can use
- **Example:**
```yaml
allowed-tools:
- Read
- Bash
- Grep
```
- **Note:** Rarely used, most skills have all tools
## Agent Frontmatter
### Required Fields
#### `name`
- **Type:** string
- **Required:** Yes
- **Same rules as skill name**
#### `description`
- **Type:** string
- **Required:** Yes
- **Should include:**
- What the agent does
- When to spawn it
- **Example:**
```yaml
description: >
Automated code review of pull requests for quality, bugs, security, and style.
Spawn when reviewing PRs or checking code quality.
```
### Optional Fields
#### `model`
- **Type:** string
- **Required:** No
- **Values:** `haiku`, `sonnet`, `opus`, `inherit`
- **Default:** `inherit` (uses parent's model)
- **Guidance:**
- Default to `haiku` for simple agents
- Use `sonnet` for balanced performance
- Reserve `opus` for deep reasoning
- **Example:**
```yaml
model: haiku # Fast and cheap for code review checklist
```
#### `skills`
- **Type:** comma-separated list of skill names (not paths)
- **Required:** No
- **Usage:** Auto-load these skills when agent spawns
- **Format:** Just skill names, not paths
- **Example:**
```yaml
skills: gitea, issue-writing, code-review
```
- **Note:** Agent runtime loads skills automatically
#### `disallowedTools`
- **Type:** list of tool names
- **Required:** No
- **Common use:** Make agents read-only
- **Example:**
```yaml
disallowedTools:
- Edit
- Write
```
- **When to use:** Analysis agents that shouldn't modify code
#### `permissionMode`
- **Type:** string
- **Required:** No
- **Values:** `default`, `bypassPermissions`
- **Usage:** Rarely used, for agents that need to bypass permission prompts
- **Example:**
```yaml
permissionMode: bypassPermissions
```
## Examples
### Minimal User-Invocable Skill
```yaml
---
name: dashboard
description: Show open issues, PRs, and CI status.
user-invocable: true
---
```
### Full-Featured Skill
```yaml
---
name: work-issue
description: >
Implement a Gitea issue with full workflow: branch, plan, code, PR, review.
Use when implementing issues or when user says /work-issue.
model: haiku
argument-hint: <issue-number>
user-invocable: true
---
```
### Background Skill
```yaml
---
name: gitea
description: >
View, create, and manage Gitea issues and PRs using tea CLI.
Use when working with issues, PRs, viewing issue details, or when user mentions tea, gitea, issue numbers.
user-invocable: false
---
```
### Read-Only Agent
```yaml
---
name: code-reviewer
description: >
Automated code review of pull requests for quality, bugs, security, style, and test coverage.
model: sonnet
skills: gitea, code-review
disallowedTools:
- Edit
- Write
---
```
### Implementation Agent
```yaml
---
name: issue-worker
description: >
Autonomously implements a single issue in an isolated git worktree.
model: haiku
skills: gitea, issue-writing, software-architecture
---
```
## Validation Rules
### Name Validation
- Must be lowercase
- Must use hyphens (not underscores or spaces)
- Cannot contain: `anthropic`, `claude`
- Cannot contain XML tags `<`, `>`
- Max 64 characters
- Must match directory name exactly
### Description Validation
- Cannot be empty
- Max 1024 characters
- Cannot contain XML tags
- Should end with period
### Model Validation
- Must be one of: `haiku`, `sonnet`, `opus`, `inherit`
- Case-sensitive (must be lowercase)
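The name and description rules above are mechanical enough to check in code. A minimal sketch that mirrors this reference (it is not an official validator, and the rule set is only what is listed here):

```python
import re

RESERVED = ("anthropic", "claude")

def validate_name(name: str, dir_name: str) -> list[str]:
    """Return the list of rule violations for a skill/agent name."""
    errors = []
    if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name):
        errors.append("must be lowercase with hyphens only")
    if len(name) > 64:
        errors.append("exceeds 64 characters")
    if any(word in name for word in RESERVED):
        errors.append("contains a reserved word")
    if name != dir_name:
        errors.append("does not match directory name")
    return errors

def validate_description(desc: str) -> list[str]:
    """Return the list of rule violations for a description."""
    errors = []
    if not desc.strip():
        errors.append("cannot be empty")
    if len(desc) > 1024:
        errors.append("exceeds 1024 characters")
    if "<" in desc or ">" in desc:
        errors.append("cannot contain XML tags")
    return errors
```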
## Common Mistakes
**Bad: Using paths in skills field**
```yaml
skills: ~/.claude/skills/gitea/SKILL.md # Wrong!
```
**Good: Just skill names**
```yaml
skills: gitea, issue-writing
```
**Bad: Reserved word in name**
```yaml
name: claude-helper # Contains "claude"
```
**Good: Descriptive name**
```yaml
name: code-helper
```
**Bad: Vague description**
```yaml
description: Helps with stuff
```
**Good: Specific description**
```yaml
description: >
Analyze Excel spreadsheets, create pivot tables, generate charts.
Use when analyzing Excel files, spreadsheets, or .xlsx files.
```

View File

@@ -0,0 +1,336 @@
# Model Selection Guide
Detailed guidance on choosing the right model for skills and agents.
## Cost Comparison
| Model | Input (per MTok) | Output (per MTok) | vs Haiku |
|-------|------------------|-------------------|----------|
| **Haiku** | $0.25 | $1.25 | Baseline |
| **Sonnet** | $3.00 | $15.00 | 12x more expensive |
| **Opus** | $15.00 | $75.00 | 60x more expensive |
**Example cost for typical skill call (2K input, 1K output):**
- Haiku: $0.00175
- Sonnet: $0.021 (12x more)
- Opus: $0.105 (60x more)
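The arithmetic behind those figures, as a small sketch with the per-MTok rates from the table hard-coded:

```python
# (input $/MTok, output $/MTok) from the cost comparison table
RATES = {"haiku": (0.25, 1.25), "sonnet": (3.00, 15.00), "opus": (15.00, 75.00)}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one call: tokens times per-million-token rate."""
    in_rate, out_rate = RATES[model]
    return input_tokens * in_rate / 1e6 + output_tokens * out_rate / 1e6

for model in RATES:
    print(f"{model}: ${call_cost(model, 2000, 1000):.5f}")
# haiku: $0.00175, sonnet: $0.02100, opus: $0.10500
```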
## Speed Comparison
| Model | Tokens/Second | vs Haiku |
|-------|---------------|----------|
| **Haiku** | ~100 | Baseline |
| **Sonnet** | ~40 | 2.5x slower |
| **Opus** | ~20 | 5x slower |
## Decision Framework
```
Start with Haiku by default
    |
    v
Test on 3-5 representative tasks
    |
    +-- Success rate ≥80%? ---------> ✓ Use Haiku
    |                                   (12x cheaper, 2-5x faster)
    |
    +-- Success rate <80%? ---------> Try Sonnet
    |                                     |
    |                                     v
    |                                 Test on same tasks
    |                                     |
    |                                     +-- Success ≥80%? --> Use Sonnet
    |                                     |
    |                                     +-- Still failing? --> Opus or redesign
    |
    v
Document why you chose the model
```
## When Haiku Works Well
### ✓ Ideal for Haiku
**Simple sequential workflows:**
- `/dashboard` - Fetch and display
- `/roadmap` - List and format
- `/commit` - Generate message from diff
**Workflows with scripts:**
- Error-prone operations in scripts
- Skills just orchestrate script calls
- Validation is deterministic
**Structured outputs:**
- Tasks with clear templates
- Format is defined upfront
- No ambiguous formatting
**Reference/knowledge skills:**
- `gitea` - CLI reference
- `issue-writing` - Patterns and templates
- `software-architecture` - Best practices
### Examples of Haiku Success
**work-issue skill:**
- Sequential steps (view → branch → plan → implement → PR)
- Each step has clear validation
- Scripts handle error-prone operations
- Success rate: ~90%
**dashboard skill:**
- Fetch data (tea commands)
- Format as table
- Clear, structured output
- Success rate: ~95%
## When to Use Sonnet
### Use Sonnet When
**Haiku fails 20%+ of the time**
- Test with Haiku first
- If success rate <80%, upgrade to Sonnet
**Complex judgment required:**
- Code review (quality assessment)
- Issue grooming (clarity evaluation)
- Architecture decisions
**Nuanced reasoning:**
- Understanding implicit requirements
- Making trade-off decisions
- Applying context-dependent rules
### Examples of Sonnet Success
**review-pr skill:**
- Requires code understanding
- Judgment about quality/bugs
- Context-dependent feedback
- Originally tried Haiku: 65% success → Sonnet: 85%
**issue-worker agent:**
- Autonomous implementation
- Pattern matching
- Architectural decisions
- Originally tried Haiku: 70% success → Sonnet: 82%
## When to Use Opus
### Reserve Opus For
**Deep architectural reasoning:**
- `software-architect` agent
- Pattern recognition across large codebases
- Identifying subtle anti-patterns
- Trade-off analysis
**High-stakes decisions:**
- Breaking changes analysis
- System-wide refactoring plans
- Security architecture review
**Complex pattern recognition:**
- Requires sophisticated understanding
- Multiple layers of abstraction
- Long-term implications
### Examples of Opus Success
**software-architect agent:**
- Analyzes entire codebase
- Identifies 8 different anti-patterns
- Provides prioritized recommendations
- Sonnet: 68% success → Opus: 88%
**arch-review-repo skill:**
- Comprehensive architecture audit
- Cross-cutting concerns
- System-wide patterns
- Opus justified for depth
## Making Haiku More Effective
If Haiku is struggling, try these improvements **before** upgrading to Sonnet:
### 1. Add Validation Steps
**Instead of:**
```markdown
3. Implement changes and create PR
```
**Try:**
```markdown
3. Implement changes
4. Validate: Run `./scripts/validate.sh` (tests pass, linter clean)
5. Create PR: `./scripts/create-pr.sh`
```
### 2. Bundle Error-Prone Operations in Scripts
**Instead of:**
```markdown
5. Create PR: `tea pulls create --title "..." --description "..."`
```
**Try:**
```markdown
5. Create PR: `./scripts/create-pr.sh $issue "$title"`
```
### 3. Add Structured Output Templates
**Instead of:**
```markdown
Show the results
```
**Try:**
```markdown
Format results as:
| Issue | Status | Link |
|-------|--------|------|
| ... | ... | ... |
```
### 4. Add Explicit Checklists
**Instead of:**
```markdown
Review the code for quality
```
**Try:**
```markdown
Check:
- [ ] Code quality (readability, naming)
- [ ] Bugs (edge cases, null checks)
- [ ] Tests (coverage, assertions)
```
### 5. Make Instructions More Concise
**Instead of:**
```markdown
Git is a version control system. When you want to commit changes, you use the git commit command which saves your changes to the repository...
```
**Try:**
```markdown
`git commit -m 'feat: add feature'`
```
## Testing Methodology
### Create Test Suite
For each skill, create 3-5 test cases:
**Example: work-issue skill tests**
1. Simple bug fix issue
2. New feature with acceptance criteria
3. Issue missing acceptance criteria
4. Issue with tests that fail
5. Complex refactoring task
### Test with Haiku
```bash
# Set skill to Haiku
model: haiku
# Run all 5 tests
# Document success/failure for each
```
### Measure Success Rate
```
Success rate = (Successful tests / Total tests) × 100
```
**Decision:**
- ≥80% → Keep Haiku
- <80% → Try Sonnet
- <50% → Likely need Opus or redesign
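The same thresholds as a sketch (the function name is illustrative):

```python
def choose_model(passed: int, total: int) -> str:
    """Apply the success-rate thresholds from the testing methodology."""
    rate = passed / total * 100
    if rate >= 80:
        return "haiku"
    if rate >= 50:
        return "sonnet"          # try Sonnet before reaching for Opus
    return "opus-or-redesign"    # <50%: likely need Opus or a redesign

print(choose_model(4, 5))  # → haiku (4/5 = 80%)
```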
### Test with Sonnet (if needed)
```bash
# Upgrade to Sonnet
model: sonnet
# Run same 5 tests
# Compare results
```
### Document Decision
```yaml
---
name: work-issue
model: haiku # Tested: 4/5 tests passed with Haiku (80%)
---
```
Or:
```yaml
---
name: review-pr
model: sonnet # Tested: Haiku 3/5 (60%), Sonnet 4/5 (80%)
---
```
## Common Patterns
### Pattern: Start Haiku, Upgrade if Needed
**Issue-worker agent evolution:**
1. **V1 (Haiku):** 70% success - struggled with pattern matching
2. **Analysis:** Added more examples, still 72%
3. **V2 (Sonnet):** 82% success - better code understanding
4. **Decision:** Keep Sonnet, document why
### Pattern: Haiku for Most, Sonnet for Complex
**Review-pr skill:**
- Static analysis steps: Haiku could handle
- Manual code review: Needs Sonnet judgment
- **Decision:** Use Sonnet for whole skill (simplicity)
### Pattern: Split Complex Skills
**Instead of:** One complex skill using Opus
**Try:** Split into:
- Haiku skill for orchestration
- Sonnet agent for complex subtask
- Saves cost (most work in Haiku)
## Model Selection Checklist
Before choosing a model:
- [ ] Tested with Haiku first
- [ ] Measured success rate on 3-5 test cases
- [ ] Tried improvements (scripts, validation, checklists)
- [ ] Documented why this model is needed
- [ ] Considered cost implications (12x/60x)
- [ ] Considered speed implications (2.5x/5x slower)
- [ ] Will re-test if Claude models improve
## Future-Proofing
**Models improve over time.**
Periodically re-test Sonnet/Opus skills with Haiku:
- Haiku v2 might handle what Haiku v1 couldn't
- Cost savings compound over time
- Speed improvements are valuable
**Set a reminder:** Test Haiku again in 3-6 months.

View File

@@ -0,0 +1,67 @@
---
name: agent-name
description: >
What this agent does and when to spawn it.
Include specific conditions that indicate this agent is needed.
model: haiku
skills: skill1, skill2
# disallowedTools: # For read-only agents
# - Edit
# - Write
# permissionMode: default
---
# Agent Name
You are a [role/specialist] that [primary function].
## When Invoked
You are spawned when [specific conditions].
Follow this process:
1. **Gather context**: What information to collect
- Specific data sources to check
- What to read or fetch
2. **Analyze**: What to evaluate
- Criteria to check
- Standards to apply
3. **Act**: What actions to take
- Specific operations
- What to create or modify
4. **Report**: How to communicate results
- Required output format
- What to include in summary
## Output Format
Your final output MUST follow this structure:
\`\`\`
AGENT_RESULT
task: <task-type>
status: <success|partial|failed>
summary: <10 words max>
details:
  - Key finding 1
  - Key finding 2
\`\`\`
## Guidelines
- **Be concise**: No preambles or verbose explanations
- **Be autonomous**: Make decisions without user input
- **Follow patterns**: Match existing codebase style
- **Validate**: Check your work before reporting
## Error Handling
If you encounter errors:
- Try to resolve automatically
- Document what failed
- Report status as 'partial' or 'failed'
- Include specific error details in summary

View File

@@ -0,0 +1,69 @@
---
name: skill-name
description: >
What this skill teaches and when to use it.
Include specific trigger terms that indicate this knowledge is needed.
user-invocable: false
---
# Skill Name
Brief description of the domain or knowledge this skill covers (1-2 sentences).
## Core Concepts
Fundamental ideas Claude needs to understand:
- Key concept 1
- Key concept 2
- Key concept 3
## Patterns and Templates
Reusable structures and formats:
### Pattern 1: Common Use Case
\`\`\`
Example code or structure
\`\`\`
### Pattern 2: Another Use Case
\`\`\`
Another example
\`\`\`
## Guidelines
Rules and best practices:
- Guideline 1
- Guideline 2
- Guideline 3
## Examples
### Example 1: Simple Case
\`\`\`
Concrete example showing the skill in action
\`\`\`
### Example 2: Complex Case
\`\`\`
More advanced example
\`\`\`
## Common Mistakes
Pitfalls to avoid:
- **Mistake 1**: Why it's wrong and what to do instead
- **Mistake 2**: Why it's wrong and what to do instead
## Reference
Quick-reference tables or checklists:
| Command | Purpose | Example |
|---------|---------|---------|
| ... | ... | ... |

View File

@@ -0,0 +1,86 @@
#!/bin/bash
# script-name.sh - Brief description of what this script does
#
# Usage: script-name.sh <param1> <param2> [optional-param]
#
# Example:
# script-name.sh value1 value2
# script-name.sh value1 value2 optional-value
#
# Exit codes:
# 0 - Success
# 1 - Invalid arguments or general error
# 2 - Specific error condition (document what)
set -euo pipefail # Exit on error, undefined variable, or pipeline failure
# set -x # Uncomment for debugging
# Color output for better visibility
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Helper functions
error() {
    echo -e "${RED}ERROR: $1${NC}" >&2
    exit 1
}
success() {
    echo -e "${GREEN}SUCCESS: $1${NC}"
}
warn() {
    echo -e "${YELLOW}WARNING: $1${NC}"
}
# Input validation
if [ $# -lt 2 ]; then
    echo "Usage: $0 <param1> <param2> [optional-param]"
    echo ""
    echo "Description: Brief description of what this does"
    echo ""
    echo "Arguments:"
    echo "  param1          Description of param1"
    echo "  param2          Description of param2"
    echo "  optional-param  Description of optional param (default: value)"
    exit 1
fi
# Parse arguments
PARAM1=$1
PARAM2=$2
OPTIONAL_PARAM=${3:-"default-value"}
# Validate inputs
if [ -z "$PARAM1" ]; then
    error "param1 cannot be empty"
fi
# Main logic
main() {
    echo "Processing with param1=$PARAM1, param2=$PARAM2..."
    # Step 1: Describe what this step does
    if ! some_command "$PARAM1"; then
        error "Failed to process param1"
    fi
    # Step 2: Another operation with error handling; check the status inline,
    # because with `set -e` a separate `$?` check would never run after a failure
    if ! result=$(another_command "$PARAM2" 2>&1); then
        error "Failed to process param2: $result"
    fi
    # Step 3: Validation
    if [ ! -f "$result" ]; then
        error "Expected file not found: $result"
    fi
    success "Operation completed successfully"
    echo "$result"  # Output for caller to parse
}
# Execute main function
main

View File

@@ -0,0 +1,65 @@
---
name: skill-name
description: >
Clear description of what this skill does and when to use it.
Use when [specific trigger conditions] or when user says /skill-name.
model: haiku
argument-hint: <required-param> [optional-param]
user-invocable: true
# context: fork # Use for skills needing isolated context
# allowed-tools: # Restrict tools if needed
# - Read
# - Bash
---
# Skill Title
@~/.claude/skills/relevant-background-skill/SKILL.md
Brief intro explaining the skill's purpose (1-2 sentences max).
## Process
1. **First step**: Clear action with specific command or instruction
- `command or tool to use`
- What to look for or validate
2. **Second step**: Next action
- Specific details
- Expected output
3. **Ask for approval** before significant actions
- Show what will be created/modified
- Wait for user confirmation
4. **Execute** the approved actions
- Run commands/create files
- Handle errors gracefully
5. **Present results** with links and summary
- Structured output (table or list)
- Links to created resources
## Guidelines
- Keep responses concise
- Use structured output (tables, lists)
- No preambles or sign-offs
- Validate inputs before acting
## Output Format
Use this structure for responses:
\`\`\`
## Summary
[1-2 sentences]
## Results
| Item | Status | Link |
|------|--------|------|
| ... | ... | ... |
## Next Steps
- ...
\`\`\`

View File

@@ -0,0 +1,197 @@
---
name: create-capability
description: >
Create a new capability (skill, agent, or a cohesive set) for the architecture
repository. Use when creating new skills, agents, extending AI workflows, or when
user says /create-capability.
model: haiku
argument-hint: <description>
user-invocable: true
---
# Create Capability
@~/.claude/skills/capability-writing/SKILL.md
Create new capabilities following latest Anthropic best practices (progressive disclosure, script bundling, Haiku-first).
## Process
1. **Understand the capability**: Analyze "$1" to understand what the user wants to build
- What domain or workflow does this cover?
- What user need does it address?
- What existing capabilities might overlap?
2. **Determine components needed**: Based on the description, recommend which components:
| Pattern | When to Use |
|---------|-------------|
| Skill only (background) | Knowledge to apply automatically (reused across other skills) |
| Skill only (user-invocable) | User-invoked workflow |
| Skill + Agent | Workflow with isolated worker for complex subtasks |
| Full set | New domain expertise + workflow + isolated work |
Present recommendation with reasoning:
```
## Recommended Components for: $1
Based on your description, I recommend:
- **Skill**: `name` - [why this knowledge is needed]
- **Agent**: `name` - [why isolation/specialization is needed] (optional)
Reasoning: [explain why this combination fits the need]
```
3. **Analyze complexity** (NEW): For each skill, determine structure needed:
**Ask these questions:**
a) **Expected size**: Will this skill be >300 lines?
- If NO → Simple structure (just SKILL.md)
- If YES → Suggest progressive disclosure
b) **Error-prone operations**: Are there complex bash operations?
- Check for: PR creation, worktree management, complex git operations
- If YES → Suggest bundling scripts
c) **Degree of freedom**: What instruction style is appropriate?
- Multiple valid approaches → Text instructions (high freedom)
- Preferred pattern with variation → Templates (medium freedom)
- Fragile operations, exact sequence → Scripts (low freedom)
**Present structure recommendation:**
```
## Recommended Structure
Based on complexity analysis:
- **Size**: [Simple | Progressive disclosure]
- **Scripts**: [None | Bundle error-prone operations]
- **Degrees of freedom**: [High | Medium | Low]
Structure:
[Show folder structure diagram]
```
4. **Gather information**: For each recommended component, ask:
**For all components:**
- Name (kebab-case, descriptive)
- Description (one-line summary including trigger conditions)
**For Skills:**
- What domain/knowledge does this cover?
- What are the key concepts to teach?
- What patterns or templates should it include?
- Is it user-invocable (workflow) or background (reference)?
**For Agents:**
- What specialized role does this fill?
- What skills does it need?
- Should it be read-only (no Edit/Write)?
5. **Select appropriate models** (UPDATED):
**Default to Haiku, upgrade only if needed:**
| Model | Use For | Cost vs Haiku |
|-------|---------|---------------|
| `haiku` | Most skills and agents (DEFAULT) | Baseline |
| `sonnet` | When Haiku would struggle (<80% success rate) | 12x more expensive |
| `opus` | Deep reasoning, architectural analysis | 60x more expensive |
**Ask for justification if not Haiku:**
- "This looks like a simple workflow. Should we try Haiku first?"
- "Does this require complex reasoning that Haiku can't handle?"
For each component, recommend Haiku unless there's clear reasoning for Sonnet/Opus.
6. **Generate files**: Create content using templates from capability-writing skill
**Structure options:**
a) **Simple skill** (most common):
```
skills/skill-name/
└── SKILL.md
```
b) **Progressive disclosure** (for large skills):
```
skills/skill-name/
├── SKILL.md (~200-300 lines)
├── reference/
│   ├── detailed-guide.md
│   └── api-reference.md
└── examples/
    └── usage-examples.md
```
c) **With bundled scripts** (for error-prone operations):
```
skills/skill-name/
├── SKILL.md
├── reference/
│   └── error-handling.md
└── scripts/
    ├── validate.sh
    └── process.sh
```
**Ensure proper inter-references:**
- User-invocable skill references background skills via `@~/.claude/skills/name/SKILL.md`
- Agent lists skills in `skills:` frontmatter (names only, not paths)
- User-invocable skill spawns agent via Task tool if agent is part of the set
- Scripts are called with `./scripts/script-name.sh` in SKILL.md
7. **Present for approval**: Show all generated files with their full content:
```
## Generated Files
### skills/name/SKILL.md
[full content]
### skills/name/scripts/helper.sh (if applicable)
[full content]
### agents/name/AGENT.md (if applicable)
[full content]
Ready to create these files?
```
8. **Create files** in correct locations after approval:
- Create directories if needed
- `skills/<name>/SKILL.md`
- `skills/<name>/scripts/` (if scripts recommended)
- `skills/<name>/reference/` (if progressive disclosure)
- `agents/<name>/AGENT.md` (if agent recommended)
9. **Report success**:
```
## Capability Created: name
Files created:
- skills/name/SKILL.md
- skills/name/scripts/helper.sh (if applicable)
- agents/name/AGENT.md (if applicable)
```
## Guidelines (UPDATED)
- Follow all conventions from capability-writing skill
- **Default to Haiku** for all new skills/agents (12x cheaper, 2-5x faster)
- **Bundle scripts** for error-prone bash operations
- **Use progressive disclosure** for skills >300 lines
- Reference existing skills rather than duplicating knowledge
- Keep components focused - split if scope is too broad
- User-invocable skills should have approval checkpoints
- Skills should have descriptive `description` fields with trigger conditions
- **Be concise** - assume Claude knows basics
## Output Style
Be concise and direct:
- No preambles ("I'll help you...")
- No sign-offs ("Let me know...")
- Show structure diagrams clearly
- Use tables for comparisons
- One decision per section

View File

@@ -0,0 +1,136 @@
---
name: ddd-breakdown
description: >
Analyze product vision using DDD to identify bounded contexts and generate
implementation issues. Use when breaking down features into DDD-based vertical
slices, or when user says /ddd-breakdown.
model: haiku
argument-hint: [vision-file]
user-invocable: true
---
# DDD Breakdown
@~/.claude/skills/ddd/SKILL.md
@~/.claude/skills/issue-writing/SKILL.md
@~/.claude/skills/gitea/SKILL.md
Analyze product vision through a DDD lens to generate implementation issues.
## Process
1. **Locate manifesto and vision**:
**Manifesto** (organization-level):
```bash
# Always in architecture repo
cat ~/.claude/manifesto.md
# Or if in architecture repo:
cat ./manifesto.md
```
**Vision** (product-level):
```bash
# If argument provided: use that file
# Otherwise: look for vision.md in current repo
cat ./vision.md
```
Verify both files exist before proceeding.
2. **Spawn DDD analyst agent**:
Use Task tool to spawn `ddd-analyst` agent:
```
Analyze this product using DDD principles.
Manifesto: [path to manifesto.md]
Vision: [path to vision.md]
Codebase: [current working directory]
Identify bounded contexts, map features to DDD patterns, and generate
user stories with DDD implementation guidance.
```
The agent will:
- Analyze manifesto (personas, beliefs, domain language)
- Analyze vision (goals, features, milestones)
- Explore codebase (existing structure, boundaries, misalignments)
- Identify bounded contexts (intended vs actual)
- Map features to DDD patterns (aggregates, commands, events)
- Generate user stories with acceptance criteria and DDD guidance
3. **Review agent output**:
The agent returns structured analysis:
- Bounded contexts identified
- User stories per context
- Refactoring needs
- Suggested implementation order
Present this to the user for review.
4. **Confirm issue creation**:
Ask user:
- Create all issues?
- Select specific issues to create?
- Modify any stories before creating?
5. **Create issues in Gitea**:
For each approved user story:
```bash
tea issues create \
--title "[story title]" \
--description "[full story with DDD guidance]"
```
Apply labels:
- `feature` (or `refactor` for refactoring issues)
- `bounded-context/[context-name]`
- Any other relevant labels from the story
6. **Link dependencies**:
For stories with dependencies:
```bash
tea issues deps add <dependent-issue> <blocker-issue>
```
7. **Report results**:
Show created issues with links:
```
## Issues Created
### Context: [Context Name]
- #123: [Issue title]
- #124: [Issue title]
### Context: [Another Context]
- #125: [Issue title]
### Refactoring
- #126: [Issue title]
View all: [link to issues page]
```
## Guidelines
- **Manifesto is organization-wide**: Always read from architecture repo
- **Vision is product-specific**: Read from current repo or provided path
- **Let agent do the analysis**: Don't try to identify contexts yourself, spawn the agent
- **Review before creating**: Always show user the analysis before creating issues
- **Label by context**: Use `bounded-context/[name]` labels for filtering
- **Link dependencies**: Use `tea issues deps add` for blockers
- **Implementation order matters**: Create foundational issues (refactoring, core aggregates) first
## Tips
- Run this when starting a new product or major feature area
- Re-run periodically to identify drift between vision and code
- Use with `/vision` skill to manage product vision
- Combine with `/plan-issues` for additional breakdown
- Review with team before creating all issues

skills/ddd/SKILL.md
View File

@@ -0,0 +1,272 @@
---
name: ddd
description: >
Domain-Driven Design concepts: bounded contexts, aggregates, commands, events,
and tactical patterns. Use when analyzing domain models, identifying bounded
contexts, or mapping features to DDD patterns.
user-invocable: false
---
# Domain-Driven Design (DDD)
Strategic and tactical patterns for modeling complex domains.
## Strategic DDD: Bounded Contexts
### What is a Bounded Context?
A **bounded context** is a boundary within which a domain model is consistent. The same term can mean different things in different contexts.
**Example:** "Order" means different things in different contexts:
- **Sales Context**: Order = customer purchase with payment and shipping
- **Fulfillment Context**: Order = pick list for warehouse
- **Accounting Context**: Order = revenue transaction
### Identifying Bounded Contexts
Look for:
1. **Different language**: Same term means different things
2. **Different models**: Same concept has different attributes/behavior
3. **Different teams**: Natural organizational boundaries
4. **Different lifecycles**: Entities created/destroyed at different times
5. **Different rate of change**: Some areas evolve faster than others
**From vision/manifesto:**
- Identify personas → each persona likely interacts with different contexts
- Identify core domain concepts → group related concepts into contexts
- Identify capabilities → capabilities often align with contexts
**From existing code:**
- Look for packages/modules that cluster related concepts
- Identify seams where code is loosely coupled
- Look for translation layers between subsystems
- Identify areas where same terms mean different things
### Context Boundaries
**Good boundaries:**
- Clear interfaces between contexts
- Each context owns its data
- Contexts communicate via events or APIs
- Minimal coupling between contexts
**Bad boundaries:**
- Shared database tables across contexts
- Direct object references across contexts
- Mixed concerns within a context
### Common Context Patterns
| Pattern | Description | Example |
|---------|-------------|---------|
| **Core Domain** | Your unique competitive advantage | Custom business logic |
| **Supporting Subdomain** | Necessary but not differentiating | User management |
| **Generic Subdomain** | Common problems, use off-the-shelf | Email sending, file storage |
## Tactical DDD: Building Blocks
### Aggregates
An **aggregate** is a cluster of entities and value objects treated as a unit for data changes.
**Rules:**
- One entity is the **aggregate root** (only entity referenced from outside)
- All changes go through the root
- Enforce business invariants within the aggregate
- Keep aggregates small (2-3 entities max when possible)
**Example:**
```
Order (root)
├── OrderLine
├── ShippingAddress
└── Payment
```
External code only references `Order`, never `OrderLine` directly.
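A minimal Python sketch of the aggregate-root rule (the names are illustrative, not tied to any framework): callers hold only the `Order` root, and the invariant is enforced where the change happens.

```python
from dataclasses import dataclass, field

@dataclass
class OrderLine:            # internal entity: never referenced from outside
    sku: str
    quantity: int

@dataclass
class Order:                # aggregate root: the only external entry point
    order_id: str
    lines: list[OrderLine] = field(default_factory=list)

    def add_line(self, sku: str, quantity: int) -> None:
        if quantity <= 0:   # business invariant enforced at the root
            raise ValueError("quantity must be positive")
        self.lines.append(OrderLine(sku, quantity))

order = Order("ord-1")
order.add_line("SKU-42", 2)   # every change goes through the root
```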
**Identifying aggregates:**
- What entities always change together?
- What invariants must be enforced?
- What is the transactional boundary?
### Commands
**Commands** represent intent to change state. Named with imperative verbs.
**Format:** `[Verb][AggregateRoot]` or `[AggregateRoot][Verb]`
**Examples:**
- `PlaceOrder` or `OrderPlace`
- `CancelSubscription` or `SubscriptionCancel`
- `ApproveInvoice` or `InvoiceApprove`
**Commands:**
- Are handled by the aggregate root
- Either succeed completely or fail
- Can be rejected (return error)
- Represent user intent or system action
### Events
**Events** represent facts that happened in the past. Named in past tense.
**Format:** `[AggregateRoot][PastVerb]` or `[Something]Happened`
**Examples:**
- `OrderPlaced`
- `SubscriptionCancelled`
- `InvoiceApproved`
- `PaymentFailed`
**Events:**
- Are immutable (already happened)
- Can be published to other contexts
- Enable eventual consistency
- Create audit trail
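Commands and events pair naturally: a handler on the aggregate either rejects the intent or returns a past-tense fact. A hypothetical sketch (names are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PlaceOrder:           # command: intent, imperative, may be rejected
    order_id: str

@dataclass(frozen=True)
class OrderPlaced:          # event: immutable fact, past tense
    order_id: str

def handle(cmd: PlaceOrder, already_placed: set[str]) -> OrderPlaced:
    if cmd.order_id in already_placed:
        raise ValueError("order already placed")   # command rejected
    return OrderPlaced(cmd.order_id)               # fact to record/publish

event = handle(PlaceOrder("ord-1"), already_placed=set())
print(event)  # OrderPlaced(order_id='ord-1')
```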
### Value Objects
**Value Objects** are immutable objects defined by their attributes, not identity.
**Examples:**
- `Money` (amount + currency)
- `EmailAddress`
- `DateRange`
- `Address`
**Characteristics:**
- No identity (two with same values are equal)
- Immutable (cannot change, create new instance)
- Can contain validation logic
- Can contain behavior
**When to use:**
- Concept has no lifecycle (no create/update/delete)
- Equality is based on attributes, not identity
- Can be shared/reused
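In Python, a frozen dataclass gives both characteristics for free: attribute-based equality and immutability. An illustrative sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)     # immutable: attribute assignment raises an error
class Money:
    amount: int             # minor units (cents) to avoid float rounding
    currency: str

    def __post_init__(self):
        if self.amount < 0:  # validation lives inside the value object
            raise ValueError("amount cannot be negative")

assert Money(500, "EUR") == Money(500, "EUR")  # equal by attributes, no identity
```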
### Entities
**Entities** have identity that persists over time, even if attributes change.
**Examples:**
- `User` (ID remains same even if name/email changes)
- `Order` (ID remains same through lifecycle)
- `Product` (ID remains same even if price changes)
**Characteristics:**
- Has unique identifier
- Can change over time
- Identity matters more than attributes
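By contrast, entity equality hinges on the identifier alone: attributes may change while identity persists. An illustrative sketch:

```python
class User:
    def __init__(self, user_id: str, email: str):
        self.user_id = user_id      # identity: never changes
        self.email = email          # attribute: may change over time

    def __eq__(self, other):
        # two Users are the same entity iff their identifiers match
        return isinstance(other, User) and self.user_id == other.user_id

    def __hash__(self):
        return hash(self.user_id)

u = User("u-1", "old@example.com")
u.email = "new@example.com"                       # attributes changed...
assert u == User("u-1", "other@example.com")      # ...identity did not
```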
## Mapping Features to DDD Patterns
### Process
For each feature from vision:
1. **Identify the bounded context**: Which context does this belong to?
2. **Identify the aggregate(s)**: What entities/value objects are involved?
3. **Identify commands**: What actions can users/systems take?
4. **Identify events**: What facts should be recorded when commands succeed?
5. **Identify value objects**: What concepts are attribute-defined, not identity-defined?
### Example: "User can place an order"
**Bounded Context:** Sales
**Aggregate:** `Order` (root)
- `OrderLine` (entity)
- `ShippingAddress` (value object)
- `Money` (value object)
**Commands:**
- `PlaceOrder`
- `AddOrderLine`
- `RemoveOrderLine`
- `UpdateShippingAddress`
**Events:**
- `OrderPlaced`
- `OrderLineAdded`
- `OrderLineRemoved`
- `ShippingAddressUpdated`
**Value Objects:**
- `Money` (amount, currency)
- `Address` (street, city, zip, country)
- `Quantity`
## Refactoring to DDD
When existing code doesn't follow DDD patterns:
### Identify Misalignments
**Anemic domain model:**
- Entities with only getters/setters
- Business logic in services, not entities
- **Fix:** Move behavior into aggregates
**God objects:**
- One entity doing too much
- **Fix:** Split into multiple aggregates or value objects
**Context leakage:**
- Same model shared across contexts
- **Fix:** Create context-specific models with translation layers
**Missing boundaries:**
- Everything in one module/package
- **Fix:** Identify bounded contexts, separate into modules
### Refactoring Strategies
**Extract bounded context:**
```markdown
As a developer, I want to extract [Context] into a separate module,
so that it has clear boundaries and can evolve independently
```
**Extract aggregate:**
```markdown
As a developer, I want to extract [Aggregate] from [GodObject],
so that it enforces its own invariants
```
**Introduce value object:**
```markdown
As a developer, I want to replace [primitive] with [ValueObject],
so that validation is centralized and the domain model is clearer
```
**Introduce event:**
```markdown
As a developer, I want to publish [Event] when [Command] succeeds,
so that other contexts can react to state changes
```
## Anti-Patterns
**Avoid:**
- Aggregates spanning multiple bounded contexts
- Shared mutable state across contexts
- Direct database access across contexts
- Aggregates with dozens of entities (too large)
- Value objects with identity
- Commands without clear aggregate ownership
- Events that imply future actions (use commands)
## Tips
- Start with strategic DDD (bounded contexts) before tactical patterns
- Bounded contexts align with team/organizational boundaries
- Keep aggregates small (single entity when possible)
- Use events for cross-context communication
- Value objects make impossible states impossible
- Refactor incrementally - don't rewrite everything at once

View File

@@ -1,50 +1,21 @@
---
name: gitea
+model: haiku
description: View, create, and manage Gitea issues and pull requests using tea CLI. Use when working with issues, PRs, viewing issue details, creating pull requests, adding comments, merging PRs, or when the user mentions tea, gitea, issue numbers, or PR numbers.
+user-invocable: false
---
# Gitea CLI (tea)
-Command-line interface for interacting with Gitea repositories.
+Command-line interface for Gitea repositories. Use `tea` for issue/PR management in Gitea instances.
-## Installation
-```bash
-brew install tea
-```
-## Authentication
-The `tea` CLI authenticates via `tea logins add`. Credentials are stored locally by tea.
-```bash
-tea logins add # Interactive login
-tea logins add --url <url> --token <token> --name <name> # Non-interactive
-tea logins list # Show configured logins
-tea logins default <name> # Set default login
-```
-## Configuration
-Config is stored at `~/Library/Application Support/tea/config.yml` (macOS).
-To avoid needing `--login` on every command, set defaults:
-```yaml
-preferences:
-  editor: false
-  flag_defaults:
-    remote: origin
-    login: git.flowmade.one
-```
+**Setup required?** See [reference/setup.md](reference/setup.md) for installation and authentication.
## Repository Detection
`tea` automatically detects the repository from git remotes when run inside a git repository. Use `--remote <name>` to specify which remote to use.
-## Common Commands
-### Issues
+## Issues
```bash
# List issues
@@ -79,7 +50,7 @@ tea issues deps add 5 owner/repo#3 # Cross-repo dependency
tea issues deps remove <issue> <blocker> # Remove a dependency
```
-### Pull Requests
+## Pull Requests
```bash
# List PRs
@@ -119,15 +90,7 @@ tea pulls merge <number> --style rebase-merge # Rebase then merge
tea pulls clean <number> # Delete local & remote branch
```
-### Repository
-```bash
-tea repos # List repos
-tea repos <owner>/<repo> # Repository info
-tea clone <owner>/<repo> # Clone repository
-```
-### Comments
+## Comments
```bash
# Add comment to issue or PR
@@ -143,7 +106,15 @@ tea comment 3 "## Review Summary
> **Warning**: Do not use heredoc syntax `$(cat <<'EOF'...EOF)` with `tea comment` - it causes the command to be backgrounded and fail silently.
-### Notifications
+## Repository
+```bash
+tea repos # List repos
+tea repos <owner>/<repo> # Repository info
+tea clone <owner>/<repo> # Clone repository
+```
+## Notifications
```bash
tea notifications # List notifications
@@ -177,22 +148,6 @@ tea issues -r owner/repo # Specify repo directly
- Use `--remote gitea` when you have multiple remotes (e.g., origin + gitea)
- The `tea pulls checkout` command is handy for reviewing PRs locally
-## Actions / CI
-```bash
-# List workflow runs
-tea actions runs # List all workflow runs
-tea actions runs -o json # JSON output for parsing
-# List jobs for a run
-tea actions jobs <run-id> # Show jobs for a specific run
-tea actions jobs <run-id> -o json # JSON output
-# Get job logs
-tea actions logs <job-id> # Display logs for a job
-# Full workflow: find failed job logs
-tea actions runs # Find the run ID
-tea actions jobs <run-id> # Find the job ID
-tea actions logs <job-id> # View the logs
-```
+## Advanced Topics
+- **CI/Actions debugging**: See [reference/actions-ci.md](reference/actions-ci.md)


@@ -0,0 +1,45 @@
# Gitea Actions / CI
Commands for debugging CI/Actions workflow failures in Gitea.
## Workflow Runs
```bash
# List workflow runs
tea actions runs # List all workflow runs
tea actions runs -o json # JSON output for parsing
```
## Jobs
```bash
# List jobs for a run
tea actions jobs <run-id> # Show jobs for a specific run
tea actions jobs <run-id> -o json # JSON output
```
## Logs
```bash
# Get job logs
tea actions logs <job-id> # Display logs for a job
```
## Full Workflow: Find Failed Job Logs
```bash
# 1. Find the run ID
tea actions runs
# 2. Find the job ID from that run
tea actions jobs <run-id>
# 3. View the logs
tea actions logs <job-id>
```
## Tips
- Use `-o json` with runs/jobs for programmatic parsing
- Run IDs and Job IDs are shown in the output of the respective commands
- Logs are displayed directly to stdout (can pipe to `grep` or save to file)

View File

@@ -0,0 +1,49 @@
# Gitea CLI Setup
One-time installation and authentication setup for `tea` CLI.
## Installation
```bash
brew install tea
```
## Authentication
The `tea` CLI authenticates via `tea logins add`. Credentials are stored locally by tea.
```bash
tea logins add # Interactive login
tea logins add --url <url> --token <token> --name <name> # Non-interactive
tea logins list # Show configured logins
tea logins default <name> # Set default login
```
## Configuration
Config is stored at `~/Library/Application Support/tea/config.yml` (macOS).
To avoid needing `--login` on every command, set defaults:
```yaml
preferences:
  editor: false
  flag_defaults:
    remote: origin
    login: git.flowmade.one
```
## Example: Flowmade One Setup
```bash
# Install
brew install tea
# Add login (get token from https://git.flowmade.one/user/settings/applications)
tea logins add --name flowmade --url https://git.flowmade.one --token <your-token>
# Set as default
tea logins default flowmade
```
Now `tea` commands will automatically use the flowmade login when run in a repository with a git.flowmade.one remote.


@@ -1,24 +1,25 @@
---
name: issue-writing
-description: Write clear, actionable issues with proper structure and acceptance criteria. Use when creating issues, writing bug reports, feature requests, or when the user needs help structuring an issue.
+description: >
+  Write clear, actionable issues with user stories, vertical slices, and acceptance
+  criteria. Use when creating issues, writing bug reports, feature requests, or when
+  the user needs help structuring an issue.
+user-invocable: false
---
# Issue Writing
-How to write clear, actionable issues.
+How to write clear, actionable issues that deliver user value.
-## Issue Structure
-### Title
-- Start with action verb: "Add", "Fix", "Update", "Remove", "Refactor"
-- Be specific: "Add user authentication" not "Auth stuff"
-- Keep under 60 characters when possible
-### Description
+## Primary Format: User Story
+Frame issues as user capabilities, not technical tasks:
```markdown
-## Summary
-One paragraph explaining what and why.
+Title: As a [persona], I want to [action], so that [benefit]
+## User Story
+As a [persona], I want to [action], so that [benefit]
## Acceptance Criteria
- [ ] Specific, testable requirement
@@ -32,7 +33,72 @@ Additional background, links, or references.
Implementation hints or constraints.
```
-## Writing Acceptance Criteria
+**Example:**
```markdown
Title: As a domain expert, I want to save my diagram, so that I can resume work later
## User Story
As a domain expert, I want to save my diagram to the cloud, so that I can resume
work later from any device.
## Acceptance Criteria
- [ ] User can click "Save" button in toolbar
- [ ] Diagram persists to cloud storage
- [ ] User sees confirmation message on successful save
- [ ] Saved diagram appears in recent files list
## Context
Users currently lose work when closing the browser. This is the #1 requested feature.
```
## Vertical Slices
Issues should be **vertical slices** that deliver user-visible value.
### The Demo Test
Before writing an issue, ask: **Can a user demo or test this independently?**
- **Yes** → Good issue scope
- **No** → Rethink the breakdown
### Good vs Bad Issue Titles
| Good (Vertical) | Bad (Horizontal) |
|-----------------|------------------|
| "As a user, I want to save my diagram" | "Add persistence layer" |
| "As a user, I want to see errors when login fails" | "Add error handling" |
| "As a domain expert, I want to list orders" | "Add query syntax to ADL" |
The technical work is the same, but vertical slices make success criteria clear and deliver demonstrable value.
## Writing User Stories
### Format
```
As a [persona], I want [capability], so that [benefit]
```
**Persona:** From manifesto or product vision (e.g., domain expert, developer, product owner)
**Capability:** What the user can do (not how it's implemented)
**Benefit:** Why this matters to the user
### Examples
```markdown
✓ As a developer, I want to run tests locally, so that I can verify changes before pushing
✓ As a product owner, I want to view open issues, so that I can prioritize work
✓ As a domain expert, I want to export my model as JSON, so that I can share it with my team
✗ As a developer, I want a test runner (missing benefit)
✗ I want to add authentication (missing persona and benefit)
✗ As a user, I want the system to be fast (not specific/testable)
```
## Acceptance Criteria
Good criteria are: Good criteria are:
- **Specific**: "User sees error message" not "Handle errors" - **Specific**: "User sees error message" not "Handle errors"
@@ -40,7 +106,7 @@ Good criteria are:
- **User-focused**: What the user experiences - **User-focused**: What the user experiences
- **Independent**: Each stands alone - **Independent**: Each stands alone
-Examples:
+**Examples:**
```markdown ```markdown
- [ ] Login form validates email format before submission - [ ] Login form validates email format before submission
- [ ] Invalid credentials show "Invalid email or password" message - [ ] Invalid credentials show "Invalid email or password" message
@@ -48,10 +114,13 @@ Examples:
- [ ] Session persists across browser refresh - [ ] Session persists across browser refresh
``` ```
-## Issue Types
+## Alternative Formats
### Bug Report
```markdown
Title: Fix [specific problem] in [area]
## Summary
Description of the bug.
@@ -70,37 +139,48 @@ What happens instead.
- Browser/OS/Version
```
-### Feature Request
-```markdown
-## Summary
-What feature and why it's valuable.
-## Acceptance Criteria
-- [ ] ...
-## User Story (optional)
-As a [role], I want [capability] so that [benefit].
-```
### Technical Task
+Use sparingly - prefer user stories when possible.
```markdown
+Title: [Action] [component/area]
## Summary
-What technical work needs to be done.
+What technical work needs to be done and why.
## Scope
- Include: ...
- Exclude: ...
## Acceptance Criteria
-- [ ] ...
+- [ ] Measurable technical outcome
+- [ ] Another measurable outcome
```
## Issue Sizing
Issues should be **small enough to complete in 1-3 days**.
**Too large?** Split into smaller vertical slices:
```markdown
# Too large
As a user, I want full authentication, so that my data is secure
# Better: Split into slices
1. As a user, I want to register with email/password, so that I can create an account
2. As a user, I want to log in with my credentials, so that I can access my data
3. As a user, I want to reset my password, so that I can regain access if I forget it
``` ```
## Labels
Use labels to categorize:
-- `bug`, `feature`, `enhancement`, `refactor`
-- `priority/high`, `priority/low`
-- Component labels specific to project
+- Type: `bug`, `feature`, `enhancement`, `refactor`
+- Priority: `priority/high`, `priority/medium`, `priority/low`
+- Component: Project-specific (e.g., `auth`, `api`, `ui`)
+- DDD: `bounded-context/[name]`, `aggregate`, `command`, `event` (when applicable)
## Dependencies
@@ -120,3 +200,13 @@ Identify and link dependencies when creating issues:
```
This creates a formal dependency graph that tools can query.
## Anti-Patterns
**Avoid:**
- Generic titles: "Fix bugs", "Improve performance"
- Technical jargon without context: "Refactor service layer"
- Missing acceptance criteria
- Horizontal slices: "Build API", "Add database tables"
- Vague criteria: "Make it better", "Improve UX"
- Issues too large to complete in a sprint

software-architecture.md Normal file

@@ -0,0 +1,130 @@
# Software Architecture
> **For Claude:** This content is mirrored in `skills/software-architecture/SKILL.md` which is auto-triggered when relevant. You don't need to load this file directly.
This document describes the architectural patterns we use to achieve our [architecture beliefs](./manifesto.md#architecture-beliefs). It serves as human-readable organizational documentation.
## Beliefs to Patterns
| Belief | Primary Pattern | Supporting Patterns |
|--------|-----------------|---------------------|
| Auditability by default | Event Sourcing | Immutable events, temporal queries |
| Business language in code | Domain-Driven Design | Ubiquitous language, aggregates, bounded contexts |
| Independent evolution | Event-driven communication | Bounded contexts, published language |
| Explicit over implicit | Commands and Events | Domain events, clear intent |
## Event Sourcing
**Achieves:** Auditability by default
Instead of storing current state, we store the sequence of events that led to it.
**Core concepts:**
- **Events** are immutable facts about what happened, named in past tense: `OrderPlaced`, `PaymentReceived`
- **State** is derived by replaying events, not stored directly
- **Event store** is append-only - history is never modified
**Why this matters:**
- Complete audit trail for free
- Debug by replaying history
- Answer "what was the state at time X?"
- Recover from bugs by fixing logic and replaying
**Trade-offs:**
- More complex than CRUD for simple cases
- Requires thinking in events, not state
- Eventually consistent read models
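As a minimal sketch, state is just a fold over the event log. The `OrderPlaced`/`PaymentReceived` types and the state shape below are illustrative, not taken from any real context:

```python
from dataclasses import dataclass

# Events are immutable facts, named in past tense.
@dataclass(frozen=True)
class OrderPlaced:
    order_id: str
    amount: int

@dataclass(frozen=True)
class PaymentReceived:
    order_id: str
    amount: int

def replay(events):
    """Derive current state by folding over the append-only log."""
    state = {"placed": False, "paid": 0}
    for event in events:
        if isinstance(event, OrderPlaced):
            state["placed"] = True
        elif isinstance(event, PaymentReceived):
            state["paid"] += event.amount
    return state

log = [OrderPlaced("o-1", 100), PaymentReceived("o-1", 40), PaymentReceived("o-1", 60)]
current = replay(log)        # {'placed': True, 'paid': 100}
at_time_x = replay(log[:2])  # temporal query: replay a prefix of the log
```

Note that "what was the state at time X?" falls out for free: replay a prefix of the log instead of the whole thing.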
## Domain-Driven Design
**Achieves:** Business language in code
The domain model reflects how the business thinks and talks.
**Core concepts:**
- **Ubiquitous language** - same terms in code, conversations, and documentation
- **Bounded contexts** - explicit boundaries where terms have consistent meaning
- **Aggregates** - clusters of objects that change together, with one root entity
- **Domain events** - capture what happened in business terms
**Why this matters:**
- Domain experts can read and validate the model
- New team members learn the domain through code
- Changes in business rules map clearly to code changes
**Trade-offs:**
- Upfront investment in understanding the domain
- Boundaries may need to shift as understanding grows
- Overkill for pure technical/infrastructure code
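One concrete shape this takes is a value object that validates at construction, so the model speaks business language and invalid values cannot exist. The `Money` type below is a hypothetical example, not part of our codebase:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    """Value object: immutable, and validated on construction,
    so an invalid amount can never enter the domain model."""
    amount_cents: int
    currency: str

    def __post_init__(self):
        if self.amount_cents < 0:
            raise ValueError("amount must be non-negative")
        if len(self.currency) != 3:
            raise ValueError("currency must be a 3-letter code")

    def add(self, other: "Money") -> "Money":
        # Business rule lives in the model, in business terms.
        if other.currency != self.currency:
            raise ValueError("cannot add different currencies")
        return Money(self.amount_cents + other.amount_cents, self.currency)

total = Money(1999, "EUR").add(Money(500, "EUR"))
```

A domain expert can read `add` and confirm the rule; the type system prevents "negative money" or cross-currency sums from ever being represented.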
## Event-Driven Communication
**Achieves:** Independent evolution
Services communicate by publishing events, not calling each other directly.
**Core concepts:**
- **Publish events** when something important happens
- **Subscribe to events** you care about
- **No direct dependencies** between publisher and subscriber
- **Eventual consistency** - accept that not everything updates instantly
**Why this matters:**
- Add new services without changing existing ones
- Services can be deployed independently
- Natural resilience - if a subscriber is down, events queue
**Trade-offs:**
- Harder to trace request flow
- Eventual consistency requires different thinking
- Need infrastructure for reliable event delivery
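The decoupling can be sketched in-process (a real deployment would use a broker; this `EventBus` is purely illustrative):

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process bus: publisher and subscriber share only
    an event name, never a reference to each other."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._subscribers[event_name].append(handler)

    def publish(self, event_name, payload):
        for handler in self._subscribers[event_name]:
            handler(payload)

bus = EventBus()
shipped = []
# Shipping subscribes to OrderPlaced; the order service never calls it directly.
bus.subscribe("OrderPlaced", lambda e: shipped.append(e["order_id"]))
bus.publish("OrderPlaced", {"order_id": "o-1"})
```

Adding a second subscriber (say, analytics) requires no change to the publisher - that is the independent-evolution property in miniature.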
## Commands and Events
**Achieves:** Explicit over implicit
Distinguish between requests (commands) and facts (events).
**Core concepts:**
- **Commands** express intent: `PlaceOrder`, `CancelSubscription`
- Commands can be rejected (validation, business rules)
- **Events** express facts: `OrderPlaced`, `SubscriptionCancelled`
- Events are immutable - what happened, happened
**Why this matters:**
- Clear separation of "trying to do X" vs "X happened"
- Commands validate, events just record
- Enables replay - reprocess events with new logic
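The split can be sketched as two functions with different contracts (names and payloads here are hypothetical): the command handler may reject, the event applicator may not.

```python
def handle_place_order(state, command):
    """Commands express intent and can be rejected by business rules."""
    if command["amount"] <= 0:
        raise ValueError("PlaceOrder rejected: amount must be positive")
    if command["order_id"] in state["orders"]:
        raise ValueError("PlaceOrder rejected: duplicate order")
    # Validation passed: record the fact.
    return {"type": "OrderPlaced",
            "order_id": command["order_id"],
            "amount": command["amount"]}

def apply(state, event):
    """Events are facts - applying one never fails or re-validates."""
    if event["type"] == "OrderPlaced":
        state["orders"][event["order_id"]] = event["amount"]
    return state

state = {"orders": {}}
event = handle_place_order(state, {"order_id": "o-1", "amount": 100})
state = apply(state, event)
```

Because `apply` contains no validation, stored events can be replayed through new logic without any of them being rejected retroactively.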
## When to Diverge
These patterns are defaults, not mandates. Diverge intentionally when:
- **Simplicity wins** - a simple CRUD endpoint doesn't need event sourcing
- **Performance requires it** - sometimes synchronous calls are necessary
- **Team context** - patterns the team doesn't understand cause more harm than good
- **Prototyping** - validate ideas before investing in full architecture
When diverging, document the decision in the project's vision.md (see below).
## Project-Level Architecture
Each project should document its architectural choices in `vision.md` under an **Architecture** section:
```markdown
## Architecture
This project follows organization architecture patterns.
### Alignment
- Event sourcing for [which aggregates/domains]
- Bounded contexts: [list contexts and their responsibilities]
- Event-driven communication between [which services]
### Intentional Divergences
| Area | Standard Pattern | What We Do Instead | Why |
|------|------------------|-------------------|-----|
| [area] | [expected pattern] | [actual approach] | [reasoning] |
```
This creates traceability: org beliefs → patterns → project decisions.