Compare commits

...

21 Commits

Author SHA1 Message Date
502601bf21 Address review feedback: clarify approval gate before file creation
- Made step 11 explicitly a gate before file creation
- Added explicit conditional ("If user declines/approves") flow
- Added note at Phase 6 start: "Only execute this phase after user approval"
- Added error handling notes for directory creation and file writes

The step ordering was already correct (approval step 11 before file creation
steps 12-13), but the flow is now more explicit about the conditional nature.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 19:51:01 +01:00
bf28e6b825 Add /create-capability command with validation and interactive guidance
Creates the /create-capability command that scaffolds new skills, commands,
and agents with:

- Interactive guidance questions for component selection
- Model selection recommendations based on task complexity
- Comprehensive validation before file creation:
  - Frontmatter validation (required fields, valid model, tools, skills)
  - Content validation (trigger conditions, step instructions, sections)
  - Convention checks (file names, directory structure, duplicates)
- Anti-pattern warnings with actionable recommendations
- Clear error messages for validation failures
- Option to proceed despite warnings

Closes #76
Also addresses #75 (dependency)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 19:19:43 +01:00
7ed31432ee Fix subagent_type in spawn-pr-fixes and review-pr commands
- spawn-pr-fixes: "general-purpose" → "pr-fixer"
- review-pr: Added explicit subagent_type: "software-architect"

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 16:14:14 +01:00
e1c19c12c3 Fix spawn-issues to use correct subagent_type for each agent
- Issue worker: "general-purpose" → "issue-worker"
- Code reviewer: Added explicit subagent_type: "code-reviewer"
- PR fixer: Added explicit subagent_type: "pr-fixer"

Using the wrong agent type caused permission loops when spawning
background agents.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 16:13:09 +01:00
c9a72bf1d3 Add capability-writing skill with templates and design guidance
Creates a skill that teaches how to design and create capabilities
(skill + command + agent combinations) for the architecture repository.

Includes:
- Component templates for skills, commands, and agents
- Decision tree and matrix for when to use each component
- Model selection guidance (haiku/sonnet/opus)
- Naming conventions and anti-patterns to avoid
- References to detailed documentation in docs/
- Checklists for creating each component type

Closes #74

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 15:42:50 +01:00
f8d4640d4f Add architecture beliefs to manifesto and enhance software-architecture skill
- Add Architecture Beliefs section to manifesto with outcome-focused beliefs:
  auditability, business language in code, independent evolution, explicit over implicit
- Create software-architecture.md as human-readable documentation
- Enhance software-architecture skill with beliefs→patterns mapping (DDD, Event
  Sourcing, event-driven communication) and auto-trigger description
- Update work-issue command to reference skill and check project architecture
- Update issue-worker agent with software-architecture skill
- Add Architecture section template to vision-management skill

The skill is now auto-triggered when implementing, reviewing, or planning
architectural work. Project-level architecture choices go in vision.md.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 14:52:40 +01:00
73caf4e4cf Fix spawn-issues: use worktrees for code reviewers
The code reviewer prompt was minimal and didn't specify worktree setup,
causing parallel reviewers to interfere with each other by checking out
different branches in the same directory.

Changes:
- Add worktree setup/cleanup to code reviewer prompt (like issue-worker/fixer)
- Add branch tracking to issue state
- Add note about passing branch name to reviewers
- Expand reviewer prompt with full review process

This ensures each reviewer works in isolation at:
  ../<repo>-review-<pr-number>

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 01:14:16 +01:00
095b5e7982 Add /arch-refine-issue command for architectural issue refinement
Creates a new command that refines issues with architectural perspective
by spawning the software-architect agent to analyze the codebase before
proposing implementation guidance. The command:

- Fetches issue details and spawns software-architect agent
- Analyzes existing patterns and affected components
- Identifies architectural concerns and dependencies
- Proposes refined description with technical notes
- Allows user to apply, edit, or skip the refinement

Closes #59

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 00:10:42 +00:00
8f0b50b9ce Enhance /review-pr with software architecture review
Add software architecture review as a standard part of PR review process:
- Reference software-architecture skill for patterns and checklists
- Spawn software-architect agent for architectural analysis
- Add checks for pattern consistency, dependency direction, breaking changes,
  module boundaries, and error handling
- Structure review output with separate Code Review and Architecture Review
  sections

Closes #60

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 00:09:50 +00:00
3a64d68889 Add /arch-review-repo command for repository architecture reviews
Creates a new command that spawns the software-architect agent to perform
comprehensive architecture audits. The command analyzes directory structure,
package organization, patterns, anti-patterns, dependencies, and test coverage,
then presents prioritized recommendations with a health score.

Closes #58

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 01:05:47 +01:00
c27659f1dd Update spawn-issues to event-driven pattern
Replace polling loop with task-notification based orchestration.
Background tasks send notifications when complete - no need to poll.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 01:03:17 +01:00
392228a34f Add software-architect agent for architectural analysis
Create the software-architect agent that performs deep architectural
analysis on codebases. The agent:

- References software-architecture skill for patterns and checklists
- Supports three analysis types: repo-audit, issue-refine, pr-review
- Analyzes codebase structure and patterns
- Applies architectural review checklists from the skill
- Identifies anti-patterns (god packages, circular deps, etc.)
- Generates prioritized recommendations (P0-P3)
- Returns structured ARCHITECT_ANALYSIS_RESULT for calling commands

Closes #57

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 00:59:46 +01:00
7d4facfedc Fix code-reviewer agent: heredoc bug and branch cleanup
- Add warning about heredoc syntax with tea comment (causes backgrounding)
- Add tea pulls clean step after merging PRs
- Agent already references gitea skill which documents the heredoc issue

Closes #62

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 23:50:13 +00:00
8ed646857a Add software-architecture skill
Creates the foundational skill that encodes software architecture
best practices, review checklists, and patterns for Go and generic
architecture guidance.

Closes #56

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 00:04:04 +01:00
22962c22cf Update spawn-issues to concurrent pipeline with status updates
- Each issue flows independently through: implement → review → fix → review
- Don't wait for all workers before starting reviews
- Print status update as each step completes
- Poll loop checks all tasks, advances each issue independently
- State machine: implementing → reviewing → fixing → approved/failed

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 18:11:05 +01:00
3afe930a27 Refactor spawn-issues as orchestrator
spawn-issues now orchestrates the full workflow:
- Phase 1: Spawn issue-workers in parallel, wait for completion
- Phase 2: Review loop - spawn code-reviewer, if needs work spawn pr-fixer
- Phase 3: Report final status

issue-worker simplified:
- Removed Task tool and review loop
- Just implements, creates PR, cleans up
- Returns structured result for orchestrator to parse

Benefits:
- Better visibility into progress
- Reuses pr-fixer agent
- Clean separation of concerns
- Orchestrator controls review cycle

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 17:33:22 +01:00
7dffdc4e77 Add review loop to spawn-issues agent prompt
The inline prompt in spawn-issues.md was missing the review loop
that was added to issue-worker/agent.md. Now includes:
- Step 7: Spawn code-reviewer synchronously, fix and re-review if needed
- Step 9: Concise final summary output

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 17:26:21 +01:00
d3bc674b4a Add /spawn-pr-fixes command and pr-fixer agent
New command to spawn parallel agents that address PR review feedback:
- /spawn-pr-fixes 12 15 18 - fix specific PRs
- /spawn-pr-fixes - auto-find PRs with requested changes

pr-fixer agent workflow:
- Creates worktree from PR branch
- Reads review comments
- Addresses each piece of feedback
- Commits and pushes fixes
- Runs code-reviewer synchronously
- Loops until approved (max 3 iterations)
- Cleans up worktree
- Outputs concise summary

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 17:14:24 +01:00
0692074e16 Add review loop and concise summary to issue-worker agent
- Add Task tool to spawn code-reviewer synchronously
- Add review loop: fix issues and re-review until approved (max 3 iterations)
- Add final summary format for cleaner output to spawning process
- Reviewer works in same worktree, cleanup only after review completes

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 17:07:42 +01:00
c67595b421 Add skills frontmatter to issue-worker agent
Background agents need skills specified in frontmatter rather than
using @ syntax which may not expand for Task-spawned agents.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 16:57:22 +01:00
a7d7d60440 Add /spawn-issues command for parallel issue work
New command that spawns background agents to work on multiple
issues simultaneously, each in an isolated git worktree.

- commands/spawn-issues.md: Entry point, parses args, spawns agents
- agents/issue-worker/agent.md: Autonomous agent that implements
  a single issue (worktree setup, implement, PR, cleanup)

Worktrees are automatically cleaned up after PR creation.
Branch remains on remote for follow-up work if needed.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 16:50:34 +01:00
17 changed files with 2648 additions and 14 deletions

View File

@@ -16,6 +16,7 @@ make install
| Component | Purpose |
|-----------|---------|
| `manifesto.md` | Organization vision, personas, beliefs, principles |
| `software-architecture.md` | Architectural patterns (human docs, mirrored in skill) |
| `learnings/` | Historical record and governance |
| `commands/` | AI workflow entry points (/work-issue, /manifesto, etc.) |
| `skills/` | Tool and practice knowledge |
@@ -28,6 +29,7 @@ make install
```
architecture/
├── manifesto.md              # Organization vision and beliefs
├── software-architecture.md  # Patterns linked to beliefs (DDD, ES)
├── learnings/                # Captured learnings and governance
├── commands/                 # Slash commands (/work-issue, /dashboard)
├── skills/                   # Knowledge modules (auto-triggered)

View File

@@ -25,7 +25,9 @@ You will receive a PR number to review. Follow this process:
- **Test Coverage**: Missing tests, untested edge cases
3. Generate a structured review comment
4. Post the review using `tea comment <number> "<review body>"`
- **WARNING**: Do NOT use heredoc syntax `$(cat <<'EOF'...)` with `tea comment` - it causes the command to be backgrounded and fail silently (see the example after this list)
- Keep comments concise or use literal newlines in quoted strings
5. **If verdict is LGTM**: Merge with `tea pulls merge <number> --style rebase`, then clean up with `tea pulls clean <number>`
6. **If verdict is NOT LGTM**: Do not merge; leave for the user to address
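The warning above rules out heredoc substitution; a plain double-quoted string with literal newlines is the safe alternative. A minimal sketch (PR number and wording are illustrative):
```bash
# Safe: a normal double-quoted string can span multiple lines
tea comment 42 "Review summary:
- Logic looks correct
- Please add a test for the empty-input case"

# Avoid: wrapping the body in $(cat <<'EOF' ... EOF) - it backgrounds the command and fails silently
```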
## Review Comment Format

View File

@@ -0,0 +1,130 @@
---
name: issue-worker
description: Autonomous agent that implements a single issue in an isolated git worktree
tools: Bash, Read, Write, Edit, Glob, Grep, TodoWrite
skills: gitea, issue-writing, software-architecture
---
# Issue Worker Agent
Autonomously implements a single issue in an isolated git worktree. Creates a PR and returns - the orchestrator handles review.
## Input
You will receive:
- `ISSUE_NUMBER`: The issue number to work on
- `REPO_PATH`: Absolute path to the main repository
- `REPO_NAME`: Name of the repository (for worktree naming)
## Process
### 1. Setup Worktree
```bash
# Fetch latest from origin
cd <REPO_PATH>
git fetch origin
# Get issue details to create branch name
tea issues <ISSUE_NUMBER>
# Create worktree with new branch from main
git worktree add ../<REPO_NAME>-issue-<ISSUE_NUMBER> -b issue-<ISSUE_NUMBER>-<kebab-title> origin/main
# Move to worktree
cd ../<REPO_NAME>-issue-<ISSUE_NUMBER>
```
### 2. Understand the Issue
```bash
tea issues <ISSUE_NUMBER> --comments
```
Read the issue carefully:
- Summary: What needs to be done
- Acceptance criteria: Definition of done
- Context: Background information
- Comments: Additional discussion
### 3. Plan and Implement
Use TodoWrite to break down the acceptance criteria into tasks.
Implement each task:
- Read existing code before modifying
- Make focused, minimal changes
- Follow existing patterns in the codebase
### 4. Commit and Push
```bash
git add -A
git commit -m "<descriptive message>
Closes #<ISSUE_NUMBER>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>"
git push -u origin issue-<ISSUE_NUMBER>-<kebab-title>
```
### 5. Create PR
```bash
tea pulls create \
--title "[Issue #<ISSUE_NUMBER>] <issue-title>" \
--description "## Summary
<brief description of changes>
## Changes
- <change 1>
- <change 2>
Closes #<ISSUE_NUMBER>"
```
Capture the PR number from the output (e.g., "Pull Request #42 created").
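If the number needs to be extracted mechanically, a minimal sketch - this assumes the output contains a line like the example above; adjust to what `tea` actually prints:
```bash
# Hypothetical extraction - assumes output such as "Pull Request #42 created"
PR_NUMBER=$(tea pulls create --title "<title>" --description "<description>" \
  | grep -oE 'Pull Request #[0-9]+' | grep -oE '[0-9]+')
echo "Created PR #$PR_NUMBER"
```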
### 6. Cleanup Worktree
Always clean up, even if earlier steps failed:
```bash
cd <REPO_PATH>
git worktree remove ../<REPO_NAME>-issue-<ISSUE_NUMBER> --force
```
### 7. Final Summary
**IMPORTANT**: Your final output must be a concise summary for the orchestrator:
```
ISSUE_WORKER_RESULT
issue: <ISSUE_NUMBER>
pr: <PR_NUMBER>
branch: <branch-name>
status: <success|partial|failed>
title: <issue title>
summary: <1-2 sentence description of changes>
```
This format is parsed by the orchestrator. Do NOT include verbose logs - only this summary.
## Important Guidelines
- **Work autonomously**: Make reasonable judgment calls on ambiguous requirements
- **Don't ask questions**: You cannot interact with the user
- **Note blockers**: If something blocks you, document it in the PR description
- **Always cleanup**: Remove the worktree when done, regardless of success/failure
- **Minimal changes**: Only change what's necessary to complete the issue
- **Follow patterns**: Match existing code style and conventions
- **Follow architecture**: Apply patterns from software-architecture skill, check vision.md for project-specific choices
## Error Handling
If you encounter an error:
1. Try to recover if possible
2. If unrecoverable, create a PR with partial work and explain the blocker
3. Always run the cleanup step
4. Report status as "partial" or "failed" in summary

agents/pr-fixer/agent.md (new file, +139 lines)
View File

@@ -0,0 +1,139 @@
---
name: pr-fixer
description: Autonomous agent that addresses PR review feedback in an isolated git worktree
tools: Bash, Read, Write, Edit, Glob, Grep, TodoWrite, Task
skills: gitea, code-review
---
# PR Fixer Agent
Autonomously addresses review feedback on a pull request in an isolated git worktree.
## Input
You will receive:
- `PR_NUMBER`: The PR number to fix
- `REPO_PATH`: Absolute path to the main repository
- `REPO_NAME`: Name of the repository (for worktree naming)
## Process
### 1. Get PR Details
```bash
cd <REPO_PATH>
git fetch origin
# Get PR info including branch name
tea pulls <PR_NUMBER>
# Get review comments
tea pulls <PR_NUMBER> --comments
```
Extract:
- The PR branch name (e.g., `issue-42-add-feature`)
- All review comments and requested changes
### 2. Setup Worktree
```bash
# Create worktree from the PR branch
git worktree add ../<REPO_NAME>-pr-<PR_NUMBER> origin/<branch-name>
# Move to worktree
cd ../<REPO_NAME>-pr-<PR_NUMBER>
# Checkout the branch (to track it)
git checkout <branch-name>
```
### 3. Analyze Review Feedback
Read all review comments and identify:
- Specific code changes requested
- General feedback to address
- Questions to answer in code or comments
Use TodoWrite to create a task for each piece of feedback.
### 4. Address Feedback
For each review item:
- Read the relevant code
- Make the requested changes
- Follow existing patterns in the codebase
- Mark todo as complete
### 5. Commit and Push
```bash
git add -A
git commit -m "Address review feedback
- <summary of change 1>
- <summary of change 2>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>"
git push
```
### 6. Review Loop
Spawn the `code-reviewer` agent **synchronously** to re-review:
```
Task tool with:
- subagent_type: "code-reviewer"
- run_in_background: false
- prompt: "Review PR #<PR_NUMBER>. Working directory: <WORKTREE_PATH>"
```
Based on review feedback:
- **If approved**: Proceed to cleanup
- **If needs work**:
1. Address the new feedback
2. Commit and push the fixes
3. Trigger another review
4. Repeat until approved (max 3 iterations to avoid infinite loops)
### 7. Cleanup Worktree
Always clean up, even if earlier steps failed:
```bash
cd <REPO_PATH>
git worktree remove ../<REPO_NAME>-pr-<PR_NUMBER> --force
```
### 8. Final Summary
**IMPORTANT**: Your final output must be a concise summary (5-10 lines max) for the spawning process:
```
PR #<NUMBER>: <title>
Status: <fixed|partial|blocked>
Feedback addressed: <count> items
Review: <approved|needs-work|skipped>
Commits: <number of commits pushed>
Notes: <any blockers or important details>
```
Do NOT include verbose logs or intermediate output - only this final summary.
## Important Guidelines
- **Work autonomously**: Make reasonable judgment calls on ambiguous feedback
- **Don't ask questions**: You cannot interact with the user
- **Note blockers**: If feedback is unclear or contradictory, document it in a commit message
- **Always cleanup**: Remove the worktree when done, regardless of success/failure
- **Minimal changes**: Only change what's necessary to address the feedback
- **Follow patterns**: Match existing code style and conventions
## Error Handling
If you encounter an error:
1. Try to recover if possible
2. If unrecoverable, push partial work and explain in a comment
3. Always run the cleanup step

View File

@@ -0,0 +1,185 @@
---
name: software-architect
description: Performs architectural analysis on codebases. Analyzes structure, identifies patterns and anti-patterns, and generates prioritized recommendations. Spawned by commands for deep, isolated analysis.
# Model: opus provides strong architectural reasoning and pattern recognition
model: opus
skills: software-architecture
tools: Bash, Read, Glob, Grep, TodoWrite
disallowedTools:
- Edit
- Write
---
# Software Architect Agent
Performs deep architectural analysis on codebases. Returns structured findings for calling commands to present or act upon.
## Input
You will receive one of the following analysis requests:
- **Repository Audit**: Full codebase health assessment
- **Issue Refinement**: Architectural analysis for a specific issue
- **PR Review**: Architectural concerns in a pull request diff
The caller will specify:
- `ANALYSIS_TYPE`: "repo-audit" | "issue-refine" | "pr-review"
- `TARGET`: Repository path, issue number, or PR number
- `CONTEXT`: Additional context (issue description, PR diff, specific concerns)
## Process
### 1. Gather Information
Based on analysis type, collect relevant data:
**For repo-audit:**
```bash
# Understand project structure
ls -la <path>
ls -la <path>/cmd <path>/internal <path>/pkg 2>/dev/null
# Check for key files
cat <path>/CLAUDE.md
cat <path>/go.mod 2>/dev/null
cat <path>/package.json 2>/dev/null
# Analyze package structure
find <path> -name "*.go" -type f | head -50
find <path> -name "*.ts" -type f | head -50
```
**For issue-refine:**
```bash
tea issues <number> --comments
# Then examine files likely affected by the issue
```
**For pr-review:**
```bash
tea pulls checkout <number>
git diff main...HEAD
```
### 2. Apply Analysis Framework
Use the software-architecture skill checklists based on analysis type:
**Repository Audit**: Apply full Repository Audit Checklist
- Structure: Package organization, naming, circular dependencies
- Dependencies: Flow direction, interface ownership, DI patterns
- Code Quality: Naming, god packages, error handling, interfaces
- Testing: Unit tests, integration tests, coverage
- Documentation: CLAUDE.md, vision.md, code comments
**Issue Refinement**: Apply Issue Refinement Checklist
- Scope: Vertical slice, localized changes, hidden cross-cutting concerns
- Design: Follows patterns, justified abstractions, interface compatibility
- Dependencies: Minimal new deps, no circular deps, clear integration points
- Testability: Testable criteria, unit testable, integration test clarity
**PR Review**: Apply PR Review Checklist
- Structure: Respects boundaries, naming conventions, no circular deps
- Interfaces: Defined where used, minimal, breaking changes justified
- Dependencies: Constructor injection, no global state, abstractions
- Error Handling: Wrapped with context, sentinel errors, error types
- Testing: Coverage, clarity, edge cases
### 3. Identify Anti-Patterns
Scan for anti-patterns documented in the skill:
- **God Packages**: utils/, common/, helpers/ with many files
- **Circular Dependencies**: Package import cycles
- **Leaky Abstractions**: Implementation details crossing boundaries
- **Anemic Domain Model**: Data-only domain types, logic elsewhere
- **Shotgun Surgery**: Small changes require many file edits
- **Feature Envy**: Code too interested in another package's data
- **Premature Abstraction**: Interfaces before needed
- **Deep Hierarchy**: Excessive layers of abstraction
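To make a couple of these checks concrete, a rough sketch for a Go codebase - heuristics only; the directory names and thresholds are assumptions, not rules from the skill:
```bash
# Heuristic: flag potential god packages by Go file count
find <path> -type d \( -name utils -o -name common -o -name helpers \) | while read -r dir; do
  echo "$dir: $(find "$dir" -maxdepth 1 -name '*.go' | wc -l) Go files"
done

# Circular dependencies: the Go toolchain reports import cycles at build time
cd <path> && go build ./... 2>&1 | grep -i "import cycle" || echo "no import cycles reported"
```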
### 4. Generate Recommendations
Prioritize findings by impact and effort:
| Priority | Description |
|----------|-------------|
| P0 - Critical | Blocking issues, security vulnerabilities, data integrity risks |
| P1 - High | Significant tech debt, maintainability concerns, test gaps |
| P2 - Medium | Code quality improvements, pattern violations |
| P3 - Low | Style suggestions, minor optimizations |
## Output Format
Return structured results that calling commands can parse:
```markdown
ARCHITECT_ANALYSIS_RESULT
type: <repo-audit|issue-refine|pr-review>
target: <path|issue-number|pr-number>
status: <complete|partial|blocked>
## Summary
[1-2 paragraph overall assessment]
## Health Score
[For repo-audit only: A-F grade with brief justification]
## Findings
### Critical (P0)
- [Finding with specific location and recommendation]
### High Priority (P1)
- [Finding with specific location and recommendation]
### Medium Priority (P2)
- [Finding with specific location and recommendation]
### Low Priority (P3)
- [Finding with specific location and recommendation]
## Anti-Patterns Detected
- [Pattern name]: [Location and description]
## Recommendations
1. [Specific, actionable recommendation]
2. [Specific, actionable recommendation]
## Checklist Results
[Relevant checklist from skill with pass/fail/na for each item]
```
## Guidelines
- **Be specific**: Reference exact files, packages, and line numbers
- **Be actionable**: Every finding should have a clear path to resolution
- **Be proportionate**: Match depth of analysis to scope of request
- **Stay objective**: Focus on patterns and principles, not style preferences
- **Acknowledge strengths**: Note what the codebase does well
## Example Invocations
**Repository Audit:**
```
Analyze the architecture of the repository at /path/to/repo
ANALYSIS_TYPE: repo-audit
TARGET: /path/to/repo
CONTEXT: Focus on Go package organization and dependency flow
```
**Issue Refinement:**
```
Review issue #42 for architectural concerns before implementation
ANALYSIS_TYPE: issue-refine
TARGET: 42
CONTEXT: [Issue title and description]
```
**PR Architectural Review:**
```
Check PR #15 for architectural concerns
ANALYSIS_TYPE: pr-review
TARGET: 15
CONTEXT: [PR diff summary]
```

View File

@@ -0,0 +1,164 @@
---
description: Refine an issue with architectural perspective. Analyzes existing codebase patterns and provides implementation guidance.
argument-hint: <issue-number>
---
# Architecturally Refine Issue #$1
@~/.claude/skills/gitea/SKILL.md
@~/.claude/skills/issue-writing/SKILL.md
## Overview
Refine an issue in the context of the project's architecture. This command:
1. Fetches the issue details
2. Spawns the software-architect agent to analyze the codebase
3. Identifies how the issue fits existing patterns
4. Proposes refined description and acceptance criteria
## Process
### Step 1: Fetch Issue Details
```bash
tea issues $1 --comments
```
Capture:
- Title
- Description
- Acceptance criteria
- Any existing discussion
### Step 2: Spawn Software-Architect Agent
Use the Task tool to spawn the software-architect agent for issue refinement analysis:
```
Task tool with:
- subagent_type: "software-architect"
- prompt: See prompt below
```
**Agent Prompt:**
```
Analyze the architecture for issue refinement.
ANALYSIS_TYPE: issue-refine
TARGET: $1
CONTEXT:
<issue title and description from step 1>
Repository path: <current working directory>
Focus on:
1. Understanding existing project structure and patterns
2. Identifying packages/modules that will be affected
3. Analyzing existing conventions and code style
4. Detecting potential architectural concerns
5. Suggesting implementation approach that fits existing patterns
```
### Step 3: Parse Agent Analysis
The software-architect agent returns structured output with:
- Summary of architectural findings
- Affected packages/modules
- Pattern recommendations
- Potential concerns (breaking changes, tech debt, pattern violations)
- Implementation suggestions
### Step 4: Present Refinement Proposal
Present the refined issue to the user with:
**1. Architectural Context**
- Affected packages/modules
- Existing patterns that apply
- Dependency implications
**2. Concerns and Risks**
- Breaking changes
- Tech debt considerations
- Pattern violations to avoid
**3. Proposed Refinement**
- Refined description with architectural context
- Updated acceptance criteria (if needed)
- Technical notes section
**4. Implementation Guidance**
- Suggested approach
- Files likely to be modified
- Recommended order of changes
### Step 5: User Decision
Ask the user what action to take:
- **Apply**: Update the issue with refined description and technical notes
- **Edit**: Let user modify the proposal before applying
- **Skip**: Keep the original issue unchanged
### Step 6: Update Issue (if approved)
If user approves, update the issue using tea CLI:
```bash
tea issues edit $1 --description "<refined description>"
```
Add a comment with the architectural analysis:
```bash
tea comment $1 "## Architectural Analysis
<findings from software-architect agent>
---
Generated by /arch-refine-issue"
```
## Output Format
Present findings in a clear, actionable format:
```markdown
## Architectural Analysis for Issue #$1
### Affected Components
- `package/name` - Description of impact
- `another/package` - Description of impact
### Existing Patterns
- Pattern 1: How it applies
- Pattern 2: How it applies
### Concerns
- [ ] Breaking change: description (if applicable)
- [ ] Tech debt: description (if applicable)
- [ ] Pattern violation risk: description (if applicable)
### Proposed Refinement
**Updated Description:**
<refined description>
**Updated Acceptance Criteria:**
- [ ] Original criteria (unchanged)
- [ ] New criteria based on analysis
**Technical Notes:**
<implementation guidance based on architecture>
### Recommended Approach
1. Step 1
2. Step 2
3. Step 3
```
## Error Handling
- If issue does not exist, inform user
- If software-architect agent fails, report partial analysis
- If tea CLI fails, show manual instructions

View File

@@ -0,0 +1,73 @@
---
description: Perform a full architecture review of the current repository. Analyzes structure, patterns, dependencies, and generates prioritized recommendations.
argument-hint:
context: fork
---
# Architecture Review
@~/.claude/skills/software-architecture/SKILL.md
## Process
1. **Identify the repository**: Use the current working directory as the repository path.
2. **Spawn the software-architect agent** for deep analysis:
```
ANALYSIS_TYPE: repo-audit
TARGET: <repository-path>
CONTEXT: Full repository architecture review
```
The agent will:
- Analyze directory structure and package organization
- Identify patterns and anti-patterns in the codebase
- Assess dependency graph and module boundaries
- Review test coverage approach
- Generate structured findings with prioritized recommendations
3. **Present the results** to the user in this format:
```markdown
## Repository Architecture Review: <repo-name>
### Structure: <Good|Needs Work>
- [Key observations about package organization]
- [Directory structure assessment]
- [Naming conventions evaluation]
### Patterns Identified
- [Positive patterns found in the codebase]
- [Architectural styles detected (layered, hexagonal, etc.)]
### Anti-Patterns Detected
- [Anti-pattern name]: [Location and description]
- [Anti-pattern name]: [Location and description]
### Concerns
- [Specific issues that need attention]
- [Technical debt areas]
### Recommendations (prioritized)
1. **P0 - Critical**: [Most urgent recommendation]
2. **P1 - High**: [Important improvement]
3. **P2 - Medium**: [Nice-to-have improvement]
4. **P3 - Low**: [Minor optimization]
### Health Score: <A|B|C|D|F>
[Brief justification for the grade]
```
4. **Offer follow-up actions**:
- Create issues for critical findings
- Generate a detailed report
- Review specific components in more depth
## Guidelines
- Be specific: Reference exact files, packages, and locations
- Be actionable: Every finding should have a clear path to resolution
- Be balanced: Acknowledge what the codebase does well
- Be proportionate: Focus on high-impact issues first
- Stay objective: Focus on patterns and principles, not style preferences

View File

@@ -0,0 +1,256 @@
---
description: Create new capabilities (skills, commands, agents) with validation and guided design decisions.
argument-hint: <description>
model: sonnet
---
# Create Capability
@~/.claude/skills/capability-writing/SKILL.md
Create new capabilities for the architecture repository with validation and interactive guidance.
## Process
### Phase 1: Understand Intent
1. **Parse the description** from `$1` or ask for one:
- "What capability do you want to add? Describe what it should do."
2. **Ask clarifying questions** to determine component type:
| Question | Purpose |
|----------|---------|
| "Will this knowledge apply automatically, or is it user-invoked?" | Skill vs Command |
| "Does this need isolated context for complex work?" | Agent needed? |
| "Is this read-only analysis or does it modify files?" | Tool restrictions |
| "Will this be used repeatedly, or is it one-time?" | Worth encoding? |
3. **Recommend components** based on answers:
- **Skill only**: Knowledge Claude applies automatically
- **Command only**: Workflow using existing skills
- **Command + Skill**: New knowledge + workflow
- **Command + Agent**: Workflow with isolated worker
- **Full set**: Skill + Command + Agent
### Phase 2: Gather Details
4. **Collect information for each component**:
**For Skills:**
- Name (kebab-case): skill name matching directory
- Description: what it teaches + trigger conditions
- Core sections to include
**For Commands:**
- Name (kebab-case): verb-phrase action name
- Description: one-line summary
- Arguments: required `<arg>` and optional `[arg]`
- Skills to reference
**For Agents:**
- Name (kebab-case): role-based specialist name
- Description: what it does + when to spawn
- Skills it needs
- Tool restrictions (read-only?)
### Phase 3: Model Selection
5. **Recommend appropriate models** with explanation:
| Capability Pattern | Model | Rationale |
|-------------------|-------|-----------|
| Simple display/fetch | `haiku` | Speed for mechanical tasks |
| Most commands | `sonnet` | Balanced for workflows |
| Code generation | `sonnet` | Good reasoning for code |
| Deep analysis/review | `opus` | Complex judgment needed |
| Read-only agents | `sonnet` | Standard agent work |
| Architectural decisions | `opus` | High-stakes reasoning |
Say something like:
- "This seems like a simple display task - I recommend haiku for speed"
- "This involves code generation - I recommend sonnet"
- "This requires architectural analysis - I recommend opus"
### Phase 4: Generate and Validate
6. **Generate file content** using templates from capability-writing skill
7. **Run validation checks** before showing preview:
#### Frontmatter Validation
| Check | Component | Rule |
|-------|-----------|------|
| Required fields | All | `name` for skills/agents, `description` for all |
| Model value | All | Must be `haiku`, `sonnet`, or `opus` (or absent) |
| Tools list | Agents | Only valid tool names: `Bash`, `Read`, `Write`, `Edit`, `Glob`, `Grep`, `Task`, `TodoWrite` |
| Skills reference | Agents | Each skill in list must exist in `skills/*/SKILL.md` |
#### Content Validation
| Check | Component | Rule |
|-------|-----------|------|
| Trigger conditions | Skills | Description must explain when to use (not just what) |
| Step instructions | Commands | Must have numbered steps with `**Step**:` format |
| Behavior sections | Agents | Must have "When Invoked" or process section |
| Skill references | Commands | `@~/.claude/skills/name/SKILL.md` paths must be valid |
#### Convention Checks
| Check | Rule |
|-------|------|
| Skill file name | Must be `SKILL.md` (uppercase) |
| Command file name | Must be lowercase kebab-case |
| Agent file name | Must be `AGENT.md` (uppercase) |
| Directory structure | `skills/<name>/`, `commands/`, `agents/<name>/` |
| No duplicates | Name must not match existing capability |
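A rough sketch of how two of these convention checks could be run from the repository root (the capability name and skill list are placeholders):
```bash
NAME=migration-review   # placeholder name from the example session below

# Duplicate check: does a capability with this name already exist?
for f in "skills/$NAME/SKILL.md" "commands/$NAME.md" "agents/$NAME/AGENT.md"; do
  [ -e "$f" ] && echo "duplicate: $f already exists"
done

# Skill-reference check: every skill an agent lists must exist under skills/<name>/SKILL.md
for skill in gitea code-review; do   # placeholder skill list
  [ -f "skills/$skill/SKILL.md" ] || echo "missing skill: skills/$skill/SKILL.md"
done
```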
8. **Check for anti-patterns** and warn:
| Anti-pattern | Detection | Warning |
|--------------|-----------|---------|
| Trigger in body | Skill body contains "when to use" | "Move trigger conditions to description frontmatter" |
| No tool restrictions | Read-only agent without `disallowedTools` | "Consider adding `disallowedTools: [Edit, Write]` for read-only agents" |
| Missing skill refs | Command mentions domain without `@` reference | "Add explicit skill reference: `@~/.claude/skills/name/SKILL.md`" |
| Overly broad tools | Agent allows all tools but does specific task | "Consider restricting tools to what's actually needed" |
| Generic naming | Name like `utils`, `helper`, `misc` | "Use specific domain-focused naming" |
| God capability | Single component handling multiple unrelated concerns | "Consider splitting into focused components" |
### Phase 5: Present and Confirm
9. **Show validation results**:
```
## Validation Results
[PASS] Frontmatter: All required fields present
[PASS] Model: sonnet is valid
[WARN] Anti-pattern: Agent allows all tools but only reads files
Recommendation: Add disallowedTools: [Edit, Write]
[PASS] Conventions: File names follow patterns
[PASS] No duplicates: Name is unique
Proceed with warnings? (y/n)
```
10. **Show file preview** with full content:
```
## Files to Create
### skills/migration-review/SKILL.md
```yaml
---
name: migration-review
description: >
Knowledge for reviewing database migrations...
---
```
### commands/review-migration.md
```yaml
---
description: Review database migrations for safety and best practices
---
```
```
11. **Ask for approval** (gate before file creation):
- "Create these files? (y/n)"
- If warnings exist: "There are warnings. Proceed anyway? (y/n)"
- **If user declines**: Stop here. Offer to adjust the generated content or cancel.
- **If user approves**: Proceed to Phase 6.
### Phase 6: Create Files
**Only execute this phase after user approval in step 11.**
12. **Create directories** if needed:
```bash
mkdir -p skills/<name>
mkdir -p agents/<name>
```
If directory creation fails, report the error and stop.
13. **Write files** to correct locations:
- `skills/<name>/SKILL.md`
- `commands/<name>.md`
- `agents/<name>/AGENT.md`
If any file write fails, report which files were created and which failed.
14. **Report success**:
```
## Capability Created
Files created:
- skills/migration-review/SKILL.md
- commands/review-migration.md
Next steps:
1. Run `make install` to symlink to ~/.claude/
2. Test with: /review-migration
3. Refine based on usage
```
## Validation Error Messages
Provide clear, actionable error messages:
| Error | Message |
|-------|---------|
| Missing name | "Skill requires `name` field in frontmatter" |
| Invalid model | "Model must be `haiku`, `sonnet`, or `opus`. Got: `gpt-4`" |
| Unknown tool | "Unknown tool `Search` in disallowedTools. Valid tools: Bash, Read, Write, Edit, Glob, Grep, Task, TodoWrite" |
| Missing skill | "Agent references skill `foo` but `skills/foo/SKILL.md` does not exist" |
| Invalid path | "Skill reference `@~/.claude/skills/bar/SKILL.md` invalid - directory `skills/bar/` not found" |
| Duplicate name | "Capability `gitea` already exists at `skills/gitea/SKILL.md`" |
| Wrong file name | "Skill file must be named `SKILL.md`, not `skill.md`" |
## Example Session
```
User: /create-capability I want to add support for reviewing database migrations
Claude: I'll help you create a migration review capability. Let me ask a few questions:
1. Will this knowledge apply automatically when Claude sees migrations,
or should users explicitly invoke it?
User: Users should invoke it explicitly
Claude: 2. Does this need isolated context for deep analysis, or can it work
in the main conversation?
User: Isolated would be good for complex migrations
Claude: 3. Is this read-only analysis, or will it modify migration files?
User: Just analysis, read-only
Claude: Based on your answers, I recommend:
- **Command**: `/review-migration` - Entry point for users
- **Skill**: `migration-review` - Knowledge about safe migrations
- **Agent**: `migration-reviewer` - Isolated analysis (read-only)
For model selection:
- Command: `sonnet` (standard workflow)
- Agent: `sonnet` with `opus` upgrade path for complex cases
Let me gather details for each component...
[Continues with file generation, validation, and creation]
```
## Guidelines
- Always run full validation before showing preview
- Show warnings but allow override with explicit confirmation
- Validate skill references exist before creating dependent components
- Suggest related existing skills that could be referenced
- Keep generated content minimal - users can expand after testing

View File

@@ -1,20 +1,66 @@
---
description: Review a Gitea pull request. Fetches PR details, diff, and comments. Includes both code review and software architecture review.
argument-hint: <pr-number>
---
# Review PR #$1
@~/.claude/skills/gitea/SKILL.md
@~/.claude/skills/software-architecture/SKILL.md
## 1. Gather Information
1. **View PR details** with `--comments` flag to see description, metadata, and discussion
2. **Get the diff** to review the changes:
```bash
tea pulls checkout <number>
git diff main...HEAD
```
## 2. Code Review
Review the changes and provide feedback on:
- Code quality and style
- Potential bugs or logic errors
- Test coverage
- Documentation updates
## 3. Software Architecture Review
Spawn the software-architect agent for architectural analysis:
```
Task tool with:
- subagent_type: "software-architect"
- prompt: |
ANALYSIS_TYPE: pr-review
TARGET: <pr-number>
CONTEXT: [Include the PR diff and description]
```
The architecture review checks:
- **Pattern consistency**: Changes follow existing codebase patterns
- **Dependency direction**: Dependencies flow correctly (toward domain layer)
- **Breaking changes**: API changes are flagged and justified
- **Module boundaries**: Changes respect existing package boundaries
- **Error handling**: Errors wrapped with context, proper error types used
## 4. Present Findings
Structure the review with two sections:
### Code Review
- Quality, bugs, style issues
- Test coverage gaps
- Documentation needs
### Architecture Review
- Summary of architectural concerns from agent
- Pattern violations or anti-patterns detected
- Dependency or boundary issues
- Breaking change assessment
## 5. User Actions
Ask the user what action to take:
- **Merge**: Post review summary as comment, then merge with rebase style

commands/spawn-issues.md (new file, +303 lines)
View File

@@ -0,0 +1,303 @@
---
allowed-tools: Bash, Task, Read, TaskOutput
description: Orchestrate parallel issue implementation with review cycles
argument-hint: <issue-number> [<issue-number>...]
---
# Spawn Issues (Orchestrator)
Orchestrate parallel issue implementation: spawn workers, review PRs, fix feedback, until all approved.
## Arguments
One or more issue numbers separated by spaces: `$ARGUMENTS`
Example: `/spawn-issues 42 43 44`
## Orchestration Flow
```
Concurrent Pipeline - each issue flows independently:
Issue #42 ──► worker ──► PR #55 ──► review ──► fix? ──► ✓
Issue #43 ──► worker ──► PR #56 ──► review ──► ✓
Issue #44 ──► worker ──► PR #57 ──► review ──► fix ──► ✓
As each step completes, immediately:
1. Print a status update
2. Start the next step for that issue
Don't wait for all workers before reviewing - pipeline each issue.
```
## Status Updates
Print a brief status update whenever any step completes:
```
[#42] Worker completed → PR #55 created
[#43] Worker completed → PR #56 created
[#42] Review: needs work → spawning fixer
[#43] Review: approved ✓
[#42] Fix completed → re-reviewing
[#44] Worker completed → PR #57 created
[#42] Review: approved ✓
[#44] Review: approved ✓
All done! Final summary:
| Issue | PR | Status |
|-------|-----|----------|
| #42 | #55 | approved |
| #43 | #56 | approved |
| #44 | #57 | approved |
```
## Implementation
### Step 1: Parse and Validate
Parse `$ARGUMENTS` into a list of issue numbers. If empty, inform the user:
```
Usage: /spawn-issues <issue-number> [<issue-number>...]
Example: /spawn-issues 42 43 44
```
### Step 2: Get Repository Info
```bash
REPO_PATH=$(pwd)
REPO_NAME=$(basename $REPO_PATH)
```
### Step 3: Spawn All Issue Workers
For each issue number, spawn a background issue-worker agent and track its task_id:
```
Task tool with:
- subagent_type: "issue-worker"
- run_in_background: true
- prompt: <issue-worker prompt below>
```
Track state for each issue:
```
issues = {
42: { task_id: "xxx", stage: "implementing", pr: null, branch: null, review_iterations: 0 },
43: { task_id: "yyy", stage: "implementing", pr: null, branch: null, review_iterations: 0 },
44: { task_id: "zzz", stage: "implementing", pr: null, branch: null, review_iterations: 0 },
}
```
Print initial status:
```
Spawned 3 issue workers:
[#42] implementing...
[#43] implementing...
[#44] implementing...
```
**Issue Worker Prompt:**
```
You are an issue-worker agent. Implement issue #<NUMBER> autonomously.
Context:
- Repository path: <REPO_PATH>
- Repository name: <REPO_NAME>
- Issue number: <NUMBER>
Process:
1. Setup worktree:
cd <REPO_PATH> && git fetch origin
git worktree add ../<REPO_NAME>-issue-<NUMBER> -b issue-<NUMBER>-<short-title> origin/main
cd ../<REPO_NAME>-issue-<NUMBER>
2. Get issue: tea issues <NUMBER> --comments
3. Plan with TodoWrite, implement the changes
4. Commit: git add -A && git commit -m "...\n\nCloses #<NUMBER>\n\nCo-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>"
5. Push: git push -u origin <branch-name>
6. Create PR: tea pulls create --title "[Issue #<NUMBER>] <title>" --description "Closes #<NUMBER>\n\n..."
Capture the PR number.
7. Cleanup: cd <REPO_PATH> && git worktree remove ../<REPO_NAME>-issue-<NUMBER> --force
8. Output EXACTLY this format (orchestrator parses it):
ISSUE_WORKER_RESULT
issue: <NUMBER>
pr: <PR_NUMBER>
branch: <branch-name>
status: <success|partial|failed>
title: <issue title>
summary: <1-2 sentence description>
Work autonomously. If blocked, note it in PR description and report status as partial/failed.
```
### Step 4: Event-Driven Pipeline
**Do NOT poll.** Wait for `<task-notification>` messages that arrive automatically when background tasks complete.
When a notification arrives:
1. Read the output file to get the result
2. Parse the result and print status update
3. Spawn the next stage (reviewer/fixer) in background
4. Continue waiting for more notifications
```
On <task-notification> for task_id X:
- Find which issue this task belongs to
- Read output file, parse result
- Print status update
- If not terminal state, spawn next agent in background
- Update issue state
- If all issues terminal, print final summary
```
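If it helps to pull fields out of a worker's output file mechanically rather than reading it directly, a minimal sketch - the output-file path is a placeholder supplied by the task notification, and the field names follow the `ISSUE_WORKER_RESULT` format above:
```bash
OUTPUT_FILE=<task-output-file>   # placeholder - path comes from the task notification
ISSUE=$(grep '^issue:' "$OUTPUT_FILE" | awk '{print $2}')
PR=$(grep '^pr:' "$OUTPUT_FILE" | awk '{print $2}')
STATUS=$(grep '^status:' "$OUTPUT_FILE" | awk '{print $2}')
echo "[#$ISSUE] worker done → PR #$PR ($STATUS)"
```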
**State transitions:**
```
implementing → (worker done) → reviewing → (approved) → DONE
→ (needs-work) → fixing → reviewing...
→ (3 iterations) → needs-manual-review
→ (worker failed) → FAILED
```
**On each notification, print status:**
```
[#42] Worker completed → PR #55 created, starting review
[#43] Worker completed → PR #56 created, starting review
[#42] Review: needs work → spawning fixer
[#43] Review: approved ✓
[#42] Fix completed → re-reviewing
[#44] Worker completed → PR #57 created, starting review
[#42] Review: approved ✓
[#44] Review: approved ✓
```
### Step 5: Spawn Reviewers and Fixers
When spawning reviewers, pass the PR number AND branch name from the issue worker result.
Each reviewer/fixer uses its own worktree for isolation - this prevents parallel agents from interfering with each other.
**Code Reviewer:**
```
Task tool with:
- subagent_type: "code-reviewer"
- run_in_background: true
- prompt: <code-reviewer prompt below>
```
**Code Reviewer Prompt:**
```
You are a code-reviewer agent. Review PR #<PR_NUMBER> autonomously.
Context:
- Repository path: <REPO_PATH>
- Repository name: <REPO_NAME>
- PR number: <PR_NUMBER>
- PR branch: <BRANCH_NAME>
Process:
1. Setup worktree for isolated review:
cd <REPO_PATH> && git fetch origin
git worktree add ../<REPO_NAME>-review-<PR_NUMBER> origin/<BRANCH_NAME>
cd ../<REPO_NAME>-review-<PR_NUMBER>
2. Get PR details: tea pulls <PR_NUMBER> --comments
3. Review the diff: git diff origin/main...HEAD
4. Analyze changes for:
- Code quality and style
- Potential bugs or logic errors
- Test coverage
- Documentation
5. Post review comment: tea comment <PR_NUMBER> "<review summary>"
6. Cleanup: cd <REPO_PATH> && git worktree remove ../<REPO_NAME>-review-<PR_NUMBER> --force
7. Output EXACTLY this format:
REVIEW_RESULT
pr: <PR_NUMBER>
verdict: <approved|needs-work>
summary: <1-2 sentences>
Work autonomously. Be constructive but thorough.
```
**PR Fixer Prompt:** (see below)
### Step 6: Final Report
When all issues reach terminal state, display summary:
```
All done!
| Issue | PR | Status |
|-------|-----|---------------------|
| #42 | #55 | approved |
| #43 | #56 | approved |
| #44 | #57 | approved |
3 PRs created and approved
```
## PR Fixer
When spawning pr-fixer for a PR that needs work:
```
Task tool with:
- subagent_type: "pr-fixer"
- run_in_background: true
- prompt: <pr-fixer prompt below>
```
**PR Fixer Prompt:**
```
You are a pr-fixer agent. Address review feedback on PR #<NUMBER>.
Context:
- Repository path: <REPO_PATH>
- Repository name: <REPO_NAME>
- PR number: <NUMBER>
Process:
1. Get feedback: tea pulls <NUMBER> --comments
2. Setup worktree from PR branch:
cd <REPO_PATH> && git fetch origin
git worktree add ../<REPO_NAME>-pr-<NUMBER> origin/<branch-name>
cd ../<REPO_NAME>-pr-<NUMBER>
git checkout <branch-name>
3. Address each piece of feedback
4. Commit and push:
git add -A && git commit -m "Address review feedback\n\nCo-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>"
git push
5. Cleanup: cd <REPO_PATH> && git worktree remove ../<REPO_NAME>-pr-<NUMBER> --force
6. Output EXACTLY:
PR_FIXER_RESULT
pr: <NUMBER>
status: <fixed|partial|failed>
changes: <summary of fixes>
Work autonomously. If feedback is unclear, make reasonable judgment calls.
```
## Error Handling
- If an issue-worker fails, continue with others
- If a review fails, mark as "review-failed" and continue
- If pr-fixer fails after 3 iterations, mark as "needs-manual-review"
- Always report final status even if some items failed

commands/spawn-pr-fixes.md (new file, +121 lines)
View File

@@ -0,0 +1,121 @@
---
allowed-tools: Bash, Task, Read
description: Spawn parallel background agents to address PR review feedback
argument-hint: [pr-number...]
---
# Spawn PR Fixes
Spawn background agents to address review feedback on multiple PRs in parallel. Each agent works in an isolated git worktree.
## Arguments
Optional PR numbers separated by spaces: `$ARGUMENTS`
- With arguments: `/spawn-pr-fixes 12 15 18` - fix specific PRs
- Without arguments: `/spawn-pr-fixes` - find and fix all PRs with requested changes
## Process
### Step 1: Get Repository Info
```bash
REPO_PATH=$(pwd)
REPO_NAME=$(basename $REPO_PATH)
```
### Step 2: Determine PRs to Fix
**If PR numbers provided**: Use those directly
**If no arguments**: Find PRs needing work
```bash
# List open PRs
tea pulls --state open
# For each PR, check if it has review comments requesting changes
tea pulls <number> --comments
```
Look for PRs where:
- Review comments exist that haven't been addressed
- PR is not approved yet
- PR is open (not merged/closed)
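A rough filtering sketch - the number extraction and the phrase matching are assumptions about `tea`'s table output and reviewer wording; adjust to what the commands actually print:
```bash
# Hypothetical filter: keep open PRs whose comments look like requested changes
for pr in $(tea pulls --state open | grep -oE '#[0-9]+' | tr -d '#' | sort -un); do
  if tea pulls "$pr" --comments | grep -qiE 'request(ed)? changes|needs work'; then
    echo "PR #$pr appears to need fixes"
  fi
done
```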
### Step 3: For Each PR
1. Fetch PR title using `tea pulls <number>`
2. Spawn background agent using Task tool:
```
Task tool with:
- subagent_type: "pr-fixer"
- run_in_background: true
- prompt: See agent prompt below
```
### Agent Prompt
For each PR, use this prompt:
```
You are a pr-fixer agent. Address review feedback on PR #<NUMBER> autonomously.
Context:
- Repository path: <REPO_PATH>
- Repository name: <REPO_NAME>
- PR number: <NUMBER>
Instructions from @agents/pr-fixer/agent.md:
1. Get PR details and review comments:
cd <REPO_PATH>
git fetch origin
tea pulls <NUMBER> --comments
2. Setup worktree from PR branch:
git worktree add ../<REPO_NAME>-pr-<NUMBER> origin/<branch-name>
cd ../<REPO_NAME>-pr-<NUMBER>
git checkout <branch-name>
3. Analyze feedback, create todos with TodoWrite
4. Address each piece of feedback
5. Commit and push:
git add -A && git commit with message "Address review feedback\n\n...\n\nCo-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>"
git push
6. Spawn code-reviewer synchronously (NOT in background) to re-review
7. If needs more work, fix and re-review (max 3 iterations)
8. Cleanup (ALWAYS do this):
cd <REPO_PATH> && git worktree remove ../<REPO_NAME>-pr-<NUMBER> --force
9. Output concise summary (5-10 lines max):
PR #<NUMBER>: <title>
Status: <fixed|partial|blocked>
Feedback addressed: <count> items
Review: <approved|needs-work|skipped>
Work autonomously. Make judgment calls on ambiguous feedback. If blocked, note it in a commit message.
```
### Step 4: Report
After spawning all agents, display:
```
Spawned <N> pr-fixer agents:
| PR | Title | Status |
|-----|--------------------------|------------|
| #12 | Add /commit command | spawned |
| #15 | Add /pr command | spawned |
| #18 | Add CI status | spawned |
Agents working in background. Monitor with:
- Check PR list: tea pulls
- Check worktrees: git worktree list
```

View File

@@ -6,12 +6,14 @@ argument-hint: <issue-number>
# Work on Issue #$1
@~/.claude/skills/gitea/SKILL.md
@~/.claude/skills/software-architecture/SKILL.md
1. **View the issue** with `--comments` flag to understand requirements and context
2. **Create a branch**: `git checkout -b issue-$1-<short-kebab-title>`
3. **Plan**: Use TodoWrite to break down the work based on acceptance criteria
4. **Check architecture**: Review the project's vision.md Architecture section for project-specific patterns and divergences
5. **Implement** the changes following architectural patterns (DDD, event sourcing where appropriate)
6. **Commit** with message referencing the issue
7. **Push** the branch to origin
8. **Create PR** with title "[Issue #$1] <title>" and body "Closes #$1"
9. **Auto-review**: Inform the user that auto-review is starting, then spawn the `code-reviewer` agent in background (using `run_in_background: true`) with the PR number
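A sketch of the step-9 spawn, mirroring the Task-tool blocks used elsewhere in this repository (the PR number comes from step 8):
```
Task tool with:
- subagent_type: "code-reviewer"
- run_in_background: true
- prompt: "Review PR #<PR_NUMBER>"
```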

View File

@@ -51,6 +51,20 @@ We believe AI fundamentally changes how software is built:
- **Iteration speed is a competitive advantage.** The faster you can go from idea to deployed code to learning, the faster you improve. AI collapses the feedback loop.
### Architecture Beliefs
We believe certain outcomes matter more than others when building systems:
- **Auditability by default.** Systems should remember what happened, not just current state. History is valuable - for debugging, compliance, understanding, and recovery.
- **Business language in code.** The words domain experts use should appear in the codebase. When code mirrors how the business thinks, everyone can reason about it.
- **Independent evolution.** Parts of the system should change without breaking other parts. Loose coupling isn't just nice - it's how small teams stay fast as systems grow.
- **Explicit over implicit.** Intent should be visible. Side effects should be traceable. When something important happens, the system should make that obvious.
See [software-architecture.md](./software-architecture.md) for the patterns we use to achieve these outcomes.
### Quality Without Ceremony
- Ship small, ship often

View File

@@ -0,0 +1,424 @@
---
name: capability-writing
description: >
Guide for designing and creating capabilities for the architecture repository.
A capability is a cohesive set of components (skill + command + agent).
Use when creating new skills, commands, or agents, or when extending the
AI workflow system. Includes templates, design guidance, and conventions.
user-invocable: false
---
# Capability Writing
How to design and create capabilities for the architecture repository. A capability is often a cohesive set of components (skill + command + agent) that work together.
## Component Overview
| Component | Location | Purpose | Example |
|-----------|----------|---------|---------|
| **Skill** | `skills/name/SKILL.md` | Knowledge Claude applies automatically | software-architecture |
| **Command** | `commands/name.md` | User-invoked workflow entry point | /work-issue |
| **Agent** | `agents/name/AGENT.md` | Isolated subtask handler with focused context | code-reviewer |
## When to Use Each Component
### Decision Tree
```
Start here: What do you need?
|
+--> Just knowledge to apply automatically?
|       --> Skill only
|
+--> User-initiated workflow using existing knowledge?
|       --> Command (reference skills via @)
|
+--> Complex isolated work needing focused context?
|       --> Command + Agent (agent uses skills)
|
+--> New domain expertise + workflow + isolated work?
        --> Full capability (all three)
```
### Decision Matrix
| Need | Component | Example |
|------|-----------|---------|
| Knowledge Claude should apply automatically | Skill | software-architecture, issue-writing |
| User-invoked workflow | Command | /work-issue, /dashboard |
| Isolated subtask with focused context | Agent | code-reviewer, issue-worker |
| All three working together | Full capability | arch-review (skill + command + agent) |
### Signs You Need Each Component
**Create a Skill when:**
- You explain the same concepts repeatedly
- Quality is inconsistent without explicit guidance
- Multiple commands need the same knowledge
- There is a clear domain that does not fit existing skills
**Create a Command when:**
- Same workflow is used multiple times
- User explicitly triggers the action
- Approval checkpoints are needed
- Multiple tools need orchestration
**Create an Agent when:**
- Task requires deep exploration that would pollute main context
- Multiple skills work better together
- Batch processing or parallel execution is needed
- Specialist persona improves outputs
## Component Templates
### Skill Template
Location: `skills/<name>/SKILL.md`
```yaml
---
name: skill-name
description: >
What this skill teaches and when to use it.
Include trigger conditions in description (not body).
List specific capabilities users would mention.
user-invocable: false
---
# Skill Name
Brief description of what this skill covers.
## Core Concepts
Explain fundamental ideas Claude needs to understand.
## Patterns and Templates
Provide reusable structures and formats.
## Guidelines
List rules, best practices, and quality standards.
## Examples
Show concrete illustrations of the skill in action.
## Common Mistakes
Document pitfalls to avoid.
## Reference
Quick-reference tables, checklists, or commands.
```
**Frontmatter fields:**
| Field | Required | Description |
|-------|----------|-------------|
| `name` | Yes | Lowercase, hyphens, matches directory name |
| `description` | Yes | What it does + when to use (max 1024 chars) |
| `user-invocable` | No | Set `false` for reference-only skills |
| `model` | No | Specific model: `haiku`, `sonnet`, `opus` |
| `context` | No | Use `fork` for isolated context |
| `allowed-tools` | No | Restrict available tools |
### Command Template
Location: `commands/<name>.md`
```yaml
---
description: What this command does (one-line summary)
argument-hint: <required> [optional]
model: sonnet
---
# Command Title
@~/.claude/skills/relevant-skill/SKILL.md
1. **First step**: What to do
2. **Second step**: What to do next
3. **Ask for approval** before significant actions
4. **Execute** the approved actions
5. **Present results** with links and summary
```
**Frontmatter fields:**
| Field | Required | Description |
|-------|----------|-------------|
| `description` | Yes | One-line summary for help/listings |
| `argument-hint` | No | Shows expected args: `<required>`, `[optional]` |
| `model` | No | Override model: `haiku`, `sonnet`, `opus` |
| `context` | No | Use `fork` for isolated context |
| `allowed-tools` | No | Restrict available tools |
### Agent Template
Location: `agents/<name>/AGENT.md`
```yaml
---
name: agent-name
description: What this agent does and when to spawn it
model: sonnet
skills: skill1, skill2
disallowedTools:
- Edit
- Write
---
You are a [role] specialist that [primary function].
## When Invoked
Describe the process the agent follows:
1. **Gather context**: What information to collect
2. **Analyze**: What to evaluate
3. **Act**: What actions to take
4. **Report**: How to communicate results
## Output Format
Describe expected output structure.
## Guidelines
- Behavioral rules
- Constraints
- Quality standards
```
**Frontmatter fields:**
| Field | Required | Description |
|-------|----------|-------------|
| `name` | Yes | Lowercase, hyphens, matches directory name |
| `description` | Yes | What it does + when to spawn |
| `model` | No | `haiku`, `sonnet`, `opus`, or `inherit` |
| `skills` | No | Comma-separated skill names (not paths) |
| `disallowedTools` | No | Tools to block (e.g., Edit, Write for read-only) |
| `permissionMode` | No | `default` or `bypassPermissions` |
## Model Selection Guidance
| Model | Use When | Examples |
|-------|----------|----------|
| `haiku` | Simple fetch/display, formatting, mechanical tasks | /dashboard, /roadmap |
| `sonnet` | Most commands and agents, balanced performance | /work-issue, issue-worker, code-reviewer |
| `opus` | Deep reasoning, architectural analysis, complex judgment | software-architect, security auditor |
### Decision Criteria
- **Start with `sonnet`** - handles most tasks well
- **Use `haiku` for volume** - speed and cost matter at scale
- **Reserve `opus` for judgment** - when errors are costly or reasoning is complex
- **Consider the stakes** - higher consequence tasks warrant more capable models
## Naming Conventions
### File and Folder Names
| Component | Convention | Examples |
|-----------|------------|----------|
| Skill folder | kebab-case | `software-architecture`, `issue-writing` |
| Skill file | UPPERCASE | `SKILL.md` |
| Command file | kebab-case | `work-issue.md`, `review-pr.md` |
| Agent folder | kebab-case | `code-reviewer`, `issue-worker` |
| Agent file | UPPERCASE | `AGENT.md` |
### Naming Patterns
**Skills:** Name after the domain or knowledge area
- Good: `gitea`, `issue-writing`, `software-architecture`
- Bad: `utils`, `helpers`, `misc`
**Commands:** Use verb or verb-phrase (actions)
- Good: `work-issue`, `review-pr`, `create-issue`
- Bad: `issue-work`, `pr-review`, `issue`
**Agents:** Name by role or persona (recognizable specialist)
- Good: `code-reviewer`, `issue-worker`, `software-architect`
- Bad: `helper`, `do-stuff`, `agent1`
## Referencing Skills
### In Commands
Use the `@` file reference syntax to guarantee skill content is loaded:
```markdown
@~/.claude/skills/gitea/SKILL.md
@~/.claude/skills/issue-writing/SKILL.md
```
**Important:** Do NOT use phrases like "Use the gitea skill" - skills have only ~20% auto-activation rate. File references guarantee the content is available.
### In Agents
List skill names in the frontmatter (not paths):
```yaml
---
name: product-manager
skills: gitea, issue-writing, backlog-grooming
---
```
The agent runtime loads these skills automatically.
## Common Patterns
### Approval Workflow (Commands)
Always ask before significant actions:
```markdown
4. **Present plan** for approval
5. **If approved**, create the issues
6. **Present summary** with links
```
### Conditional Behavior (Commands)
Handle optional arguments with mode switching:
```markdown
## If issue number provided ($1):
1. Fetch specific issue
2. Process it
## If no argument (batch mode):
1. List all issues
2. Process each
```
### Spawning Agents from Commands
Delegate complex subtasks:
```markdown
9. **Auto-review**: Spawn the `code-reviewer` agent with the PR number
```
### Read-Only Agents
For analysis without modification:
```yaml
---
name: code-reviewer
disallowedTools:
- Edit
- Write
---
```
## Anti-Patterns to Avoid
### Overly Broad Components
**Bad:** One skill/command/agent that does everything
```markdown
# Project Management
Handles issues, PRs, releases, documentation, deployment...
```
**Good:** Focused components with clear responsibility
```markdown
# Issue Writing
How to write clear, actionable issues.
```
### Vague Instructions
**Bad:**
```markdown
1. Handle the issue
2. Do the work
3. Finish up
```
**Good:**
```markdown
1. **View the issue** with `--comments` flag
2. **Create branch**: `git checkout -b issue-$1-<title>`
3. **Commit** with message referencing the issue
```
### Missing Skill References
**Bad:**
```markdown
Use the gitea skill to create an issue.
```
**Good:**
```markdown
@~/.claude/skills/gitea/SKILL.md
Use `tea issues create --title "..." --description "..."`
```
### God Skills
**Bad:** Single skill with 1000+ lines covering unrelated topics
**Good:** Multiple focused skills that reference each other
### Premature Agent Creation
**Bad:** Creating an agent for every task
**Good:** Use agents only when you need:
- Context isolation
- Skill composition
- Parallel execution
- Specialist persona
## Detailed Documentation
For comprehensive guides, see the `docs/` directory:
- `docs/writing-skills.md` - Complete skill writing guide
- `docs/writing-commands.md` - Complete command writing guide
- `docs/writing-agents.md` - Complete agent writing guide
These documents include:
- Full frontmatter reference
- Annotated examples from the codebase
- Lifecycle management
- Integration checklists
## Checklists
### Before Creating a Skill
- [ ] Knowledge is used in multiple places (not just once)
- [ ] Existing skills do not already cover this domain
- [ ] Content is specific and actionable (not generic)
- [ ] Frontmatter has descriptive `description` with trigger terms
- [ ] File at `skills/<name>/SKILL.md`
### Before Creating a Command
- [ ] Workflow is repeatable (used multiple times)
- [ ] User explicitly triggers it (not automatic)
- [ ] Clear start and end points
- [ ] Skills referenced via `@~/.claude/skills/<name>/SKILL.md`
- [ ] Approval checkpoints before significant actions
- [ ] File at `commands/<name>.md`
### Before Creating an Agent
- [ ] Built-in agents (Explore, Plan) are not sufficient
- [ ] Context isolation or skill composition is needed
- [ ] Clear role/persona emerges
- [ ] `model` selection is deliberate (not just `inherit`)
- [ ] `skills` list is right-sized (not too many)
- [ ] File at `agents/<name>/AGENT.md`

View File

@@ -0,0 +1,632 @@
---
name: software-architecture
description: >
Architectural patterns for building systems: DDD, Event Sourcing, event-driven communication.
Use when implementing features, reviewing code, planning issues, refining architecture,
or making design decisions. Ensures alignment with organizational beliefs about
auditability, domain modeling, and independent evolution.
user-invocable: false
---
# Software Architecture
Architectural patterns and best practices. This skill is auto-triggered when implementing, reviewing, or planning work that involves architectural decisions.
## Architecture Beliefs
These outcome-focused beliefs (from our organization manifesto) guide architectural decisions:
| Belief | Why It Matters |
|--------|----------------|
| **Auditability by default** | Systems should remember what happened, not just current state |
| **Business language in code** | Domain experts' words should appear in the codebase |
| **Independent evolution** | Parts should change without breaking other parts |
| **Explicit over implicit** | Intent and side effects should be visible and traceable |
## Beliefs → Patterns
| Belief | Primary Pattern | Supporting Patterns |
|--------|-----------------|---------------------|
| Auditability by default | Event Sourcing | Immutable events, temporal queries |
| Business language in code | Domain-Driven Design | Ubiquitous language, aggregates, bounded contexts |
| Independent evolution | Event-driven communication | Bounded contexts, published language |
| Explicit over implicit | Commands and Events | Domain events, clear intent |
## Event Sourcing
**Achieves:** Auditability by default
Instead of storing current state, store the sequence of events that led to it.
**Core concepts:**
- **Events** are immutable facts about what happened, named in past tense: `OrderPlaced`, `PaymentReceived`
- **State** is derived by replaying events, not stored directly
- **Event store** is append-only - history is never modified
**Why this matters:**
- Complete audit trail for free
- Debug by replaying history
- Answer "what was the state at time X?"
- Recover from bugs by fixing logic and replaying
**Trade-offs:**
- More complex than CRUD for simple cases
- Requires thinking in events, not state
- Eventually consistent read models
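A minimal sketch of this in Go - illustrative only; the aggregate and event names are hypothetical, not taken from a real project:
```go
package order

import "time"

// Event is an immutable fact; implementations are named in past tense.
type Event interface {
    OccurredAt() time.Time
}

type OrderPlaced struct {
    OrderID string
    At      time.Time
}

func (e OrderPlaced) OccurredAt() time.Time { return e.At }

type PaymentReceived struct {
    OrderID string
    Amount  int64 // cents
    At      time.Time
}

func (e PaymentReceived) OccurredAt() time.Time { return e.At }

// Order state is never stored directly; it is derived from the event history.
type Order struct {
    ID         string
    AmountPaid int64
}

// Replay folds an append-only event stream into current state.
func Replay(events []Event) Order {
    var o Order
    for _, e := range events {
        switch ev := e.(type) {
        case OrderPlaced:
            o.ID = ev.OrderID
        case PaymentReceived:
            o.AmountPaid += ev.Amount
        }
    }
    return o
}
```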
## Domain-Driven Design
**Achieves:** Business language in code
The domain model reflects how the business thinks and talks.
**Core concepts:**
- **Ubiquitous language** - same terms in code, conversations, and documentation
- **Bounded contexts** - explicit boundaries where terms have consistent meaning
- **Aggregates** - clusters of objects that change together, with one root entity
- **Domain events** - capture what happened in business terms
**Why this matters:**
- Domain experts can read and validate the model
- New team members learn the domain through code
- Changes in business rules map clearly to code changes
**Trade-offs:**
- Upfront investment in understanding the domain
- Boundaries may need to shift as understanding grows
- Overkill for pure technical/infrastructure code
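A small illustrative aggregate in Go - the domain and invariant are hypothetical, but the shape shows the ubiquitous language (business verbs as methods) and a domain event as the outcome:
```go
package subscription

import "errors"

// ErrAlreadyCancelled is an invariant the aggregate enforces itself.
var ErrAlreadyCancelled = errors.New("subscription already cancelled")

// SubscriptionCancelled is a domain event named in business language.
type SubscriptionCancelled struct {
    SubscriptionID string
    Reason         string
}

// Subscription is the aggregate root: all changes go through its methods.
type Subscription struct {
    ID        string
    Cancelled bool
}

// Cancel uses the verb the business uses, checks the invariant,
// and returns the resulting domain event.
func (s *Subscription) Cancel(reason string) (SubscriptionCancelled, error) {
    if s.Cancelled {
        return SubscriptionCancelled{}, ErrAlreadyCancelled
    }
    s.Cancelled = true
    return SubscriptionCancelled{SubscriptionID: s.ID, Reason: reason}, nil
}
```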
## Event-Driven Communication
**Achieves:** Independent evolution
Services communicate by publishing events, not calling each other directly.
**Core concepts:**
- **Publish events** when something important happens
- **Subscribe to events** you care about
- **No direct dependencies** between publisher and subscriber
- **Eventual consistency** - accept that not everything updates instantly
**Why this matters:**
- Add new services without changing existing ones
- Services can be deployed independently
- Natural resilience - if a subscriber is down, events queue
**Trade-offs:**
- Harder to trace request flow
- Eventual consistency requires different thinking
- Need infrastructure for reliable event delivery
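A deliberately minimal in-process sketch in Go. A real system would use a broker (NATS, Kafka, or similar) for reliable delivery, but the shape is the same: publisher and subscriber know only the event, never each other:
```go
package events

import "sync"

// Bus is a minimal in-process publish/subscribe hub.
type Bus struct {
    mu       sync.RWMutex
    handlers map[string][]func(payload any)
}

func NewBus() *Bus {
    return &Bus{handlers: make(map[string][]func(payload any))}
}

// Subscribe registers interest in an event by name.
func (b *Bus) Subscribe(event string, handler func(payload any)) {
    b.mu.Lock()
    defer b.mu.Unlock()
    b.handlers[event] = append(b.handlers[event], handler)
}

// Publish delivers the event to every subscriber; the publisher has no
// direct dependency on who (if anyone) is listening.
func (b *Bus) Publish(event string, payload any) {
    b.mu.RLock()
    defer b.mu.RUnlock()
    for _, h := range b.handlers[event] {
        h(payload)
    }
}
```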
## Commands and Events
**Achieves:** Explicit over implicit
Distinguish between requests (commands) and facts (events).
**Core concepts:**
- **Commands** express intent: `PlaceOrder`, `CancelSubscription`
- Commands can be rejected (validation, business rules)
- **Events** express facts: `OrderPlaced`, `SubscriptionCancelled`
- Events are immutable - what happened, happened
**Why this matters:**
- Clear separation of "trying to do X" vs "X happened"
- Commands validate, events just record
- Enables replay - reprocess events with new logic
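A short sketch in Go with hypothetical names - the command may be rejected by validation; only when it passes is the immutable event produced:
```go
package billing

import "errors"

// PlaceOrder is a command: a request that may be rejected.
type PlaceOrder struct {
    OrderID string
    Amount  int64 // cents
}

// OrderPlaced is an event: a fact that has already happened.
type OrderPlaced struct {
    OrderID string
    Amount  int64
}

var ErrInvalidAmount = errors.New("amount must be positive")

// HandlePlaceOrder validates the command; only if it passes is the
// immutable event produced (and, in a real system, appended to the event store).
func HandlePlaceOrder(cmd PlaceOrder) (OrderPlaced, error) {
    if cmd.Amount <= 0 {
        return OrderPlaced{}, ErrInvalidAmount
    }
    return OrderPlaced{OrderID: cmd.OrderID, Amount: cmd.Amount}, nil
}
```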
## When to Diverge
These patterns are defaults, not mandates. Diverge intentionally when:
- **Simplicity wins** - a simple CRUD endpoint doesn't need event sourcing
- **Performance requires it** - sometimes synchronous calls are necessary
- **Team context** - patterns the team doesn't understand cause more harm than good
- **Prototyping** - validate ideas before investing in full architecture
When diverging, document the decision in the project's `vision.md` Architecture section.
## Project-Level Architecture
Each project documents architectural choices in `vision.md`:
```markdown
## Architecture
This project follows organization architecture patterns.
### Alignment
- Event sourcing for [which aggregates/domains]
- Bounded contexts: [list contexts and their responsibilities]
- Event-driven communication between [which services]
### Intentional Divergences
| Area | Standard Pattern | What We Do Instead | Why |
|------|------------------|-------------------|-----|
```
## Go-Specific Best Practices
### Package Organization
**Good package structure:**
```
project/
├── cmd/                 # Application entry points
│   └── server/
│       └── main.go
├── internal/            # Private packages
│   ├── domain/          # Core business logic
│   │   ├── user/
│   │   └── order/
│   ├── service/         # Application services
│   ├── repository/      # Data access
│   └── handler/         # HTTP/gRPC handlers
├── pkg/                 # Public, reusable packages
└── go.mod
```
**Package naming:**
- Short, concise, lowercase: `user`, `order`, `auth`
- Avoid generic names: `util`, `common`, `helpers`, `misc`
- Name after what it provides, not what it contains
- One package per concept, not per file
**Package cohesion:**
- A package should have a single, focused responsibility
- Package internal files can use internal types freely
- Minimize exported types - export interfaces, hide implementations
### Interfaces
**Accept interfaces, return structs:**
```go
// Good: Accept interface, return concrete type
func NewUserService(repo UserRepository) *UserService {
    return &UserService{repo: repo}
}

// Bad: Accept and return interface
func NewUserService(repo UserRepository) UserService {
    return &userService{repo: repo}
}
```
**Define interfaces at point of use:**
```go
// Good: Interface defined where it's used (consumer owns the interface)
package service

type UserRepository interface {
    FindByID(ctx context.Context, id string) (*User, error)
}

// Bad: Interface defined with implementation (producer owns the interface)
package repository

type UserRepository interface {
    FindByID(ctx context.Context, id string) (*User, error)
}
```
**Keep interfaces small:**
- Prefer single-method interfaces
- Large interfaces indicate missing abstraction
- Compose small interfaces when needed
### Error Handling
**Wrap errors with context:**
```go
// Good: Wrap with context
if err != nil {
    return fmt.Errorf("fetching user %s: %w", id, err)
}

// Bad: Return bare error
if err != nil {
    return err
}
```
**Use sentinel errors for expected conditions:**
```go
var ErrNotFound = errors.New("not found")
var ErrConflict = errors.New("conflict")

// Check with errors.Is
if errors.Is(err, ErrNotFound) {
    // handle not found
}
```
**Error types for rich errors:**
```go
type ValidationError struct {
    Field   string
    Message string
}

func (e *ValidationError) Error() string {
    return fmt.Sprintf("%s: %s", e.Field, e.Message)
}

// Check with errors.As
var valErr *ValidationError
if errors.As(err, &valErr) {
    // handle validation error
}
```
### Dependency Injection
**Constructor injection:**
```go
type UserService struct {
    repo   UserRepository
    logger Logger
}

func NewUserService(repo UserRepository, logger Logger) *UserService {
    return &UserService{
        repo:   repo,
        logger: logger,
    }
}
```
**Wire dependencies in main:**
```go
func main() {
    // Create dependencies
    db := database.Connect()
    logger := slog.Default()

    // Wire up services
    userRepo := repository.NewUserRepository(db)
    userService := service.NewUserService(userRepo, logger)
    userHandler := handler.NewUserHandler(userService)

    // Start server
    http.Handle("/users", userHandler)
    http.ListenAndServe(":8080", nil)
}
```
**Avoid global state:**
- No `init()` for service initialization
- No package-level variables for dependencies
- Pass context explicitly, don't store in structs
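A self-contained sketch of these rules (the `user` package and its types are hypothetical): dependencies arrive through the constructor, and `context.Context` travels as the first parameter of each call instead of living in a struct or global:
```go
package user

import (
    "context"
    "fmt"
)

type User struct {
    ID     string
    Active bool
}

// Repository receives ctx per call; it never stores one.
type Repository interface {
    FindByID(ctx context.Context, id string) (*User, error)
    Save(ctx context.Context, u *User) error
}

// Service holds only injected collaborators - no package-level variables, no init().
type Service struct {
    repo Repository
}

func NewService(repo Repository) *Service {
    return &Service{repo: repo}
}

// Deactivate takes ctx explicitly as the first parameter instead of storing it.
func (s *Service) Deactivate(ctx context.Context, id string) error {
    u, err := s.repo.FindByID(ctx, id)
    if err != nil {
        return fmt.Errorf("loading user %s: %w", id, err)
    }
    u.Active = false
    return s.repo.Save(ctx, u)
}
```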
### Testing
**Table-driven tests:**
```go
func TestUserService_Create(t *testing.T) {
    tests := []struct {
        name    string
        input   CreateUserInput
        want    *User
        wantErr error
    }{
        {
            name:  "valid user",
            input: CreateUserInput{Email: "test@example.com"},
            want:  &User{Email: "test@example.com"},
        },
        {
            name:    "invalid email",
            input:   CreateUserInput{Email: "invalid"},
            wantErr: ErrInvalidEmail,
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            // arrange, act, assert
        })
    }
}
```
**Test doubles:**
- Use interfaces for test doubles
- Prefer hand-written mocks over generated ones for simple cases
- Use `testify/mock` or `gomock` for complex mocking needs
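For example, a hand-written fake against the hypothetical `user` package sketched in the dependency-injection section above (the import path is a placeholder):
```go
package user_test

import (
    "context"
    "errors"
    "testing"

    "example.com/project/internal/user" // placeholder import path
)

// fakeRepo is a hand-written test double: a few lines, no mocking library.
type fakeRepo struct {
    users map[string]*user.User
    saved []*user.User
}

func (f *fakeRepo) FindByID(ctx context.Context, id string) (*user.User, error) {
    u, ok := f.users[id]
    if !ok {
        return nil, errors.New("not found")
    }
    return u, nil
}

func (f *fakeRepo) Save(ctx context.Context, u *user.User) error {
    f.saved = append(f.saved, u)
    return nil
}

func TestService_Deactivate(t *testing.T) {
    repo := &fakeRepo{users: map[string]*user.User{"42": {ID: "42", Active: true}}}
    svc := user.NewService(repo)

    if err := svc.Deactivate(context.Background(), "42"); err != nil {
        t.Fatalf("Deactivate: %v", err)
    }
    if len(repo.saved) != 1 || repo.saved[0].Active {
        t.Fatal("expected the deactivated user to be saved")
    }
}
```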
**Test package naming:**
- `package user_test` for black-box testing (preferred)
- `package user` for white-box testing when needed
## Generic Architecture Patterns
### Layered Architecture
```
┌─────────────────────────────────┐
│          Presentation           │  HTTP handlers, CLI, gRPC
├─────────────────────────────────┤
│          Application            │  Use cases, orchestration
├─────────────────────────────────┤
│             Domain              │  Business logic, entities
├─────────────────────────────────┤
│         Infrastructure          │  Database, external services
└─────────────────────────────────┘
```
**Rules:**
- Dependencies point downward only
- Upper layers depend on interfaces, not implementations
- Domain layer has no external dependencies
### SOLID Principles
**Single Responsibility (S):**
- Each module has one reason to change
- Split code that changes for different reasons
**Open/Closed (O):**
- Open for extension, closed for modification
- Add new behavior through new types, not changing existing ones
**Liskov Substitution (L):**
- Subtypes must be substitutable for their base types
- Interfaces should be implementable without surprises
**Interface Segregation (I):**
- Clients shouldn't depend on interfaces they don't use
- Prefer many small interfaces over few large ones
**Dependency Inversion (D):**
- High-level modules shouldn't depend on low-level modules
- Both should depend on abstractions
### Dependency Direction
```
               ┌──────────────┐
               │    Domain    │
               │  (no deps)   │
               └──────────────┘
         ┌────────────┴────────────┐
         │                         │
 ┌───────┴───────┐        ┌───────┴───────┐
 │  Application  │        │Infrastructure │
 │ (uses domain) │        │(implements    │
 └───────────────┘        │ domain intf)  │
         ▲                └───────────────┘
 ┌───────┴───────┐
 │ Presentation  │
 │  (calls app)  │
 └───────────────┘
```
**Key insight:** Infrastructure implements domain interfaces, doesn't define them. This inverts the "natural" dependency direction.
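An illustrative sketch across two hypothetical packages - the domain owns the interface, infrastructure implements it (module path and SQL schema are placeholders):
```go
// Domain package: owns the interface, knows nothing about databases.
package payment

import "context"

type Payment struct {
    ID     string
    Amount int64 // cents
}

// Repository is defined by the domain; infrastructure must conform to it.
type Repository interface {
    Save(ctx context.Context, p Payment) error
}

// Infrastructure package: depends on the domain and implements its interface.
package postgres

import (
    "context"
    "database/sql"

    "example.com/project/internal/payment" // placeholder module path
)

type PaymentRepository struct {
    db *sql.DB
}

func NewPaymentRepository(db *sql.DB) *PaymentRepository {
    return &PaymentRepository{db: db}
}

func (r *PaymentRepository) Save(ctx context.Context, p payment.Payment) error {
    _, err := r.db.ExecContext(ctx,
        "INSERT INTO payments (id, amount) VALUES ($1, $2)", p.ID, p.Amount)
    return err
}
```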
### Module Boundaries
**Signs of good boundaries:**
- Modules can be understood in isolation
- Changes are localized within modules
- Clear, minimal public API
- Dependencies flow in one direction
**Signs of bad boundaries:**
- Circular dependencies between modules
- "Shotgun surgery" - small changes require many file edits
- Modules reach into each other's internals
- Unclear ownership of concepts
## Repository Health Indicators
### Positive Indicators
| Indicator | What to Look For |
|-----------|------------------|
| Clear structure | Obvious package organization, consistent naming |
| Small interfaces | Most interfaces have 1-3 methods |
| Explicit dependencies | Constructor injection, no globals |
| Test coverage | Unit tests for business logic, integration tests for boundaries |
| Error handling | Wrapped errors, typed errors for expected cases |
| Documentation | CLAUDE.md accurate, code comments explain "why" |
### Warning Signs
| Indicator | What to Look For |
|-----------|------------------|
| God packages | `utils/`, `common/`, `helpers/` with 20+ files |
| Circular deps | Package A imports B, B imports A |
| Deep nesting | 4+ levels of directory nesting |
| Huge files | Files with 500+ lines |
| Interface pollution | Interfaces for everything, even single implementations |
| Global state | Package-level variables, `init()` for setup |
### Metrics to Track
- **Package fan-out:** How many packages does each package import?
- **Cyclomatic complexity:** How complex are the functions?
- **Test coverage:** What percentage of code is tested?
- **Import depth:** How deep is the import tree?
## Review Checklists
### Repository Audit Checklist
Use this when evaluating overall repository health.
**Structure:**
- [ ] Clear package organization following Go conventions
- [ ] No circular dependencies between packages
- [ ] Appropriate use of `internal/` for private packages
- [ ] `cmd/` for application entry points
**Dependencies:**
- [ ] Dependencies flow inward (toward domain)
- [ ] Interfaces defined at point of use (not with implementation)
- [ ] No global state or package-level dependencies
- [ ] Constructor injection throughout
**Code Quality:**
- [ ] Consistent naming conventions
- [ ] No "god" packages (utils, common, helpers)
- [ ] Errors wrapped with context
- [ ] Small, focused interfaces
**Testing:**
- [ ] Unit tests for domain logic
- [ ] Integration tests for boundaries (DB, HTTP)
- [ ] Tests are readable and maintainable
- [ ] Test coverage for critical paths
**Documentation:**
- [ ] CLAUDE.md is accurate and helpful
- [ ] vision.md explains the product purpose
- [ ] Code comments explain "why", not "what"
### Issue Refinement Checklist
Use this when reviewing issues for architecture impact.
**Scope:**
- [ ] Issue is a vertical slice (user-visible value)
- [ ] Changes are localized to specific packages
- [ ] No cross-cutting concerns hidden in implementation
**Design:**
- [ ] Follows existing patterns in the codebase
- [ ] New abstractions are justified
- [ ] Interface changes are backward compatible (or breaking change is documented)
**Dependencies:**
- [ ] New dependencies are minimal and justified
- [ ] No new circular dependencies introduced
- [ ] Integration points are clearly defined
**Testability:**
- [ ] Acceptance criteria are testable
- [ ] New code can be unit tested in isolation
- [ ] Integration test requirements are clear
### PR Review Checklist
Use this when reviewing pull requests for architecture concerns.
**Structure:**
- [ ] Changes respect existing package boundaries
- [ ] New packages follow naming conventions
- [ ] No new circular dependencies
**Interfaces:**
- [ ] Interfaces are defined where used
- [ ] Interfaces are minimal and focused
- [ ] Breaking interface changes are justified
**Dependencies:**
- [ ] Dependencies injected via constructors
- [ ] No new global state
- [ ] External dependencies properly abstracted
**Error Handling:**
- [ ] Errors wrapped with context
- [ ] Sentinel errors for expected conditions
- [ ] Error types for rich error information
**Testing:**
- [ ] New code has appropriate test coverage
- [ ] Tests are clear and maintainable
- [ ] Edge cases covered
## Anti-Patterns to Flag
### God Packages
**Problem:** Packages like `utils/`, `common/`, `helpers/` become dumping grounds.
**Symptoms:**
- 20+ files in one package
- Unrelated functions grouped together
- Package imported by everything
**Fix:** Extract cohesive packages based on what they provide: `validation`, `httputil`, `timeutil`.
### Circular Dependencies
**Problem:** Package A imports B, and B imports A (directly or transitively).
**Symptoms:**
- Import cycle compile errors
- Difficulty understanding code flow
- Changes cascade unexpectedly
**Fix:**
- Extract shared types to a third package
- Use interfaces to invert dependency
- Merge packages if truly coupled
### Leaky Abstractions
**Problem:** Implementation details leak through abstraction boundaries.
**Symptoms:**
- Database types in domain layer
- HTTP types in service layer
- Framework types in business logic
**Fix:** Define types at each layer, map between them explicitly.
### Anemic Domain Model
**Problem:** Domain objects are just data containers, logic is elsewhere.
**Symptoms:**
- Domain types have only getters/setters
- All logic in "service" classes
- Domain types can be in invalid states
**Fix:** Put behavior with data. Domain types should enforce their own invariants.
### Shotgun Surgery
**Problem:** Small changes require editing many files across packages.
**Symptoms:**
- Adding a feature touches 10+ files
- Similar changes in multiple places
- Copy-paste between packages
**Fix:** Consolidate related code. If things change together, they belong together.
### Feature Envy
**Problem:** Code in one package is more interested in another package's data.
**Symptoms:**
- Many calls to another package's methods
- Pulling data just to compute something
- Logic that belongs elsewhere
**Fix:** Move the code to where the data lives, or extract the behavior to a shared place.
### Premature Abstraction
**Problem:** Creating interfaces and abstractions before they're needed.
**Symptoms:**
- Interfaces with single implementations
- "Factory" and "Manager" classes everywhere
- Configuration for things that never change
**Fix:** Write concrete code first. Extract abstractions when you have multiple implementations or need to break dependencies.
### Deep Hierarchy
**Problem:** Excessive layers of abstraction or inheritance.
**Symptoms:**
- 5+ levels of embedding/composition
- Hard to trace code flow
- Changes require understanding many layers
**Fix:** Prefer composition over inheritance. Flatten hierarchies where possible.

View File

@@ -123,6 +123,17 @@ These extend the organization's guiding principles:
These extend the organization's non-goals:
- **[Non-goal].** [Explanation]
## Architecture
This project follows organization architecture patterns (see software-architecture skill).
### Alignment
- [Which patterns we use and where]
### Intentional Divergences
| Area | Standard Pattern | What We Do Instead | Why |
|------|------------------|-------------------|-----|
```
### When to Update Vision ### When to Update Vision

130
software-architecture.md Normal file
View File

@@ -0,0 +1,130 @@
# Software Architecture
> **For Claude:** This content is mirrored in `skills/software-architecture/SKILL.md` which is auto-triggered when relevant. You don't need to load this file directly.
This document describes the architectural patterns we use to achieve our [architecture beliefs](./manifesto.md#architecture-beliefs). It serves as human-readable organizational documentation.
## Beliefs to Patterns
| Belief | Primary Pattern | Supporting Patterns |
|--------|-----------------|---------------------|
| Auditability by default | Event Sourcing | Immutable events, temporal queries |
| Business language in code | Domain-Driven Design | Ubiquitous language, aggregates, bounded contexts |
| Independent evolution | Event-driven communication | Bounded contexts, published language |
| Explicit over implicit | Commands and Events | Domain events, clear intent |
## Event Sourcing
**Achieves:** Auditability by default
Instead of storing current state, we store the sequence of events that led to it.
**Core concepts:**
- **Events** are immutable facts about what happened, named in past tense: `OrderPlaced`, `PaymentReceived`
- **State** is derived by replaying events, not stored directly
- **Event store** is append-only - history is never modified
**Why this matters:**
- Complete audit trail for free
- Debug by replaying history
- Answer "what was the state at time X?"
- Recover from bugs by fixing logic and replaying
**Trade-offs:**
- More complex than CRUD for simple cases
- Requires thinking in events, not state
- Eventually consistent read models
## Domain-Driven Design
**Achieves:** Business language in code
The domain model reflects how the business thinks and talks.
**Core concepts:**
- **Ubiquitous language** - same terms in code, conversations, and documentation
- **Bounded contexts** - explicit boundaries where terms have consistent meaning
- **Aggregates** - clusters of objects that change together, with one root entity
- **Domain events** - capture what happened in business terms
**Why this matters:**
- Domain experts can read and validate the model
- New team members learn the domain through code
- Changes in business rules map clearly to code changes
**Trade-offs:**
- Upfront investment in understanding the domain
- Boundaries may need to shift as understanding grows
- Overkill for pure technical/infrastructure code
## Event-Driven Communication
**Achieves:** Independent evolution
Services communicate by publishing events, not calling each other directly.
**Core concepts:**
- **Publish events** when something important happens
- **Subscribe to events** you care about
- **No direct dependencies** between publisher and subscriber
- **Eventual consistency** - accept that not everything updates instantly
**Why this matters:**
- Add new services without changing existing ones
- Services can be deployed independently
- Natural resilience - if a subscriber is down, events queue
**Trade-offs:**
- Harder to trace request flow
- Eventual consistency requires different thinking
- Need infrastructure for reliable event delivery
## Commands and Events
**Achieves:** Explicit over implicit
Distinguish between requests (commands) and facts (events).
**Core concepts:**
- **Commands** express intent: `PlaceOrder`, `CancelSubscription`
- Commands can be rejected (validation, business rules)
- **Events** express facts: `OrderPlaced`, `SubscriptionCancelled`
- Events are immutable - what happened, happened
**Why this matters:**
- Clear separation of "trying to do X" vs "X happened"
- Commands validate, events just record
- Enables replay - reprocess events with new logic
## When to Diverge
These patterns are defaults, not mandates. Diverge intentionally when:
- **Simplicity wins** - a simple CRUD endpoint doesn't need event sourcing
- **Performance requires it** - sometimes synchronous calls are necessary
- **Team context** - patterns the team doesn't understand cause more harm than good
- **Prototyping** - validate ideas before investing in full architecture
When diverging, document the decision in the project's vision.md (see below).
## Project-Level Architecture
Each project should document its architectural choices in `vision.md` under an **Architecture** section:
```markdown
## Architecture
This project follows organization architecture patterns.
### Alignment
- Event sourcing for [which aggregates/domains]
- Bounded contexts: [list contexts and their responsibilities]
- Event-driven communication between [which services]
### Intentional Divergences
| Area | Standard Pattern | What We Do Instead | Why |
|------|------------------|-------------------|-----|
| [area] | [expected pattern] | [actual approach] | [reasoning] |
```
This creates traceability: org beliefs → patterns → project decisions.