refactor: remove AI agent config management from siren script

Agent configurations (CLAUDE.md, commands, skills, cursor rules) have
been extracted to a separate repo (jimeh/agentic). Remove all related
symlink logic, helper functions, and stale-link cleanup from siren.

- Delete claude/ directory (CLAUDE.md, settings, statusline, commands,
  skills)
- Delete cursor/user-rules.md and cursor commands
- Delete ai/references/ directory
- Remove _add_command_symlinks, _add_skill_symlinks,
  _cleanup_stale_commands, _cleanup_stale_skills functions
- Remove Claude conditional block from define_settings()
- Trim Cursor conditional block to only mcp.json symlink
- Remove stale-symlink cleanup calls from do_config()

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-15 23:35:28 +00:00
parent 95516fc615
commit a7b959c8fc
17 changed files with 0 additions and 1946 deletions

View File

@@ -1,290 +0,0 @@
---
source: https://www.aihero.dev/a-complete-guide-to-agents-md
---
# A Complete Guide To AGENTS.md
Have you ever felt concerned about the size of your `AGENTS.md` file?
Maybe you should be. A bad `AGENTS.md` file can confuse your agent, become a
maintenance nightmare, and cost you tokens on every request.
So you'd better know how to fix it.
## What is AGENTS.md?
An `AGENTS.md` file is a markdown file you check into Git that customizes how AI
coding agents behave in your repository. It sits at the top of the conversation
history, right below the system prompt.
Think of it as a configuration layer between the agent's base instructions and
your actual codebase. The file can contain two types of guidance:
- **Personal scope**: Your commit style preferences, coding patterns you prefer
- **Project scope**: What the project does, which package manager you use, your
architecture decisions
The `AGENTS.md` file is an open standard supported by many - though not all -
tools.
<details>
<summary>CLAUDE.md</summary>
Notably, Claude Code doesn't use `AGENTS.md` - it uses `CLAUDE.md` instead. You
can symlink between them to keep all your tools working the same way:
```bash
# Make CLAUDE.md a symlink that points to AGENTS.md
ln -s AGENTS.md CLAUDE.md
```
</details>
## Why Massive `AGENTS.md` Files are a Problem
There's a natural feedback loop that causes `AGENTS.md` files to grow
dangerously large:
1. The agent does something you don't like
2. You add a rule to prevent it
3. Repeat hundreds of times over months
4. File becomes a "ball of mud"
Different developers add conflicting opinions. Nobody does a full style pass.
The result? An unmaintainable mess that actually hurts agent performance.
Another culprit: auto-generated `AGENTS.md` files. Never use initialization
scripts to auto-generate your `AGENTS.md`. They flood the file with things that
are "useful for most scenarios" but would be better progressively disclosed.
Generated files prioritize comprehensiveness over restraint.
### The Instruction Budget
Kyle from Humanlayer's
[article](https://www.humanlayer.dev/blog/writing-a-good-claude-md) mentions the
concept of an "instruction budget":
> Frontier thinking LLMs can follow ~ 150-200 instructions with reasonable
> consistency. Smaller models can attend to fewer instructions than larger
> models, and non-thinking models can attend to fewer instructions than thinking
> models.
Every token in your `AGENTS.md` file gets loaded on **every single request**,
regardless of whether it's relevant. This creates a hard budget problem:
| Scenario | Impact |
| -------------------------- | ----------------------------------------------------- |
| Small, focused `AGENTS.md` | More tokens available for task-specific instructions |
| Large, bloated `AGENTS.md` | Fewer tokens for the actual work; agent gets confused |
| Irrelevant instructions | Token waste + agent distraction = worse performance |
Taken together, this means that **the ideal `AGENTS.md` file should be as small
as possible.**
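As a rough sanity check, you can estimate what the file costs on every request. Below is a back-of-envelope sketch, assuming roughly four characters per token for English prose; a real tokenizer will give a different, but similar, number.
```bash
# Rough per-request token estimate for AGENTS.md (~4 chars/token heuristic).
chars=$(wc -c < AGENTS.md)
echo "AGENTS.md costs roughly $((chars / 4)) tokens on every request"
```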
### Stale Documentation Poisons Context
Another issue for large `AGENTS.md` files is staleness.
Documentation goes out of date quickly. For human developers, stale docs are
annoying, but the human usually has enough built-in memory to be skeptical about
bad docs. For AI agents that read documentation on every request, stale
information actively _poisons_ the context.
This is especially dangerous when you document file system structure. File paths
change constantly. If your `AGENTS.md` says "authentication logic lives in
`src/auth/handlers.ts`" and that file gets renamed or moved, the agent will
confidently look in the wrong place.
Instead of documenting structure, describe capabilities. Give hints about where
things _might_ be and the overall shape of the project. Let the agent generate
its own just-in-time documentation during planning.
Domain concepts (like "organization" vs "group" vs "workspace") are more stable
than file paths, so they're safer to document. But even these can drift in
fast-moving AI-assisted codebases. Keep a light touch.
## Cutting Down Large `AGENTS.md` Files
Be ruthless about what goes here. Consider this the absolute minimum:
- **One-sentence project description** (acts like a role-based prompt)
- **Package manager** (if not npm; or use `corepack` for warnings)
- **Build/typecheck commands** (if non-standard)
That's honestly it. Everything else should go elsewhere.
### The One-Liner Project Description
This single sentence gives the agent context about _why_ they're working in this
repository. It anchors every decision they make.
Example:
```markdown
This is a React component library for accessible data visualization.
```
That's the foundation. The agent now understands its scope.
### Package Manager Specification
If you're in a JavaScript project and using anything other than npm, tell the
agent explicitly:
```markdown
This project uses pnpm workspaces.
```
Without this, the agent might default to `npm` and generate incorrect commands.
<details>
<summary>Corepack is also great</summary>
You could also use [`corepack`](https://github.com/nodejs/corepack) to let the system handle warnings automatically, saving you precious instruction budget.
</details>
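A minimal setup along those lines might look like this sketch; `corepack` ships with recent Node releases and records the package manager in `package.json` so mismatched tools fail loudly:
```bash
# Enable the corepack shims so pnpm/yarn invocations are intercepted.
corepack enable
# Pin the package manager (writes the "packageManager" field).
corepack use pnpm@latest
```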
### Use Progressive Disclosure
Instead of cramming everything into `AGENTS.md`, use **progressive disclosure**:
give the agent only what it needs right now, and point it to other resources
when needed.
Agents are fast at navigating documentation hierarchies. They understand context
well enough to find what they need.
#### Move Language-Specific Rules to Separate Files
If your `AGENTS.md` currently says:
```markdown
Always use const instead of let.
Never use var.
Use interface instead of type when possible.
Use strict null checks.
...
```
Move that to a separate file instead. In your root `AGENTS.md`:
```markdown
For TypeScript conventions, see docs/TYPESCRIPT.md
```
Notice the light touch: no "always," no all-caps forcing. Just a conversational
reference.
The benefits:
- TypeScript rules only load when the agent writes TypeScript
- Other tasks (CSS debugging, dependency management) don't waste tokens
- File stays focused and portable across model changes
#### Nest Progressive Disclosure
You can go even deeper. Your `docs/TYPESCRIPT.md` can reference
`docs/TESTING.md`. Create a discoverable resource tree:
```
docs/
├── TYPESCRIPT.md
│   └── references TESTING.md
├── TESTING.md
│   └── references specific test runners
└── BUILD.md
    └── references esbuild configuration
```
You can even link to external resources: Prisma docs, Next.js docs, etc. The
agent will navigate these hierarchies efficiently.
#### Use Agent Skills
Many tools support "agent skills" - commands or workflows the agent can invoke
to learn how to do something specific. These are another form of progressive
disclosure: the agent pulls in knowledge only when needed.
We'll cover agent skills in-depth in a separate article.
## `AGENTS.md` in Monorepos
You're not limited to a single `AGENTS.md` at the root. You can place
`AGENTS.md` files in subdirectories, and they **merge with the root level**.
This is powerful for monorepos:
### What Goes Where
| Level | Content |
| ----------- | -------------------------------------------------------------------------- |
| **Root** | Monorepo purpose, how to navigate packages, shared tools (pnpm workspaces) |
| **Package** | Package purpose, specific tech stack, package-specific conventions |
Root `AGENTS.md`:
```markdown
This is a monorepo containing web services and CLI tools.
Use pnpm workspaces to manage dependencies.
See each package's AGENTS.md for specific guidelines.
```
Package-level `AGENTS.md` (in `packages/api/AGENTS.md`):
```markdown
This package is a Node.js GraphQL API using Prisma.
Follow docs/API_CONVENTIONS.md for API design patterns.
```
**Don't overload any level.** The agent sees all merged `AGENTS.md` files in its
context. Keep each level focused on what's relevant at that scope.
## Fix A Broken `AGENTS.md` With This Prompt
If you're starting to get nervous about the `AGENTS.md` file in your repo, and
you want to refactor it to use progressive disclosure, try copy-pasting this
prompt into your coding agent:
```txt
I want you to refactor my AGENTS.md file to follow progressive disclosure principles.
Follow these steps:
1. **Find contradictions**: Identify any instructions that conflict with each other. For each contradiction, ask me which version I want to keep.
2. **Identify the essentials**: Extract only what belongs in the root AGENTS.md:
- One-sentence project description
- Package manager (if not npm)
- Non-standard build/typecheck commands
- Anything truly relevant to every single task
3. **Group the rest**: Organize remaining instructions into logical categories (e.g., TypeScript conventions, testing patterns, API design, Git workflow). For each group, create a separate markdown file.
4. **Create the file structure**: Output:
- A minimal root AGENTS.md with markdown links to the separate files
- Each separate file with its relevant instructions
- A suggested docs/ folder structure
5. **Flag for deletion**: Identify any instructions that are:
- Redundant (the agent already knows this)
- Too vague to be actionable
- Overly obvious (like "write clean code")
```
## Don't Build A Ball Of Mud
When you're about to add something to your `AGENTS.md`, ask yourself where it
belongs:
| Location | When to use |
| ------------------------- | -------------------------------------------------- |
| Root `AGENTS.md` | Relevant to every single task in the repo |
| Separate file | Relevant to one domain (TypeScript, testing, etc.) |
| Nested documentation tree | Can be organized hierarchically |
The ideal `AGENTS.md` is small, focused, and points elsewhere. It gives the
agent just enough context to start working, with breadcrumbs to more detailed
guidance.
Everything else lives in progressive disclosure: separate files, nested
`AGENTS.md` files, or skills.
This keeps your instruction budget efficient, your agent focused, and your setup
future-proof as tools and best practices evolve.

View File

@@ -1,69 +0,0 @@
# My AGENTS.md file for building plans you actually read
Most developers are skeptical about AI code generation at first. It seems
impossible that an AI could understand your codebase the way you do, or match
the instincts you've built up over years of experience.
But there's a technique that changes everything: the planning loop. Instead of
asking AI to write code directly, you work through a structured cycle that
dramatically improves the quality of what you get.
This approach transforms AI from an unreliable code generator into an
indispensable coding partner.
## The Plan Loop: A Four-Step Process
Every piece of code now goes through the same cycle.
![Plan Loop
Diagram](./my-agents.md-file-for-building-plans-you-actually-read/plan-loop-diagram.png)
**Plan** with the AI first. Think through the approach together before writing
any code. Discuss the strategy and get alignment on what you're building.
**Execute** by asking the AI to write the code that matches the plan. You're not
asking it to figure out what to build—you've already done that together.
**Test** the code together. Run unit tests, check type safety, or perform manual
QA. Validate that the implementation matches what you planned.
**Commit** the code and start the cycle again for the next piece.
## Why This Matters
This loop is completely indispensable for getting decent outputs from an AI.
If you drop the planning step altogether, you're really hampering yourself.
You're asking the AI to guess what you want, and you'll end up fighting with
hallucinations and misunderstandings.
Planning forces clarity. It makes the AI's job easier and your code better.
## Rules for Creating Great Plans
Here are the key rules from my `CLAUDE.md` file that make plan mode effective:
```md
## Plan Mode
- Make the plan extremely concise. Sacrifice grammar for the sake of concision.
- At the end of each plan, give me a list of unresolved questions to answer, if any.
```
These simple guidelines transform verbose plans into scannable, actionable
documents that keep both you and the AI aligned.
Copy them into your `CLAUDE.md` or `AGENTS.md` file, and enjoy simpler, more
readable plans.
Or, run this script to append them to your `~/.claude/CLAUDE.md` file:
```bash
mkdir -p ~/.claude && cat >> ~/.claude/CLAUDE.md << 'EOF'
## Plan Mode
- Make the plan extremely concise. Sacrifice grammar for the sake of concision.
- At the end of each plan, give me a list of unresolved questions to answer, if any.
EOF
```

Binary file not shown (image, 36 KiB).

View File

@@ -1,113 +0,0 @@
# Rules to Always Follow
Below are rules to follow with everything you do.
## Communication Style
- Be casual unless otherwise specified.
- Be terse. Give the answer immediately, with details afterward if needed.
- Be accurate and thorough.
- Provide direct code solutions or detailed technical explanations rather than
general advice. No introductory phrases like "Here's how you can..."
- Value good arguments over authorities; the source is irrelevant.
- If your content policy is an issue, provide the closest acceptable response
and explain the content policy issue afterward.
- Cite sources at the end when possible, not inline.
- Don't mention your knowledge cutoff.
- Don't disclose you're an AI.
- If clarification is needed, make reasonable assumptions and note them.
## Code Style
- Try to keep line length to 80 characters or fewer when possible.
- Check and fix linting errors.
- Follow code style and conventions already present in the project when
reasonable, including choice of libraries, test frameworks, etc.
- Break from conventions when existing patterns don't fit the new context, but
only with sound reasoning.
- Respect my formatting preferences when you provide code.
## Code Comments
- Respect existing code comments; they're usually there for a reason. Remove
them ONLY if completely irrelevant after a code change. If unsure, keep them.
- New comments must be relevant and specific to the code. They should NOT refer
to specific instructions like "use new X function".
- Generate or update documentation comments for new code.
## Code Quality
- Include robust error handling and highlight potential edge cases.
- Flag security concerns and performance impacts in solutions.
- Suggest appropriate naming conventions and code structure improvements.
- Handle changes across multiple files with proper import/dependency management.
- Provide test examples for new functionality when relevant.
## Technical Considerations
- Consider version constraints and backward compatibility of libraries and
frameworks.
- Consider build environment constraints and platform-specific issues.
- Check Makefile and similar for common project tasks like lint, format, test,
etc.
- If commands fail due to a missing file you expect to exist, double check the
current directory with `pwd`, and `cd` to the project root if needed.
- Do not execute `git` with the `-C` flag. Instead, verify you're not already
in the target directory, then `cd` to it.
- When investigating third-party libraries, use deepwiki to look up information
if available.
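A minimal sketch of the working-directory recovery described above, assuming the project is a git repository:
```bash
# An expected file is missing? Check where you are, then move to the project
# root instead of reaching for `git -C`.
pwd
cd "$(git rev-parse --show-toplevel)"
```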
## Git Commits
- Prefer conventional commits format (e.g., `feat:`, `fix:`, `refactor:`), but
defer to project conventions if they differ.
- Lead with "why" over "what". The diff shows what changed; the message should
explain the motivation and purpose behind the change. If the "why" is not
clear, ask me before committing.
- The commit body should start with the reason for the change. Technical
overview/details and implementation notes come after.
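A hypothetical commit message following these rules (the change it describes is made up):
```
fix: skip symlink pass when source tree is unchanged

Re-running setup re-created every symlink even when nothing had changed,
which was slow and noisy. Skip the pass when the source tree checksum
matches the previous run.

Implementation: store the checksum in a cache file next to the managed
symlinks and compare it before linking.
```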
## Pull Requests
- PR descriptions should lead with "why" context, same as commits. Explain the
motivation and purpose before diving into technical details.
- Use conventional commits format for PR titles when the repo follows
conventional commits.
## Dependencies
- Use well-respected, well-maintained dependencies when they solve the problem
cleanly without workarounds or excessive accommodation.
- If the work to implement it yourself is minimal, skip the dependency.
## Documenting Discoveries
When you encounter surprising, unexpected, or non-obvious findings while
working on a project, document them in the project's agent instructions file:
- If `AGENTS.md` already exists, add findings there.
- If only `CLAUDE.md` exists (project-level, not this global one), add there.
- If neither exists, propose creating an `AGENTS.md` file.
What to document:
- Non-obvious project conventions or patterns that aren't apparent from the
code structure alone.
- Surprising behaviors, gotchas, or workarounds discovered during development.
- Implicit dependencies or ordering constraints between components.
- Environment-specific quirks (e.g., platform differences, tool version
sensitivities).
- Undocumented requirements or constraints found through trial and error.
Keep entries concise and actionable. Group them under a relevant existing
section or create a new section like `## Discoveries` or `## Gotchas`. The
goal is to prevent future agents (or yourself in a future session) from
re-discovering the same things the hard way.
## Plan Mode
- Make the plan extremely concise. Sacrifice grammar for the sake of concision.
- Plans must include testing: comprehensive tests for all changes, covering edge
cases, error conditions, and integration points.
- At the end of each plan, give me a list of unresolved questions to answer, if
any.

View File

@@ -1,85 +0,0 @@
---
description: Convert a project's CLAUDE.md into an agent-agnostic AGENTS.md file, keeping CLAUDE.md as a thin @-reference
---
## Context
- Check if `CLAUDE.md` and `AGENTS.md` exist in the project root.
- Check if `CLAUDE.md` is a symlink (e.g., `ls -la CLAUDE.md`).
## Your Task
Convert this project's `CLAUDE.md` into an `AGENTS.md` file, and replace
`CLAUDE.md` with a thin reference to it.
## Steps
1. **Verify preconditions**:
- If `CLAUDE.md` is a symlink pointing to `AGENTS.md`, skip to step 8.
- If neither `CLAUDE.md` nor `AGENTS.md` exists, abort with an error message.
- If `CLAUDE.md` does not exist but `AGENTS.md` does, skip to step 7.
- If both exist, abort — suggest using `AGENTS.md` directly or removing it
first.
- Otherwise, `CLAUDE.md` exists and `AGENTS.md` does not — proceed.
2. **Read `CLAUDE.md`** content in full.
3. **Review content for references that need updating**:
**Filename references**:
- Headings like `# CLAUDE.md` → `# AGENTS.md`
- Self-references like "this CLAUDE.md file" → "this AGENTS.md file"
- Any other mentions of the filename that refer to the file itself and should
change to reflect the new name
**Generalize Claude-specific agent language**:
- The title and opening paragraph often describe the file's purpose in
Claude-specific terms (e.g., "This file provides guidance to Claude...").
Rewrite these to be generic (e.g., "This file provides guidance to LLM
agents...").
- "Claude" (when referring to the AI agent performing tasks) → "LLM agents"
or "agents"
- "Tell Claude to..." → "Instruct agents to..."
- "Claude should..." → "Agents should..."
- "When Claude encounters..." → "When agents encounter..."
- Similar phrasing that assumes a specific AI agent — rewrite to be
agent-agnostic
**Do NOT change**:
- "Claude Code" — it's a proper product name (CLI tool)
- References to Claude Code features, documentation, or capabilities (e.g.,
`@`-references, slash commands)
- "Claude" as part of a filename or path (e.g., `.claude/`, `CLAUDE.md`
referring to other projects)
- References to CLAUDE.md that refer to other projects' files or external
concepts
4. **Write `AGENTS.md`** with the updated content.
5. **Replace `CLAUDE.md`** contents with just:
```
@AGENTS.md
```
This makes Claude Code load `AGENTS.md` via the `@`-reference.
6. **Summary**: Report what was done, including any references that were updated
in step 3. Stop here.
7. **Create `CLAUDE.md` reference** (only reached when `CLAUDE.md` doesn't exist
but `AGENTS.md` does):
- Write a new `CLAUDE.md` in the project root containing just:
```
@AGENTS.md
```
- Report that `CLAUDE.md` was created as a reference to the existing
`AGENTS.md`.
8. **Replace symlink with `@`-reference** (only reached when `CLAUDE.md` is a
symlink to `AGENTS.md`):
- Remove the `CLAUDE.md` symlink.
- Write a new `CLAUDE.md` file containing just:
```
@AGENTS.md
```
- Report that the symlink was replaced with an `@`-reference to
`AGENTS.md`.
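A compact shell sketch of the step 1 decision tree, assuming it runs from the project root (illustrative only):
```bash
if [ -L CLAUDE.md ] && [ "$(readlink CLAUDE.md)" = "AGENTS.md" ]; then
  echo "symlink to AGENTS.md -> replace with an @AGENTS.md reference (step 8)"
elif [ ! -e CLAUDE.md ] && [ ! -e AGENTS.md ]; then
  echo "neither file exists -> abort" >&2
elif [ ! -e CLAUDE.md ]; then
  echo "only AGENTS.md exists -> create a thin CLAUDE.md (step 7)"
elif [ -e AGENTS.md ]; then
  echo "both exist -> abort; use AGENTS.md directly or remove it first" >&2
else
  echo "CLAUDE.md exists, AGENTS.md does not -> proceed with the conversion"
fi
```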

View File

@@ -1,55 +0,0 @@
---
description: Cleans up all git branches marked as [gone] (branches that have been deleted on the remote but still exist locally), including removing associated worktrees.
source: https://github.com/anthropics/claude-plugins-official/blob/main/plugins/commit-commands/commands/clean_gone.md
---
## Your Task
You need to execute the following bash commands to clean up stale local branches
that have been deleted from the remote repository.
## Commands to Execute
1. **First, list branches to identify any with [gone] status** Execute this
command:
```bash
git branch -v
```
Note: Branches with a '+' prefix have associated worktrees and must have
their worktrees removed before deletion.
2. **Next, identify worktrees that need to be removed for [gone] branches**
Execute this command:
```bash
git worktree list
```
3. **Finally, remove worktrees and delete [gone] branches (handles both regular
and worktree branches)** Execute this command:
```bash
# Process all [gone] branches, removing '+' prefix if present
git branch -v | grep '\[gone\]' | sed 's/^[+* ]//' | awk '{print $1}' | while read branch; do
echo "Processing branch: $branch"
# Find and remove worktree if it exists
worktree=$(git worktree list | grep "\\[$branch\\]" | awk '{print $1}')
if [ ! -z "$worktree" ] && [ "$worktree" != "$(git rev-parse --show-toplevel)" ]; then
echo " Removing worktree: $worktree"
git worktree remove --force "$worktree"
fi
# Delete the branch
echo " Deleting branch: $branch"
git branch -D "$branch"
done
```
## Expected Behavior
After executing these commands, you will:
- See a list of all local branches with their status
- Identify and remove any worktrees associated with [gone] branches
- Delete all branches marked as [gone]
- Provide feedback on which worktrees and branches were removed
If no branches are marked as [gone], report that no cleanup was needed.

View File

@@ -1,46 +0,0 @@
---
allowed-tools: Bash(git checkout --branch:*), Bash(git branch -m:*), Bash(git add:*), Bash(git diff:*), Bash(git log:*), Bash(git status:*), Bash(git push:*), Bash(git commit:*), Bash(gh pr create:*)
description: Commit, push, and open a PR, rename branch appropriately if needed
source: https://github.com/anthropics/claude-plugins-official/blob/main/plugins/commit-commands/commands/commit-push-pr.md
---
## Context
- Current git status: !`git status`
- Current git diff (staged and unstaged changes): !`git diff HEAD`
- Current branch: !`git branch --show-current`
- Recent commits: !`git log --oneline -10`
## Your Task
Based on the above changes:
1. **Check agent docs**: Read the project's AGENTS.md and/or CLAUDE.md if they
exist. Review their content against the current changes. If the changes
introduce new conventions, commands, architecture, or development patterns
that should be documented (or invalidate existing documentation), update the
relevant file as part of this commit. Only update if clearly warranted —
don't add noise.
2. Create a new branch if on main or master. If already on a non-main/master
branch, check if the branch name looks randomly generated (e.g. UUIDs, hex
strings, meaningless character sequences, or 1-3 random unrelated words like
"brave-fox" or "purple-mountain") rather than descriptive of the changes. If
so, rename it to something that aligns with the changes using:
`git branch -m <new-name>`.
3. Create a single commit with an appropriate message. If asked to commit only
staged changes, run `git diff --staged` to see exactly what is staged, and
base the commit message solely on those changes. Do NOT stage additional
files. Otherwise, stage all relevant changes.
4. Push the branch to origin
5. Create a pull request using `gh pr create`. Use `git log` and
`git diff main...HEAD` (or master) to understand all changes on the branch.
The PR description should clearly explain *what* changed and *why*, covering
the full scope of changes since main/master. Do NOT include a list of
commits — the PR already shows those. Focus on a cohesive summary that
helps a reviewer understand the purpose and impact of the changes. Check for
a PR template at `.github/PULL_REQUEST_TEMPLATE.md` — if one exists, use it
as the base for the PR body and fill in the sections appropriately.
6. You have the capability to call multiple tools in a single response. You MUST
do all of the above in a single message. Do not use any other tools or do
anything else. Do not send any other text or messages besides these tool
calls.
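For step 5, gathering the full branch context might look roughly like this sketch (the default branch is detected rather than assumed):
```bash
# Review everything on the branch relative to the default branch.
base=$(git rev-parse --abbrev-ref origin/HEAD | sed 's|^origin/||')
git log --oneline "${base}..HEAD"
git diff "${base}...HEAD"
# Use the PR template as the body skeleton if one exists.
[ -f .github/PULL_REQUEST_TEMPLATE.md ] && cat .github/PULL_REQUEST_TEMPLATE.md
```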

View File

@@ -1,26 +0,0 @@
---
allowed-tools: Bash(git add:*), Bash(git diff:*), Bash(git status:*), Bash(git commit:*)
description: Create a git commit
source: https://github.com/anthropics/claude-plugins-official/blob/main/plugins/commit-commands/commands/commit.md
---
## Context
- Current git status: !`git status`
- Current git diff (staged and unstaged changes): !`git diff HEAD`
- Current branch: !`git branch --show-current`
- Recent commits: !`git log --oneline -10`
## Your task
Based on the above changes, create a single git commit.
If asked to commit only staged changes, run `git diff --staged` to see exactly
what is staged, and base the commit message solely on those changes. Do NOT
stage additional files.
Otherwise, stage all relevant changes and create the commit.
You have the capability to call multiple tools in a single response. Do not use
any other tools or do anything else. Do not send any other text or messages
besides these tool calls.

View File

@@ -1,291 +0,0 @@
---
description: Analyze the codebase and generate a minimal, hierarchical AGENTS.md structure with progressive disclosure
sources:
- https://github.com/RayFernando1337/llm-cursor-rules/blob/main/generate-agents.md
- https://www.aihero.dev/a-complete-guide-to-agents-md
---
# Task: Analyze this codebase and generate a hierarchical AGENTS.md structure
## Important Caveats
Auto-generated AGENTS.md files tend to be too comprehensive. Use this as a
**starting point only** - then aggressively trim. Target: smallest possible file
that provides value. Most instructions should move to progressive disclosure
(docs/agents/*.md files).
Remember:
- Stale docs actively poison agent context
- File paths go stale quickly - describe capabilities instead
- Each instruction must earn its token cost
- LLMs have a ~150-200 instruction limit before degradation
---
## Paths vs Hints
**Bad (goes stale):**
- `Auth logic: src/auth/provider.tsx`
- `API routes: apps/api/src/routes/**`
**Good (survives refactors):**
- `Auth: Uses React Context pattern, look for *Provider or *Context`
- `API routes: Next.js app router convention, check for route.ts files`
- `Models: Prisma schema defines domain entities`
### Anti-Pattern: Static File References
Never document:
- `User model is in src/models/user.ts`
- `Auth handler lives at lib/auth/handlers.ts`
Instead document:
- `User model: Prisma schema, look for "model User"`
- `Auth: middleware pattern, grep for "authenticate" or "withAuth"`
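As an illustration, the "good" hints above translate directly into searches an agent can run (ripgrep assumed; the paths are hypothetical):
```bash
rg -n "model User" prisma/
rg -n "authenticate|withAuth" src/
rg -n "createContext|Provider" src/
```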
---
## Document Domain Concepts
**Stable (document these):**
- "Organization" vs "Workspace" vs "Team" terminology
- Core domain entities and their relationships
- Business rules that aren't obvious from code
**Unstable (avoid documenting):**
- Specific file paths
- Directory structure
- Import paths
---
## Context & Principles
You are going to help me create a **hierarchical AGENTS.md system** for this
codebase. This is critical for AI coding agents to work efficiently with minimal
token usage.
### Core Principles
1. **Minimal root AGENTS.md** - Only universal guidance, links to sub-files
2. **Nearest-wins hierarchy** - Agents read closest AGENTS.md to edited file
3. **Pattern hints over paths** - Describe grep-able patterns, not file
locations
4. **Token efficiency** - Small, actionable guidance over encyclopedic docs
5. **Progressive disclosure** - Link to docs/agents/*.md for detailed rules
6. **Domain concepts** - Document terminology and business rules, not structure
---
## Your Process
### Phase 1: Repository Analysis
First, analyze the codebase and provide me with:
1. **Repository type**: Monorepo, multi-package, or simple single project?
2. **Primary technology stack**: Languages, frameworks, key tools
3. **Major packages** that warrant their own AGENTS.md:
- Only for areas with significantly different tech/patterns
- Skip if root guidance suffices
- Prefer fewer, more focused files over many small ones
4. **Build system**: pnpm/npm/yarn workspaces? Turborepo? Or simple?
5. **Testing conventions**: Framework and colocated vs separate?
6. **Key patterns to document** (as grep-able hints):
- What conventions are used (not where files are)
- Domain terminology agents should understand
- Anti-patterns to avoid
Present this as a **structured map** before generating any AGENTS.md files.
---
### Phase 2: Generate Root AGENTS.md
Create a **minimal root AGENTS.md** (~50-100 lines max, ideally under 50).
Per the guide, root AGENTS.md needs only:
1. One-sentence project description
2. Package manager (if not npm)
3. Build/typecheck commands (if non-standard)
#### Required Sections
**1. Project Overview** (3-5 lines)
- One-sentence description of what this project does
- Package manager and key build commands (only if non-standard)
**2. Navigation** (5-10 lines)
Link to sub-AGENTS.md files and describe how to find things:
```
## Navigation
### Sub-package Docs
Each major package has its own AGENTS.md. Look for them in package roots.
### Finding Things
- Components: exported from *.tsx, usually named after component
- API routes: follow framework conventions (route.ts, [...slug], etc.)
- Config: root-level *.config.* files
- Tests: colocated *.test.* or in __tests__ directories
```
**3. Progressive Disclosure** (2-5 lines)
Link to detailed docs instead of inlining them:
```
## Detailed Docs
- TypeScript conventions: see docs/agents/TYPESCRIPT.md
- Testing patterns: see docs/agents/TESTING.md
```
#### Optional Sections (include only if truly needed)
**Conventions** - Only if non-obvious (commit format, unusual style rules)
**Security** - Only if project has specific secret handling beyond standard
`.env` patterns
---
### Phase 3: Generate Sub-Folder AGENTS.md Files
Only create for directories with significantly different tech/patterns. Each
file should be ~30-50 lines max.
#### Required Sections (3-4 essentials)
**1. Package Identity** (1-2 lines)
- What this package/app/service does
- Primary tech if different from root
**2. Setup & Run** (only if different from root)
- Dev, build, test commands for this package
**3. Patterns & Conventions** (5-15 lines)
Describe patterns agents can grep for, not paths they should navigate to:
```
## Patterns
- Auth: Context provider pattern → grep for createContext, Provider
- API calls: Centralized client → grep for fetchClient, apiClient
- Validation: Zod schemas → grep for z.object, .parse
- State: React Query → grep for useQuery, useMutation
### Do/Don't
- DO: Use functional components with hooks
- DON'T: Use class components (legacy only)
```
**4. Pre-PR Check** (1-2 lines)
Single copy-paste command:
```
pnpm --filter @repo/web typecheck && pnpm --filter @repo/web test
```
#### Optional Sections (include only if critical)
- **Gotchas**: Only truly non-obvious issues (1-3 lines max)
- **Quick Find**: Package-specific search commands
---
### Phase 4: Special Considerations
Add these ONLY if the package has them and they're non-obvious:
**Design System** (if exists)
```
## Design System
- Use design tokens (never hardcode colors)
- Component patterns: functional, composable, typed props
```
**Database** (if exists)
```
## Database
- ORM: [name], migrations via `pnpm db:migrate`
- Never run migrations in tests
```
**API** (if exists)
```
## API Patterns
- Validation: Zod schemas
- Errors: Throw typed ApiError
```
---
## Output Format
Provide files in this order:
1. **Analysis Summary** (from Phase 1)
2. **Root AGENTS.md** (complete, ready to copy)
3. **Each Sub-Folder AGENTS.md** (with file path)
Use this format:
```
---
File: `AGENTS.md` (root)
---
[content]
---
File: `apps/web/AGENTS.md`
---
[content]
```
---
## Maintenance Warning
AGENTS.md files go stale. Review quarterly:
- Remove any file paths that crept in
- Verify pattern hints still match codebase conventions
- Update commands that changed
- Delete rules the agent already knows
- Question if each instruction earns its token cost
---
## Quality Checks
Before generating, verify:
- [ ] Root AGENTS.md under 50 lines? (100 max)
- [ ] Sub-folder files under 50 lines each?
- [ ] **No static file paths in documentation?**
- [ ] **Patterns described as grep-able hints?**
- [ ] **Domain concepts over implementation details?**
- [ ] Progressive disclosure used for detailed rules?
- [ ] Does each instruction earn its token cost?
- [ ] Would this survive a major refactor?
- [ ] Commands are copy-paste ready?
- [ ] No duplication between root and sub-files?
- [ ] Not every directory gets its own file?

View File

@@ -1,33 +0,0 @@
---
allowed-tools: Bash(git fetch:*), Bash(git rebase:*), Bash(git stash:*), Bash(git status:*), Bash(git diff:*), Bash(git log:*), Bash(git add:*), Bash(git branch:*), Bash(git rev-parse:*), Read, Edit
description: Rebase current branch onto upstream main/master
---
## Context
- Current branch: !`git branch --show-current`
- Default branch: !`git rev-parse --abbrev-ref origin/HEAD 2>/dev/null`
- Uncommitted changes: !`git status --short`
## Your Task
Rebase the current branch onto the upstream default branch (main or master).
1. If there are uncommitted changes, stash them first with
`git stash push -m "auto-stash before rebase"`.
2. Fetch the latest from origin: `git fetch origin`.
3. Rebase onto the default branch using the value from context above:
`git rebase <default-branch>`.
4. If the rebase succeeds and changes were stashed in step 1, run
`git stash pop`.
5. Show the result with `git log --oneline -10`.
If the rebase fails due to conflicts, attempt to resolve them yourself.
If you have low confidence in the resolution, abort the rebase with
`git rebase --abort`, restore any stashed changes with `git stash pop`,
and ask the user to resolve manually — leaving the working tree as it
was found.
You have the capability to call multiple tools in a single response. Do not
use any other tools or do anything else. Do not send any other text or
messages besides these tool calls.
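The happy path of this task, sketched as plain shell (conflict handling omitted):
```bash
default_branch=$(git rev-parse --abbrev-ref origin/HEAD 2>/dev/null)
dirty=$(git status --porcelain)
[ -n "$dirty" ] && git stash push -m "auto-stash before rebase"
git fetch origin
git rebase "$default_branch"
[ -n "$dirty" ] && git stash pop
git log --oneline -10
```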

View File

@@ -1,34 +0,0 @@
---
description: Refactor an existing AGENTS.md to follow progressive disclosure principles, extracting detailed rules into separate docs
source: https://www.aihero.dev/a-complete-guide-to-agents-md
---
# Task: Refactor my AGENTS.md
I want you to refactor my AGENTS.md file to follow progressive disclosure
principles. If there is no AGENTS.md file, look for a CLAUDE.md file instead.
Follow these steps:
1. **Find contradictions**: Identify any instructions that conflict with each
other. For each contradiction, ask me which version I want to keep.
2. **Identify the essentials**: Extract only what belongs in the root AGENTS.md:
- One-sentence project description
- Package manager (if not npm)
- Non-standard build/typecheck commands
- Anything truly relevant to every single task
3. **Group the rest**: Organize remaining instructions into logical categories
(e.g., TypeScript conventions, testing patterns, API design, Git workflow).
For each group, create a separate Markdown file.
4. **Create the file structure**: Output:
- A minimal root AGENTS.md with Markdown links to the separate files
- Each separate file with its relevant instructions
- A suggested docs/agents/ folder structure
5. **Flag for deletion**: Identify any instructions that are:
- Redundant (the agent already knows this)
- Too vague to be actionable
- Overly obvious (like "write clean code")

View File

@@ -1,118 +0,0 @@
{
"env": {
"CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
},
"permissions": {
"allow": [
"Bash(awk:*)",
"Bash(bat:*)",
"Bash(bundle check:*)",
"Bash(bundle info:*)",
"Bash(bundle install:*)",
"Bash(bundle list:*)",
"Bash(cargo build:*)",
"Bash(cargo clippy:*)",
"Bash(cargo fmt:*)",
"Bash(cargo test:*)",
"Bash(cat:*)",
"Bash(du:*)",
"Bash(fd:*)",
"Bash(find:*)",
"Bash(gem list:*)",
"Bash(gh issue view:*)",
"Bash(gh pr diff:*)",
"Bash(gh pr list:*)",
"Bash(gh pr view:*)",
"Bash(gh repo view:*)",
"Bash(git add:*)",
"Bash(git branch:*)",
"Bash(git diff:*)",
"Bash(git fetch:*)",
"Bash(git log:*)",
"Bash(git rev-parse:*)",
"Bash(git show:*)",
"Bash(git status:*)",
"Bash(go build:*)",
"Bash(go get:*)",
"Bash(go mod download:*)",
"Bash(go mod init:*)",
"Bash(go mod tidy:*)",
"Bash(go test:*)",
"Bash(golangci-lint run:*)",
"Bash(grep:*)",
"Bash(helm create:*)",
"Bash(helm lint:*)",
"Bash(helm template:*)",
"Bash(ls:*)",
"Bash(npm run build:*)",
"Bash(npm run check-types:*)",
"Bash(npm run compile)",
"Bash(npm run compile-tests:*)",
"Bash(npm run format:*)",
"Bash(npm run lint:*)",
"Bash(npm run lint:fix:*)",
"Bash(npm run test:*)",
"Bash(pnpm build:*)",
"Bash(pnpm check-types:*)",
"Bash(pnpm compile)",
"Bash(pnpm compile-tests:*)",
"Bash(pnpm exec prettier:*)",
"Bash(pnpm format:*)",
"Bash(pnpm install:*)",
"Bash(pnpm lint:*)",
"Bash(pnpm lint:fix:*)",
"Bash(pnpm run build:*)",
"Bash(pnpm run check-types:*)",
"Bash(pnpm run compile)",
"Bash(pnpm run compile-tests:*)",
"Bash(pnpm run format:*)",
"Bash(pnpm run lint:*)",
"Bash(pnpm run lint:fix:*)",
"Bash(pnpm run test:*)",
"Bash(pnpm test:*)",
"Bash(pnpm typecheck:*)",
"Bash(rg:*)",
"Bash(tail:*)",
"Bash(tree:*)",
"Bash(wc:*)",
"Skill(commit-commands:commit:*)",
"WebSearch",
"mcp__context7__get-library-docs",
"mcp__context7__resolve-library-id",
"mcp__deepwiki__ask_question",
"mcp__deepwiki__read_wiki_contents",
"mcp__deepwiki__read_wiki_structure"
],
"deny": [
"Read(./.env)",
"Read(./.env*)",
"Read(./mcp.json)",
"Read(./secrets/**)"
]
},
"statusLine": {
"type": "command",
"command": "~/.claude/statusline.sh",
"padding": 0
},
"enabledPlugins": {
"code-review@claude-plugins-official": true,
"code-simplifier@claude-plugins-official": true,
"commit-commands@claude-plugins-official": false,
"feature-dev@claude-plugins-official": true,
"frontend-design@claude-plugins-official": true,
"gopls-lsp@claude-plugins-official": true,
"lua-lsp@claude-plugins-official": false,
"playwright@claude-plugins-official": false,
"plugin-dev@claude-plugins-official": true,
"pr-review-toolkit@claude-plugins-official": false,
"ralph-loop@claude-plugins-official": false,
"rust-analyzer-lsp@claude-plugins-official": true,
"security-guidance@claude-plugins-official": false,
"sentry@claude-plugins-official": false,
"swift-lsp@claude-plugins-official": false,
"typescript-lsp@claude-plugins-official": true
},
"autoUpdatesChannel": "latest",
"teammateMode": "auto"
}

View File

@@ -1,139 +0,0 @@
---
name: Frontend Design Systems
description: >-
This skill should be used when the user asks to "build a design system",
"create consistent UI", "define color ratios", "set up typography system",
"normalize geometry tokens", "validate visual hierarchy", "apply design
constraints", or when generating frontend UI that requires systematic
visual consistency. Augments the frontend-design skill with system-level
visual decision rules for consistency, hierarchy, and scalable UI
decision-making.
version: 0.1.0
inspired_by: https://www.youtube.com/watch?v=eVnQFWGDEdY
---
# Frontend Design Systems
## Purpose
Augment the `frontend-design` skill with system-level visual decision rules
derived from practical graphic design heuristics. Focus on consistency,
hierarchy, and scalable UI decision-making rather than aesthetic
experimentation.
Apply this skill *after* layout, accessibility, and interaction logic are
established by `frontend-design`.
## Core Principle
Design quality emerges from repeatable systems: ratios, constraints, typography
systems, geometry rules, and hierarchy validation.
Prefer deterministic structure over stylistic improvisation.
## Heuristics
### 1. Color Ratio System
Do not distribute colors evenly. Use proportional dominance:
- 70-90% neutral base
- 10-25% supporting color
- 1-8% accent color
Map accent color to: primary actions, alerts, focus states, brand signals. If
accent overuse occurs, reduce until hierarchy is restored.
### 2. Typography Superfamily Strategy
Default to a single system: one family, multiple weights, limited width/style
variation.
Express hierarchy via: size, weight, spacing, rhythm.
Introduce additional typefaces only when semantic separation is required (e.g.,
code vs marketing content).
### 3. Geometry Consistency Rule
All UI must inherit a shared structural language: border radius, angle logic,
stroke thickness, elevation system, spacing cadence.
Do not introduce new geometry tokens unless:
- Existing tokens cannot express the requirement
- Functional clarity would otherwise degrade
Consistency > variety.
### 4. Dual-Scale Validation
Evaluate every interface at two levels:
**Macro (~10% scale):** Hierarchy clarity, scanning flow, section priority.
**Micro (~200-300% scale):** Spacing, alignment, typography precision, component
polish.
Reject designs that succeed at only one scale.
### 5. Constraint-First Brand Framing
Before generating UI styles, define negative constraints. Example:
- not playful
- not aggressive
- not corporate
- not experimental
- not premium
- not youthful
Use constraints to filter: color choices, typography decisions, motion styles,
component density. If a design decision conflicts with constraints, discard it.
### 6. Non-Designer Reality Bias
Assume users: are distracted, scroll quickly, use mobile, operate under low
brightness, do not analyze details.
Optimize for: instant comprehension, strong primary action visibility, minimal
cognitive load, clear visual hierarchy within <2 seconds.
Design for use, not inspection.
### 7. Repetition over Novelty
When uncertain: repeat existing visual rules, reinforce hierarchy, reduce
variation.
Allow novelty only after: clarity is achieved, hierarchy is stable, interaction
affordances are obvious.
## Integration Behavior
When layered after `frontend-design`:
1. Convert layout decisions into visual systems: derive color ratios, apply
typography hierarchy, normalize geometry tokens.
2. Run constraint filtering before rendering UI variants.
3. Evaluate macro structure first, micro polish second.
4. Optimize for comprehension speed over stylistic uniqueness.
5. Prefer consistency, predictability, clarity, and restraint over visual
experimentation.
## Failure Modes to Avoid
- Evenly distributed color usage
- Mixing multiple unrelated typefaces
- Inconsistent border radii and spacing logic
- Hierarchy visible only at high zoom
- Designing for designers instead of users
- Novelty introduced without structural justification
## Output Expectations
UI generated with this skill should feel: intentional, cohesive, restrained,
hierarchy-driven, fast to parse, and visually consistent across components and
pages.
Bias toward clarity and repetition until interaction goals are fully satisfied.

View File

@@ -1,245 +0,0 @@
#!/bin/bash
# --- Color Constants ---
COLOR_DIR=245 # directory
COLOR_GIT_BRANCH=153 # light blue pastel
COLOR_GIT_STATUS=182 # pink pastel
COLOR_DIM=243 # dimmer text (lines, cost)
COLOR_SEP=242 # separators
# --- Utility Functions ---
# Print text in specified 256-color
colored() {
printf "\033[38;5;%sm%s\033[0m" "$1" "$2"
}
# Print separator
sep() {
colored $COLOR_SEP " · "
}
# Format token counts (e.g., 50k, 1.2M)
format_tokens() {
local tokens=$1
if [ "$tokens" -ge 1000000 ]; then
awk "BEGIN {printf \"%.1fM\", $tokens/1000000}"
elif [ "$tokens" -ge 1000 ]; then
awk "BEGIN {printf \"%.0fk\", $tokens/1000}"
else
echo "$tokens"
fi
}
# Return color code based on percentage threshold
# Args: $1 = percentage, $2 = base color (used when below warning threshold)
get_percentage_color() {
local percent=$1
local base_color=$2
# 229 = light yellow, 221 = yellow, 214 = gold, 208 = orange
if [ "$percent" -ge 98 ]; then
echo 208
elif [ "$percent" -ge 95 ]; then
echo 214
elif [ "$percent" -ge 90 ]; then
echo 221
elif [ "$percent" -ge 85 ]; then
echo 229
else
echo "$base_color"
fi
}
# --- Data Extraction ---
# Read stdin, save JSON, extract all fields into globals
parse_input() {
INPUT=$(cat)
MODEL=$(echo "$INPUT" | jq -r '.model.display_name')
CWD=$(echo "$INPUT" | jq -r '.workspace.current_dir')
PERCENT=$(echo "$INPUT" | jq -r '.context_window.used_percentage // 0' |
xargs printf "%.0f")
TOTAL_INPUT=$(echo "$INPUT" | jq -r '.context_window.total_input_tokens // 0')
TOTAL_OUTPUT=$(echo "$INPUT" | jq -r '.context_window.total_output_tokens // 0')
TOTAL_TOKENS=$((TOTAL_INPUT + TOTAL_OUTPUT))
CONTEXT_SIZE=$(echo "$INPUT" | jq -r '.context_window.context_window_size // 0')
# Calculate currently loaded tokens from percentage
CURRENT_TOKENS=$((CONTEXT_SIZE * PERCENT / 100))
# Extract cost info
COST_USD=$(echo "$INPUT" | jq -r '.cost.total_cost_usd // 0')
LINES_ADDED=$(echo "$INPUT" | jq -r '.cost.total_lines_added // 0')
LINES_REMOVED=$(echo "$INPUT" | jq -r '.cost.total_lines_removed // 0')
}
# --- Component Builders ---
# Get CWD, replace $HOME with ~
get_directory() {
if [ -n "$CWD" ]; then
DIR="$CWD"
else
DIR=$(pwd)
fi
# Replace home directory with tilde
DIR="${DIR/#$HOME/~}"
}
# Get branch, status indicators, ahead/behind
get_git_info() {
GIT_BRANCH=""
GIT_STATUS=""
GIT_AHEAD_BEHIND=""
# Skip if not in a git repo (skip optional locks to avoid blocking)
if [ ! -d "${CWD:-.}/.git" ] &&
! git -C "${CWD:-.}" rev-parse --git-dir > /dev/null 2>&1; then
return
fi
# Get branch name
GIT_BRANCH=$(git -C "${CWD:-.}" branch --show-current 2> /dev/null ||
git -C "${CWD:-.}" rev-parse --short HEAD 2> /dev/null)
[ -z "$GIT_BRANCH" ] && return
# Get status indicators
local git_dirty="" git_staged="" git_untracked=""
# Check for staged changes
if ! git -C "${CWD:-.}" diff --cached --quiet 2> /dev/null; then
git_staged="+"
fi
# Check for unstaged changes
if ! git -C "${CWD:-.}" diff --quiet 2> /dev/null; then
git_dirty="!"
fi
# Check for untracked files
if [ -n "$(git -C "${CWD:-.}" ls-files --others --exclude-standard 2> /dev/null)" ]; then
git_untracked="?"
fi
# Combine status indicators
GIT_STATUS="${git_staged}${git_dirty}${git_untracked}"
# Get ahead/behind counts
local upstream ahead behind
upstream=$(git -C "${CWD:-.}" rev-parse --abbrev-ref '@{upstream}' 2> /dev/null)
if [ -n "$upstream" ]; then
ahead=$(git -C "${CWD:-.}" rev-list --count '@{upstream}..HEAD' 2> /dev/null)
behind=$(git -C "${CWD:-.}" rev-list --count 'HEAD..@{upstream}' 2> /dev/null)
if [ "$ahead" -gt 0 ]; then
GIT_AHEAD_BEHIND="${ahead}"
fi
if [ "$behind" -gt 0 ]; then
GIT_AHEAD_BEHIND="${GIT_AHEAD_BEHIND}${behind}"
fi
fi
}
# Build braille progress bar from PERCENT
build_progress_bar() {
# Braille characters with 7 levels per cell
# ⣀ (2) -> ⣄ (3) -> ⣤ (4) -> ⣦ (5) -> ⣶ (6) -> ⣷ (7) -> ⣿ (8 dots)
local braille_chars=("⣀" "⣄" "⣤" "⣦" "⣶" "⣷" "⣿")
local bar_width=10
local levels=7
local total_gradations=$((bar_width * levels))
local current_gradation=$((PERCENT * total_gradations / 100))
PROGRESS_BAR=""
for ((i = 0; i < bar_width; i++)); do
local cell_start=$((i * levels))
local cell_fill=$((current_gradation - cell_start))
if [ $cell_fill -le 0 ]; then
# Empty cell
PROGRESS_BAR+="${braille_chars[0]}"
elif [ $cell_fill -ge $levels ]; then
# Full cell
PROGRESS_BAR+="${braille_chars[$((levels - 1))]}"
else
# Partial cell
PROGRESS_BAR+="${braille_chars[$cell_fill]}"
fi
done
}
# --- Output ---
# Print the final formatted statusline
print_statusline() {
local current_display total_display cost_display context_color
current_display=$(format_tokens "$CURRENT_TOKENS")
total_display=$(format_tokens "$TOTAL_TOKENS")
# Determine context color based on percentage (ramps to warning colors)
context_color=$(get_percentage_color "$PERCENT" $COLOR_DIM)
# Format cost as $X.XX
cost_display=$(awk "BEGIN {printf \"$%.2f\", $COST_USD}")
# Directory
colored $COLOR_DIR "$DIR"
# Git info
if [ -n "$GIT_BRANCH" ]; then
printf " "
colored $COLOR_GIT_BRANCH "$GIT_BRANCH"
# Status indicators
if [ -n "$GIT_STATUS" ]; then
colored $COLOR_GIT_STATUS "$GIT_STATUS"
fi
# Ahead/behind
if [ -n "$GIT_AHEAD_BEHIND" ]; then
printf " "
colored $COLOR_GIT_STATUS "$GIT_AHEAD_BEHIND"
fi
fi
sep
# Model (only if not default Opus 4.6)
if [ "$MODEL" != "Opus 4.6" ]; then
colored $COLOR_DIR "$MODEL"
sep
fi
# Lines added/removed
colored $COLOR_DIM "+$LINES_ADDED"
colored $COLOR_SEP "/"
colored $COLOR_DIM "-$LINES_REMOVED"
sep
# Progress bar and percentage (dynamic color based on context usage)
colored "$context_color" "$PROGRESS_BAR $PERCENT%"
sep
# Token counts (dynamic color based on context usage)
colored "$context_color" "$current_display/$total_display"
sep
# Cost
colored $COLOR_DIM "$cost_display"
}
# --- Entry Point ---
main() {
parse_input
get_directory
get_git_info
build_progress_bar
print_statusline
}
main "$@"

View File

@@ -1,265 +0,0 @@
---
source: https://github.com/RayFernando1337/llm-cursor-rules/blob/main/generate-agents.md
---
# Task: Analyze this codebase and generate a hierarchical AGENTS.md structure
## Context & Principles
You are going to help me create a **hierarchical AGENTS.md system** for this codebase. This is critical for AI coding agents to work efficiently with minimal token usage.
### Core Principles
1. **Root AGENTS.md is LIGHTWEIGHT** - Only universal guidance, links to sub-files
2. **Nearest-wins hierarchy** - Agents read the closest AGENTS.md to the file being edited
3. **JIT (Just-In-Time) indexing** - Provide paths/globs/commands, NOT full content
4. **Token efficiency** - Small, actionable guidance over encyclopedic documentation
5. **Sub-folder AGENTS.md files have MORE detail** - Specific patterns, examples, commands
## Your Process
### Phase 1: Repository Analysis
First, analyze the codebase structure and provide me with:
1. **Repository type**: Monorepo, multi-package, or simple single project?
2. **Primary technology stack**: Languages, frameworks, key tools
3. **Major directories/packages** that should have their own AGENTS.md:
- Apps (e.g., `apps/web`, `apps/api`, `apps/mobile`)
- Services (e.g., `services/auth`, `services/transcribe`)
- Packages/libs (e.g., `packages/ui`, `packages/shared`)
- Workers/jobs (e.g., `workers/queue`, `workers/cron`)
4. **Build system**: pnpm/npm/yarn workspaces? Turborepo? Lerna? Or simple?
5. **Testing setup**: Jest, Vitest, Playwright, pytest? Where are tests?
6. **Key patterns to document**:
- Code organization patterns
- Important conventions (naming, styling, commits)
- Critical files that serve as good examples
- Anti-patterns to avoid
Present this as a **structured map** before generating any AGENTS.md files.
---
### Phase 2: Generate Root AGENTS.md
Create a **lightweight root AGENTS.md** (~100-200 lines max) that includes:
#### Required Sections
**1. Project Snapshot** (3-5 lines)
- Repo type (monorepo/simple)
- Primary tech stack
- Note that sub-packages have their own AGENTS.md files
**2. Root Setup Commands** (5-10 lines)
- Install dependencies (root level)
- Build all
- Typecheck all
- Test all
**3. Universal Conventions** (5-10 lines)
- Code style (TypeScript strict? Prettier? ESLint?)
- Commit format (Conventional Commits?)
- Branch strategy
- PR requirements
**4. Security & Secrets** (3-5 lines)
- Never commit tokens
- Where secrets go (.env patterns)
- PII handling if applicable
**5. JIT Index - Directory Map** (10-20 lines)
Structure like:
```
## JIT Index (what to open, not what to paste)
### Package Structure
- Web UI: `apps/web/` → [see apps/web/AGENTS.md](apps/web/AGENTS.md)
- API: `apps/api/` → [see apps/api/AGENTS.md](apps/api/AGENTS.md)
- Auth service: `services/auth/` → [see services/auth/AGENTS.md](services/auth/AGENTS.md)
- Shared packages: `packages/**/` → [see packages/README.md for details]
### Quick Find Commands
- Search for a function: `rg -n "functionName" apps/** packages/**`
- Find a component: `rg -n "export.*ComponentName" apps/web/src`
- Find API routes: `rg -n "export const (GET|POST)" apps/api`
```
**6. Definition of Done** (3-5 lines)
- What must pass before a PR is ready
- Minimal checklist
---
### Phase 3: Generate Sub-Folder AGENTS.md Files
For EACH major package/directory identified in Phase 1, create a **detailed AGENTS.md** that includes:
#### Required Sections
**1. Package Identity** (2-3 lines)
- What this package/app/service does
- Primary tech/framework for THIS package
**2. Setup & Run** (5-10 lines)
- Install command (if different from root)
- Dev server command
- Build command
- Test command
- Lint/typecheck commands
**3. Patterns & Conventions** (10-20 lines)
**THIS IS THE MOST IMPORTANT SECTION**
- File organization rules (where things go)
- Naming conventions specific to this package
- Preferred patterns with **file examples**:
```
- ✅ DO: Use functional components like `src/components/Button.tsx`
- ❌ DON'T: Use class components like `src/legacy/OldButton.tsx`
- ✅ Forms: Copy pattern from `src/components/forms/ContactForm.tsx`
- ✅ API calls: Use `src/lib/api/client.ts` wrapper, see example in `src/hooks/useUser.ts`
```
**4. Touch Points / Key Files** (5-10 lines)
Point to the most important files to understand this package:
```
- Auth logic: `src/auth/provider.tsx`
- API client: `src/lib/api.ts`
- Types: `src/types/index.ts`
- Config: `src/config.ts`
```
**5. JIT Index Hints** (5-10 lines)
Specific search commands for this package:
```
- Find a React component: `rg -n "export function .*" src/components`
- Find a hook: `rg -n "export const use" src/hooks`
- Find route handlers: `rg -n "export async function (GET|POST)" src/app`
- Find tests: `find . -name "*.test.ts"`
```
**6. Common Gotchas** (3-5 lines, if applicable)
- "Auth requires `NEXT_PUBLIC_` prefix for client-side use"
- "Always use `@/` imports for absolute paths"
- "Database migrations must be run before tests: `pnpm db:migrate`"
**7. Pre-PR Checks** (2-3 lines)
Package-specific command to run before creating a PR:
```
pnpm --filter @repo/web typecheck && pnpm --filter @repo/web test && pnpm --filter @repo/web build
```
---
### Phase 4: Special Considerations
For each AGENTS.md file, also consider:
**A. Design System / UI Package**
If there's a design system or UI library:
```markdown
## Design System
- Components: `packages/ui/src/components/**`
- Use design tokens from `packages/ui/src/tokens.ts` (never hardcode colors)
- See component gallery: `pnpm --filter @repo/ui storybook`
- Examples:
- Buttons: Copy `packages/ui/src/components/Button/Button.tsx`
- Forms: Copy `packages/ui/src/components/Input/Input.tsx`
```
**B. Database / Data Layer**
If there's a database service:
```markdown
## Database
- ORM: Prisma / Drizzle / TypeORM
- Schema: `prisma/schema.prisma`
- Migrations: `pnpm db:migrate`
- Seed: `pnpm db:seed`
- **NEVER** run migrations in tests; use `test-db` script
- Connection: via `src/lib/db.ts` singleton
```
**C. API / Backend Service**
```markdown
## API Patterns
- REST routes: `src/routes/**/*.ts`
- Auth middleware: `src/middleware/auth.ts` (apply to protected routes)
- Validation: Use Zod schemas in `src/schemas/**`
- Error handling: All errors thrown as `ApiError` from `src/lib/errors.ts`
- Example endpoint: See `src/routes/users/get.ts` for full pattern
```
**D. Testing Package**
```markdown
## Testing
- Unit tests: `*.test.ts` colocated with source
- Integration tests: `tests/integration/**`
- E2E tests: `tests/e2e/**` (Playwright)
- Run single test: `pnpm test -- path/to/file.test.ts`
- Coverage: `pnpm test:coverage` (aim for >80%)
- Mock external APIs using `src/test/mocks/**`
```
---
## Output Format
Provide the files in this order:
1. **Analysis Summary** (from Phase 1)
2. **Root AGENTS.md** (complete, ready to copy)
3. **Each Sub-Folder AGENTS.md** (one at a time, with file path)
For each file, use this format:
```
---
File: `AGENTS.md` (root)
---
[full content here]
---
File: `apps/web/AGENTS.md`
---
[full content here]
---
File: `services/auth/AGENTS.md`
---
[full content here]
```
---
## Constraints & Quality Checks
Before generating, verify:
- [ ] Root AGENTS.md is under 200 lines
- [ ] Root links to all sub-AGENTS.md files
- [ ] Each sub-file has concrete examples (actual file paths)
- [ ] Commands are copy-paste ready (no placeholders unless unavoidable)
- [ ] No duplication between root and sub-files
- [ ] JIT hints use actual patterns from the codebase (ripgrep, find, glob)
- [ ] Every "✅ DO" has a real file example
- [ ] Every "❌ DON'T" references a real anti-pattern or legacy file
- [ ] Pre-PR checks are single copy-paste commands

View File

@@ -1 +0,0 @@
../claude/CLAUDE.md

siren (136 deletions)
View File

@@ -43,19 +43,9 @@ define_settings() {
STATIC_SYMLINKS["harper-ls/file_dictionaries"]="$(harper_config_dir)/file_dictionaries"
STATIC_SYMLINKS["harper-ls/ignored_lints"]="$(harper_config_dir)/ignored_lints"
# Conditionally add symlinks for Claude (only if CLI is installed).
if command -v claude &>/dev/null; then
STATIC_SYMLINKS["claude/CLAUDE.md"]="${HOME}/.claude/CLAUDE.md"
STATIC_SYMLINKS["claude/settings.json"]="${HOME}/.claude/settings.json"
STATIC_SYMLINKS["claude/statusline.sh"]="${HOME}/.claude/statusline.sh"
_add_command_symlinks "claude" "${HOME}/.claude"
_add_skill_symlinks "claude" "${HOME}/.claude"
fi
# Conditionally add symlinks for Cursor.
if [[ "${SETUP_EDITOR}" == "cursor" ]]; then
STATIC_SYMLINKS["cursor/mcp.json"]="${HOME}/.cursor/mcp.json"
_add_command_symlinks "cursor" "${HOME}/.cursor"
fi
}
@@ -429,123 +419,6 @@ symlink_static_config() {
done
}
# Add symlinks for all markdown files in a commands directory.
# Args: $1 = source subdir (e.g., "claude"), $2 = target dir (e.g., ~/.claude)
_add_command_symlinks() {
local source_subdir="$1"
local target_base="$2"
local cmd_file
for cmd_file in "${SCRIPT_DIR}/${source_subdir}/commands/"*.md; do
if [[ -f "${cmd_file}" ]]; then
local filename
filename="$(basename "${cmd_file}")"
STATIC_SYMLINKS["${source_subdir}/commands/${filename}"]="${target_base}/commands/${filename}"
fi
done
}
# Add symlinks for all skill directories (containing SKILL.md) in a skills
# directory.
# Args: $1 = source subdir (e.g., "claude"), $2 = target dir (e.g., ~/.claude)
_add_skill_symlinks() {
local source_subdir="$1"
local target_base="$2"
local skills_dir="${SCRIPT_DIR}/${source_subdir}/skills"
if [[ ! -d "${skills_dir}" ]]; then
return
fi
local skill_dir
for skill_dir in "${skills_dir}"/*/; do
# Skip if glob didn't match anything (returns literal pattern).
[[ -d "${skill_dir}" ]] || continue
# Only treat directories containing SKILL.md as skills.
if [[ -f "${skill_dir}/SKILL.md" ]]; then
local skill_name
skill_name="$(basename "${skill_dir}")"
STATIC_SYMLINKS["${source_subdir}/skills/${skill_name}"]="${target_base}/skills/${skill_name}"
fi
done
}
# Remove stale symlinks in a commands directory that point to source files that
# no longer exist.
# Args: $1 = source subdir (e.g., "claude"), $2 = target dir (e.g., ~/.claude)
_cleanup_stale_commands() {
local source_subdir="$1"
local target_base="$2"
local commands_dir="${target_base}/commands"
local source_dir="${SCRIPT_DIR}/${source_subdir}/commands"
if [[ ! -d "${commands_dir}" ]]; then
return
fi
local link
for link in "${commands_dir}"/*; do
# Skip if glob didn't match anything (returns literal pattern).
[[ -e "${link}" || -L "${link}" ]] || continue
# Skip if not a symlink.
if [[ ! -L "${link}" ]]; then
continue
fi
local target
target="$(readlink "${link}")"
# Check if the symlink points to our source directory.
if [[ "${target}" == "${source_dir}/"* ]]; then
# If the target file no longer exists, remove the symlink.
if [[ ! -e "${target}" ]]; then
info "Removing stale symlink: ${link}"
rm -f "${link}"
fi
fi
done
}
# Remove stale symlinks in a skills directory that point to source
# directories that no longer exist. Only touches symlinks pointing into our
# source tree; symlinks managed by other tools are left untouched.
# Args: $1 = source subdir (e.g., "claude"), $2 = target dir (e.g., ~/.claude)
_cleanup_stale_skills() {
local source_subdir="$1"
local target_base="$2"
local skills_dir="${target_base}/skills"
local source_dir="${SCRIPT_DIR}/${source_subdir}/skills"
if [[ ! -d "${skills_dir}" ]]; then
return
fi
local link
for link in "${skills_dir}"/*; do
# Skip if glob didn't match anything (returns literal pattern).
[[ -e "${link}" || -L "${link}" ]] || continue
# Skip if not a symlink.
if [[ ! -L "${link}" ]]; then
continue
fi
local target
target="$(readlink "${link}")"
# Check if the symlink points to our source directory.
if [[ "${target}" == "${source_dir}/"* ]]; then
# If the target directory no longer exists, remove the symlink.
if [[ ! -e "${target}" ]]; then
info "Removing stale symlink: ${link}"
rm -f "${link}"
fi
fi
done
}
# Find the editor CLI command.
#
# Returns: Editor command path via `STDOUT`.
@@ -1342,15 +1215,6 @@ do_config() {
symlink_editor_config
symlink_static_config
# Clean up stale command and skill symlinks.
if command -v claude &>/dev/null; then
_cleanup_stale_commands "claude" "${HOME}/.claude"
_cleanup_stale_skills "claude" "${HOME}/.claude"
fi
if [[ "${SETUP_EDITOR}" == "cursor" ]]; then
_cleanup_stale_commands "cursor" "${HOME}/.cursor"
fi
info "Symlink setup complete!"
}