
Commit 7611e30

Update generate-agents-md.md
1 parent 14a935a commit 7611e30

File tree

1 file changed: +1 -1 lines changed


website/prompts/onboarding/generate-agents-md.md

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ import GenerateAgentsMD from '@site/shared-prompts/_generate-agents-md.mdx';

### Overview

-**Why multi-source grounding works:** [ChunkHound](/docs/methodology/lesson-5-grounding#code-grounding-choosing-tools-by-scale) provides codebase-specific context (patterns, conventions, architecture) while [ArguSeek](/docs/methodology/lesson-5-grounding#arguseek-isolated-context--state) provides current ecosystem knowledge (framework best practices, security guidelines)—this implements [multi-source grounding](/docs/methodology/lesson-5-grounding#production-pattern-multi-source-grounding) to combine empirical project reality with ecosystem best practices. The [structured output format](/docs/methodology/lesson-4-prompting-101#applying-structure-to-prompts) with explicit sections ensures comprehensive coverage by forcing systematic enumeration instead of free-form narrative. The ≤500 line [conciseness constraint](/docs/methodology/lesson-4-prompting-101#constraints-as-guardrails) forces prioritization—without it, agents generate verbose documentation that gets ignored during actual use. The non-duplication directive keeps focus on AI-specific operational details agents can't easily infer from code alone (environment setup, non-interactive command modifications, deployment gotchas). This implements the [Research phase](/docs/methodology/lesson-3-high-level-methodology#phase-1-research-grounding) of the [four-phase workflow](/docs/methodology/lesson-3-high-level-methodology#the-four-phase-workflow), letting agents build their own foundation before tackling implementation tasks.
+**Why multi-source grounding works:** [ChunkHound](/docs/methodology/lesson-5-grounding#code-grounding-choosing-tools-by-scale) provides codebase-specific context (patterns, conventions, architecture) while [ArguSeek](/docs/methodology/lesson-5-grounding#arguseek-isolated-context--state) provides current ecosystem knowledge (framework best practices, security guidelines)—this implements [multi-source grounding](/docs/methodology/lesson-5-grounding#production-pattern-multi-source-grounding) to combine empirical project reality with ecosystem best practices. The [structured output format](/docs/methodology/lesson-4-prompting-101#applying-structure-to-prompts) with explicit sections ensures comprehensive coverage by forcing systematic enumeration instead of free-form narrative. The ≤200 line [conciseness constraint](/docs/methodology/lesson-4-prompting-101#constraints-as-guardrails) forces prioritization—without it, agents generate verbose documentation that gets ignored during actual use. The non-duplication directive keeps focus on AI-specific operational details agents can't easily infer from code alone (environment setup, non-interactive command modifications, deployment gotchas). This implements the [Research phase](/docs/methodology/lesson-3-high-level-methodology#phase-1-research-grounding) of the [four-phase workflow](/docs/methodology/lesson-3-high-level-methodology#the-four-phase-workflow), letting agents build their own foundation before tackling implementation tasks.

**When to use this pattern:** New project onboarding (establish baseline context before first implementation task), documenting legacy projects (capture tribal knowledge systematically), refreshing context after architectural changes (re-run after migrations, framework upgrades, major refactors). Run early in project adoption to establish baseline [context files](/docs/practical-techniques/lesson-6-project-onboarding#the-context-file-ecosystem), re-run after major changes, then manually add tribal knowledge (production incidents, team conventions, non-obvious gotchas) that AI can't discover from code. Without initial context grounding, agents hallucinate conventions based on training patterns instead of reading your actual codebase—this manifests as style violations, incorrect assumptions about architecture, and ignored project-specific constraints.
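To make the constraints discussed in the changed paragraph concrete, a prompt enforcing them might look roughly like the sketch below. This is an illustrative mock-up only; the section names, wording, and ordering are assumptions, not quoted from generate-agents-md.md.

```markdown
<!-- Hypothetical sketch, not the actual prompt shipped in this repository -->
Generate an AGENTS.md for this repository with these sections:

1. Project overview (one short paragraph)
2. Environment setup (exact commands, adapted to run non-interactively)
3. Build, test, and lint commands
4. Architecture and conventions observed in the codebase
5. Deployment gotchas and known pitfalls

Constraints:
- Keep the file to 200 lines or fewer; prioritize operational detail over narrative.
- Do not duplicate information an agent can read directly from the code (file layout, public APIs).
- Ground codebase claims in ChunkHound searches and ecosystem claims in ArguSeek research rather than prior assumptions.
```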
