Semantic anchors are well-defined terms, methodologies, or frameworks that serve as reference points in communication with Large Language Models (LLMs). They act as shared vocabulary that triggers specific, contextually rich knowledge domains within the LLM’s training data.
When working with LLMs, using semantic anchors provides several advantages:
- Precision: Anchors reduce ambiguity by referencing established bodies of knowledge
- Efficiency: A single anchor term can activate complex conceptual frameworks without lengthy explanations
- Consistency: Well-known anchors ensure the LLM interprets concepts as intended by the broader community
- Context Compression: Anchors allow you to convey rich context with minimal tokens
While semantic anchors are knowledge-based and most large frontier models share similar training data, there may be variations in how different LLM providers interpret specific anchors:
- Knowledge Base Variations: Different models may have varying levels of familiarity with specific methodologies, especially for niche or emerging practices
- Model Training Cutoff: Newer methodologies may not be recognized by models with earlier training cutoff dates
- Cultural and Regional Context: Some anchors may be better recognized in models trained with specific regional or linguistic focuses
- Testing Across Models: When precision is critical, consider testing semantic anchors with your specific LLM provider using the testing approach described later in this document
For most established semantic anchors in software development, the major LLMs (such as Claude, GPT-4, and Gemini) demonstrate consistent understanding. However, if you observe significant variations, provide additional context or choose more widely recognized anchors.
- Be Specific: Use the full, precise name of methodologies (e.g., "TDD, London School" rather than just "mocking")
- Combine Anchors: Reference multiple anchors to triangulate your meaning
- Verify Understanding: Ask the LLM to explain its interpretation when precision is critical
- Update Over Time: As new methodologies emerge, incorporate them into your anchor vocabulary
A term qualifies as a semantic anchor when it activates a rich, well-defined conceptual framework in the LLM’s training data. The key differentiator is definition depth – not whether the anchor is about domain knowledge or interaction patterns.
A good semantic anchor is:
- Precise: It references a specific, established body of knowledge or methodology with clear boundaries
- Rich: It activates multiple interconnected concepts, not just a single instruction or directive
- Consistent: Different users invoking it will get similar conceptual activation across contexts
- Attributable: It can be traced to key proponents, publications, established practices, or documented standards
Semantic anchors exist on a spectrum from domain-heavy to interaction-heavy:
```
Domain-heavy ◄──────────────────────► Interaction-heavy

arc42          Pyramid Principle        Socratic Method
SOLID          Rubber Duck Debugging    BLUF
DDD            Five Whys                Chain of Thought
```

The distinction isn’t a strict category but rather a matter of emphasis. Most anchors have both dimensions:
- Pyramid Principle: Domain knowledge about structured communication + behavior change in how output is structured
- TDD, London School: Domain knowledge about testing + behavior change in how code is written
- Socratic Method: Interaction pattern for dialogue + domain knowledge from philosophical tradition
The quality bar is the same across this spectrum – all anchors must be well-defined, rich, and activatable.
Well-known terms that are not semantic anchors because they lack definition depth:
- "TLDR": Underspecified, no defined structure or methodology, vague instruction to "be short"
- "ELI5": Vague target level, no pedagogical framework, no consistent interpretation
- "Keep it short": Pure instruction, no conceptual depth or established methodology
- "Make it simple": Ambiguous directive without reference to specific simplification frameworks
These terms may be useful in conversation, but they don’t activate rich conceptual frameworks the way true semantic anchors do.
Below is a curated list of semantic anchors useful for software development, architecture, and requirements engineering. Each anchor includes related concepts and practices.
The catalog is organized into the following categories:
Details
Also known as: Mockist TDD, Outside-In TDD
Core Concepts:
- Mock-heavy testing: Heavy use of test doubles (mocks, stubs) to isolate units
- Outside-in development: Start from the outermost layers (UI, API) and work inward
- Interaction-based testing: Focus on verifying interactions between objects
- Behavior verification: Test how objects collaborate rather than state
- Interface discovery: Use tests to discover and define interfaces
- Walking skeleton: Build end-to-end functionality early, then fill in details

Key Proponents: Steve Freeman, Nat Pryce ("Growing Object-Oriented Software, Guided by Tests")

When to Use:
- Complex systems with many collaborating objects
- When designing APIs and interfaces
- Distributed systems where integration is costly
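A minimal sketch of the London style using Python's `unittest.mock`: the test verifies the *interaction* with a collaborator, not resulting state. The `OrderService` and `PaymentGateway` names are hypothetical, chosen only for illustration:

```python
from unittest.mock import Mock

class OrderService:
    """Hypothetical service collaborating with a payment gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        # The test below verifies this call happens, not any stored state.
        self.gateway.charge(amount)

def test_placing_an_order_charges_the_gateway():
    gateway = Mock()                               # test double for the collaborator
    OrderService(gateway).place_order(42)
    gateway.charge.assert_called_once_with(42)     # interaction-based verification

test_placing_an_order_charges_the_gateway()
```

Note how the mock lets the test drive the *interface* of the (not yet implemented) gateway — the interface-discovery aspect of this school.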
Details
Also known as: Classicist TDD, Detroit School
Core Concepts:
- State-based testing: Verify the state of objects after operations
- Minimal mocking: Use real objects whenever possible; mock only external dependencies
- Inside-out development: Start with core domain logic and build outward
- Simplicity focus: Emergent design through refactoring
- Red-Green-Refactor: The fundamental TDD cycle
- YAGNI: You Aren’t Gonna Need It - avoid premature abstraction

Key Proponents: Kent Beck, Martin Fowler

When to Use:
- Domain-driven design projects
- When business logic is central
- Smaller, cohesive modules
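For contrast with the London style, a classic-school test uses real objects and verifies *state* after the operation. The `ShoppingCart` example is invented for illustration:

```python
class ShoppingCart:
    """Hypothetical domain object; tests check state, not interactions."""
    def __init__(self):
        self.items = []

    def add(self, price):
        self.items.append(price)

    def total(self):
        return sum(self.items)

def test_total_reflects_added_items():
    cart = ShoppingCart()        # real object, no test doubles
    cart.add(3)
    cart.add(4)
    assert cart.total() == 7     # state verification after the operations

test_total_reflects_added_items()
```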
Details
Also known as: Generative Testing, QuickCheck-style Testing
Core Concepts:
- Properties: Invariants that should always hold
- Generators: Automatic test data creation
- Shrinking: Minimizing failing test cases to simplest form
- Universal quantification: Testing "for all inputs"
- Specification testing: Testing high-level properties, not examples
- Edge case discovery: Finds cases you didn’t think of
- Complementary to example-based: Works alongside traditional unit tests
- Stateful testing: Testing sequences of operations
- Model-based testing: Compare implementation against simpler model

Key Tools: QuickCheck (Haskell), Hypothesis (Python), fast-check (JavaScript), FsCheck (.NET)

When to Use:
- Testing pure functions and algorithms
- Validating business rules and invariants
- Testing parsers and serializers
- Finding edge cases in complex logic
- Complementing example-based TDD
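A minimal sketch with Hypothesis (one of the tools listed above; assumes the `hypothesis` package is installed). Each property states an invariant "for all inputs", and the library generates and shrinks counterexamples automatically:

```python
from hypothesis import given
from hypothesis import strategies as st

# Property: reversing a list twice yields the original list.
@given(st.lists(st.integers()))
def test_reverse_roundtrip(xs):
    assert list(reversed(list(reversed(xs)))) == xs

# Property: sorting is idempotent.
@given(st.lists(st.integers()))
def test_sort_idempotent(xs):
    assert sorted(sorted(xs)) == sorted(xs)

# Calling a @given-decorated function runs it against many generated inputs.
test_reverse_roundtrip()
test_sort_idempotent()
```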
Details
Full Name: Testing Pyramid according to Mike Cohn
Core Concepts:
- Three layers:
  - Unit tests (base): Many fast, isolated tests
  - Integration tests (middle): Moderate number, test component interaction
  - End-to-end tests (top): Few, test complete user journeys
- Proportional distribution: More unit tests, fewer E2E tests
- Cost and speed: Unit tests cheap and fast, E2E tests expensive and slow
- Feedback loops: Faster feedback from lower levels
- Anti-pattern "ice cream cone": Too many E2E tests, too few unit tests
- Test at the right level: Don’t test through UI what can be tested in isolation
- Confidence gradient: Balance confidence with execution speed
Key Proponent: Mike Cohn ("Succeeding with Agile", 2009)
When to Use:
- Planning test strategy for projects
- Balancing test types in CI/CD pipelines
- Evaluating existing test suites
- Guiding team testing practices
Details
Also known as: Mutation Analysis, Fault-Based Testing
Core Concepts:
- Test quality assessment: Evaluate how effective tests are at detecting bugs
- Code mutations: Deliberately introduce small, syntactic changes (mutants) into source code
- Mutation operators: Rules for creating mutants (e.g., change `>` to `>=`, flip a boolean, remove a statement)
- Killed mutants: Mutations caught by failing tests (good)
- Survived mutants: Mutations not detected by tests (indicates test weakness)
- Equivalent mutants: Mutations that don’t change program behavior (false positives)
- Mutation score: Percentage of killed mutants: (killed / (total − equivalent)) × 100%
- First-order mutations: Single atomic change per mutant
- Higher-order mutations: Multiple changes combined
- Weak mutation: Test only needs to create different internal state
- Strong mutation: Test must produce different final output
- Test adequacy criterion: "Are tests good enough?" not just "Is coverage high enough?"
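To make this concrete: if a tool generates, say, 200 mutants of which 8 turn out to be equivalent and 168 are killed, the mutation score is 168 / (200 − 8) ≈ 87.5%. Below is a hand-rolled illustration of a single mutant (tools like PITest or Mutmut generate these automatically); the `is_adult` example is invented:

```python
def is_adult(age):
    return age >= 18          # original: boundary value included

def is_adult_mutant(age):
    return age > 18           # mutant: operator changed from >= to >

def test_boundary():
    # This assertion "kills" the mutant: it passes against the original
    # but would fail against the mutant, because 18 is the boundary value.
    assert is_adult(18) is True

test_boundary()

# The mutant behaves differently exactly at the boundary:
assert is_adult_mutant(18) is False
```

A test suite that only checked `is_adult(30)` and `is_adult(5)` would reach 100% line coverage yet let this mutant survive — the gap mutation testing is designed to expose.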
Key Proponents: Richard Lipton (theoretical foundation, 1971), Richard DeMillo, Timothy Budd
Key Tools:
- PITest (Java)
- Stryker (JavaScript/TypeScript, C#, Scala)
- Mutmut (Python)
- Infection (PHP)
- Mull (C/C++)
When to Use:
- Evaluating test suite quality beyond coverage metrics
- Identifying gaps in test assertions
- Critical systems requiring high test confidence
- Complementing code coverage as a quality metric
- Refactoring legacy code with existing tests
- Teaching effective testing practices
- Continuous improvement of test effectiveness
Practical Challenges:
- Computational cost: N mutations × M tests = expensive
- Equivalent mutant problem: Hard to automatically detect functionally identical mutants
- Time investment: Can be slow on large codebases
- Mitigation strategies: Selective mutation, mutation sampling, incremental analysis
Relationship to Other Practices:
- Code coverage: Mutation testing reveals that high coverage ≠ good tests
- TDD: Strong TDD often produces high mutation scores naturally
- Property-based testing: Orthogonal but complementary approaches
- Fault injection: Similar concept applied to production systems
Details
Full Name: arc42 Architecture Documentation Template
Core Concepts:
- 12 standardized sections: From introduction to glossary
  - Section 1: Introduction and Goals
  - Section 2: Constraints
  - Section 3: Context and Scope
  - Section 4: Solution Strategy
  - Section 5: Building Block View
  - Section 6: Runtime View
  - Section 7: Deployment View
  - Section 8: Crosscutting Concepts
  - Section 9: Architecture Decisions
  - Section 10: Quality Requirements
  - Section 11: Risks and Technical Debt
  - Section 12: Glossary
- Pragmatic documentation: Document only what’s necessary
- Multiple formats: AsciiDoc, Markdown, Confluence, etc.
Key Proponents: Gernot Starke, Peter Hruschka
When to Use:
- Medium to large software projects
- When stakeholder communication is critical
- Long-lived systems requiring maintainability
Details
Full Name: Architecture Decision Records according to Michael Nygard
Core Concepts:
- Lightweight documentation: Short, focused records
- Standard structure:
  - Title
  - Status (proposed, accepted, deprecated, superseded)
  - Context (forces at play)
  - Decision (what was chosen)
  - Consequences (both positive and negative)
- Immutability: ADRs are never deleted, only superseded
- Version control: ADRs stored with code
- Decision archaeology: Understanding why past decisions were made
- Evolutionary architecture: Supporting architecture that changes over time
Key Proponent: Michael Nygard
When to Use:
- All software projects (low overhead, high value)
- Distributed teams needing shared understanding
- When onboarding new team members
- Complex systems with evolving architecture
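A minimal ADR in Nygard's structure might look like this; the decision content (PostgreSQL, ADR number) is invented for illustration:

```markdown
# ADR 7: Use PostgreSQL as the primary data store

## Status
Accepted

## Context
We need relational integrity and mature tooling; the team already has
operational experience with PostgreSQL.

## Decision
We will use PostgreSQL for all transactional data.

## Consequences
Positive: strong consistency, well-understood operations.
Negative: horizontal scaling requires extra effort (e.g., read replicas).
```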
Details
Full Name: Markdown Any Decision Records
Core Concepts:
- Structured template: Well-defined format with specific sections
- Standard fields:
  - Title (short noun phrase)
  - Status (proposed, accepted, rejected, deprecated, superseded)
  - Context and Problem Statement
  - Decision Drivers (forces influencing the decision)
  - Considered Options (alternatives evaluated)
  - Decision Outcome (chosen option with justification)
  - Pros and Cons of the Options (trade-off analysis)
  - Links (related decisions, references)
- Markdown format: Uses standard Markdown for wide compatibility
- Clear structure: More detailed than basic ADRs, includes explicit alternatives
- Trade-off documentation: Explicitly captures pros/cons of each option
- Version control: Stored with code, immutable like other ADRs
- Lightweight yet comprehensive: Balances completeness with maintainability
Key Proponents: Oliver Kopp, Olaf Zimmermann (and MADR community)
Reference: https://adr.github.io/madr/
When to Use:
- When you need more structure than basic ADRs
- Projects requiring explicit documentation of alternatives
- Teams that need to justify decisions with detailed trade-offs
- Organizations using Markdown-based documentation workflows
- When LLM assistance is needed to generate consistent decision records
- Complementing arc42 Section 9 (Architecture Decisions)
Details
Full Name: C4 Model for Software Architecture Diagrams
Core Concepts:
- Four levels of abstraction:
  - Level 1 - Context: System in its environment (users, external systems)
  - Level 2 - Container: Applications and data stores that make up the system
  - Level 3 - Component: Components within containers
  - Level 4 - Code: Class diagrams, entity relationships (optional)
- Zoom in/out: Progressive disclosure of detail
- Simple notation: Boxes and arrows, minimal notation overhead
- Audience-appropriate: Different diagrams for different stakeholders
- Supplementary diagrams: Deployment, dynamic views, etc.
Key Proponent: Simon Brown
When to Use:
- Communicating architecture to diverse stakeholders
- Onboarding new team members
- Architecture documentation and review
- Replacing or supplementing UML
Details
Also known as: Ports and Adapters, Onion Architecture (variant)
Core Concepts:
- Hexagonal structure: Core domain at the center, isolated from external concerns
- Ports: Interfaces defining how the application communicates
- Adapters: Implementations that connect to external systems
- Dependency inversion: Dependencies point inward toward the domain
- Technology independence: Core logic doesn’t depend on frameworks or infrastructure
- Primary/Driving adapters: User interfaces, APIs (inbound)
- Secondary/Driven adapters: Databases, message queues (outbound)
- Testability: Easy to test core logic in isolation
- Symmetry: All external interactions are treated uniformly
Key Proponent: Alistair Cockburn (2005)
When to Use:
- Applications requiring high testability
- Systems that need to support multiple interfaces (web, CLI, API)
- When you want to defer infrastructure decisions
- Microservices with clear domain boundaries
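A minimal port/adapter sketch in Python (all class names are hypothetical): the core `OrderService` depends only on the `OrderRepository` port, so it can be exercised with an in-memory adapter and no real database:

```python
from typing import Protocol

# Port: an interface owned by the core domain (outbound/driven side).
class OrderRepository(Protocol):
    def save(self, order: dict) -> None: ...

# Core domain logic depends only on the port, never on infrastructure.
class OrderService:
    def __init__(self, repo: OrderRepository):
        self.repo = repo

    def place(self, order: dict) -> None:
        self.repo.save(order)

# Adapter: an implementation living at the edge (here, in-memory for tests;
# a production adapter would wrap a database client instead).
class InMemoryOrderRepository:
    def __init__(self):
        self.saved = []

    def save(self, order: dict) -> None:
        self.saved.append(order)

repo = InMemoryOrderRepository()
OrderService(repo).place({"id": 1})
assert repo.saved == [{"id": 1}]   # core logic tested without any real database
```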
Details
Full Name: Clean Architecture according to Robert C. Martin
Core Concepts:
- The Dependency Rule: Dependencies only point inward
- Concentric circles: Entities → Use Cases → Interface Adapters → Frameworks & Drivers
- Independent of frameworks: Architecture doesn’t depend on libraries
- Testable: Business rules testable without UI, database, or external elements
- Independent of UI: UI can change without changing business rules
- Independent of database: Business rules not bound to database
- Independent of external agencies: Business rules don’t know about outside world
- Screaming Architecture: Architecture reveals the intent of the system
- SOLID principles: Foundation of the architecture
Key Proponent: Robert C. Martin ("Uncle Bob")
When to Use:
- Enterprise applications with complex business logic
- Systems requiring long-term maintainability
- When team size and turnover are high
- Projects where business rules must be protected from technology changes
Details
Full Name: SOLID Object-Oriented Design Principles
Core Concepts:
- Single Responsibility Principle (SRP): Each class should have one responsibility
- Open/Closed Principle (OCP): Entities should be open for extension, closed for modification
- Liskov Substitution Principle (LSP): Subtypes must be substitutable for their base types
- Interface Segregation Principle (ISP): Clients should not be forced to depend on interfaces they do not use
- Dependency Inversion Principle (DIP): Depend on abstractions, not concrete implementations
Key Proponent: Robert C. Martin ("Uncle Bob")
When to Use:
- Designing maintainable and scalable object-oriented systems
- Refactoring legacy code to improve structure
- Building systems where flexibility and testability are important
- Teaching or enforcing good software design practices
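A small sketch of OCP and DIP together (the discount classes are invented for illustration): new discount rules are added by writing a new subclass, never by editing `checkout`, which depends only on the abstraction:

```python
from abc import ABC, abstractmethod

# Abstraction both principles hinge on.
class Discount(ABC):
    @abstractmethod
    def apply(self, price: float) -> float: ...

class NoDiscount(Discount):
    def apply(self, price: float) -> float:
        return price

class PercentageDiscount(Discount):
    def __init__(self, percent: float):
        self.percent = percent

    def apply(self, price: float) -> float:
        return price * (1 - self.percent / 100)

def checkout(price: float, discount: Discount) -> float:
    # Depends on the abstraction (DIP); extending behavior needs no
    # modification here (OCP) -- just another Discount subclass.
    return discount.apply(price)

assert checkout(100.0, NoDiscount()) == 100.0
assert checkout(100.0, PercentageDiscount(20)) == 80.0
```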
Details
Full Name: Don’t Repeat Yourself Principle
Core Concepts:
- Single representation: Every piece of knowledge should have a single, unambiguous representation
- Avoid duplication: Eliminate duplicate code, logic, and knowledge across the system
- Abstraction: Extract common patterns into reusable components
- Maintenance efficiency: Changes require modification in only one place
- Knowledge duplication vs. code duplication: Focus on avoiding duplicate knowledge, not just duplicate code
- Normalized data: Apply DRY to data structures and schemas
- Configuration management: Centralize configuration and avoid scattered settings

Key Proponents: Andy Hunt and Dave Thomas ("The Pragmatic Programmer", 1999)
When to Use:
- Refactoring codebases with repeated logic or patterns
- Designing APIs and libraries to minimize client code duplication
- Creating maintainable systems where changes are frequent
- Establishing coding standards and best practices
Related Concepts: SPOT, SSOT, WET (Write Everything Twice - deliberate exception to DRY)
Details
Full Name: Single Point of Truth
Core Concepts:
- Implementation pattern: Focuses on where and how data/logic is stored and accessed
- Centralized location: Each piece of information resides in exactly one place
- Reference relationships: Other locations reference the single point rather than duplicate it
- Data consistency: Eliminates synchronization issues and conflicting data
- Update propagation: Changes at the single point automatically affect all references
- Clear ownership: Explicit responsibility for maintaining each piece of truth
- Code-level practice: Applied at the code and system design level
When to Use:
- Implementing functions and utilities to avoid code duplication
- Database schema design to eliminate redundant data
- Configuration management across distributed systems
- State management in applications
- API design where data flows from a single endpoint
Difference from SSOT: SPOT emphasizes the implementation detail of where data lives, while SSOT emphasizes the authoritative, trusted nature of that data source
Related Concepts: DRY, SSOT, Normalized databases, Master data management
Details
Full Name: Single Source of Truth
Core Concepts:
- Conceptual principle: Focuses on establishing trust and authority for data
- Authoritative source: One canonical, trusted location for each piece of data
- Data integrity: All consumers reference the same trusted source
- Version control: Single source ensures consistent versioning
- Derived data: Other representations are derived from the single source
- Trust and reliability: The source is the definitive version when conflicts arise
- System of record: The primary data store for critical business information
- Organizational practice: Applied at the architecture and business process level
Key Application Areas:
- Version control systems (Git as SSOT for code)
- Database design and data warehousing
- Documentation and knowledge management
- Configuration management
- Master data management (MDM)
When to Use:
- Designing data architecture for enterprise systems
- Establishing documentation standards and knowledge bases
- Building data pipelines and ETL processes
- Implementing microservices with clear data ownership
- Creating audit trails and ensuring compliance
- Resolving conflicts between multiple data sources
Difference from SPOT: SSOT emphasizes the authoritative, trusted nature of a data source and is used at the architecture/organizational level, while SPOT focuses on the implementation pattern
Related Concepts: DRY, SPOT, Event sourcing, Data lakes, Master data management
Details
Full Name: Domain-Driven Design according to Eric Evans
Core Concepts:
- Ubiquitous Language: Shared vocabulary between developers and domain experts
- Bounded Context: Explicit boundaries where a model is defined and applicable
- Aggregates: Cluster of domain objects treated as a single unit
- Entities: Objects defined by identity, not attributes
- Value Objects: Immutable objects defined by their attributes
- Repositories: Abstraction for object persistence and retrieval
- Domain Events: Significant occurrences in the domain
- Strategic Design: Context mapping, anti-corruption layers
- Tactical Design: Building blocks (entities, value objects, services)
- Model-Driven Design: Code that expresses the domain model
Key Proponent: Eric Evans ("Domain-Driven Design: Tackling Complexity in the Heart of Software", 2003)
When to Use:
- Complex business domains with intricate rules
- Long-lived systems requiring deep domain understanding
- When business and technical teams need close collaboration
- Systems where the domain logic is the core value
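The entity/value-object distinction above can be sketched in a few lines of Python (the `Money` and `Customer` classes are invented examples): a value object is immutable and compared by attributes, while an entity is compared by identity:

```python
from dataclasses import dataclass, field
from uuid import UUID, uuid4

# Value Object: immutable, defined entirely by its attributes.
@dataclass(frozen=True)
class Money:
    amount: int       # smallest currency unit, e.g. cents
    currency: str

# Entity: defined by its identity, not by its attributes.
@dataclass
class Customer:
    id: UUID = field(default_factory=uuid4)
    name: str = ""

assert Money(500, "EUR") == Money(500, "EUR")    # equal by attributes
a, b = Customer(name="Ada"), Customer(name="Ada")
assert a != b                                    # same attributes, distinct identities
```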
Details
Full Name: Problem Space in Nonviolent Communication
Core Concepts:
- Observations: Concrete, objective facts without evaluation
- Feelings: Emotions arising from observations
- Needs: Universal human needs underlying feelings
- Requests: Specific, actionable requests (not demands)
- Empathic connection: Understanding before problem-solving
- Separating observation from interpretation: Avoiding judgment
- Needs-based conflict resolution: Finding solutions that meet everyone’s needs
Key Proponent: Marshall Rosenberg
Application in Software Development:
- Requirements elicitation that uncovers real user needs
- Stakeholder communication and conflict resolution
- User story formulation focused on needs
- Retrospectives and team communication
Details
Full Name: Easy Approach to Requirements Syntax
Core Concepts:
- Ubiquitous requirements: `The <system> shall <requirement>`
- Event-driven requirements: `WHEN <trigger>, the <system> shall <requirement>`
- Unwanted behavior: `IF <condition>, THEN the <system> shall <requirement>`
- State-driven requirements: `WHILE <state>, the <system> shall <requirement>`
- Optional features: `WHERE <feature is included>, the <system> shall <requirement>`
- Structured syntax: Consistent templates for clarity
- Testability: Requirements written to be verifiable
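Concrete instances of the templates above (the pump controller and its numbers are invented for illustration):

```
Ubiquitous:    The control unit shall log every state change.
Event-driven:  WHEN the start button is pressed, the pump shall start within 2 seconds.
Unwanted:      IF the water level exceeds the maximum, THEN the pump shall stop.
State-driven:  WHILE in maintenance mode, the pump shall ignore remote commands.
Optional:      WHERE a backup pump is included, the controller shall monitor its status.
```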
Key Proponent: Alistair Mavin (Rolls-Royce)
When to Use:
- Safety-critical systems
- Regulated industries (aerospace, automotive, medical)
- When requirements traceability is essential
- Large, distributed teams
Details
Full Name: User Story Mapping according to Jeff Patton
Core Concepts:
- Narrative flow: Horizontal arrangement of user activities
- User activities: High-level tasks users perform
- User tasks: Steps within activities
- Walking skeleton: Minimal end-to-end functionality first
- Release planning: Horizontal slices for releases
- Prioritization by value: Vertical ordering by importance
- Shared understanding: Collaborative mapping builds team alignment
- Big picture view: See the whole journey, not just backlog items
- Opportunity for conversation: Stories as placeholders for discussion
Key Proponent: Jeff Patton ("User Story Mapping", 2014)
When to Use:
- Planning new products or major features
- When backlog feels overwhelming or fragmented
- Release planning for incremental delivery
- Onboarding team members to product vision
Details
Full Name: Impact Mapping according to Gojko Adzic
Core Concepts:
- Four levels: Goal → Actors → Impacts → Deliverables
  - Goal: Business objective (Why?)
  - Actors: Who can produce or prevent the desired impact? (Who?)
  - Impacts: How can actors' behavior change? (How?)
  - Deliverables: What can we build? (What?)
- Visual mapping: Mind-map style collaborative diagram
- Assumption testing: Make assumptions explicit
- Scope management: Prevent scope creep by linking to goals
- Roadmap alternative: Goal-oriented rather than feature-oriented
Key Proponent: Gojko Adzic ("Impact Mapping", 2012)
When to Use:
- Strategic planning for products or projects
- When stakeholders disagree on priorities
- Aligning delivery with business outcomes
- Avoiding building features that don’t serve business goals
Details
Full Name: Jobs To Be Done Framework (Christensen interpretation)
Core Concepts:
- Job definition: Progress people want to make in a particular context
- Functional job: Practical task to accomplish
- Emotional job: How people want to feel
- Social job: How people want to be perceived
- Hire and fire: Customers "hire" products to do a job, "fire" them when inadequate
- Context matters: Jobs exist in specific circumstances
- Competition redefined: Anything solving the same job is competition
- Innovation opportunities: Unmet jobs or poorly served jobs
- Job stories: Alternative to user stories focusing on context and motivation
Key Proponents: Clayton Christensen, Alan Klement, Bob Moesta
When to Use:
- Product discovery and innovation
- Understanding why customers choose solutions
- Identifying true competition
- Writing more meaningful user stories
- Market segmentation based on jobs, not demographics
Details
Full Name: Docs-as-Code Approach according to Ralf D. Müller
Core Concepts:
- Plain text formats: AsciiDoc, Markdown
- Version control: Documentation in Git alongside code
- Automated toolchains: Build pipelines for documentation
- Single source of truth: Generate multiple output formats from one source
- Diagrams as code: PlantUML, Mermaid, Graphviz, Kroki
- Continuous documentation: Updated with every commit
- Developer-friendly: Use the same tools and workflows as for code
- Review process: Pull requests for documentation changes
- Modular documentation: Includes and composition
Key Proponent: Ralf D. Müller (docToolchain creator)
Technical Stack:
- AsciiDoc/Asciidoctor
- docToolchain
- Gradle-based automation
- Kroki for diagram rendering
- arc42 template integration
When to Use:
- Technical documentation for software projects
- When documentation needs to stay synchronized with code
- Distributed teams collaborating on documentation
- Projects requiring multiple output formats (HTML, PDF, etc.)
Details
Full Name: Diátaxis Documentation Framework according to Daniele Procida
Core Concepts:
- Four documentation types:
  - Tutorials: Learning-oriented, lessons for beginners
  - How-to guides: Task-oriented, directions for specific goals
  - Reference: Information-oriented, technical descriptions
  - Explanation: Understanding-oriented, conceptual discussions
- Two dimensions:
  - Practical vs. Theoretical
  - Acquisition (learning) vs. Application (working)
- Separation of concerns: Each type serves a distinct purpose
- User needs: Different users need different documentation at different times
- Quality criteria: Each type has specific quality indicators
- Systematic approach: Framework for organizing any documentation
Key Proponent: Daniele Procida
When to Use:
- Organizing technical documentation
- Improving existing documentation
- Planning documentation structure
- Evaluating documentation quality
- Complementing Docs-as-Code approaches
Details
Full Name: The Minto Pyramid Principle according to Barbara Minto
Core Concepts:
- Governing Thought: Single key message at the top of the pyramid
- SCQ Framework: Situation → Complication → Question → Answer structure for setting context
- MECE Principle: Mutually Exclusive, Collectively Exhaustive grouping of ideas
- Vertical Logic: Each level answers "Why?" of the level above it
- Horizontal Logic: Arguments at the same level grouped deductively or inductively
- Top-Down Delivery: Present conclusion first, then supporting arguments
- Pyramid Structure: One central idea supported by groups of three supporting ideas
- BLUF: Bottom Line Up Front - lead with the conclusion
- Deductive vs. Inductive Reasoning: Choose appropriate logic for horizontal grouping
Key Proponent: Barbara Minto (McKinsey, "The Minto Pyramid Principle", 1987)
When to Use:
- Executive presentations and briefings
- Written reports and proposals
- Complex arguments requiring clear structure
- Stakeholder communication where time is limited
- Business cases and recommendations
- Consulting deliverables
- Any situation requiring persuasive, structured communication
Details
Full Name: MECE (Mutually Exclusive, Collectively Exhaustive)
Core Concepts:
- Mutually Exclusive: Categories have no overlap - each item belongs to exactly one category
- Collectively Exhaustive: Categories cover all possibilities - nothing is left out
- Framework for organization: Systematic approach to structuring information and problems
- Prevents duplication: Mutual exclusivity ensures no redundant coverage
- Prevents gaps: Collective exhaustiveness ensures complete coverage
- Clear boundaries: Unambiguous categorization with well-defined criteria
- Hierarchical application: Can be applied recursively at multiple levels
- Validation approach: Check both dimensions independently (exclusivity and exhaustiveness)
Key Proponent: Barbara Minto (McKinsey & Company, late 1960s)
When to Use:
- Problem decomposition and analysis
- Software architecture and component design
- Module boundary definition to avoid overlapping responsibilities
- Requirements organization and breakdown
- API endpoint structure and design
- Decision tree construction
- Issue tree development
- Organizing complex information
- System design and modular architecture
Related Concepts:
- Foundational to the Pyramid Principle
- Supports Single Responsibility Principle
- Enables Separation of Concerns
- Used in logic trees and issue trees
Details
Full Name: Pugh Decision Matrix (also Pugh Controlled Convergence)
Core Concepts:
- Baseline comparison: Compare alternatives against a reference solution
- Criteria weighting: Assign importance to evaluation criteria
- Relative scoring: Better (+), Same (S), Worse (-) than baseline
- Structured evaluation: Systematic comparison across multiple dimensions
- Iterative refinement: Multiple rounds to converge on best solution
- Team decision-making: Facilitates group consensus
- Hybrid solutions: Combine strengths of different alternatives
Key Proponent: Stuart Pugh
When to Use:
- Multiple viable alternatives exist
- Decision criteria are known but trade-offs are unclear
- Team needs to reach consensus
- Architecture or technology selection decisions
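A sketch of a Pugh matrix for a hypothetical database selection, with the current solution as baseline; all criteria, weights, and scores are invented for illustration (+ counts as +weight, - as -weight, S as 0):

```
Criterion (weight)        Baseline: MySQL    PostgreSQL    MongoDB
Query flexibility (3)            S               +            -
Operational cost  (2)            S               S            +
Team familiarity  (2)            S               +            -
------------------------------------------------------------------
Weighted sum                     0              +5           -3
```

Here PostgreSQL scores +3 +0 +2 = +5 relative to the baseline, suggesting it as the candidate to carry into the next convergence round (or to hybridize with MongoDB's cost advantage).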
Details
Full Name: Cynefin Framework according to Dave Snowden
Core Concepts:
- Five domains:
  - Clear (formerly "Simple"): Best practices apply, sense-categorize-respond
  - Complicated: Good practices exist, sense-analyze-respond
  - Complex: Emergent practices, probe-sense-respond
  - Chaotic: Novel practices needed, act-sense-respond
  - Confused (center): Don’t know which domain you’re in
- Domain transitions: How situations move between domains
- Safe-to-fail probes: Experiments in the complex domain
- Complacency risk: Moving from clear to chaotic
- Decision-making context: Different domains require different approaches
- Facilitation tool: Helps teams discuss and categorize challenges
Key Proponent: Dave Snowden (1999)
When to Use:
- Understanding what type of problem you’re facing
- Choosing appropriate decision-making approaches
- Facilitating team discussions about complexity
- Strategic planning in uncertain environments
Details
Core Concepts:
- Value chain: Map components from user needs down
- Evolution axis: Genesis → Custom → Product → Commodity
- Movement: Components naturally evolve over time
- Situational awareness: Understanding the landscape before deciding
- Gameplay patterns: Common strategic moves
- Climatic patterns: Forces that affect all players
- Doctrine: Universal principles of good strategy
- Inertia: Resistance to change in organizations
- Strategic planning: Visual approach to strategy
- Build-Buy-Partner decisions: Based on evolution stage
Key Proponent: Simon Wardley
When to Use:
- Strategic technology planning
- Build vs. buy decisions
- Understanding competitive landscape
- Communicating strategy visually
- Identifying opportunities for disruption
Details
Full Name: Programming as Theory Building (Mental Model) according to Peter Naur
Core Concepts:
-
Theory building: Programming is creating a mental model, not just writing code
-
Theory of the program: Deep understanding of why the program works and how it relates to the problem domain
-
Knowledge in people: The real program exists in developers' minds, not in the code
-
Theory decay: When original developers leave, the theory is lost
-
Documentation limitations: Written documentation cannot fully capture the theory
-
Maintenance as theory: Effective maintenance requires possessing the theory
-
Communication is key: Theory must be shared through collaboration and conversation
-
Ramp-up time: New team members need time to build the theory
-
Code as artifact: Code is merely a representation of the underlying theory
Key Proponent: Peter Naur (Turing Award winner, 2005)
Original Work: "Programming as Theory Building" (1985)
Application in Software Development:
-
Understanding why knowledge transfer is challenging
-
Emphasizing pair programming and mob programming
-
Justifying time for onboarding and code walkthroughs
-
Explaining technical debt accumulation when teams change
-
Supporting documentation practices that capture "why" not just "what"
-
Advocating for team stability and continuity
Contrast with Other Views:
-
Programming as text production → Focus on code output
-
Programming as problem solving → Focus on algorithms
-
Programming as theory building → Focus on understanding
Details
Full Name: Conventional Commits
Core Concepts:
-
A specification for adding human- and machine-readable meaning to commit messages
-
Determining a semantic version bump (based on the types of commits landed)
-
Communicating the nature of changes to teammates, the public, and other stakeholders
-
Schema: <type>[(optional scope)][!]: <description> plus optional body/footer
-
Common Types:
-
feat: introduces a new feature to the codebase (→ SemVer Minor)
-
fix: patches a bug in your codebase (→ SemVer Patch)
-
docs: documentation improvements to the codebase
-
chore: codebase/repository housekeeping changes
-
style: formatting changes that do not affect the meaning of the code
-
refactor: implementation changes that do not affect the meaning of the code
-
!: marks a BREAKING CHANGE (→ SemVer Major)
-
BREAKING CHANGE: footer that introduces a breaking API change
Key Proponents: Benjamin E. Coe, James J. Womack, Steve Mao
When to Use:
-
Projects following an everything-as-code paradigm
-
Team and community communication
-
Repository quality improvements
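The mapping from commit header to SemVer bump can be sketched mechanically. This is a simplified illustration: the regex covers only the header line and, for brevity, ignores BREAKING CHANGE footers, which the specification also treats as major.

```python
import re

# Sketch of a Conventional Commits header parser. The feat -> minor,
# fix -> patch, and ! -> major mappings follow the specification;
# all other types trigger no version bump here.
HEADER_RE = re.compile(
    r"^(?P<type>[a-z]+)"           # commit type, e.g. feat, fix, docs
    r"(?:\((?P<scope>[^)]+)\))?"   # optional scope in parentheses
    r"(?P<bang>!)?"                # optional breaking-change marker
    r": (?P<description>.+)$"      # colon, space, description
)

def semver_bump(header):
    m = HEADER_RE.match(header)
    if not m:
        raise ValueError(f"not a conventional commit header: {header}")
    if m.group("bang"):
        return "major"
    return {"feat": "minor", "fix": "patch"}.get(m.group("type"), "none")

assert semver_bump("feat(parser): add array support") == "minor"
assert semver_bump("fix: handle empty input") == "patch"
assert semver_bump("refactor!: drop deprecated API") == "major"
assert semver_bump("docs: clarify README") == "none"
```

Tools such as semantic-release automate exactly this kind of mapping in CI pipelines.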
Details
Full Name: Semantic Versioning Specification
Core Concepts:
-
Version format: MAJOR.MINOR.PATCH (e.g., 2.4.7)
-
MAJOR: Incompatible API changes (breaking changes)
-
MINOR: Backward-compatible functionality additions
-
PATCH: Backward-compatible bug fixes
-
Pre-release versions: Append hyphen and identifiers (e.g., 1.0.0-alpha.1)
-
Build metadata: Append plus sign and identifiers (e.g., 1.0.0+20241111)
-
Version precedence: Clear rules for version comparison
-
Initial development: 0.y.z for initial development (API unstable)
-
Public API declaration: Once public API declared, version dependencies matter
Key Proponent: Tom Preston-Werner
When to Use:
-
Libraries and APIs consumed by other software
-
Software with defined public interfaces
-
Projects requiring dependency management
-
Communication of change impact to users/consumers
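The precedence rules above can be expressed as a comparison key. This is a simplified sketch of the core grammar (it handles pre-release ordering and ignores build metadata, per the specification), not a full implementation of every edge case.

```python
import re

# Minimal SemVer parser/comparator sketch. Build metadata after "+"
# is parsed but ignored for precedence, as the specification requires.
SEMVER_RE = re.compile(
    r"^(\d+)\.(\d+)\.(\d+)"          # MAJOR.MINOR.PATCH
    r"(?:-([0-9A-Za-z.-]+))?"        # optional pre-release identifiers
    r"(?:\+[0-9A-Za-z.-]+)?$"        # optional build metadata (ignored)
)

def precedence_key(version):
    m = SEMVER_RE.match(version)
    if not m:
        raise ValueError(f"not a semantic version: {version}")
    major, minor, patch = int(m.group(1)), int(m.group(2)), int(m.group(3))
    if m.group(4) is None:
        # A normal release outranks any pre-release of the same version.
        return (major, minor, patch, 1, ())
    # Numeric identifiers compare numerically and rank below alphanumerics.
    ids = tuple(
        (0, int(p), "") if p.isdigit() else (1, 0, p)
        for p in m.group(4).split(".")
    )
    return (major, minor, patch, 0, ids)

assert precedence_key("1.0.0-alpha.1") < precedence_key("1.0.0-beta")
assert precedence_key("1.0.0-beta") < precedence_key("1.0.0")
assert precedence_key("1.0.0+20241111") == precedence_key("1.0.0")
```

Python's tuple comparison also gives the spec's rule that a longer pre-release identifier list outranks a matching shorter one for free.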
Details
Full Name: Block Element Modifier (BEM) (S)CSS Methodology
Core Concepts:
-
Motivation: Solve CSS specificity wars, naming conflicts, and stylesheet maintainability issues in large codebases
-
Block: Standalone component that is meaningful on its own (e.g., menu, button, header)
-
Element: Part of a block with no standalone meaning (e.g., menu__item, button__icon)
-
Modifier: Flag on blocks or elements that changes appearance or behavior (e.g., button--disabled, menu__item--active)
-
Naming convention: block__element--modifier structure
-
Independence: Blocks are self-contained and reusable
-
No cascading: Avoid deep CSS selectors, use flat structure
-
Explicit relationships: Clear parent-child relationships through naming
-
Reusability: Components can be moved anywhere in the project
-
Mix: Combining multiple BEM entities on a single DOM node
-
File structure: Often paired with component-based file organization
Naming Examples:
-
Block: .search-form
-
Element: .search-form__input, .search-form__button
-
Modifier: .search-form--compact, .search-form__button--disabled
Key Proponents: Yandex development team
When to Use:
-
Large-scale web applications with many components
-
Team projects requiring consistent (S)CSS naming conventions
-
When (S)CSS maintainability and scalability are priorities
-
Projects where developers need to quickly understand (S)CSS structure
-
Component-based architectures (React, Vue, Angular)
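The naming convention is regular enough to check mechanically, which is one reason it scales to large teams. The validator below is an illustrative sketch for classic two-level BEM with kebab-case names, not an official tool.

```python
import re

# Sketch of a BEM class-name validator (block__element--modifier).
# Assumes kebab-case parts and classic BEM: elements do not nest.
KEBAB = r"[a-z][a-z0-9]*(?:-[a-z0-9]+)*"
BEM_RE = re.compile(
    rf"^{KEBAB}"           # block
    rf"(?:__{KEBAB})?"     # optional element
    rf"(?:--{KEBAB})?$"    # optional modifier
)

def is_bem(class_name):
    return BEM_RE.match(class_name) is not None

assert is_bem("search-form")
assert is_bem("search-form__input")
assert is_bem("search-form__button--disabled")
assert not is_bem("search-form__input__icon")  # elements don't nest
assert not is_bem("SearchForm")                # not kebab-case
```

A check like this can run as a stylelint-style lint step to keep (S)CSS class names consistent across a codebase.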
Details
Full Name: todo.txt-flavoured Markdown Task Lists
Also known as: Enhanced Markdown Tasks, Markdown with todo.txt conventions
Core Concepts:
-
Markdown task lists: Standard GitHub-flavoured markdown syntax (- [ ] uncompleted, - [x] completed)
-
Priority markers: Uses todo.txt priority notation (A), (B), (C), where (A) is the highest priority
-
Project tags: Prefixed with + to group related tasks (e.g., +website, +semantic-anchors)
-
Context tags: Prefixed with @ to indicate location/tool/context (e.g., @computer, @home, @research)
-
Key-value metadata: Structured data pairs like due:YYYY-MM-DD, priority:high, or custom fields
Date tracking: Creation dates and completion dates in ISO format (YYYY-MM-DD)
-
Human readability: Plain text format that remains readable without special tools
-
Tool-agnostic: Can be processed by both markdown renderers and todo.txt tools
-
Searchable and filterable: Easy to grep/search by tags, priorities, or metadata
Pattern Structure:
- [ ] (Priority) Task description +project @context key:value
- [x] YYYY-MM-DD (Priority) Completed task +project
Example Usage:
- [ ] (A) Review PR for +website @computer due:2024-02-03
- [x] 2024-02-01 (B) Update documentation +docToolchain
- [ ] (C) Research new feature +semantic-anchors @research
- [ ] Call team meeting @phone +project-planning due:2024-02-05
- [x] 2024-01-30 Fix bug in authentication +backend @computer
Key Proponents: Combines GitHub-flavoured Markdown task lists with Gina Trapani’s todo.txt format
Original References:
-
GitHub-flavoured Markdown task lists
-
todo.txt format specification by Gina Trapani
When to Use:
-
Task management in markdown documentation
-
Project planning and tracking in README files
-
GitHub issues and pull request descriptions requiring structured task lists
-
Personal productivity systems using plain text
-
Documentation that combines narrative with actionable tasks
-
When you need both human readability and programmatic parsing
-
Team collaboration where tasks need clear priorities and contexts
-
Generating consistent task list formats with LLMs
Benefits:
-
Leverages two well-established, widely recognized standards
-
Renders nicely in GitHub, GitLab, and other markdown viewers
-
Remains fully functional in plain text editors
-
Enables rich metadata without sacrificing readability
-
Facilitates both manual and automated task tracking
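The "programmatic parsing" benefit can be sketched with a small regex-based parser. The regexes and the returned field names below are my own illustrative choices, not part of either the GitHub task-list or todo.txt specification.

```python
import re

# Illustrative parser sketch for todo.txt-flavoured Markdown tasks.
TASK_RE = re.compile(
    r"^- \[(?P<done>[ x])\] "                       # checkbox
    r"(?:(?P<completed>\d{4}-\d{2}-\d{2}) )?"       # optional completion date
    r"(?:\((?P<priority>[A-Z])\) )?"                # optional (A)-(Z) priority
    r"(?P<text>.*)$"                                # remaining description
)

def parse_task(line):
    m = TASK_RE.match(line)
    if not m:
        return None
    text = m.group("text")
    return {
        "done": m.group("done") == "x",
        "completed": m.group("completed"),
        "priority": m.group("priority"),
        "projects": re.findall(r"\+(\S+)", text),    # +project tags
        "contexts": re.findall(r"@(\S+)", text),     # @context tags
        "meta": dict(re.findall(r"(\w+):(\S+)", text)),  # key:value pairs
        "text": text,
    }

task = parse_task("- [ ] (A) Review PR for +website @computer due:2024-02-03")
assert task["priority"] == "A" and not task["done"]
assert task["projects"] == ["website"] and task["contexts"] == ["computer"]
assert task["meta"]["due"] == "2024-02-03"
```

Because the format stays valid Markdown, the same file renders as checkboxes on GitHub and greps cleanly in a terminal.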
Details
Full Name: State-of-the-Art
Core Concepts:
-
Latest approaches: Focus on the most current, cutting-edge methods and techniques
-
Research-grounded: Reference current research papers, benchmarks, and empirical results
-
Comparative analysis: Compare new approaches with existing or previous methods
-
Performance-focused: Emphasize benchmark-leading and best-performing solutions
-
Up-to-date information: Provide current, grounded information rather than outdated practices
-
Evidence-based: Support claims with recent studies, benchmarks, and real-world implementations
-
Contextual awareness: Consider the specific domain and timeframe for "state-of-the-art"
Usage Patterns:
-
"Learn SOTA for [topic]" - triggers research and comprehensive information gathering
-
"What’s SOTA for [topic]?" - requests current best practices and approaches
-
"Give me the SOTA approach for [problem]" - asks for cutting-edge solutions
Example Usage:
-
"Learn SOTA for RAG implementations"
-
"What’s SOTA for code generation in 2024?"
-
"Give me the SOTA approach for semantic search"
-
"What’s the SOTA in transformer architectures?"
Key Proponent: Widely used in ML/AI research community
When to Use:
-
Researching current best practices in a technical domain
-
Comparing different approaches to solve a problem
-
Staying updated with rapidly evolving fields (AI/ML, web technologies)
-
Making technology decisions based on current benchmarks
-
Learning about cutting-edge implementations
-
Avoiding outdated or deprecated approaches
Why It Works:
-
Heavily represented in ML/AI papers, benchmarks, and technical discussions
-
Consistent meaning across contexts: "best performing" and "most current"
-
Concise trigger for comprehensive research behavior
-
Activates research-oriented response patterns in LLMs
Details
Full Name: There Is More Than One Way To Do It
Also known as: Tim Toady
Core Concepts:
-
Multiple valid approaches: Acknowledges that problems can be solved in different, equally valid ways
-
Developer freedom: Trust developers to choose the right approach for their context
-
Expressiveness: Languages and tools should support diverse problem-solving styles
-
Context-dependent decisions: The "best" solution depends on constraints, team, and situation
-
No single canonical form: Resist dogma — flexibility over prescription
-
Trade-off awareness: Different approaches have different trade-offs; none is universally superior
-
Pragmatism over purity: Practical results matter more than theoretical elegance
-
Collaborative decision-making: When working with others, discuss approach rather than assume
Key Proponent: Larry Wall (Perl programming language)
Contrast:
-
Python’s Zen: "There should be one-- and preferably only one --obvious way to do it" (opposite philosophy)
-
TIMTOWTDI favors flexibility and expressiveness over enforced uniformity
When to Use:
-
Choosing between multiple valid architectural or design approaches
-
Code reviews where different styles achieve the same goal
-
Team discussions about tooling, frameworks, or methodologies
-
LLM-assisted development: ask for alternatives rather than accepting the first suggestion
-
Avoiding premature standardization before understanding trade-offs
-
Resisting "one true way" dogma in technology choices
-
Architecture Decision Records (ADRs): documenting why one approach was chosen over other valid alternatives
-
KonsenT-based decisions: finding solutions with no objections rather than forcing one "right" way
Details
Full Name: Statistical Process Control
Core Concepts:
-
Process monitoring: Systematic statistical monitoring of running processes
-
Common Cause Variation: Inherent, random fluctuation — stable and predictable
-
Special Cause Variation: Assignable cause — unstable, correctable
-
Control Charts: Central visual tool (see dedicated anchor)
-
Detection rules: Nelson Rules, Western Electric Rules (see dedicated anchors)
-
Process Capability: Indices Cp, Cpk (short-term) and Pp, Ppk (long-term)
-
In-Control: Process exhibits only Common Cause Variation
-
Out-of-Control: Special Cause detected — intervention required
-
Continuous Improvement: SPC as a foundation for ongoing process improvement
-
DMAIC Control Phase: SPC tools secure improvements within Six Sigma
Key Proponents: Walter A. Shewhart (founder), W. Edwards Deming (dissemination), Western Electric Company
Relationship to Other Anchors:
-
Control Chart (Shewhart): Central tool within SPC
-
Nelson Rules: Detection rules for Special Causes on Control Charts
-
Six Sigma: Uses SPC particularly in the Control phase
-
Testing Pyramid: Conceptual parallel — different levels of quality assurance
When to Use:
-
Monitoring manufacturing and business processes
-
Quality management per ISO 9001, IATF 16949, Six Sigma
-
Pharmaceutical Continuous Process Verification (CPV)
-
Conceptual foundation for anomaly detection in IT systems
-
When the question is: "Is my process stable, or has something changed?"
Details
Full Name: Shewhart Control Chart
Also known as: Process Control Chart, SPC Chart
Core Concepts:
-
Time series diagram: Measured value plotted over time
-
Centerline (CL): Process mean
-
Upper/Lower Control Limit (UCL/LCL): Typically at ±3σ from the mean
-
Zones A/B/C: Division into 6 areas (each 1σ wide) for pattern recognition
-
Common Cause Variation: Inherent, random fluctuation of a stable process
-
Special Cause Variation: Assignable, correctable deviation
-
Chart Types:
-
X-bar Chart: Subgroup means
-
R-Chart: Subgroup ranges
-
I-MR Chart: Individual values and moving range
-
p-Chart: Defect proportions
-
c-Chart: Defect counts per unit
-
In-Control vs. Out-of-Control: Core decision based on rules (Nelson, Western Electric)
-
Normal distribution assumption: Control limits are based on normally distributed data
Key Proponent: Walter A. Shewhart (1920s, Bell Labs / Western Electric)
Key Work: "Economic Control of Quality of Manufactured Product" (1931)
Relationship to Other Anchors:
-
Nelson Rules: 8 rules for pattern recognition on Control Charts
-
SPC: Control Charts are the central tool of Statistical Process Control
-
Six Sigma: Control Charts are used in the Control phase of DMAIC
When to Use:
-
Process monitoring in manufacturing and production
-
Quality assurance using statistical methods
-
Detection of process shifts and trends
-
Foundation for rule-based anomaly detection in time series
-
Conceptual basis — even when different terminology is used in IT monitoring
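For illustration, the limits of an individuals chart (I-chart) can be computed from the average moving range. The d2 = 1.128 constant for subgroups of two is standard SPC practice; the sample data is invented.

```python
import statistics

# I-chart limits sketch: sigma is estimated as MR-bar / d2 (d2 = 1.128
# for moving ranges of size 2), not from the sample standard deviation,
# so that the limits reflect short-term, common-cause variation only.
def i_chart_limits(values):
    center = statistics.mean(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    sigma_hat = statistics.mean(moving_ranges) / 1.128
    return center - 3 * sigma_hat, center, center + 3 * sigma_hat

lcl, cl, ucl = i_chart_limits([10.1, 9.9, 10.2, 10.0, 9.8, 10.1])
assert lcl < cl < ucl
assert round(cl, 2) == 10.02
```

Points falling outside (lcl, ucl), or matching Nelson/Western Electric patterns within them, signal Special Cause Variation.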
Details
Full Name: Nelson Rules (Tests for Special Causes)
Core Concepts:
-
8 rules for detecting non-random patterns in Control Charts
-
Rule 1: One point beyond 3σ (Outlier)
-
Rule 2: 9 consecutive points on the same side of the mean (Shift/Bias)
-
Rule 3: 6 consecutive points steadily increasing or decreasing (Trend)
-
Rule 4: 14 points alternating up and down (Oscillation)
-
Rule 5: 2 out of 3 points beyond 2σ on the same side
-
Rule 6: 4 out of 5 points beyond 1σ on the same side
-
Rule 7: 15 points within 1σ (suspiciously low variance)
-
Rule 8: 8 points outside ±1σ, but none beyond ±3σ (systematic oscillation)
-
Common Cause vs. Special Cause: Distinguishing inherent from assignable variation
-
Zones A/B/C: Dividing the Control Chart into 6 zones (each 1σ wide)
-
False Positive Trade-off: More active rules = higher sensitivity, but more false alarms
Key Proponent: Lloyd S. Nelson (1984, Journal of Quality Technology)
Relationship to Other Anchors:
-
Control Chart (Shewhart): Nelson Rules are applied to Control Charts
-
SPC: Nelson Rules are a tool within Statistical Process Control
-
Western Electric Rules: Predecessor; Nelson Rules extend these with Rules 5-8
When to Use:
-
Detecting non-random patterns in time series data
-
Process monitoring in manufacturing, pharmaceuticals, healthcare
-
Potential application in IT monitoring (memory leaks, performance degradation)
-
Quality assurance in Six Sigma / DMAIC Control Phase
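Rules 1 and 2 follow directly from their definitions and can be sketched in a few lines. This toy version estimates the centerline and sigma from the series itself; in real SPC practice both come from a baseline reference period, not from the data being judged.

```python
import statistics

# Sketch of Nelson Rules 1 and 2 on a time series.
# Rule 1: one point beyond 3 sigma. Rule 2: nine consecutive points
# on the same side of the mean.
def nelson_violations(values):
    mean = statistics.mean(values)
    sigma = statistics.stdev(values)
    violations = []
    for i, v in enumerate(values):
        if abs(v - mean) > 3 * sigma:
            violations.append((i, "rule1"))
        window = values[i - 8 : i + 1] if i >= 8 else []
        if window and (all(x > mean for x in window)
                       or all(x < mean for x in window)):
            violations.append((i, "rule2"))
    return violations

# A single large outlier trips Rule 1 ...
assert nelson_violations([0, 1, 0, -1] * 5 + [20]) == [(20, "rule1")]
# ... while a sustained shift trips Rule 2 without any single extreme point.
shift = [1, -1, 1, -1, 1, -1] + [2] * 9
assert nelson_violations(shift) == [(14, "rule2")]
```

The remaining six rules (trends, oscillation, zone tests) extend the same pattern with different window predicates over the A/B/C zones.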
This category contains semantic anchors that primarily guide how an LLM collaborates, reasons, or communicates – while maintaining the same quality bar as domain-focused anchors. These anchors reference well-established methodologies, communication standards, or reasoning techniques with rich conceptual frameworks.
NOTE: Many anchors have both domain and interaction dimensions. The anchors in this section emphasize interaction patterns but still activate substantial conceptual depth from their source domains (philosophy, cognitive science, military communication, etc.).
Details
Full Name: Socratic Method (also Socratic Dialogue, Elenchus)
Core Concepts:
-
Guided Discovery: Lead learners to insights through questions rather than direct instruction
-
Elenchus: Cross-examination technique to expose contradictions in beliefs
-
Maieutics: "Midwifery of ideas" – helping others give birth to knowledge they already possess
-
Aporia: State of productive confusion that motivates deeper inquiry
-
Question Hierarchy: Progress from clarifying questions to probing assumptions to exploring implications
-
Dialectic Method: Structured dialogue to arrive at truth through reasoned argument
-
Non-assertive Teaching: Teacher claims ignorance, guides through questions
-
Assumption Surfacing: Make implicit beliefs explicit through systematic questioning
-
Logical Consistency: Test ideas for internal coherence and contradictions
Key Proponent: Socrates (via Plato’s dialogues, ~400 BCE)
Historical Context: 2400+ years of philosophical tradition, foundational to Western philosophy and critical thinking education
When to Use:
-
Teaching complex concepts where understanding must be constructed, not transmitted
-
Helping someone work through a problem without giving direct answers
-
Uncovering hidden assumptions in arguments or designs
-
Exploring the implications of a decision or belief
-
Encouraging deeper thinking about a topic
-
Code review or design review where understanding, not compliance, is the goal
Related Concepts:
-
Cognitive apprenticeship
-
Constructivist learning theory
-
Critical thinking pedagogy
Details
Full Name: BLUF (Bottom Line Up Front)
Also known as: Direct Answer Format, Conclusion-First Communication
Core Concepts:
-
Conclusion First: State the main point, decision, or recommendation immediately
-
Inverted Pyramid: Most important information first, supporting details follow
-
Action Orientation: Lead with what needs to happen or what was decided
-
Busy Reader Optimization: Enable time-pressed readers to get key information instantly
-
Supporting Evidence Follows: Detailed rationale, data, and background come after the conclusion
-
Scannable Structure: Clear hierarchy enables readers to stop at their needed depth
-
Clarity Over Suspense: No narrative buildup or delayed conclusions
-
One Sentence First: Ideally, the BLUF itself is a single, clear sentence
Key Proponents: US Military (Army writing standards), adopted broadly in business and government
Historical Context: Standardized in military communication where rapid decision-making is critical; now standard in business writing (McKinsey, consulting, executive communication)
When to Use:
-
Executive summaries and briefings
-
Status reports to leadership
-
Email to busy stakeholders
-
Incident reports requiring immediate action
-
Any high-stakes communication where the reader needs the conclusion first
-
Technical documentation for time-constrained readers
Relationship to Other Anchors:
-
Related to Pyramid Principle but more narrowly focused on conclusion-first structure
-
Complements MECE by providing the organizational principle for grouped information
-
Contrasts with narrative or exploratory writing styles
Counter-example: Academic papers (which build to conclusions) or storytelling (which uses suspense)
Details
Full Name: Rubber Duck Debugging
Core Concepts:
-
Explain to Understand: Articulating a problem aloud surfaces gaps in understanding
-
Step-by-Step Verbalization: Force yourself to go through code/logic line by line
-
Assumption Surfacing: Speaking aloud exposes implicit assumptions that may be wrong
-
No Expertise Required: The "listener" (rubber duck, colleague, LLM) need not be an expert
-
Slowing Down: The act of explaining forces a slower, more deliberate thought process
-
External Cognition: Verbalizing creates an external representation that aids debugging
-
Self-Directed Learning: Often the explainer solves the problem before finishing the explanation
-
Teaching to Learn: Related to the Feynman Technique and learning-by-teaching principle
Key Origin: "The Pragmatic Programmer" by Andrew Hunt and David Thomas (1999), referencing earlier programmer folklore
Historical Context: Decades-old practice in programming culture, formalized and named in influential software engineering literature
When to Use:
-
Debugging stubborn problems where you’re stuck
-
Code review where explaining to a colleague reveals issues
-
Learning new concepts by teaching them to someone (or something) else
-
Validating understanding of complex systems or algorithms
-
When rubber-ducking to an LLM, explicitly adopting this frame to trigger step-by-step explanation
Related Concepts:
-
Pair programming (where explaining is continuous)
-
Feynman Technique (learning by simple explanation)
-
Socratic Method (when the duck asks questions back)
Details
Full Name: Chain of Thought Prompting
Core Concepts:
-
Step-by-Step Reasoning: Explicitly show intermediate reasoning steps before reaching a conclusion
-
Reasoning Transparency: Make the thought process visible, not just the final answer
-
Intermediate Representations: Break complex problems into smaller, manageable steps
-
Error Reduction: Exposing reasoning allows detection of logical errors mid-process
-
Complex Task Decomposition: Handle multi-step problems that cannot be solved in one jump
-
Zero-Shot CoT: Simple prompt like "Let’s think step by step" to trigger CoT behavior
-
Few-Shot CoT: Provide examples with reasoning chains to guide the model
-
Self-Consistency: Generate multiple reasoning paths and select most consistent answer
Key Proponents: Wei et al. (Google Research, 2022), "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models"
Historical Context: Breakthrough in LLM prompting research, significantly improved performance on reasoning tasks (math, logic, common sense)
When to Use:
-
Complex reasoning problems (multi-step math, logic puzzles)
-
When you need to verify the reasoning process, not just the answer
-
Debugging incorrect LLM outputs by seeing where reasoning went wrong
-
Teaching or explaining complex topics where steps matter
-
Problems requiring planning or strategy
-
Any task where intermediate steps provide value
Related Research:
-
Tree of Thoughts (ToT): Extension allowing branching and backtracking
-
Self-Consistency: Sample multiple reasoning paths
-
Least-to-Most Prompting: Build up from simple to complex
Example Prompt Pattern:
Problem: [Complex question]
Let's solve this step by step:
1. [First step]
2. [Second step]
...
Therefore: [Conclusion]
Details
Full Name: Devil’s Advocate (Latin: Advocatus Diaboli)
Core Concepts:
-
Systematic Counter-Argumentation: Present opposing viewpoints even if not personally held
-
Assumption Challenging: Question premises and surface hidden assumptions
-
Stress-Testing Ideas: Identify weaknesses before they become problems
-
Steelmanning: Present the strongest version of the opposing argument, not a strawman
-
Intellectual Honesty: Separate idea evaluation from ego or political concerns
-
Pre-Mortem Thinking: Imagine failure scenarios to prevent them
-
Dialectical Reasoning: Thesis + Antithesis → Synthesis
-
Risk Identification: Surface potential problems proactively
Key Origin: Catholic Church canonization process (Promotor Fidei role, formalized 1587), secularized in critical thinking and decision-making
Historical Context: 400+ years as formalized practice in the Church, adopted widely in law, philosophy, business strategy, and red teaming
When to Use:
-
Critical design or architecture decisions where failure is costly
-
Security threat modeling (red teaming)
-
Evaluating business strategies or proposals
-
Pre-mortems before launching significant initiatives
-
Code review where you want to challenge assumptions
-
Risk assessment and contingency planning
-
Any high-stakes decision where being wrong is expensive
Related Concepts:
-
Red teaming (security context)
-
Pre-mortem analysis
-
Dialectical reasoning
-
Critical thinking frameworks
-
Steelmanning (vs. strawmanning)
Example Prompt Pattern:
I propose [idea/design/decision]. Play devil's advocate: What are the strongest arguments against this approach?
Details
Full Name: Five Whys Root Cause Analysis
Core Concepts:
-
Iterative Causal Analysis: Ask "Why?" repeatedly (typically ~5 times) to drill down to root causes
-
Root Cause vs. Symptom: Distinguish between surface symptoms and underlying causes
-
Causal Chain: Each answer becomes the subject of the next "Why?" question
-
Actionable Root Cause: Continue until you reach a cause that can be acted upon
-
Simplicity: No complex tools or statistical analysis required
-
Team-Based Investigation: Collaborative exploration of causal relationships
-
Avoiding Blame: Focus on process failures, not individual fault
-
Countermeasure Identification: Once root cause is found, design interventions
Key Proponent: Taiichi Ohno (Toyota Production System, 1950s)
Historical Context: Core tool in Lean Manufacturing and Toyota Production System (TPS), foundational to continuous improvement (Kaizen)
When to Use:
-
Incident post-mortems in software/DevOps
-
Debugging when surface fixes don’t resolve the issue
-
Process improvement initiatives
-
Understanding recurring problems
-
Quality defect analysis
-
Any situation where symptoms are clear but causes are not
Related Concepts:
-
Kaizen (continuous improvement)
-
Root Cause Analysis (RCA)
-
Fishbone Diagram (Ishikawa) – complementary tool
-
A3 Problem Solving (Toyota)
-
DevOps post-mortem culture
Pitfall to Avoid:
-
Stopping too early at a symptom rather than root cause
-
Pursuing a single causal chain when multiple factors contribute (use Fishbone Diagram for complex causality)
-
Blame-focused questioning rather than system-focused
Example Application:
Problem: Website is down
Why? → Database connection failed
Why? → Connection pool exhausted
Why? → Long-running queries not timing out
Why? → No query timeout configured
Why? → Default configuration was never reviewed for production
Root Cause: Configuration review process missing
Countermeasure: Establish pre-production configuration checklist
Details
Full Name: Feynman Technique (also Feynman Learning Method)
Core Concepts:
-
Explain Simply: Teach the concept in simple language as if to a beginner (traditionally "explain to a 12-year-old")
-
Identify Gaps: When you struggle to explain, you’ve found gaps in your understanding
-
Return to Source Material: Go back and re-learn the parts you couldn’t explain clearly
-
Simplify and Use Analogies: Refine explanation using plain language and concrete examples
-
Iterative Refinement: Repeat the cycle until you can explain clearly and simply
-
No Jargon Hiding: Inability to avoid jargon signals lack of true understanding
-
Active Learning: Transform passive reading into active teaching
-
Metacognition: Become aware of what you don’t know
Key Attribution: Richard Feynman (Nobel Prize-winning physicist, 1965), famous for making complex physics accessible
Historical Context: Feynman was renowned for his teaching ability and his belief that deep understanding meant being able to explain simply. The "technique" is a formalization of his learning approach.
When to Use:
-
Learning new technical concepts or frameworks
-
Validating your understanding before using knowledge in practice
-
Preparing to teach or present a topic
-
Debugging conceptual confusion
-
Code review where you explain your design choices simply
-
Documentation writing (if you can’t explain it simply, you don’t understand it)
Four Steps (Canonical Form):
-
Choose a concept: Pick the topic you want to understand
-
Teach it to a child: Write an explanation in simple terms
-
Identify gaps and review: Where you struggle, study more
-
Simplify and analogize: Refine your explanation, use examples
Related Concepts:
-
Rubber Duck Debugging (explaining to understand)
-
Learning by teaching
-
Active recall
-
Elaborative interrogation
-
Plain language movement
Quote: "If you can’t explain it simply, you don’t understand it well enough." (Often attributed to Einstein, but embodies Feynman’s philosophy)
Before adding a new semantic anchor to your catalog, it’s valuable to test whether the LLM recognizes it and what concepts it associates with the term. This helps you:
-
Verify that the anchor is well-established in the LLM’s training data
-
Understand how the LLM interprets the concept
-
Identify gaps or misconceptions in the LLM’s understanding
-
Decide whether the anchor needs additional context or clarification
NOTE: While most established semantic anchors are consistently recognized across major LLM providers, testing with your specific model can reveal variations in interpretation or depth of understanding. Consider testing critical anchors with multiple models if you work across different LLM platforms.
Use the following prompt pattern to test a semantic anchor:
What concepts do you associate with '<semantic anchor>'?
Or in German:
Welche Konzepte verbindest Du mit '<semantischer Anker>'?
When the LLM responds, evaluate the following:
-
Recognition: Does the LLM recognize the term at all?
-
Accuracy: Are the associated concepts correct and relevant?
-
Depth: Does the LLM show deep understanding or just surface knowledge?
-
Specificity: Does the LLM distinguish this concept from similar ones?
-
Practical application: Does the LLM understand when and how to use it?
Human: What concepts do you associate with 'Pugh-Matrix'?
LLM: I associate the Pugh-Matrix with:
- Baseline comparison methodology
- Weighted decision criteria
- Relative scoring (better/same/worse)
- Team-based evaluation
- Iterative refinement process
[...]
Based on the response, you can determine:
-
✅ Good anchor: LLM shows strong understanding → Add to catalog
-
⚠️ Partial recognition: LLM knows it but lacks depth → Add with extra context
-
❌ Unknown: LLM doesn’t recognize it → May not be a good semantic anchor yet
Beyond testing whether the LLM recognizes a term, evaluate whether it qualifies as a semantic anchor using the criteria from What Qualifies as a Semantic Anchor:
-
Is it Precise? Does it reference a specific, well-defined methodology or framework, rather than a vague instruction?
-
Is it Rich? Does it activate multiple interconnected concepts, not just a single directive?
-
Is it Consistent? Will different users and contexts get similar conceptual activation?
-
Is it Attributable? Can it be traced to key proponents, publications, or established practice?
Counter-examples to avoid:
-
"TLDR" – lacks defined structure or methodology
-
"ELI5" – vague pedagogical target, no framework
-
"Keep it short/simple" – pure instruction, no depth
If a term fails these criteria, it may be a useful prompt pattern but not a semantic anchor. Save semantic anchors for terms that activate rich, well-established conceptual frameworks.
Once you’ve tested a semantic anchor and confirmed it’s valuable, you can contribute it to this catalog.
The easiest way to contribute is to click the edit button (pencil icon) on this file in GitHub, make your changes, and submit a pull request directly.
Add a new section following this pattern:
=== Your New Anchor Name
[%collapsible]
====
*Full Name*: Complete name or expansion
*Core Concepts*:
* Key concept 1
* Key concept 2
* ...
*Key Proponent*: Name(s) of key figures
*When to Use*:
* Use case 1
* Use case 2
====
TIP: You can use your LLM to help generate a properly formatted entry. Ask it to analyze the semantic anchor and produce an entry following the established pattern in this document.
Semantic anchors create a shared language between you and LLMs, enabling more precise and efficient communication. By referencing established methodologies, frameworks, and practices, you can quickly activate relevant knowledge domains and ensure consistent interpretation of concepts.
As your work evolves, continue to identify and catalog new semantic anchors that emerge in your field. This living vocabulary becomes a powerful tool for effective collaboration with AI assistants.
This document itself serves as a semantic anchor catalog, providing you with quick reference terminology for software development conversations.