5 changes: 0 additions & 5 deletions .github/workflows/gqm_update.yml
@@ -37,8 +37,3 @@ jobs:
cd scripts/gqm_gen
./update_gqm.sh

- name: Report coverage
  if: success()
  uses: sidx1024/report-nyc-coverage-github-action@v1.2.7
  with:
    coverage_file: ".nyc_output/nyc-coverage-report/coverage-summary.json"
3 changes: 2 additions & 1 deletion .github/workflows/mdbook.yml
@@ -153,7 +153,8 @@ jobs:
with:
  source-dir: ./book/html
  preview-branch: gh-pages
  umbrella-dir: pr-preview
  umbrella-dir: docs/pr-preview
  pages-base-path: docs

gqm-preview:
if: github.event_name == 'pull_request' && github.event.action != 'closed' && github.event.pull_request.head.repo.full_name == github.repository
4 changes: 2 additions & 2 deletions innersource-and-ai/innersource-and-ai.md
@@ -2,9 +2,9 @@

Organizations are increasingly adopting AI in the workplace—from generative AI assistants to agentic coding tools that can write, refactor, and review code. In many organizations, developers are now expected to do agentic coding (sometimes called "vibe coding"), where the role shifts from writing code to providing instructions in natural language and overseeing the work of automated coding agents. Some teams are going further, with multiple agents representing roles like quality engineering, project management, and frontend/backend development working in tandem and interacting directly with tools like issue trackers and source control platforms.

This shift raises important questions: does software reuse still matter when AI can regenerate capabilities on demand? How do you maintain quality when code is produced at unprecedented speed? For InnerSource program leads, the question is whether InnerSource still matters in this new landscape.
This shift raises important questions: does software reuse still matter when AI can regenerate capabilities on demand? How do you maintain quality when code is produced at unprecedented speed? How do you capture and share knowledge—not just code, but patterns, tutorials, and learnings—so that AI systems can be trained on the right context? For InnerSource program leads, the question is whether InnerSource still matters in this new landscape.

It does. InnerSource is potentially *more* important than ever. Shared repositories, clear boundaries, documentation, and collaborative practices help AI systems—and the people using them—work with the right context, reuse existing components, and keep quality high. This section explains why InnerSource matters when adopting AI, how to shape your repositories and practices for AI-assisted development, and what risks and guardrails to keep in mind.
It does. InnerSource is potentially *more* important than ever. Shared repositories, clear boundaries, documentation, and collaborative practices help AI systems—and the people using them—work with the right context, reuse existing components, and keep quality high. Beyond code, InnerSource practices help organizations capture and share non-code assets—enablement content, architectural decisions, and institutional knowledge—that are essential for training and grounding AI systems. A solid data foundation, with clean, discoverable, and well-governed data, is also critical: organizations that treat their data lakes and data products as InnerSource-ready will be better positioned to adopt AI effectively. This section explains why InnerSource matters when adopting AI, how to shape your repositories and practices for AI-assisted development, and what risks and guardrails to keep in mind.

The following articles in this section go deeper:

12 changes: 12 additions & 0 deletions innersource-and-ai/risks-and-guardrails.md
@@ -12,6 +12,18 @@ AI coding tools can deliver impressive short-term productivity gains. The risk i

"AI slop" refers to low-quality, generic, or incorrect content produced by AI systems without adequate human oversight. In a development context, this can mean boilerplate code that does not fit the project's conventions, misleading documentation, or subtly incorrect implementations. InnerSource's emphasis on transparency—keeping things traceable and open for inspection—directly mitigates this risk. When contributions (whether from humans or AI) go through visible review processes in shared repositories, quality issues are caught earlier and patterns of slop become visible to the community.

## Defining boundaries for proprietary knowledge

As organizations use InnerSource practices to capture and share knowledge for AI training, they must define clear boundaries between what can be shared broadly and what must remain protected. Not all internal knowledge is appropriate for AI training—sensitive research, competitive intelligence, and regulated data require careful handling. InnerSource governance practices—clear ownership, access controls, and contribution guidelines—provide a natural framework for making these distinctions explicit.

The goal is to separate the outcomes of human creation (the knowledge and artifacts that can be shared) from the creation process itself and from proprietary assets that need safeguarding. Organizations should establish policies that specify which content can be used for AI training, which requires restricted access, and which must remain outside AI systems entirely. This is especially important for organizations with sensitive internal research or regulated data, where compliance and appropriate access controls are non-negotiable.

## Transparency and stakeholder involvement

Involving stakeholders and keeping development transparent supports responsible AI deployment. When decisions about tools, patterns, and policies are visible and discussable, teams can align on what is acceptable and what is not. This aligns with InnerSource principles of openness and collaboration and helps prevent AI from being used in ways that conflict with organizational values or compliance requirements.

## Leading people and agents

As AI agents take on more development tasks, leaders face a new challenge: managing both people and AI agents. This goes beyond tooling decisions into questions of work design, accountability, and organizational structure. Who is responsible when an agent produces incorrect or harmful output? How do you balance workloads between human contributors and automated agents? How do you ensure that institutional knowledge continues to be built by people even as agents handle more of the routine work?

InnerSource program leads should think proactively about these questions rather than waiting to react as problems emerge. Clear contribution guidelines that apply to both human and AI contributors, transparent review processes, and explicit accountability structures will help organizations navigate this transition. The goal is to design work practices that get the best from both people and agents while preserving the collaborative culture that makes InnerSource effective.
2 changes: 2 additions & 0 deletions innersource-and-ai/shaping-for-ai.md
@@ -10,6 +10,8 @@ Well-defined repositories with clear scope and interfaces make it easier for hum

InnerSource behaviors like solid READMEs, CONTRIBUTING guides, and architecture decision records are increasingly important when AI is in the loop. They help AI and people alike understand how to use and extend shared code correctly. Documentation that explains *why* decisions were made, not just *what* the code does, supports better AI-generated contributions and reduces misuse. Making repositories searchable and well-described also helps teams and tools find the right building blocks instead of reimplementing them.

Discoverability deserves special attention. In large organizations, teams frequently build duplicate solutions because they cannot find what already exists. This problem extends beyond code to data assets, enablement content, and operational knowledge. Program leads should work with platform teams to ensure that shared assets are consistently tagged, well-described, and surfaced through central search and recommendation tools. AI-powered chatbots and assistants can help with discoverability, but they are only as good as the content they can access—investing in publishing and indexing infrastructure pays dividends.
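To make "consistently tagged and well-described" concrete, a shared repository can carry a machine-readable catalog descriptor that central search and recommendation tools index. The sketch below uses the Backstage software catalog format as one illustrative option; every name, tag, and owner in it is hypothetical, not a prescription:

```yaml
# Hypothetical catalog-info.yaml for a shared InnerSource component.
# All names, tags, and owners below are illustrative.
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-sdk            # illustrative component name
  description: >
    Shared SDK for internal payment flows. A concise description like
    this is what search tools and AI assistants surface to other teams.
  tags:                         # consistent tags drive discoverability
    - payments
    - sdk
    - innersource
spec:
  type: library
  lifecycle: production
  owner: team-payments          # illustrative owning team
```

The exact schema matters less than the habit: if every shared asset publishes the same small set of fields, both people and AI-powered assistants can find it instead of rebuilding it.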

## Playbooks for people and agents

Playbooks that describe how to contribute—and what to avoid—benefit both human contributors and AI-assisted workflows. The community is starting to develop playbooks that serve both audiences; as these emerge, they will be reflected in the InnerSource Patterns book and linked from this section. The goal is to make it easy for contributors and tools to follow the same rules and expectations.
16 changes: 16 additions & 0 deletions innersource-and-ai/why-innersource-matters-with-ai.md
@@ -6,6 +6,8 @@ AI and agentic coding are changing how development work gets done. Developers sp

When many teams use AI to generate or modify code, the risk of duplication and inconsistency grows. InnerSource encourages shared building blocks and a single place to contribute improvements. That reduces waste and keeps quality consistent across the organization. The demand for software architecture and orchestration skills is also rising: understanding system boundaries, interfaces, and processes is essential for building valuable, reliable AI-assisted systems. InnerSource’s emphasis on transparency, documentation, and community aligns with this need.

This relevance extends beyond code. Organizations are discovering that non-code assets—patterns, tutorials, enablement content, blog posts, and architectural learnings—are just as important to capture and share. When people leave teams or companies, their knowledge often leaves with them. InnerSource practices that encourage open contribution and visible documentation help preserve institutional knowledge and make it available for AI training and retrieval. Organizations that have historically restricted internal knowledge sharing are now recognizing the cost: lost insights, repeated effort, and AI systems that lack the context they need to be useful.

## The shifting role of the developer

Agentic coding—sometimes called “vibe coding”—is changing what it means to be a software developer. The role is shifting from one that writes code to one that provides instructions in natural language and oversees the work of automated agents. Teams are beginning to deploy agent teams where specialized agents handle quality engineering, project management, frontend, and backend work, interacting directly with tools like Jira and GitHub.
@@ -22,10 +24,24 @@ The ease of generating software with AI puts the role of software reuse in quest

Without shared standards and shared repos, each team may produce similar solutions in isolation. InnerSource fosters reuse and cost sharing across units, which in turn supports sustainability and efficiency. Reusable InnerSource components can also reduce the cost of AI adoption: well-maintained shared libraries mean agents spend fewer tokens and less compute regenerating solutions that already exist. This is the same benefit InnerSource has always offered; in an AI-augmented world, it becomes harder to ignore.

## Capturing non-code knowledge

Software reuse is only part of the picture. Organizations also benefit from capturing and sharing non-code reusable assets: patterns, enablement content, tutorials, architecture decisions, and operational learnings. These assets are valuable both for human contributors and as training or grounding material for AI systems. Rather than requiring individuals to read through long tutorials, AI tools can surface the right knowledge at the right time—but only if that knowledge has been captured, organized, and made accessible in the first place.

InnerSource practices provide a natural framework for this. Open contribution models, visible repositories, and shared publishing tools encourage teams to document what they learn rather than keeping it siloed. Organizations that invest in capturing non-code knowledge will find their AI systems are better grounded in organizational context and more useful to the people they serve.

## Platforms ready for InnerSource

Platforms and tooling play a crucial role in enabling InnerSource at scale. As organizations adopt AI and agentic workflows, collaboration platforms must support discovery, visibility, and contribution across team boundaries. Platforms that make it easy to find reusable components, understand interfaces, and submit improvements reduce friction and encourage participation. Investment in platform capabilities—search, documentation, governance workflows, and integration with development tools—directly multiplies the effectiveness of InnerSource practices in an AI-augmented environment.

Discoverability is a particular challenge. Without central guidance and good search capabilities, multiple teams within the same organization may independently build similar platforms or solutions, unaware of each other's work. This duplication is costly and undermines the benefits of InnerSource. Program leads should invest in mechanisms that make shared assets—code, data, documentation, and tooling—easy to find across the organization. AI-powered search and recommendation tools can help, but they work best when the underlying assets are well-described, consistently tagged, and published to a central location.

## The importance of a solid data foundation

Data is a critical enabler for AI adoption, and organizations that treat their data assets with the same care as their code will be better positioned to succeed. Clean, well-governed, and discoverable data—whether in data lakes, data warehouses, or data products—is essential for training, fine-tuning, and grounding AI systems. When data is siloed, inconsistent, or poorly documented, AI initiatives stall or produce unreliable results.

InnerSource principles apply naturally to data: open contribution, clear ownership, transparent governance, and shared standards help organizations build a data foundation that is ready for cross-team collaboration and AI consumption. Treating data products as InnerSource projects—with contribution guidelines, quality standards, and discoverability mechanisms—enables teams to share and build on each other's data work rather than duplicating effort. As AI adoption accelerates, the organizations that invest in making their data InnerSource-ready will have a significant advantage.

## Enterprise AI and production readiness

This section focuses on large-scale enterprise adoption of AI—internal tools, pipelines, and agentic workflows—rather than consumer-facing AI products. In that context, the difference between prototype AI solutions and production-ready ones matters a lot. InnerSource practices—transparency, code review, documentation, and governance—help teams keep AI-assisted development robust, secure, and maintainable. They also help leaders see what is ready for production and what still needs work.