Add an AI and Autonomous code contribution policy.#328

Open
freakboy3742 wants to merge 9 commits intomainfrom
ai-policy

Conversation

@freakboy3742
Member

This is the first step towards an AI policy for BeeWare: adding the actual policy document.

Once ratified, links to this document will be added to the contribution guide.

It includes an updated pull request template, adding a checkbox and a prompt for declaring any AI tooling usage.

It also includes an update to the contribution guide that can be used as a template for other projects. This is a significant change to the contribution guide in this repository - the current version has a number of dead links. It replaces that content with references to the current contribution guide on the website. When rolled out to other projects, this content can be used as-is, or the references can be updated to point at that project's contribution guide (for Briefcase, Toga, etc.).

Submitted in draft form to allow discussion and ratification by the core team.

PR Checklist:

  • All new features have been tested
  • All new features have been documented
  • I have read the CONTRIBUTING.md file
  • I will abide by the code of conduct

@freakboy3742
Member Author

I've incorporated updates reflecting the feedback given to date. Barring significant additional feedback, my current plan is to put this to the core team for endorsement towards the end of this week.


@gpshead gpshead left a comment


A pile of comments, not necessarily in linear thought order. Read them all to understand my thinking.

General theme: the AI_POLICY doc is a bit long. I push back against things that are going to discriminate against people based on tool use. Forced disclosure can drive some genuinely interested contributors away, as it sets off a passively hostile "I'm gonna be bullied if I use AI" tone. Voluntary but encouraged disclosure is meaningful.

We're entering a world where some people perceived as easy victims who use AI may be targeted and harassed (or worse) because of it. Forced disclosure policies mean that honest people who get targeted are driven underground and/or out of communities - exactly as the trolls want. This can hurt people and projects.

Focus on what you want out of contributions - your contributor guidelines should already be covering that. The best AI policies are basically a TL;DR saying to respect the contributor guidelines, with a focus on respecting maintainer time and attention, and a reminder that abuse of that ends in closures and excommunication.

When you have rules, explain the why rather than the what. Rules that are just hurdles for no communally justifiable reason are some combination of passive-aggressive, virtue signals (probably not communicating what you think), or pointless sign-not-a-cop roadblocks that will be ignored or fake-complied with.

Limit words. Brief policies get read; long policies are less likely to. What else can be trimmed?

(Ironically... a model might do a good job here; I clearly didn't use one, as I spent far too long formulating my replies. I wrote too much and predict inconsistency, incoherence, and misedits across my comments 😅)

@freakboy3742
Member Author

freakboy3742 commented Apr 8, 2026

A pile of comments, not necessarily in linear thought order. Read them all to understand my thinking.

@gpshead Thanks for these comments - they're definitely helpful.

General theme: the AI_POLICY doc is a bit long.

That's definitely a fair criticism.

I push back against things that are going to discriminate against people based on tool use. Forced disclosure can drive some genuinely interested contributors away, as it sets off a passively hostile "I'm gonna be bullied if I use AI" tone. Voluntary but encouraged disclosure is meaningful.

As noted inline, any bullying behavior would trigger BeeWare's CoC, so that shouldn't be a concern.

The real motivation for requiring declaration is legal advice (from an actual lawyer) that suggested prudence is the best path. If Anthropic's lawyers are willing to go on record with legal advice to the contrary, I'd love to hear that.

Frankly, this has always been the weak point in OSS contribution - getting actual lawyers to make actual statements about what is needed. Is a CLA needed? Why or why not? I'd dearly like to have reasoned legal opinions - even if they're from companies with vested interests. It's taken 30+ years to get to a place where there's anything close to a common legal understanding of what OSS license compliance means in practice; I'd very much like to get clarity on what best practice means for AI contributions before I'm 80 :-)

@gpshead

gpshead commented Apr 8, 2026

One reason I liked the terms of service callout is that some AI service provider ToS's have indemnification clauses for their customers in them. But it'd take a lawyer to understand implications. OSS is, as usual, the underexplored legal frontier.

@freakboy3742 freakboy3742 marked this pull request as ready for review April 9, 2026 02:01
Contributor

@kattni kattni left a comment


This looks good to me; however, I caught a few issues on my read-through. Suggested changes inline.

Co-authored-by: Kattni <kattni@kattni.com>
Contributor

@kattni kattni left a comment


Looks good. Thanks for writing this up.
