-
-
Notifications
You must be signed in to change notification settings - Fork 930
generative-ai: Add anti pattern example #1679
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Are you sure you want to change the base?
Conversation
getting-started/generative-ai.rst
Outdated
| ============= | ||
| - While AI-assisted tools such as autocompletion can enhance productivity, they sometimes rewrite entire code blocks instead of making small, focused edits. | ||
| This can make it more difficult to review changes and to fully understand both the original intent of the code and the rationale behind the new modifications. | ||
| Please keep maintaining consistency with the original code helps preserve clarity, traceability, and meaningful review. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
| Please keep maintaining consistency with the original code helps preserve clarity, traceability, and meaningful review. | |
| Maintaining consistency with the original code helps preserve clarity, traceability, and meaningful review. |
We also might want to mention (and link to?) the idea that we want to avoid churn.
| those that are fully generated by AI. If a contributor repeatedly opens unproductive | ||
| issues or PRs, they may be blocked. | ||
|
|
||
| Anti-patterns |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Do we have a second anti-pattern to include in the list? Other bad things: hallucinations, incorrectly changing tests instead of fixing code, ...
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
i think you covered the incorrectly changing tests instead of fixing one. Good call. I like having that in the list.
I don't personally see that so often anymore with Opus 4.1 or Sonnet 4.5, but older or lower end models tend to reward hack their way into that state more often - it's a thing we'll likely encounter coming from new contributors for a while as they can't be assumed to be using the latest and greatest.
| This can make it more difficult to review changes and to fully understand both the original intent of the code and the rationale behind the new modifications. | ||
| Maintaining consistency with the original code helps preserve clarity, traceability, and meaningful reviews and also helps us avoid unnecessary code churn. | ||
| - Sometimes AI assisted tools make failing unit tests pass by altering or bypassing the tests rather than addressing the underlying problem in the code. | ||
| Such changes do not represent a real fix and should be avoided. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I don't understand what should be avoided:
- avoid using AI-assisted tools?
- avoid rewriting code blocks?
- avoid using Ai-assisted tools that rewrite code blocks?
What is the recommended approach instead if we are telling people not to do a certain thing?
I would suggest starting the paragraph with something like:
"We discourage the use of AI when ..... "
Then followed with the explanation of why.
Currently this section reads with the "why" first.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I think this sentence was only intended to be tied to the false testing / false success bullet it is part of. Rather than wording it as "should be avoided" the right thing for users to do is to request that authors have reviewed the work in detail for problems like this before proposing it as a PR.
"Such changes do not represent a real fix. Authors must review the work done by AI tooling in detail to ensure it actually makes sense before proposing it as a PR."
I could maybe suggest adding an AI use tip for this situation like "iterate with your AI agent on the quality and utility of the change and tests to understand if the issue is actually fixed" but I'd leave that out for now as topic is maybe larger and constantly evolving, beyond the scope of this PR.
I'm trying to focus on a proactive positive outcome based wording rather than a negative like "avoid" or "discourage".
The reason being, the statements in here don't reflect all AI tooling. This devguide doc is about the use of AI, so we should assume those who read it are interested in or already using AI and are here to learn how to do it in an appropriate and productive manner.
Statements like "we discourage use of AI when ..." are likely to be outdated very quickly and discourage things that won't be true for long. Thus my desire to focus this on what we do want and why it is important for project health and reviewer respect.
|
|
||
| Anti-patterns | ||
| ============= | ||
| - While AI-assisted tools such as autocompletion can enhance productivity, they sometimes rewrite entire code blocks instead of making small, focused edits. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I'd actually tighten this phrasing up a bit, we don't need to explain what shape AI tools can take or what they can do (anyone reading this knows that).
I find this list of anti-patterns to not be AI specific. We already reject PRs coming from people who make larger than necessary changes that are difficult to review. So I'd suggest wording it as a "positive desired thing, not negative example" phrasing, starting something like this:
- Make small focused edits, prefer not rewriting entire code blocks.
This makes it easier to review changes and fully understand both the original intent of the code and the rationale behind the new modifications.
...
| This can make it more difficult to review changes and to fully understand both the original intent of the code and the rationale behind the new modifications. | ||
| Maintaining consistency with the original code helps preserve clarity, traceability, and meaningful reviews and also helps us avoid unnecessary code churn. | ||
| - Sometimes AI assisted tools make failing unit tests pass by altering or bypassing the tests rather than addressing the underlying problem in the code. | ||
| Such changes do not represent a real fix and should be avoided. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I think this sentence was only intended to be tied to the false testing / false success bullet it is part of. Rather than wording it as "should be avoided" the right thing for users to do is to request that authors have reviewed the work in detail for problems like this before proposing it as a PR.
"Such changes do not represent a real fix. Authors must review the work done by AI tooling in detail to ensure it actually makes sense before proposing it as a PR."
I could maybe suggest adding an AI use tip for this situation like "iterate with your AI agent on the quality and utility of the change and tests to understand if the issue is actually fixed" but I'd leave that out for now as topic is maybe larger and constantly evolving, beyond the scope of this PR.
I'm trying to focus on a proactive positive outcome based wording rather than a negative like "avoid" or "discourage".
The reason being, the statements in here don't reflect all AI tooling. This devguide doc is about the use of AI, so we should assume those who read it are interested in or already using AI and are here to learn how to do it in an appropriate and productive manner.
Statements like "we discourage use of AI when ..." are likely to be outdated very quickly and discourage things that won't be true for long. Thus my desire to focus this on what we do want and why it is important for project health and reviewer respect.
| those that are fully generated by AI. If a contributor repeatedly opens unproductive | ||
| issues or PRs, they may be blocked. | ||
|
|
||
| Anti-patterns |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
i think you covered the incorrectly changing tests instead of fixing one. Good call. I like having that in the list.
I don't personally see that so often anymore with Opus 4.1 or Sonnet 4.5, but older or lower end models tend to reward hack their way into that state more often - it's a thing we'll likely encounter coming from new contributors for a while as they can't be assumed to be using the latest and greatest.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
It would be interesting to add a link to: https://arxiv.org/pdf/2507.09089 and possibly https://www.fast.ai/posts/2025-10-30-build-to-last.html
| those that are fully generated by AI. If a contributor repeatedly opens unproductive | ||
| issues or PRs, they may be blocked. | ||
|
|
||
| Anti-patterns |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I would use a different title than antipatterns. I would like to move toward "Considerations or Keys for Success".
I would start by linking to the section about submitting a quality PR:
- consider whether the change is necessary
- small, focused changes
- tests that exercise the change
- style follows the existing code
AI coding tools will compose code. The contributor, not the AI tool, is responsible for the quality and validity of code submitted with a PR.
Take the time to consider the code changes suggested by an AI tool. Respect the reviewer's time by validating your PR before submission.
That "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity" paper was unfortunately out of date by the time it was published - which I don't expect all readers recognize. It requires understanding that the models used were already multiple generations behind and obsolete. If we're going to link to that, we'd need to explain that caveat up front. But it's also a developer productivity focused paper and doesn't really align with the topic of this devguide section. People should use what tooling is productive for them, but when it comes to OSS dev the thing I believe we all want is for our own time as recipients of people's reports/issues/PRs/etc to be well spent and respected. I like your idea of a "section about submitting a quality PR" for that reason. That is more what I believe we actually care about.
reality check: is my conscious bias against linking to things mentioning Mojo from official Python channels worthwhile here? +1 Regardless of biases, I like its broader theme from Chris of "understand the system that must be built maintained long term" concept. Key for CPython and any big OSS maintenance. |
|
@gpshead I think we can skip the links. I wrote the message before the one where I suggested putting AI use in a positive light where the contributor is responsible for the validity of the PR. I think there are two goals:
|
|
I like the idea of staying away from any opinion about AI on either side of the debate, and stay focused on the contributions. To that end, I would change some of the opening text:
to:
|
|
I love the suggested opening @nedbat. |
|
A lot of opinions!, I will address reviews by this weekend! |
📚 Documentation preview 📚: https://cpython-devguide--1679.org.readthedocs.build/