Add issue template config and PR template check workflow #1105
Marshall-Hallenbeck wants to merge 3 commits into main
Conversation
Disable blank issues and add contact links for the wiki and Discord. Add a GitHub Actions workflow that comments on PRs missing template sections.
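Neither file's contents appear in this thread. For context, here is a minimal sketch of what an issue template config that disables blank issues might look like; the contact-link names and URLs below are placeholders, not taken from the PR:

```yaml
# .github/ISSUE_TEMPLATE/config.yml -- hypothetical sketch; URLs are placeholders
blank_issues_enabled: false
contact_links:
  - name: NetExec Wiki
    url: https://www.netexec.wiki/  # assumed wiki address, not confirmed by this PR
    about: Please check the documentation before opening an issue.
  - name: Discord
    url: https://discord.gg/example  # placeholder invite, not from the PR
    about: Ask questions and get help from the community.
```

And a sketch of how a workflow that comments on PRs missing template sections could be put together, assuming it scans the PR body for required headings via `actions/github-script`; the heading list, file name, and comment wording are assumptions, not the actual workflow from this PR:

```yaml
# .github/workflows/pr-template-check.yml -- hypothetical sketch, not the PR's actual workflow
name: PR template check
on:
  pull_request_target:
    types: [opened, edited]
permissions:
  pull-requests: write
jobs:
  check-template:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/github-script@v7
        with:
          script: |
            // Required section headings are assumed; the real list may differ.
            const required = ["## Description", "## Type of change", "## Checklist"];
            const body = context.payload.pull_request.body || "";
            const missing = required.filter((h) => !body.includes(h));
            if (missing.length > 0) {
              await github.rest.issues.createComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: context.payload.pull_request.number,
                body: `This PR seems to be missing template sections: ${missing.join(", ")}. Please fill out the PR template.`,
              });
            }
```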
Looks like the template check worked: https://github.com/Pennyw0rth/NetExec/actions/runs/21891306791/job/63197484682?pr=1105
Yes absolutely, there are way too many bug reports where people don't fill out the form (e.g. OS and installation method), which makes it much harder to debug.
I wanted to make some slight changes to the PR template anyway, @Marshall-Hallenbeck fine if I just do it in this PR?
No problem, go ahead!
Perfect, done. If you guys don't agree with the changes, please let me know :D
- - [ ] I have ran Ruff against my changes (via poetry: `poetry run python -m ruff check . --preview`, use `--fix` to automatically fix what it can)
+ - [ ] I have ran Ruff against my changes (poetry: `poetry run ruff check .`, use `--fix` to automatically fix what it can)
  - [ ] I have added or updated the `tests/e2e_commands.txt` file if necessary (new modules or features are _required_ to be added to the e2e tests)
- - [ ] New and existing e2e tests pass locally with my changes
@NeffIsBack why did you remove this line?
A few reasons:
- I think it is part of the reviewer's responsibility to make sure the PR does not interfere with (or break) other parts of the project. The normal end user usually doesn't have the knowledge of the code base to properly judge the impact.
- Most PRs simply do not interfere with other parts of the code and therefore do not require this amount of in-depth testing (reviewing the code and structure is most often much more important). E.g. modules, which are the majority of PRs, simply "do something" at the end of the execution chain and in 99% of cases do not influence any other part of the execution.
- Most people simply don't have the infrastructure to properly execute the tests. They require a multi-domain forest with MSSQL, NFS, FTP and whatnot.
- In the 2+ years we have had these tests (and the PR template requesting them), I have yet to see someone actually report issues from running them, although I know for sure that there were long periods where some parts were at least semi-broken. My guess would be that people simply don't run them and either falsely claim they have (setting the checkmark) or don't set the checkmark at all.
TL;DR: Imo these e2e tests are really useful for us to make sure everything works as intended when there is a need for that (e.g. huge refactors, new releases, etc.), but they should not be required for contributors.
> I think it is part of the reviewer's responsibility to make sure the PR does not interfere with (or break) other parts of the project. The normal end user usually doesn't have the knowledge of the code base to properly judge the impact.

I think that if people are changing the code, they should be able to tell if it breaks other parts of the code. I would argue normal end users are not the same users creating PRs. We maintainers should definitely be checking that everything works, but if people are submitting code, they should be expected to confirm it doesn't break stuff.

> Most PRs simply do not interfere with other parts of the code and therefore do not require this amount of in-depth testing (reviewing the code and structure is most often much more important). E.g. modules, which are the majority of PRs, simply "do something" at the end of the execution chain and in 99% of cases do not influence any other part of the execution.

Maybe, but it's hard to tell sometimes, and we've seen things have a cascading effect, like a module not exiting correctly so none of the other modules run after it, or things like that.

> Most people simply don't have the infrastructure to properly execute the tests. They require a multi-domain forest with MSSQL, NFS, FTP and whatnot.

The tests will still pass; they just won't all run fully, which I think is OK.

> In the 2+ years we have had these tests (and the PR template requesting them), I have yet to see someone actually report issues from running them, although I know for sure that there were long periods where some parts were at least semi-broken. My guess would be that people simply don't run them and either falsely claim they have (setting the checkmark) or don't set the checkmark at all.

I've found dozens, if not hundreds, of issues via them. I don't think anyone else actually runs them, partially due to what you said about a full testing environment.

Overall, I think we need some way to make sure people are actually testing their code to see if it works. We keep getting PRs with stuff like "I haven't tested this", or we run it and there are glaringly obvious errors that would have been seen if the user had simply run the program.
> > I think it is part of the reviewer's responsibility to make sure the PR does not interfere with (or break) other parts of the project. The normal end user usually doesn't have the knowledge of the code base to properly judge the impact.
>
> I think that if people are changing the code, they should be able to tell if it breaks other parts of the code. I would argue normal end users are not the same users creating PRs. We maintainers should definitely be checking that everything works, but if people are submitting code, they should be expected to confirm it doesn't break stuff.

Usually they don't, and usually they also don't break/interfere with other stuff. PRs from people who don't know the codebase (not the usual contributors) and actually change integral components are very, very rare. Imo it's not worth forcing everyone (even for the simplest PRs) to run the tests. Imo it's a reviewer's responsibility, not the PR author's (people usually don't PR stuff they know breaks other parts lol).
> > Most PRs simply do not interfere with other parts of the code and therefore do not require this amount of in-depth testing (reviewing the code and structure is most often much more important). E.g. modules, which are the majority of PRs, simply "do something" at the end of the execution chain and in 99% of cases do not influence any other part of the execution.
>
> Maybe, but it's hard to tell sometimes, and we've seen things have a cascading effect, like a module not exiting correctly so none of the other modules run after it, or things like that.

From what I can tell, this is very, very rare. If modules are properly reviewed based on the proposed code, there is not much that can go wrong if you stick to the rules (no wild global variables, no randomly changing protocol variables, etc.). Things that break other parts of the project usually get caught in the review anyway, because they are simply bad code from the beginning. That's also a reason for the enhancement/New Module tags, because modules rarely touch any other code.
> > Most people simply don't have the infrastructure to properly execute the tests. They require a multi-domain forest with MSSQL, NFS, FTP and whatnot.
>
> The tests will still pass; they just won't all run fully, which I think is OK.

Yeah, but if you don't have a proper testing environment, most of them won't run anyway, which kinda defeats the purpose. (At least not at the scale the tests should be run.)
> > In the 2+ years we have had these tests (and the PR template requesting them), I have yet to see someone actually report issues from running them, although I know for sure that there were long periods where some parts were at least semi-broken. My guess would be that people simply don't run them and either falsely claim they have (setting the checkmark) or don't set the checkmark at all.
>
> I've found dozens, if not hundreds, of issues via them.

Absolutely, same here, but by now everything is 1. pretty stable on Linux, and 2. usually I find and rule out bugs that appear due to cross-platform stuff. Usually Linux works really well; just Windows has some problems, including the binaries for new releases. So, by now, if the code has been properly reviewed, I don't find that many issues anymore, apart from missing dependencies in the compiled binary (due to the spec file not being updated).

> I don't think anyone else actually runs them, partially due to what you said about a full testing environment.
👍
> Overall, I think we need some way to make sure people are actually testing their code to see if it works. We keep getting PRs with stuff like "I haven't tested this", or we run it and there are glaringly obvious errors that would have been seen if the user had simply run the program.

Actually, I think PRs that haven't been tested are pretty rare by now. People not running ruff is much more common as far as I can tell. So usually the best proof for me is just a screenshot of the output from the new addition. That's always a good indicator and is usually (but not always) included in the PR description.
@NeffIsBack While we are making these updates, should we update our AI policy and add a checkbox about whether AI was used?
Let's do it, AI slop is increasing day by day.
Description
Discussion point for maintainers (@zblurx @NeffIsBack @mpgn):
Type of change
Insert an "x" inside the brackets for relevant items (do not delete options)
Setup guide for the review
N/A - this is a GitHub Workflow update
Screenshots (if appropriate):
N/A
Checklist:
Insert an "x" inside the brackets for completed and relevant items (do not delete options)
- [ ] I have ran Ruff against my changes (via poetry: `poetry run python -m ruff check . --preview`, use `--fix` to automatically fix what it can)
- [ ] I have added or updated the `tests/e2e_commands.txt` file if necessary (new modules or features are _required_ to be added to the e2e tests)