Add issue template config and PR template check workflow#1105

Open
Marshall-Hallenbeck wants to merge 3 commits into main from marshall_pr_template_check

Conversation

@Marshall-Hallenbeck (Collaborator)

Description

  • Disable blank issues and add contact links for the wiki and Discord.
  • Add a GitHub Actions workflow that comments on PRs missing template sections.
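For reference, GitHub reads this kind of configuration from .github/ISSUE_TEMPLATE/config.yml. A minimal sketch of what it could look like, with placeholder URLs for the wiki and Discord (not necessarily the exact values in this PR):

```yaml
# .github/ISSUE_TEMPLATE/config.yml (sketch)
# Disable the "open a blank issue" option and point people at the docs and chat instead.
blank_issues_enabled: false
contact_links:
  - name: NetExec Wiki
    url: https://www.netexec.wiki/        # placeholder URL
    about: Documentation, installation instructions, and module usage.
  - name: Discord
    url: https://discord.gg/your-invite   # placeholder invite link
    about: Ask questions and get help from the community.
```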

Discussion point for maintainers (@zblurx @NeffIsBack @mpgn):

  • Should we disable the blank issue (part of what this PR does) and force a feature request or bug report template? If yes, should we add a "question" template, or something for "other"?
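If a "question" template were added, GitHub issue forms keep it short. A minimal sketch, with the file name, labels, and wording as illustrative placeholders:

```yaml
# .github/ISSUE_TEMPLATE/question.yml (sketch)
name: Question
description: Ask a question that is neither a bug report nor a feature request
labels: ["question"]
body:
  - type: textarea
    id: question
    attributes:
      label: Your question
      description: What would you like to know? Please check the wiki first.
    validations:
      required: true
```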

Type of change

Insert an "x" inside the brackets for relevant items (do not delete options)

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Deprecation of feature or functionality
  • This change requires a documentation update
  • This requires a third party update (such as Impacket, Dploot, lsassy, etc)

Setup guide for the review

N/A - this is a GitHub Workflow update
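For reviewers who want context without checking out the branch: a workflow like this typically just inspects the PR body for the expected headings and leaves a comment when any are missing. A minimal sketch, assuming actions/github-script; the file name, trigger, and section list are illustrative and not necessarily what this PR implements:

```yaml
# .github/workflows/pr-template-check.yml (sketch)
name: PR template check
on:
  pull_request_target:
    types: [opened, edited]

permissions:
  pull-requests: write

jobs:
  check-template:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/github-script@v7
        with:
          script: |
            // Headings the PR description is expected to contain (illustrative list).
            const required = ["Description", "Type of change", "Checklist"];
            const body = context.payload.pull_request.body || "";
            const missing = required.filter((section) => !body.includes(section));
            if (missing.length > 0) {
              // Comment on the PR so the author knows which sections to add.
              await github.rest.issues.createComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: context.payload.pull_request.number,
                body: `It looks like the PR description is missing these template sections: ${missing.join(", ")}. Please update it to follow the PR template.`,
              });
            }
```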

Screenshots (if appropriate):

N/A

Checklist:

Insert an "x" inside the brackets for completed and relevant items (do not delete options)

  • I have run Ruff against my changes (via poetry: poetry run python -m ruff check . --preview; use --fix to automatically fix what it can)
  • I have added or updated the tests/e2e_commands.txt file if necessary (new modules or features are required to be added to the e2e tests)
  • New and existing e2e tests pass locally with my changes
  • If reliant on changes of third party dependencies, such as Impacket, dploot, lsassy, etc, I have linked the relevant PRs in those projects
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation (PR here: https://github.com/Pennyw0rth/NetExec-Wiki)


@NeffIsBack (Member)

Should we disable the blank issue (part of what this PR does) and force a feature request or bug report template? If yes, should we add a "question" template, or something for "other"?

Yes, absolutely. There are way too many bug reports where people don't fill out the form (e.g. OS and installation method), which makes it much harder to debug.

@NeffIsBack (Member)

I wanted to make some slight changes to the PR template anyway. @Marshall-Hallenbeck, fine if I just do it in this PR?

@Marshall-Hallenbeck (Collaborator, Author)

I wanted to make some slight changes to the PR template anyway. @Marshall-Hallenbeck, fine if I just do it in this PR?

No problem, go ahead!

@NeffIsBack (Member)

I wanted to make some slight changes to the PR template anyway. @Marshall-Hallenbeck, fine if I just do it in this PR?

No problem, go ahead!

Perfect, done. If you guys don't agree with the changes, please let me know :D

@NeffIsBack (Member) left a comment


Imo LGTM

- [ ] I have ran Ruff against my changes (via poetry: `poetry run python -m ruff check . --preview`, use `--fix` to automatically fix what it can)
- [ ] I have ran Ruff against my changes (poetry: `poetry run ruff check .`, use `--fix` to automatically fix what it can)
- [ ] I have added or updated the `tests/e2e_commands.txt` file if necessary (new modules or features are _required_ to be added to the e2e tests)
- [ ] New and existing e2e tests pass locally with my changes
@Marshall-Hallenbeck (Collaborator, Author)


@NeffIsBack why did you remove this line?

@NeffIsBack (Member)


A few reasons:

  • I think it is part of the reviewer's responsibility to make sure the PR does not interfere with (or break) other parts of the project. The normal end user usually doesn't have the knowledge of the code base to properly make a judgement about the impact.
  • Most PRs simply do not interfere with other parts of the code and therefore do not require this amount of in-depth testing (reviewing the code and structure is most often much more important). E.g. modules, which are the majority of PRs, simply "do something" at the end of the execution chain and 99% of the time do not influence any other part of the execution.
  • Most people simply don't have the infrastructure to properly execute the tests. They require a multi-domain forest with MSSQL, NFS, FTP and whatnot.
  • In the 2+ years we have had these tests (and the PR template requesting them) I have yet to see someone actually report issues from running them, although I know for sure that there were long periods where some parts were at least semi-broken. My guess would be that people simply don't run them and either falsely claim they have (setting the checkmark) or don't set the checkmark at all.

TL;DR: Imo these e2e tests are really useful for us to make sure everything works as intended when there is a need to do that (e.g. huge refactors, new releases, etc.), but they should not be required for contributors.

@Marshall-Hallenbeck (Collaborator, Author)


I think it is part of the reviewer's responsibility to make sure the PR does not interfere with (or break) other parts of the project. The normal end user usually doesn't have the knowledge of the code base to properly make a judgement about the impact.

I think that if people are changing the code they should be able to tell if it breaks other parts of the code. I would argue normal end users are not the same users creating PRs. We maintainers should definitely be checking that everything works, but if people are submitting code they should be expected to confirm it doesn't break stuff.

Most PRs simply do not interfere with other parts of the code and therefore do not require this amount of in-depth testing (reviewing the code and structure is most often much more important). E.g. modules, which are the majority of PRs, simply "do something" at the end of the execution chain and 99% of the time do not influence any other part of the execution.

Maybe, but it's hard to tell sometimes, and we've seen things have a cascading effect, like a module that doesn't exit correctly so none of the other modules run after it, or things like that.

Most people simply don't have the infrastructure to properly execute the tests. They require a multi-domain forest with MSSQL, NFS, FTP and whatnot.

The tests will still pass; they just won't all run fully, which I think is OK.

In the 2+ years we have had these tests (and the PR template requesting them) I have yet to see someone actually report issues from running them, although I know for sure that there were long periods where some parts were at least semi-broken. My guess would be that people simply don't run them and either falsely claim they have (setting the checkmark) or don't set the checkmark at all.

I've found dozens, if not hundreds, of issues via them. I don't think anyone else actually runs them, partially due to what you said about a full testing environment.

Overall, I think we need some way to make sure people are actually testing their code to see if it works. We keep getting PRs with stuff like "I haven't tested this", or we run it and there are glaringly obvious errors that would have been seen if the user had simply run the program.

@NeffIsBack (Member)


I think it is part of the reviewer's responsibility to make sure the PR does not interfere with (or break) other parts of the project. The normal end user usually doesn't have the knowledge of the code base to properly make a judgement about the impact.

I think that if people are changing the code they should be able to tell if it breaks other parts of the code. I would argue normal end users are not the same users creating PRs. We maintainers should definitely be checking that everything works, but if people are submitting code they should be expected to confirm it doesn't break stuff.

Usually they don't, and usually they also don't break or interfere with other stuff. PRs from people who don't know the codebase (not the usual contributors) and actually change integral components are very, very rare. Imo it's not worth forcing everyone (even for the simplest PRs) to run the tests. Imo it's a reviewer's responsibility, not that of the one who opened the PR (people usually don't PR stuff they know breaks other parts lol).

Most PRs simply do not interfere with other parts of the code and therefore do not require this amount of in-depth testing (reviewing the code and structure is most often much more important). E.g. modules, which are the majority of PRs, simply "do something" at the end of the execution chain and 99% of the time do not influence any other part of the execution.

Maybe, but it's hard to tell sometimes, and we've seen things have a cascading effect, like a module that doesn't exit correctly so none of the other modules run after it, or things like that.

From what I can tell this is very, very rare. If modules are properly reviewed based on the proposed code, there is not much that can go wrong if you stick to the rules (no wild global variables, no randomly changing protocol variables, etc.). Things that break other parts of the project usually get caught in the review anyway because it's simply bad code from the beginning. That's also a reason for the enhancement/New Module tags, because modules rarely touch any other code.

Most people simply don't have the infrastructure to properly execute the tests. They require a multi-domain forest with MSSQL, NFS, FTP and whatnot.

The tests will still pass; they just won't all run fully, which I think is OK.

Yeah, but if you don't have a proper testing environment most of them won't run anyway, which kinda defeats the purpose. (At least not at the scale the tests should be run.)

In the 2+ years we have had these tests (and the PR template requesting them) I have yet to see someone actually report issues from running them, although I know for sure that there were long periods where some parts were at least semi-broken. My guess would be that people simply don't run them and either falsely claim they have (setting the checkmark) or don't set the checkmark at all.

I've found dozens, if not hundreds, of issues via them.

Absolutely, same here, but by now everything is 1. pretty stable on Linux and 2. usually I find and rule out bugs that appear due to cross-platform stuff. Usually Linux works really well; just Windows has some problems, including the binaries for new releases. So, by now, if the code has been properly reviewed I don't find that many issues anymore, apart from missing dependencies in the compiled binary (due to the not-updated spec file).

I don't think anyone else actually runs them, partially due to what you said about a full testing environment.

👍

Overall, I think we need some way to make sure people are actually testing their code to see if it works. We keep getting PRs with stuff like "I haven't tested this", or we run it and there are glaringly obvious errors that would have been seen if the user had simply run the program.

Actually I think PRs that haven't been tested are pretty rare by now. People not running Ruff is much more common as far as I can tell. So usually the best proof for me is just a screenshot of the output from the new addition. That's always a good indicator and usually (but not always) included in the PR description.

@Marshall-Hallenbeck (Collaborator, Author)

@NeffIsBack While we are making these updates, should we update our AI policy and add a checkbox for whether AI was used?

@NeffIsBack (Member)

@NeffIsBack While we are making these updates, should we update our AI policy and add a checkbox for whether AI was used?

Let's do it, AI slop is increasing day by day.


Labels

enhancement (New feature or request)


2 participants