Skip to content
Open
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension


Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
112 changes: 112 additions & 0 deletions .github/workflows/issue-agent.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,112 @@
name: Issue Agent

on:
issues:
types: [opened, edited, reopened, labeled]
Comment on lines +3 to +5
Copy link

Copilot AI Feb 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The workflow will post a new comment every time it's triggered, which could lead to spam on issues that are edited multiple times. For the edited trigger, consider either:

  1. Editing the existing bot comment instead of creating a new one (find the previous comment by filtering comments where the author is the GitHub Actions bot)
  2. Adding a check to only comment once per issue
  3. Removing the edited trigger if re-analysis on edits isn't necessary

The same pattern applies to issues that are repeatedly labeled/unlabeled.

Copilot uses AI. Check for mistakes.

Comment on lines +3 to +6
Copy link

Copilot AI Feb 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Triggering on edited and labeled will post a new agent comment every time an issue is tweaked or labels change, which can quickly flood threads. Consider updating an existing bot comment instead of always creating a new one, or restrict triggers to fewer event types.

Copilot uses AI. Check for mistakes.
permissions:
issues: write

jobs:
analyze-issue:
runs-on: ubuntu-latest
steps:
Comment on lines +10 to +13
Copy link

Copilot AI Feb 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

There's no rate limiting or concurrency control for the GitHub Models API calls. If multiple issues are opened/edited rapidly (e.g., during a bulk import or bot activity), this could:

  1. Exceed GitHub Models API rate limits, causing failures
  2. Incur unexpected costs if the API has usage-based pricing
  3. Create a spam flood of bot comments

Consider adding:

  1. A concurrency limit in the workflow (e.g., concurrency: group: "issue-agent" with cancel-in-progress: true)
  2. Rate limiting checks before making API calls
  3. A filter to skip bot-created issues (check if context.payload.issue.user.type === 'Bot')

Copilot uses AI. Check for mistakes.
- name: Analyze issue and post comment
uses: actions/github-script@v7
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
const issueNumber = context.issue.number;
const issueTitle = context.payload.issue.title;
const issueBody = context.payload.issue.body || '(no description provided)';
const issueUser = context.payload.issue.user.login;
const labels = (context.payload.issue.labels || []).map(l => l.name).join(', ') || 'none';

// Call GitHub Models API for AI-powered analysis
const response = await fetch('https://models.inference.ai.azure.com/chat/completions', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${process.env.GITHUB_TOKEN}`
Comment on lines +26 to +30
Copy link

Copilot AI Feb 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The Models API call isn’t protected against network/transport failures. If fetch(...) throws (DNS/timeout/etc.), the script will error out before reaching the fallback comment. Wrap the request in try/catch and proceed with the non-AI fallback on any exception.

Copilot uses AI. Check for mistakes.
},
Comment on lines +26 to +31
Copy link

Copilot AI Feb 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The Authorization header for the Models API call is built from process.env.GITHUB_TOKEN, but this workflow never sets GITHUB_TOKEN in env:. As written, the bearer token can end up empty and the Models API request will fail (falling back every time). Set env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} (or ${{ github.token }}) for this step, or pass the token via an explicit env var and reference that instead of process.env.GITHUB_TOKEN.

Copilot uses AI. Check for mistakes.
body: JSON.stringify({
model: 'gpt-4o-mini',
messages: [
{
role: 'system',
content: `You are a rigorous technical agent reviewing GitHub issues for the "simulation-theory" repository — a research project on simulation theory, mathematics, quantum mechanics, and philosophy. Your job is to carefully read each issue and provide a thorough, structured analysis.

For each issue produce:
1. **Summary** — a concise one-paragraph summary of what the issue is about.
2. **Key Points** — bullet list of the most important observations or questions raised.
3. **Relevance to Simulation Theory** — how this issue connects to the project's themes.
4. **Suggested Actions** — concrete next steps or questions for the author.

Be rigorous, thoughtful, and constructive. Keep the tone academic and helpful.`
},
{
role: 'user',
content: `Please analyze this GitHub issue:\n\n**Title:** ${issueTitle}\n**Author:** ${issueUser}\n**Labels:** ${labels}\n\n**Description:**\n${issueBody}`
Comment on lines +47 to +49
Copy link

Copilot AI Feb 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

User-provided content (issue title, body, labels) is directly interpolated into the AI prompt without sanitization. While the content is only sent to the GitHub Models API and not rendered directly in the comment, this could still lead to prompt injection attacks where malicious users craft issue content designed to manipulate the AI's response.

Consider:

  1. Adding a content length limit before sending to the API
  2. Sanitizing or escaping special characters that could be used for prompt injection
  3. Adding a disclaimer in the workflow documentation about the risks of AI-generated content

Note: This is partially mitigated by the fact that the AI's response is wrapped in a clearly marked bot comment, but prompt injection could still cause the bot to generate misleading or inappropriate analysis.

Copilot uses AI. Check for mistakes.
}
],
max_tokens: 1500,
temperature: 0.4
})
});
Comment on lines +26 to +55
Copy link

Copilot AI Feb 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The GitHub Models API request lacks proper error handling for network failures. The current code only checks response.ok but doesn't handle cases where the fetch itself throws an error (network timeouts, DNS failures, etc.). This will cause the workflow to fail completely instead of falling back to the templated comment.

Wrap the fetch call in a try-catch block to catch any thrown errors and ensure the fallback logic is reached even when the API call fails catastrophically.

Copilot uses AI. Check for mistakes.
Comment on lines +26 to +55
Copy link

Copilot AI Feb 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The API call to GitHub Models does not include timeout configuration. If the API becomes unresponsive, the workflow could hang for an extended period before timing out with the default GitHub Actions job timeout (360 minutes). Consider adding a timeout to the fetch call or implementing a reasonable timeout mechanism to fail fast if the API doesn't respond promptly.

Copilot uses AI. Check for mistakes.

let analysisText;
if (response.ok) {
let data;
try {
data = await response.json();
} catch (error) {
console.log('Failed to parse JSON from GitHub Models API response:', error);
}

if (data && data.choices && data.choices.length > 0 && data.choices[0].message) {
analysisText = data.choices[0].message.content;
} else if (data) {
console.log('Unexpected response structure from GitHub Models API:', JSON.stringify(data));
}
} else {
console.log(`GitHub Models API returned ${response.status}: ${await response.text()}`);
}

// Fallback: structured analysis without AI
if (!analysisText) {
analysisText = `**Summary**\nIssue #${issueNumber} titled *"${issueTitle}"* was submitted by @${issueUser}. ${issueBody.length > 0 ? 'It contains a description that may include images or text.' : 'No description was provided.'}\n\n**Labels:** ${labels}\n\n**Suggested Actions**\n- Review the content of this issue and add appropriate labels if missing.\n- Respond to the author with any clarifying questions.\n- Link related issues or pull requests if applicable.`;
Copy link

Copilot AI Feb 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The fallback message construction uses issueBody.length to check if a description exists, but issueBody is already defined as an empty string fallback at line 22 when body is null/undefined. This means issueBody.length will always be greater than 0 (even if it's just the fallback text), making the conditional logic misleading. The condition should check the original body value or compare against the fallback string, not just the length.

Copilot uses AI. Check for mistakes.
Comment on lines +76 to +77
Copy link

Copilot AI Feb 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Fallback comment may expose sensitive information. The fallback analysis at line 76 includes the full issue body in the comment without any filtering or sanitization. If the issue body contains sensitive information (tokens, credentials, personal data), this will be publicly exposed in the agent's comment. Consider truncating or summarizing the issue body reference, or at minimum applying the same sanitization that should be used for AI-generated content.

Copilot uses AI. Check for mistakes.
}

const marker = '*This comment was generated automatically by the Issue Agent workflow.*';
const commentBody = `## 🤖 Agent Analysis\n\n${analysisText}\n\n---\n${marker}`;

// Look for an existing Issue Agent comment and update it if found to avoid spamming
const { data: existingComments } = await github.rest.issues.listComments({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: issueNumber,
per_page: 100
});

const existingAgentComment = existingComments.find(c =>
c.user &&
c.user.type === 'Bot' &&
typeof c.body === 'string' &&
c.body.includes(marker)
);

if (existingAgentComment) {
await github.rest.issues.updateComment({
owner: context.repo.owner,
repo: context.repo.repo,
comment_id: existingAgentComment.id,
body: commentBody
});
} else {
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: issueNumber,
body: commentBody
});
}
Comment on lines +14 to +112
Copy link

Copilot AI Feb 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Code duplication between workflows. Both pr-agent.yml and issue-agent.yml contain similar logic for calling the GitHub Models API, handling responses, and posting comments. This duplication makes maintenance harder and could lead to inconsistencies (as evidenced by the missing sanitization in issue-agent.yml). Consider extracting the common logic into a reusable composite action or a shared script that both workflows can call, which would improve maintainability and ensure consistent behavior.

Copilot uses AI. Check for mistakes.
186 changes: 186 additions & 0 deletions .github/workflows/pr-agent.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,186 @@
name: PR Agent

on:
pull_request_target:
types: [opened, reopened]

permissions:
pull-requests: write
contents: read
models: read

jobs:
analyze-pr:
runs-on: ubuntu-latest
Comment on lines +12 to +14
Copy link

Copilot AI Feb 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

There's no rate limiting or concurrency control for the GitHub Models API calls. If multiple PRs are opened/synchronized rapidly (e.g., during batch updates or automated dependency PRs), this could:

  1. Exceed GitHub Models API rate limits, causing failures
  2. Incur unexpected costs if the API has usage-based pricing
  3. Create a spam flood of bot comments (especially with synchronize trigger firing on every push)

Consider adding:

  1. A concurrency limit in the workflow (e.g., concurrency: group: "pr-agent" with cancel-in-progress: true or group: ${{ github.event.pull_request.number }})
  2. Rate limiting checks before making API calls
  3. A filter to skip bot-created PRs (check if context.payload.pull_request.user.type === 'Bot')

Copilot uses AI. Check for mistakes.
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
Comment on lines +4 to +19
Copy link

Copilot AI Feb 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Using 'pull_request_target' trigger introduces security risks when combined with code checkout. This trigger runs in the context of the base repository with write permissions, even for PRs from forks. When combined with checking out code at line 17 (especially from untrusted forks), this creates a potential security vulnerability where malicious code from a fork could be executed with write permissions to the repository. Consider using the standard 'pull_request' trigger instead, or if 'pull_request_target' is necessary for API access, remove the checkout step since it's not being used for any file operations in this workflow.

Copilot uses AI. Check for mistakes.

- name: Collect changed files
id: changed
run: |
# Ensure the base branch ref is available locally (important for fork-based PRs)
git fetch origin "${{ github.event.pull_request.base.ref }}" --no-tags --prune --depth=1

BASE="${{ github.event.pull_request.base.sha }}"
HEAD="${{ github.event.pull_request.head.sha }}"

# Compute the list of changed files between base and head; fail explicitly on error
if ! ALL_FILES=$(git diff --name-only "$BASE" "$HEAD"); then
echo "Error: failed to compute git diff between $BASE and $HEAD" >&2
exit 1
fi

# Count total changed files robustly, even when there are zero files
TOTAL=$(printf '%s\n' "$ALL_FILES" | sed '/^$/d' | wc -l | tr -d ' ')
FILES=$(echo "$ALL_FILES" | head -50 | tr '\n' ', ')
FILES="${FILES%, }"
if [ "$TOTAL" -gt 50 ]; then
REMAINING=$(( TOTAL - 50 ))
FILES="${FILES} (and ${REMAINING} more files)"
fi
{
echo 'files<<EOF'
echo "$FILES"
echo 'EOF'
} >> "$GITHUB_OUTPUT"

Comment on lines +16 to +49
Copy link

Copilot AI Feb 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The checkout step is unnecessary and wasteful. The workflow checks out the entire repository with full history (fetch-depth: 0) but only uses git commands to compute file diffs, which could be obtained more efficiently through the GitHub API. The checkout step also introduces security risks when combined with pull_request_target. Consider removing the checkout step entirely and using github.rest.pulls.listFiles() API to get the list of changed files, which would be safer, faster, and more efficient.

Suggested change
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Collect changed files
id: changed
run: |
# Ensure the base branch ref is available locally (important for fork-based PRs)
git fetch origin "${{ github.event.pull_request.base.ref }}" --no-tags --prune --depth=1
BASE="${{ github.event.pull_request.base.sha }}"
HEAD="${{ github.event.pull_request.head.sha }}"
# Compute the list of changed files between base and head; fail explicitly on error
if ! ALL_FILES=$(git diff --name-only "$BASE" "$HEAD"); then
echo "Error: failed to compute git diff between $BASE and $HEAD" >&2
exit 1
fi
# Count total changed files robustly, even when there are zero files
TOTAL=$(printf '%s\n' "$ALL_FILES" | sed '/^$/d' | wc -l | tr -d ' ')
FILES=$(echo "$ALL_FILES" | head -50 | tr '\n' ', ')
FILES="${FILES%, }"
if [ "$TOTAL" -gt 50 ]; then
REMAINING=$(( TOTAL - 50 ))
FILES="${FILES} (and ${REMAINING} more files)"
fi
{
echo 'files<<EOF'
echo "$FILES"
echo 'EOF'
} >> "$GITHUB_OUTPUT"
- name: Collect changed files
id: changed
uses: actions/github-script@v7
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
result-encoding: string
script: |
const prNumber = context.payload.pull_request.number;
const perPage = 100;
let page = 1;
const files = [];
while (true) {
const { data } = await github.rest.pulls.listFiles({
owner: context.repo.owner,
repo: context.repo.repo,
pull_number: prNumber,
per_page: perPage,
page,
});
if (!data.length) {
break;
}
for (const file of data) {
files.push(file.filename);
}
if (data.length < perPage) {
break;
}
page += 1;
}
const total = files.length;
const limited = files.slice(0, 50);
let summary = limited.join(', ');
if (total > 50) {
const remaining = total - 50;
summary += ` (and ${remaining} more files)`;
}
core.setOutput('files', summary);

Copilot uses AI. Check for mistakes.
- name: Analyze PR and post comment
uses: actions/github-script@v7
env:
CHANGED_FILES: ${{ steps.changed.outputs.files }}
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
const prNumber = context.payload.pull_request.number;
const prTitle = context.payload.pull_request.title;
const prBody = context.payload.pull_request.body || '(no description provided)';
const prUser = context.payload.pull_request.user.login;
const baseBranch = context.payload.pull_request.base.ref;
const headBranch = context.payload.pull_request.head.ref;
const changedFiles = process.env.CHANGED_FILES || 'unknown';
const additions = context.payload.pull_request.additions ?? '?';
const deletions = context.payload.pull_request.deletions ?? '?';

// Call GitHub Models API for AI-powered analysis
let response;
try {
response = await fetch('https://models.inference.ai.azure.com/chat/completions', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${process.env.GITHUB_TOKEN}`
},
body: JSON.stringify({
model: 'gpt-4o-mini',
messages: [
{
role: 'system',
content: `You are a rigorous code and content review agent for the "simulation-theory" repository — a research project on simulation theory, mathematics, quantum mechanics, and philosophy. Your job is to carefully examine each pull request and provide a thorough, structured review.

For each PR produce:
1. **Summary** — a concise one-paragraph summary of the proposed changes.
2. **Changed Files Analysis** — observations about the files being modified and why they matter.
3. **Potential Concerns** — any risks, conflicts, or issues the reviewer should check.
4. **Relevance to Project Goals** — how these changes align with (or diverge from) simulation-theory research.
5. **Suggested Actions** — specific things the PR author or reviewers should do before merging.

Be rigorous, constructive, and precise. Keep the tone academic and professional.`
},
{
role: 'user',
content: `Please analyze this pull request:\n\n**Title:** ${prTitle}\n**Author:** ${prUser}\n**Base branch:** ${baseBranch} ← **Head branch:** ${headBranch}\n**Changes:** +${additions} / -${deletions} lines\n**Changed files:** ${changedFiles}\n\n**Description:**\n${prBody}`
Comment on lines +93 to +94
Copy link

Copilot AI Feb 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

User-provided content (PR title, description, branch names) is directly interpolated into the AI prompt without sanitization. While the content is only sent to the GitHub Models API and not rendered directly in the comment, this could still lead to prompt injection attacks where malicious users craft PR content designed to manipulate the AI's response.

Consider:

  1. Adding a content length limit before sending to the API (especially for PR body which can be very long)
  2. Sanitizing or escaping special characters that could be used for prompt injection
  3. Adding a disclaimer in the workflow documentation about the risks of AI-generated content

Note: This is partially mitigated by the fact that the AI's response is wrapped in a clearly marked bot comment, but prompt injection could still cause the bot to generate misleading or inappropriate analysis.

Copilot uses AI. Check for mistakes.
}
],
max_tokens: 1500,
temperature: 0.4
})
Comment on lines +97 to +99
Copy link

Copilot AI Feb 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

With active PRs mentioned in the description, triggering this workflow on 'opened', 'edited', 'synchronize', and 'reopened' events could result in significant API usage costs. Each workflow run makes an API call to the GitHub Models service with max_tokens set to 1500. The 'synchronize' event in particular fires on every new commit, which could lead to many API calls for actively developed PRs. Consider implementing rate limiting, cooldown periods, or restricting which events trigger the workflow to control costs and avoid hitting potential API rate limits.

Copilot uses AI. Check for mistakes.
});

let analysisText;
if (response.ok) {
try {
const data = await response.json();
if (data.choices && data.choices.length > 0 && data.choices[0].message) {
try {
if (response.ok) {
const data = await response.json();
if (data.choices && data.choices.length > 0 && data.choices[0].message) {
analysisText = data.choices[0].message.content;
} else {
console.log('Unexpected response structure from GitHub Models API:', JSON.stringify(data));
}
} else {
console.log(`GitHub Models API returned ${response.status}: ${await response.text()}`);
}
} catch (error) {
console.log('Error while calling or parsing response from GitHub Models API, falling back to templated analysis:', error);
Comment on lines +107 to +119
Copy link

Copilot AI Feb 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Duplicate code block detected. Lines 106-120 duplicate the logic from lines 102-120, creating a nested try-catch structure that will never work correctly. The outer try block (starting at line 104) is opened but never closed, and the inner try block (starting at line 107) creates duplicate logic. This will cause the workflow to fail with a syntax error when executed. The duplicate try-catch block starting at line 107 should be removed entirely.

Suggested change
try {
if (response.ok) {
const data = await response.json();
if (data.choices && data.choices.length > 0 && data.choices[0].message) {
analysisText = data.choices[0].message.content;
} else {
console.log('Unexpected response structure from GitHub Models API:', JSON.stringify(data));
}
} else {
console.log(`GitHub Models API returned ${response.status}: ${await response.text()}`);
}
} catch (error) {
console.log('Error while calling or parsing response from GitHub Models API, falling back to templated analysis:', error);
analysisText = data.choices[0].message.content;
} else {
console.log('Unexpected response structure from GitHub Models API:', JSON.stringify(data));
}
} catch (error) {
console.log('Error while calling or parsing response from GitHub Models API, falling back to templated analysis:', error);
}
} else {
console.log(`GitHub Models API returned ${response.status}: ${await response.text()}`);

Copilot uses AI. Check for mistakes.
}
Comment on lines +102 to +120
Copy link

Copilot AI Feb 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Missing closing brace for the try block that starts at line 104. The code has a try statement at line 104 but no corresponding closing brace, which will cause a JavaScript syntax error when the workflow runs. After the duplicate code block is removed, ensure there's a proper closing brace for the try block and a corresponding catch block to handle any exceptions.

Copilot uses AI. Check for mistakes.

// Fallback: structured analysis without AI
if (!analysisText) {
let changedFilesSection;
if (!changedFiles || changedFiles === 'unknown') {
changedFilesSection = 'No changed file list is available for this PR.';
} else {
const files = changedFiles.split(',').map(f => f.trim()).filter(f => f.length > 0);
if (files.length === 0) {
changedFilesSection = 'No files listed.';
} else {
changedFilesSection = files.map(f => `- ${f}`).join('\n');
}
}
analysisText = `**Summary**\nPR #${prNumber} titled *"${prTitle}"* was submitted by @${prUser} merging \`${headBranch}\` into \`${baseBranch}\`.\n\n**Changed Files**\n${changedFilesSection}\n\n**Stats:** +${additions} additions / -${deletions} deletions\n\n**Suggested Actions**\n- Review all changed files for correctness and consistency.\n- Ensure the description clearly explains the motivation for each change.\n- Verify no unintended files are included in this PR.`;
}

// Sanitize and limit the AI-generated analysis text before posting as a comment.
const MAX_COMMENT_LENGTH = 5000;
const sanitizeAnalysisText = (text) => {
if (typeof text !== 'string') {
return '';
}
// Remove script-like tags and generic HTML tags as a defense-in-depth measure.
let cleaned = text
.replace(/<\s*\/?\s*script[^>]*>/gi, '')
.replace(/<[^>]+>/g, '')
.trim();
if (cleaned.length > MAX_COMMENT_LENGTH) {
cleaned = cleaned.slice(0, MAX_COMMENT_LENGTH) +
'\n\n*Note: Output truncated to fit comment length limits.*';
}
return cleaned;
};

const safeAnalysisText = sanitizeAnalysisText(analysisText);

const comment = `## 🤖 Agent Review\n\n${safeAnalysisText}\n\n---\n*This comment was generated automatically by the PR Agent workflow.*`;
// Try to find an existing PR Agent comment to update, to avoid spamming the thread
const { data: existingComments } = await github.rest.issues.listComments({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: prNumber
});

const existingAgentComment = existingComments.find(c =>
c &&
c.body &&
c.body.includes('This comment was generated automatically by the PR Agent workflow.')
);

if (existingAgentComment) {
await github.rest.issues.updateComment({
owner: context.repo.owner,
repo: context.repo.repo,
comment_id: existingAgentComment.id,
body: comment
});
} else {
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: prNumber,
body: comment
});
}