
Conversation

@KUASWoodyLIN (Contributor)

…simplify prerequisites

Copilot AI review requested due to automatic review settings (January 7, 2026 02:57)
@KUASWoodyLIN requested a review from iasthc as a code owner (January 7, 2026 02:57)
@gemini-code-assist

Summary of Changes

Hello @KUASWoodyLIN, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request aims to enhance the accessibility and ease of use of the LLM model demo by eliminating the need for an OpenAI API key and the associated openai Python library. The changes simplify the prerequisites and streamline the demo examples, focusing on a more straightforward interaction. This makes it easier for users to quickly get started and test LLM models without external dependencies or complex setup.

Highlights

  • OpenAI API Key Removal: The LLM model demo no longer requires an OpenAI API key, significantly simplifying the setup process for users.
  • Simplified Prerequisites: The openai Python library dependency has been removed from the installation instructions, making the demo lighter and reducing external requirements.
  • Streamlined Demo Examples: The documentation has been streamlined by removing the 'Conversation' and 'Complete Example' tabs, focusing on a single, simpler 'Simple Question' example for clarity.
  • Updated Connection Instructions: Guidance for obtaining connection details now directs users to internal service information (Service URL, Name, Model Name) on the deployment page, rather than external API keys.


@KUASWoodyLIN merged commit 95f6456 into main on Jan 7, 2026. 5 checks passed.
@gemini-code-assist (bot) left a comment


Code Review

This pull request updates the LLM model demo documentation to remove the dependency on an OpenAI API key and to simplify the prerequisites and examples. The changes make the documentation clearer and easier to follow by removing complex, multi-tab examples and focusing on a single, updated script. My review includes a suggestion to add an example value for a configuration variable to further enhance usability for the user.

MODEL_NAME = "<your_model_name>" # e.g., llm-demo
MODEL_ID = "RedHatAI/Llama-3.2-1B-Instruct-FP8"
NAME = "<your_name>" # e.g., llm-demo
MODEL_NAME = "<your_model_name>"

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Severity: medium

To improve clarity for the user, consider adding an example value for MODEL_NAME, similar to how it's done for SERVICE_URL and NAME. Based on the text earlier in this document, a good example would be meta-llama/Llama-2-7b-chat.

MODEL_NAME = "<your_model_name>"    # e.g., meta-llama/Llama-2-7b-chat

Copilot AI left a comment


Pull request overview

This PR simplifies the LLM model demo documentation by removing the OpenAI API key dependency and streamlining the testing prerequisites. The changes make the demo more straightforward by focusing on direct API interaction with the deployed LLM service rather than going through the OpenAI SDK.

Key changes:

  • Removed OpenAI API key requirement from prerequisites
  • Simplified the Python dependencies to require only the `requests` library (see the sketch after this list)
  • Consolidated testing examples from multiple tabs into a single, clearer example
  • Updated connection information section to reference OtterScale UI fields directly
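
To make the overview concrete, here is a minimal sketch of what the simplified test script might look like. Only the `requests` dependency, the `OtterScale-Model-Name` header, and the `model` payload field are confirmed by the diff excerpts below; the endpoint path, URL, and example values are assumptions:

```python
import requests

# Placeholder values; copy these from your deployment page
# (the review comments below discuss which UI field maps to which variable).
SERVICE_URL = "http://llm.example.com"      # "Service URL" from the Service card
NAME = "llm-demo"                           # "Name" field in the model table
MODEL_NAME = "meta-llama/Llama-2-7b-chat"   # "Model Name" field in the model table

# The /v1/chat/completions path is an assumption (OpenAI-compatible serving);
# use whichever path the demo documentation specifies.
response = requests.post(
    f"{SERVICE_URL}/v1/chat/completions",
    headers={"OtterScale-Model-Name": NAME},  # NAME goes in the routing header
    json={
        "model": MODEL_NAME,                  # MODEL_NAME goes in the payload
        "messages": [{"role": "user", "content": "What is machine learning?"}],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```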


Comment on lines +83 to +84:

```python
NAME = "<your_name>" # e.g., llm-demo
MODEL_NAME = "<your_model_name>"
```

Copilot AI Jan 7, 2026


The variable naming and usage here are confusing. Based on the Connection Information section, there are two separate fields from the UI: "Name" and "Model Name". However, the variable names NAME and MODEL_NAME don't clearly indicate their purposes:

  • NAME is used in the "OtterScale-Model-Name" header
  • MODEL_NAME is used as the "model" field in the payload

Consider renaming these variables to be more descriptive of their purposes, such as OTTERSCALE_MODEL_NAME and MODEL_ID, or add inline comments explaining what each represents to help users correctly map the UI fields to the code variables.
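
For illustration, the renaming option might look like this; a hypothetical sketch with placeholder values, not the names the docs currently use:

```python
# Hypothetical renaming per the comment above; values are placeholders.
OTTERSCALE_MODEL_NAME = "llm-demo"         # UI "Name" -> "OtterScale-Model-Name" header
MODEL_ID = "meta-llama/Llama-2-7b-chat"    # UI "Model Name" -> "model" field in the payload
```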


```mdx
<Tabs>

Before running the test scripts, you'll need to find the following information from the `<url>/scope/<scope-name>/models/llm` page:
```

Copilot AI Jan 7, 2026


The documentation mentions finding information from the page <url>/scope/<scope-name>/models/llm but doesn't explain what <url> and <scope-name> should be replaced with, or where users can find these values. This could leave users uncertain about where to navigate. Consider providing more context about what these placeholders represent or linking to documentation about URL structure and scope names.

Suggested change:

```diff
-Before running the test scripts, you'll need to find the following information from the `<url>/scope/<scope-name>/models/llm` page:
+Before running the test scripts, you'll need to find the following information from the Models page in your OtterScale cluster. You can open it by going to your cluster URL (for example, `https://your-cluster.example.com`), then navigating to the appropriate scope (namespace) and opening the **Models** page for your LLM, which corresponds to the `<url>/scope/<scope-name>/models/llm` path (where `<url>` is your cluster URL and `<scope-name>` is the name of the scope/namespace you selected):
```

Comment on lines +71 to +74:

```markdown
Before running the test scripts, you'll need to find the following information from the `<url>/scope/<scope-name>/models/llm` page:
- **Service URL**: The URL information from the Service card
- **Name**: The `name` field in the model table
- **Model Name**: The `Model Name` field in the model table
```

Copilot AI Jan 7, 2026


The naming convention for these configuration fields is confusing and inconsistent with the code example below. The documentation mentions three fields: "Service URL", "Name", and "Model Name", but the code example uses variables named SERVICE_URL, NAME, and MODEL_NAME. The relationship between these is unclear:

  • "Name" maps to the NAME variable which is used in the "OtterScale-Model-Name" header
  • "Model Name" maps to the MODEL_NAME variable which is used as the "model" in the payload

This creates ambiguity about which UI field corresponds to which variable. Consider using clearer naming that explicitly distinguishes between the model identifier used in the header versus the model identifier used in the API payload, or provide more explicit mapping between the UI fields and code variables.
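
Alternatively, keeping the existing variable names and spelling out the mapping in comments would also resolve the ambiguity; a sketch with placeholder values:

```python
SERVICE_URL = "http://llm.example.com"      # UI field "Service URL" (Service card)
NAME = "llm-demo"                           # UI field "Name" -> sent as the "OtterScale-Model-Name" header
MODEL_NAME = "meta-llama/Llama-2-7b-chat"   # UI field "Model Name" -> sent as "model" in the request payload
```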
