Conversation

@KUASWoodyLIN
Contributor

…l deployments

Copilot AI review requested due to automatic review settings January 6, 2026 14:37
@KUASWoodyLIN KUASWoodyLIN requested a review from iasthc as a code owner January 6, 2026 14:37
@KUASWoodyLIN KUASWoodyLIN merged commit c6dae9a into main Jan 6, 2026
3 checks passed
@gemini-code-assist

Summary of Changes

Hello @KUASWoodyLIN, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the documentation by introducing a suite of new demo guides. These guides are designed to help users quickly set up and interact with various development and data-related applications, ranging from cloud-based IDEs and collaborative notebook environments to robust databases and large language models. The addition aims to streamline the onboarding process for these tools and provide practical examples for their usage.

Highlights

  • New Demo Guides: Added comprehensive guides for deploying and configuring Coder, JupyterHub, PostgreSQL, and LLM models.
  • Documentation Structure Update: The astro.config.mjs file was updated to include a new 'Demos' section in the sidebar navigation, making these new guides easily discoverable.
  • Practical Deployment Steps: Each new guide provides detailed, step-by-step instructions for deploying the respective application, including prerequisites, installation via application store or Helm, and verification.
  • Python Integration Examples: The PostgreSQL and LLM Model demos include Python code snippets to demonstrate connection, data manipulation, and API interaction, enhancing practical usability.


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds a series of helpful demo guides for Coder, JupyterHub, PostgreSQL, and LLM model deployments. The changes are well-structured and provide valuable examples for users. My review focuses on improving the consistency across these new guides, enhancing the robustness of the Python code snippets, and correcting some inconsistencies in the LLM model guide to ensure clarity and accuracy for the end-users.

Comment on lines +15 to +16
- **OpenAI API Key**: Obtain from [OpenAI Platform](https://platform.openai.com/api-keys)
- **Python Libraries**: `requests` and `openai`


high

There is a significant inconsistency in the prerequisites. The guide requires an OpenAI API Key and the openai library, but the code examples use requests to call a custom endpoint and do not use either the key or the openai library. This is very confusing for the user. Please align the documentation with the code examples by removing these unnecessary prerequisites.

- **Python Libraries**: `requests`

Comment on lines +68 to +70
<LinkCard title="PostgreSQL Demo" href="/demos/03-postgres/" description="Learn how to deploy PostgreSQL database and test read/write operations with Python." />

<LinkCard title="LLM Model Demo" href="/demos/04-llm-model/" description="Learn how to deploy and configure language model applications." />


medium

For consistency and better navigation between the new demo guides, I recommend adding a link to the JupyterHub demo. It's also a good practice to keep these links in a consistent order (e.g., numerical).

<LinkCard title="JupyterHub Demo" href="/demos/02-jupyterhub/" description="Deploy JupyterHub for multi-user Jupyter notebook environments." />

<LinkCard title="PostgreSQL Demo" href="/demos/03-postgres/" description="Learn how to deploy PostgreSQL database and test read/write operations with Python." />

<LinkCard title="LLM Model Demo" href="/demos/04-llm-model/" description="Learn how to deploy and configure language model applications." />

Comment on lines +71 to +73
<LinkCard title="PostgreSQL Demo" href="/demos/03-postgres/" description="Learn how to deploy PostgreSQL database and test read/write operations with Python." />

<LinkCard title="LLM Model Demo" href="/demos/04-llm-model/" description="Learn how to deploy and configure language model applications." />


medium

To improve navigation and maintain consistency across the demo guides, please add a link to the Coder demo in this 'Next Steps' section. Ordering the links numerically would also improve user experience.

<LinkCard title="Coder Demo" href="/demos/01-coder/" description="Deploy Coder for cloud-based VS Code IDE environments with support for both Go and Python backends." />

<LinkCard title="PostgreSQL Demo" href="/demos/03-postgres/" description="Learn how to deploy PostgreSQL database and test read/write operations with Python." />

<LinkCard title="LLM Model Demo" href="/demos/04-llm-model/" description="Learn how to deploy and configure language model applications." />

Comment on lines +123 to +157
connection = psycopg2.connect(
    host="<postgres_host>",
    port=5432,
    database="postgres",
    user="postgres",
    password="<password>"
)
cursor = connection.cursor()

# Create a test table
cursor.execute("""
    CREATE TABLE IF NOT EXISTS demo_users (
        id SERIAL PRIMARY KEY,
        name VARCHAR(100),
        email VARCHAR(100),
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")

# Insert sample data
cursor.execute("""
    INSERT INTO demo_users (name, email)
    VALUES (%s, %s)
""", ("John Doe", "john@example.com"))

cursor.execute("""
    INSERT INTO demo_users (name, email)
    VALUES (%s, %s)
""", ("Jane Smith", "jane@example.com"))

connection.commit()
print("✓ Data inserted successfully!")

cursor.close()
connection.close()


medium

The current Python code for database operations doesn't guarantee that the connection is closed if an error occurs, which can lead to resource leaks. It's a best practice in Python to use a with statement for resource management like database connections. This ensures the connection is always closed properly and also handles transactions (commit on success, rollback on error) automatically.

with psycopg2.connect(
    host="<postgres_host>",
    port=5432,
    database="postgres",
    user="postgres",
    password="<password>"
) as connection:
    with connection.cursor() as cursor:
        # Create a test table
        cursor.execute("""
            CREATE TABLE IF NOT EXISTS demo_users (
                id SERIAL PRIMARY KEY,
                name VARCHAR(100),
                email VARCHAR(100),
                created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
            )
        """)

        # Insert sample data
        cursor.execute("""
            INSERT INTO demo_users (name, email) 
            VALUES (%s, %s)
        """, ("John Doe", "john@example.com"))

        cursor.execute("""
            INSERT INTO demo_users (name, email) 
            VALUES (%s, %s)
        """, ("Jane Smith", "jane@example.com"))

print("✓ Data inserted successfully!")

# `with connection` commits on success and rolls back on error, but it does
# not close the connection, so close it explicitly.
connection.close()

Comment on lines +262 to +266
db.test_connection()
db.write_data("Alice Johnson", "alice@example.com")
db.write_data("Bob Wilson", "bob@example.com")
db.read_data()
db.close()


medium

To ensure the database connection is always closed, even if an error occurs during one of the operations, it's best to wrap the database calls in a try...finally block. This guarantees that db.close() is executed.

    try:
        db.test_connection()
        db.write_data("Alice Johnson", "alice@example.com")
        db.write_data("Bob Wilson", "bob@example.com")
        db.read_data()
    finally:
        db.close()

Comment on lines +272 to +274
<Aside type="note">
Replace `<postgres_host>` and `<password>` with your actual PostgreSQL connection details. You can find the service endpoint in your Applications Services page.
</Aside>


medium

For consistency with the other demo guides, please add a 'Next Steps' section here with links to the other available demos.

<Aside type="note">
Replace `<postgres_host>` and `<password>` with your actual PostgreSQL connection details. You can find the service endpoint in your Applications Services page.
</Aside>

## Next Steps

Explore other demo applications to enhance your infrastructure:

<LinkCard title="Coder Demo" href="/demos/01-coder/" description="Deploy Coder for cloud-based VS Code IDE environments." />

<LinkCard title="JupyterHub Demo" href="/demos/02-jupyterhub/" description="Deploy JupyterHub for multi-user Jupyter notebook environments." />

<LinkCard title="LLM Model Demo" href="/demos/04-llm-model/" description="Learn how to deploy and configure language model applications." />


```python
import requests
import json
```


medium

The json library is imported but never used in this code snippet. To keep the code clean, it's best to remove unused imports. This applies to the other Python examples in this file as well.
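A quick way to confirm that `requests` covers the JSON handling on its own is to build the request without sending it; a minimal sketch, where the URL and model name are placeholders rather than values from the guide:

```python
import requests

# Build (but don't send) a request: the `json=` keyword serializes the payload
# and sets the Content-Type header, so a separate `import json` is unnecessary.
# The URL and model name below are placeholders, not real endpoints.
req = requests.Request(
    "POST",
    "http://localhost:8000/v1/chat",
    json={"model": "llm-demo", "messages": [{"role": "user", "content": "Hi"}]},
).prepare()

print(req.headers["Content-Type"])  # application/json
```

On the response side, `response.json()` likewise decodes without the `json` module.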

Comment on lines +301 to +303
<Aside type="note">
Replace `<your_openai_api_key>` with your actual OpenAI API key and `<your_model_name>` with your deployed model name. The examples use the OpenAI API format, which is compatible with many LLM services.
</Aside>


medium

This note is misleading because the code examples do not use an OpenAI API key. It should be updated to reflect the actual placeholders used in the code. Additionally, for consistency with other guides, a 'Next Steps' section should be added.

<Aside type="note">
Replace `<your_service_url>` and `<your_model_name>` with your actual connection details.
</Aside>

## Next Steps

Explore other demo applications to enhance your infrastructure:

<LinkCard title="Coder Demo" href="/demos/01-coder/" description="Deploy Coder for cloud-based VS Code IDE environments." />

<LinkCard title="JupyterHub Demo" href="/demos/02-jupyterhub/" description="Deploy JupyterHub for multi-user Jupyter notebook environments." />

<LinkCard title="PostgreSQL Demo" href="/demos/03-postgres/" description="Learn how to deploy PostgreSQL database and test read/write operations with Python." />


Copilot AI left a comment


Pull request overview

This PR adds comprehensive demo guides for four different deployment scenarios on the OtterScale platform: Coder (cloud-based VS Code IDE), JupyterHub (multi-user Jupyter notebooks), PostgreSQL (database deployment), and LLM Model deployments. The guides include step-by-step deployment instructions and Python code examples for testing each service.

Key Changes:

  • Added configuration in astro.config.mjs to create a new "Demos" sidebar section with Chinese translations
  • Created four new demo guide files with deployment instructions and testing code examples
  • Included multiple Python code examples in tabbed interfaces for different use cases

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 12 comments.

| File | Description |
| --- | --- |
| astro.config.mjs | Adds "Demos" section to sidebar navigation with internationalization support |
| src/content/docs/demos/01-coder.mdx | Deployment guide for Coder VS Code IDE with access instructions |
| src/content/docs/demos/02-jupyterhub.mdx | JupyterHub deployment guide for multi-user notebook environments |
| src/content/docs/demos/03-postgres.mdx | PostgreSQL deployment guide with Python connection and CRUD examples |
| src/content/docs/demos/04-llm-model.mdx | LLM model deployment and testing guide with Python API integration examples |


</Tabs>

<Aside type="note">
Replace `<your_openai_api_key>` with your actual OpenAI API key and `<your_model_name>` with your deployed model name. The examples use the OpenAI API format, which is compatible with many LLM services.

Copilot AI Jan 6, 2026


The note mentions `<your_openai_api_key>` but none of the code examples in the tabs above use an OpenAI API key. The examples use `SERVICE_URL`, `MODEL_NAME`, and `MODEL_ID` instead. This note is misleading and should be updated to match the actual placeholders used in the code examples (`<your_service_url>`, `<your_model_name>`).

Suggested change:

Replace `<your_service_url>` with your actual service URL and `<your_model_name>` with your deployed model name. The examples use the OpenAI API format, which is compatible with many LLM services.

Comment on lines +3 to +19
description: Deploy and test LLM models integrated with OpenAI API.
---

import { Steps, Aside, Tabs, TabItem } from '@astrojs/starlight/components';

This guide demonstrates how to deploy a Large Language Model (LLM) in your OtterScale cluster and test it using Python with OpenAI API integration.

## Prerequisites

Ensure you have the following:

- **Python 3.8+**: For running the test scripts
- **OpenAI API Key**: Obtain from [OpenAI Platform](https://platform.openai.com/api-keys)
- **Python Libraries**: `requests` and `openai`

```bash
pip install requests openai
```

Copilot AI Jan 6, 2026


The Prerequisites section mentions obtaining an OpenAI API Key from OpenAI Platform, but this API key is never used in any of the code examples. The examples use a custom service URL with a custom header (OtterScale-Model-Name) and endpoint (/v1/chat), which suggests this is interfacing with an OtterScale-hosted model service, not directly with OpenAI's API. Either remove the OpenAI API key requirement from Prerequisites, or clarify how it's actually used in the deployment.

Suggested change:

description: Deploy and test LLM models via an HTTP API in your OtterScale cluster.
---

import { Steps, Aside, Tabs, TabItem } from '@astrojs/starlight/components';

This guide demonstrates how to deploy a Large Language Model (LLM) in your OtterScale cluster and test it using Python HTTP requests against the model API exposed by OtterScale.

## Prerequisites

Ensure you have the following:

- **Python 3.8+**: For running the test scripts
- **Python Library**: `requests`

```bash
pip install requests
```

Comment on lines +68 to +69

Obtain your OpenAI API key from [OpenAI Platform](https://platform.openai.com/api-keys).

Copilot AI Jan 6, 2026


Similar issue here - the Prerequisites section again mentions the OpenAI API key (line 69), but it's not used in the code examples. This is redundant with the earlier prerequisites section and should be removed or corrected.

Suggested change (remove this line):

Obtain your OpenAI API key from [OpenAI Platform](https://platform.openai.com/api-keys).


import { Steps, Aside, Tabs, TabItem } from '@astrojs/starlight/components';

This guide demonstrates how to deploy PostgreSQL to your applications using a Helm chart from Artifact Hub.

Copilot AI Jan 6, 2026


Inconsistent terminology: Line 8 says "deploy PostgreSQL to your applications" which is grammatically awkward. It should be "deploy PostgreSQL for your applications" or "deploy a PostgreSQL database for your applications" to be clearer and more accurate.

Suggested change
This guide demonstrates how to deploy PostgreSQL to your applications using a Helm chart from Artifact Hub.
This guide demonstrates how to deploy a PostgreSQL database for your applications using a Helm chart from Artifact Hub.

Comment on lines +254 to +260
db = PostgreSQLDemo(
    host="<postgres_host>",
    port=5432,
    database="postgres",
    user="postgres",
    password="<password>"
)

Copilot AI Jan 6, 2026


Security concern: The code examples show hardcoded credentials (password placeholders) directly in the connection strings. While these are just examples with placeholders, the documentation should include a note about security best practices, such as using environment variables or secure credential management systems instead of hardcoding passwords, especially given that this is demonstration code that users might copy directly.
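One way to act on this advice is to read the connection details from the environment; a minimal sketch, assuming hypothetical `POSTGRES_*` variable names:

```python
import os

# Hypothetical variable names; set these in your shell, CI secrets, or
# deployment manifest instead of hardcoding credentials in the script.
os.environ.setdefault("POSTGRES_HOST", "localhost")        # demo defaults only
os.environ.setdefault("POSTGRES_PASSWORD", "example-only")

conn_params = {
    "host": os.environ["POSTGRES_HOST"],
    "port": int(os.environ.get("POSTGRES_PORT", "5432")),
    "database": os.environ.get("POSTGRES_DB", "postgres"),
    "user": os.environ.get("POSTGRES_USER", "postgres"),
    "password": os.environ["POSTGRES_PASSWORD"],
}

# Pass to the demo class instead of literals: PostgreSQLDemo(**conn_params)
print(conn_params["host"], conn_params["port"])  # localhost 5432
```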

Comment on lines +167 to +185
connection = psycopg2.connect(
    host="<postgres_host>",
    port=5432,
    database="postgres",
    user="postgres",
    password="<password>"
)
cursor = connection.cursor()

# Read all data from the table
cursor.execute("SELECT id, name, email, created_at FROM demo_users")
rows = cursor.fetchall()

print("Records in demo_users table:")
for row in rows:
    print(f"ID: {row[0]}, Name: {row[1]}, Email: {row[2]}, Created: {row[3]}")

cursor.close()
connection.close()

Copilot AI Jan 6, 2026


Missing error handling: The "Read Data" example doesn't include a try-except block. If the table doesn't exist or there's a connection issue, this will raise an unhandled exception. For consistency with the other examples and following best practices, this code should include proper error handling.

Suggested change:

try:
    connection = psycopg2.connect(
        host="<postgres_host>",
        port=5432,
        database="postgres",
        user="postgres",
        password="<password>"
    )
    cursor = connection.cursor()
    try:
        # Read all data from the table
        cursor.execute("SELECT id, name, email, created_at FROM demo_users")
        rows = cursor.fetchall()
        print("Records in demo_users table:")
        for row in rows:
            print(f"ID: {row[0]}, Name: {row[1]}, Email: {row[2]}, Created: {row[3]}")
    except Exception as e:
        print("Error reading data from demo_users:", str(e))
    finally:
        cursor.close()
        connection.close()
except Exception as e:
    print("Database connection failed:", str(e))


try:
    response = requests.post(
        f"{SERVICE_URL}/v1/chat",

Copilot AI Jan 6, 2026


The endpoint /v1/chat used throughout the examples (lines 107, 167, 229, 260) appears to be custom to OtterScale and not a standard OpenAI-compatible endpoint. The standard OpenAI chat completions endpoint is /v1/chat/completions. If this is intentional and specific to the OtterScale platform, it should be clearly documented. If this is meant to be OpenAI-compatible, the endpoint should be corrected.

Suggested change
f"{SERVICE_URL}/v1/chat",
f"{SERVICE_URL}/v1/chat/completions",
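For reference, a standard OpenAI-compatible call targets `/v1/chat/completions` with a payload shaped like this; a sketch only, where the URL and model name are placeholders rather than values from the guide:

```python
SERVICE_URL = "http://localhost:8000"  # placeholder

# Standard OpenAI-compatible chat completions endpoint and request shape.
endpoint = f"{SERVICE_URL}/v1/chat/completions"
payload = {
    "model": "llm-demo",  # placeholder model name
    "messages": [{"role": "user", "content": "Hello!"}],
}
print(endpoint)  # http://localhost:8000/v1/chat/completions
```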

# Configuration
SERVICE_URL = "<your_service_url>" # e.g., http://localhost:8000
MODEL_NAME = "<your_model_name>" # e.g., llm-demo
MODEL_ID = "RedHatAI/Llama-3.2-1B-Instruct-FP8"

Copilot AI Jan 6, 2026


Inconsistent model references: The code uses MODEL_ID = "RedHatAI/Llama-3.2-1B-Instruct-FP8" (lines 91, 135, 208) while the deployment instructions reference meta-llama/Llama-2-7b-chat (lines 25, 37). This inconsistency could confuse users. The examples should either use the same model referenced in the deployment instructions or clarify that different models can be used and provide guidance on how to specify the correct MODEL_ID for their deployed model.

Suggested change
MODEL_ID = "RedHatAI/Llama-3.2-1B-Instruct-FP8"
MODEL_ID = "meta-llama/Llama-2-7b-chat" # should match the model artifact you deployed
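One way to resolve the `MODEL_ID` question at runtime, assuming the service is OpenAI-compatible, is to query its model-listing endpoint; a sketch that only builds the request (the URL is a placeholder):

```python
import requests

SERVICE_URL = "http://localhost:8000"  # placeholder for your service URL

# If the service is OpenAI-compatible (an assumption), GET /v1/models lists
# the available model IDs. Build the request here without sending it.
req = requests.Request("GET", f"{SERVICE_URL}/v1/models").prepare()
print(req.url)  # http://localhost:8000/v1/models
# To actually query: requests.get(f"{SERVICE_URL}/v1/models").json()["data"]
```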

Comment on lines +36 to +43
```bash
helm push postgresql-18.2.0.tgz oci://<registry_url>/postgres --plain-http
```

Replace `<registry_url>` with your actual registry URL (e.g., `192.168.196.42:5736`).

**Example:**
```bash
helm push postgresql-18.2.0.tgz oci://192.168.196.42:5736/postgres --plain-http
```

Copilot AI Jan 6, 2026


The helm push example uses the --plain-http flag to upload charts to the registry, which disables TLS and exposes credentials and chart contents to interception or tampering on the network. An attacker with access to the network path could capture registry credentials or modify pushed artifacts in transit, compromising your software supply chain. Use an HTTPS-enabled registry and remove --plain-http from the documented command so pushes are performed over TLS-protected connections.

Suggested change:

```bash
helm push postgresql-18.2.0.tgz oci://<registry_url>/postgres
```

Replace `<registry_url>` with your actual registry URL (e.g., `192.168.196.42:5736`).

**Example:**
```bash
helm push postgresql-18.2.0.tgz oci://192.168.196.42:5736/postgres
```

Comment on lines +61 to +66
- Copy the password and decode it using base64:
```bash
echo "<copied_password>" | base64 --decode
```
Or use this online tool: [base64decode.org](https://www.base64decode.org/)
- Use the decoded password for your PostgreSQL connections

Copilot AI Jan 6, 2026


This step recommends pasting the base64-encoded PostgreSQL password into base64decode.org, which sends your database credentials to a third-party service and creates an immediate credential leak risk. An attacker controlling or observing that external service could harvest these secrets and use them to access your database. Instead, advise users to decode the secret locally using trusted CLI tools only and remove the suggestion to use an external web-based decoder.

Suggested change:

- Copy the password and decode it locally using base64:
  ```bash
  echo "<copied_password>" | base64 --decode
  ```
- Use the decoded password for your PostgreSQL connections
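The same local decoding can be done with Python's standard library; the encoded string below is an example value, not a real secret:

```python
import base64

encoded = "cGFzc3dvcmQxMjM="  # example value, not a real secret
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)  # password123
```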
