feat: add demo guides for Coder, JupyterHub, PostgreSQL, and LLM Model deployments #60
Conversation
**Summary of Changes**

Hello @KUASWoodyLIN, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly expands the documentation by introducing a suite of new demo guides. These guides are designed to help users quickly set up and interact with various development and data-related applications, ranging from cloud-based IDEs and collaborative notebook environments to robust databases and large language models. The addition aims to streamline the onboarding process for these tools and provide practical examples for their usage.
Code Review
This pull request adds a series of helpful demo guides for Coder, JupyterHub, PostgreSQL, and LLM model deployments. The changes are well-structured and provide valuable examples for users. My review focuses on improving the consistency across these new guides, enhancing the robustness of the Python code snippets, and correcting some inconsistencies in the LLM model guide to ensure clarity and accuracy for the end-users.
> - **OpenAI API Key**: Obtain from [OpenAI Platform](https://platform.openai.com/api-keys)
> - **Python Libraries**: `requests` and `openai`
There is a significant inconsistency in the prerequisites. The guide requires an OpenAI API Key and the openai library, but the code examples use requests to call a custom endpoint and do not use either the key or the openai library. This is very confusing for the user. Please align the documentation with the code examples by removing these unnecessary prerequisites.
```suggestion
- **Python Libraries**: `requests`
```
> <LinkCard title="PostgreSQL Demo" href="/demos/03-postgres/" description="Learn how to deploy PostgreSQL database and test read/write operations with Python." />
>
> <LinkCard title="LLM Model Demo" href="/demos/04-llm-model/" description="Learn how to deploy and configure language model applications." />
For consistency and better navigation between the new demo guides, I recommend adding a link to the JupyterHub demo. It's also a good practice to keep these links in a consistent order (e.g., numerical).
```suggestion
<LinkCard title="JupyterHub Demo" href="/demos/02-jupyterhub/" description="Deploy JupyterHub for multi-user Jupyter notebook environments." />
<LinkCard title="PostgreSQL Demo" href="/demos/03-postgres/" description="Learn how to deploy PostgreSQL database and test read/write operations with Python." />
<LinkCard title="LLM Model Demo" href="/demos/04-llm-model/" description="Learn how to deploy and configure language model applications." />
```
> <LinkCard title="PostgreSQL Demo" href="/demos/03-postgres/" description="Learn how to deploy PostgreSQL database and test read/write operations with Python." />
>
> <LinkCard title="LLM Model Demo" href="/demos/04-llm-model/" description="Learn how to deploy and configure language model applications." />
To improve navigation and maintain consistency across the demo guides, please add a link to the Coder demo in this 'Next Steps' section. Ordering the links numerically would also improve user experience.
```suggestion
<LinkCard title="Coder Demo" href="/demos/01-coder/" description="Deploy Coder for cloud-based VS Code IDE environments with support for both Go and Python backends." />
<LinkCard title="PostgreSQL Demo" href="/demos/03-postgres/" description="Learn how to deploy PostgreSQL database and test read/write operations with Python." />
<LinkCard title="LLM Model Demo" href="/demos/04-llm-model/" description="Learn how to deploy and configure language model applications." />
```
```python
connection = psycopg2.connect(
    host="<postgres_host>",
    port=5432,
    database="postgres",
    user="postgres",
    password="<password>"
)
cursor = connection.cursor()

# Create a test table
cursor.execute("""
    CREATE TABLE IF NOT EXISTS demo_users (
        id SERIAL PRIMARY KEY,
        name VARCHAR(100),
        email VARCHAR(100),
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")

# Insert sample data
cursor.execute("""
    INSERT INTO demo_users (name, email)
    VALUES (%s, %s)
""", ("John Doe", "john@example.com"))

cursor.execute("""
    INSERT INTO demo_users (name, email)
    VALUES (%s, %s)
""", ("Jane Smith", "jane@example.com"))

connection.commit()
print("✓ Data inserted successfully!")

cursor.close()
connection.close()
```
The current Python code for database operations doesn't guarantee that the connection is closed if an error occurs, which can lead to resource leaks. It's a best practice in Python to use a with statement for resource management like database connections. This ensures the connection is always closed properly and also handles transactions (commit on success, rollback on error) automatically.
```python
with psycopg2.connect(
    host="<postgres_host>",
    port=5432,
    database="postgres",
    user="postgres",
    password="<password>"
) as connection:
    with connection.cursor() as cursor:
        # Create a test table
        cursor.execute("""
            CREATE TABLE IF NOT EXISTS demo_users (
                id SERIAL PRIMARY KEY,
                name VARCHAR(100),
                email VARCHAR(100),
                created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
            )
        """)
        # Insert sample data
        cursor.execute("""
            INSERT INTO demo_users (name, email)
            VALUES (%s, %s)
        """, ("John Doe", "john@example.com"))
        cursor.execute("""
            INSERT INTO demo_users (name, email)
            VALUES (%s, %s)
        """, ("Jane Smith", "jane@example.com"))

print("✓ Data inserted successfully!")

# Note: psycopg2's `with connection` commits on success and rolls back on
# error, but it does not close the connection, so close it explicitly.
connection.close()
```
```python
db.test_connection()
db.write_data("Alice Johnson", "alice@example.com")
db.write_data("Bob Wilson", "bob@example.com")
db.read_data()
db.close()
```
To ensure the database connection is always closed, even if an error occurs during one of the operations, it's best to wrap the database calls in a try...finally block. This guarantees that db.close() is executed.
```python
try:
    db.test_connection()
    db.write_data("Alice Johnson", "alice@example.com")
    db.write_data("Bob Wilson", "bob@example.com")
    db.read_data()
finally:
    db.close()
```
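As an alternative with less nesting, `contextlib.closing` from the standard library gives the same guarantee for any object exposing a `close()` method. A minimal sketch; the `FakeDB` class below is a hypothetical stand-in for the guide's `PostgreSQLDemo`, not part of the PR:

```python
from contextlib import closing

class FakeDB:
    """Hypothetical stand-in for the guide's PostgreSQLDemo class."""
    def __init__(self):
        self.closed = False

    def read_data(self):
        return [("Alice Johnson", "alice@example.com")]

    def close(self):
        self.closed = True

# closing() calls db.close() on exit, even if an operation raises.
with closing(FakeDB()) as db:
    rows = db.read_data()
```

`closing()` is equivalent to the `try...finally` form above, just more compact when several operations share one resource.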
> <Aside type="note">
> Replace `<postgres_host>` and `<password>` with your actual PostgreSQL connection details. You can find the service endpoint in your Applications Services page.
> </Aside>
For consistency with the other demo guides, please add a 'Next Steps' section here with links to the other available demos.
```suggestion
<Aside type="note">
Replace `<postgres_host>` and `<password>` with your actual PostgreSQL connection details. You can find the service endpoint in your Applications Services page.
</Aside>

## Next Steps

Explore other demo applications to enhance your infrastructure:

<LinkCard title="Coder Demo" href="/demos/01-coder/" description="Deploy Coder for cloud-based VS Code IDE environments." />
<LinkCard title="JupyterHub Demo" href="/demos/02-jupyterhub/" description="Deploy JupyterHub for multi-user Jupyter notebook environments." />
<LinkCard title="LLM Model Demo" href="/demos/04-llm-model/" description="Learn how to deploy and configure language model applications." />
```
> ```python
> import requests
> import json
> ```
> <Aside type="note">
> Replace `<your_openai_api_key>` with your actual OpenAI API key and `<your_model_name>` with your deployed model name. The examples use the OpenAI API format, which is compatible with many LLM services.
> </Aside>
This note is misleading because the code examples do not use an OpenAI API key. It should be updated to reflect the actual placeholders used in the code. Additionally, for consistency with other guides, a 'Next Steps' section should be added.
```suggestion
<Aside type="note">
Replace `<your_service_url>` and `<your_model_name>` with your actual connection details.
</Aside>

## Next Steps

Explore other demo applications to enhance your infrastructure:

<LinkCard title="Coder Demo" href="/demos/01-coder/" description="Deploy Coder for cloud-based VS Code IDE environments." />
<LinkCard title="JupyterHub Demo" href="/demos/02-jupyterhub/" description="Deploy JupyterHub for multi-user Jupyter notebook environments." />
<LinkCard title="PostgreSQL Demo" href="/demos/03-postgres/" description="Learn how to deploy PostgreSQL database and test read/write operations with Python." />
```
Pull request overview
This PR adds comprehensive demo guides for four different deployment scenarios on the OtterScale platform: Coder (cloud-based VS Code IDE), JupyterHub (multi-user Jupyter notebooks), PostgreSQL (database deployment), and LLM Model deployments. The guides include step-by-step deployment instructions and Python code examples for testing each service.
Key Changes:
- Added configuration in astro.config.mjs to create a new "Demos" sidebar section with Chinese translations
- Created four new demo guide files with deployment instructions and testing code examples
- Included multiple Python code examples in tabbed interfaces for different use cases
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 12 comments.
Show a summary per file
| File | Description |
|---|---|
| astro.config.mjs | Adds "Demos" section to sidebar navigation with internationalization support |
| src/content/docs/demos/01-coder.mdx | Deployment guide for Coder VS Code IDE with access instructions |
| src/content/docs/demos/02-jupyterhub.mdx | JupyterHub deployment guide for multi-user notebook environments |
| src/content/docs/demos/03-postgres.mdx | PostgreSQL deployment guide with Python connection and CRUD examples |
| src/content/docs/demos/04-llm-model.mdx | LLM model deployment and testing guide with Python API integration examples |
> </Tabs>
>
> <Aside type="note">
> Replace `<your_openai_api_key>` with your actual OpenAI API key and `<your_model_name>` with your deployed model name. The examples use the OpenAI API format, which is compatible with many LLM services.
**Copilot AI** · Jan 6, 2026
The note mentions <your_openai_api_key> but none of the code examples in the tabs above use an OpenAI API key. The examples use SERVICE_URL, MODEL_NAME, and MODEL_ID instead. This note is misleading and should be updated to match the actual placeholders used in the code examples (<your_service_url>, <your_model_name>).
```suggestion
Replace `<your_service_url>` with your actual service URL and `<your_model_name>` with your deployed model name. The examples use the OpenAI API format, which is compatible with many LLM services.
```
> description: Deploy and test LLM models integrated with OpenAI API.
> ---
>
> import { Steps, Aside, Tabs, TabItem } from '@astrojs/starlight/components';
>
> This guide demonstrates how to deploy a Large Language Model (LLM) in your OtterScale cluster and test it using Python with OpenAI API integration.
>
> ## Prerequisites
>
> Ensure you have the following:
>
> - **Python 3.8+**: For running the test scripts
> - **OpenAI API Key**: Obtain from [OpenAI Platform](https://platform.openai.com/api-keys)
> - **Python Libraries**: `requests` and `openai`
>
> ```bash
> pip install requests openai
> ```
**Copilot AI** · Jan 6, 2026
The Prerequisites section mentions obtaining an OpenAI API Key from OpenAI Platform, but this API key is never used in any of the code examples. The examples use a custom service URL with a custom header (OtterScale-Model-Name) and endpoint (/v1/chat), which suggests this is interfacing with an OtterScale-hosted model service, not directly with OpenAI's API. Either remove the OpenAI API key requirement from Prerequisites, or clarify how it's actually used in the deployment.
````suggestion
description: Deploy and test LLM models via an HTTP API in your OtterScale cluster.
---

import { Steps, Aside, Tabs, TabItem } from '@astrojs/starlight/components';

This guide demonstrates how to deploy a Large Language Model (LLM) in your OtterScale cluster and test it using Python HTTP requests against the model API exposed by OtterScale.

## Prerequisites

Ensure you have the following:

- **Python 3.8+**: For running the test scripts
- **Python Library**: `requests`

```bash
pip install requests
```
````
> Obtain your OpenAI API key from [OpenAI Platform](https://platform.openai.com/api-keys).
**Copilot AI** · Jan 6, 2026
Similar issue here - the Prerequisites section again mentions the OpenAI API key (line 69), but it's not used in the code examples. This is redundant with the earlier prerequisites section and should be removed or corrected.
> import { Steps, Aside, Tabs, TabItem } from '@astrojs/starlight/components';
>
> This guide demonstrates how to deploy PostgreSQL to your applications using a Helm chart from Artifact Hub.
**Copilot AI** · Jan 6, 2026
Inconsistent terminology: Line 8 says "deploy PostgreSQL to your applications" which is grammatically awkward. It should be "deploy PostgreSQL for your applications" or "deploy a PostgreSQL database for your applications" to be clearer and more accurate.
```suggestion
This guide demonstrates how to deploy a PostgreSQL database for your applications using a Helm chart from Artifact Hub.
```
```python
db = PostgreSQLDemo(
    host="<postgres_host>",
    port=5432,
    database="postgres",
    user="postgres",
    password="<password>"
)
```
**Copilot AI** · Jan 6, 2026
Security concern: The code examples show hardcoded credentials (password placeholders) directly in the connection strings. While these are just examples with placeholders, the documentation should include a note about security best practices, such as using environment variables or secure credential management systems instead of hardcoding passwords, especially given that this is demonstration code that users might copy directly.
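One way to act on this, sketched below with environment-variable names of our choosing (they are not defined anywhere in the PR), is to load the connection settings from the environment rather than embedding them in the script:

```python
import os

def load_db_config(env=None):
    """Build a psycopg2-style connection config from environment variables."""
    env = os.environ if env is None else env
    return {
        "host": env.get("POSTGRES_HOST", "localhost"),
        "port": int(env.get("POSTGRES_PORT", "5432")),
        "database": env.get("POSTGRES_DB", "postgres"),
        "user": env.get("POSTGRES_USER", "postgres"),
        # No default for the secret: fail loudly if it is missing.
        "password": env["POSTGRES_PASSWORD"],
    }

# Usage: connection = psycopg2.connect(**load_db_config())
```

Non-secret fields get reasonable defaults; the password deliberately has none, so a missing credential raises a `KeyError` instead of silently connecting with a hardcoded value.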
```python
connection = psycopg2.connect(
    host="<postgres_host>",
    port=5432,
    database="postgres",
    user="postgres",
    password="<password>"
)
cursor = connection.cursor()

# Read all data from the table
cursor.execute("SELECT id, name, email, created_at FROM demo_users")
rows = cursor.fetchall()

print("Records in demo_users table:")
for row in rows:
    print(f"ID: {row[0]}, Name: {row[1]}, Email: {row[2]}, Created: {row[3]}")

cursor.close()
connection.close()
```
**Copilot AI** · Jan 6, 2026
Missing error handling: The "Read Data" example doesn't include a try-except block. If the table doesn't exist or there's a connection issue, this will raise an unhandled exception. For consistency with the other examples and following best practices, this code should include proper error handling.
```python
try:
    connection = psycopg2.connect(
        host="<postgres_host>",
        port=5432,
        database="postgres",
        user="postgres",
        password="<password>"
    )
    cursor = connection.cursor()
    try:
        # Read all data from the table
        cursor.execute("SELECT id, name, email, created_at FROM demo_users")
        rows = cursor.fetchall()
        print("Records in demo_users table:")
        for row in rows:
            print(f"ID: {row[0]}, Name: {row[1]}, Email: {row[2]}, Created: {row[3]}")
    except Exception as e:
        print("Error reading data from demo_users:", str(e))
    finally:
        cursor.close()
        connection.close()
except Exception as e:
    print("Database connection failed:", str(e))
```
```python
try:
    response = requests.post(
        f"{SERVICE_URL}/v1/chat",
```
**Copilot AI** · Jan 6, 2026
The endpoint /v1/chat used throughout the examples (lines 107, 167, 229, 260) appears to be custom to OtterScale and not a standard OpenAI-compatible endpoint. The standard OpenAI chat completions endpoint is /v1/chat/completions. If this is intentional and specific to the OtterScale platform, it should be clearly documented. If this is meant to be OpenAI-compatible, the endpoint should be corrected.
```suggestion
        f"{SERVICE_URL}/v1/chat/completions",
```
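For reference, a standard OpenAI-compatible request has the shape sketched below. The URL and model name are placeholders, and whether OtterScale exposes this exact path is an assumption the guide should verify:

```python
import json

SERVICE_URL = "<your_service_url>"  # placeholder, e.g. http://localhost:8000
MODEL_NAME = "<your_model_name>"    # placeholder

# Body for POST {SERVICE_URL}/v1/chat/completions, per the OpenAI API shape.
payload = {
    "model": MODEL_NAME,
    "messages": [{"role": "user", "content": "Hello!"}],
}
body = json.dumps(payload)

# With requests (not executed here, since the URL is a placeholder):
# response = requests.post(
#     f"{SERVICE_URL}/v1/chat/completions",
#     headers={"Content-Type": "application/json"},
#     data=body,
# )
```

If the platform intentionally uses a custom `/v1/chat` route, documenting that divergence explicitly would spare users a 404 when they point standard OpenAI clients at it.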
```python
# Configuration
SERVICE_URL = "<your_service_url>"  # e.g., http://localhost:8000
MODEL_NAME = "<your_model_name>"    # e.g., llm-demo
MODEL_ID = "RedHatAI/Llama-3.2-1B-Instruct-FP8"
```
**Copilot AI** · Jan 6, 2026
Inconsistent model references: The code uses MODEL_ID = "RedHatAI/Llama-3.2-1B-Instruct-FP8" (lines 91, 135, 208) while the deployment instructions reference meta-llama/Llama-2-7b-chat (lines 25, 37). This inconsistency could confuse users. The examples should either use the same model referenced in the deployment instructions or clarify that different models can be used and provide guidance on how to specify the correct MODEL_ID for their deployed model.
```suggestion
MODEL_ID = "meta-llama/Llama-2-7b-chat"  # should match the model artifact you deployed
```
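If the service is OpenAI-compatible, the deployed model id can also be discovered at runtime from `GET /v1/models` instead of being hardcoded. A sketch; that OtterScale exposes this endpoint is an assumption:

```python
def extract_model_ids(models_response):
    """Pull model ids out of an OpenAI-style GET /v1/models response body."""
    return [m["id"] for m in models_response.get("data", [])]

# Response shape defined by the OpenAI models API:
sample = {
    "object": "list",
    "data": [{"id": "RedHatAI/Llama-3.2-1B-Instruct-FP8", "object": "model"}],
}
ids = extract_model_ids(sample)

# Against a live service (URL is a placeholder):
# ids = extract_model_ids(requests.get(f"{SERVICE_URL}/v1/models").json())
```

Printing the discovered ids at the top of the demo script would let users confirm their `MODEL_ID` matches what was actually deployed.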
> ```bash
> helm push postgresql-18.2.0.tgz oci://<registry_url>/postgres --plain-http
> ```
>
> Replace `<registry_url>` with your actual registry URL (e.g., `192.168.196.42:5736`).
>
> **Example:**
> ```bash
> helm push postgresql-18.2.0.tgz oci://192.168.196.42:5736/postgres --plain-http
> ```
**Copilot AI** · Jan 6, 2026
The helm push example uses the --plain-http flag to upload charts to the registry, which disables TLS and exposes credentials and chart contents to interception or tampering on the network. An attacker with access to the network path could capture registry credentials or modify pushed artifacts in transit, compromising your software supply chain. Use an HTTPS-enabled registry and remove --plain-http from the documented command so pushes are performed over TLS-protected connections.
````suggestion
helm push postgresql-18.2.0.tgz oci://<registry_url>/postgres
```

Replace `<registry_url>` with your actual registry URL (e.g., `192.168.196.42:5736`).

**Example:**
```bash
helm push postgresql-18.2.0.tgz oci://192.168.196.42:5736/postgres
```
````

> - Copy the password and decode it using base64:
>   ```bash
>   echo "<copied_password>" | base64 --decode
>   ```
>   Or use this online tool: [base64decode.org](https://www.base64decode.org/)
> - Use the decoded password for your PostgreSQL connections
**Copilot AI** · Jan 6, 2026
This step recommends pasting the base64-encoded PostgreSQL password into base64decode.org, which sends your database credentials to a third-party service and creates an immediate credential leak risk. An attacker controlling or observing that external service could harvest these secrets and use them to access your database. Instead, advise users to decode the secret locally using trusted CLI tools only and remove the suggestion to use an external web-based decoder.
````suggestion
- Copy the password and decode it locally using base64:
  ```bash
  echo "<copied_password>" | base64 --decode
  ```
- Use the decoded password for your PostgreSQL connections
````
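To illustrate the local-only decode, with a dummy value rather than a real credential (the kubectl secret and key names in the comment are examples, not taken from the guide):

```shell
# Decode with coreutils; the secret never leaves the machine.
echo "cGFzc3dvcmQ=" | base64 --decode
# prints: password

# Or pull and decode straight from the cluster (names are examples):
#   kubectl get secret postgresql -o jsonpath='{.data.password}' | base64 --decode
```

The one-liner approach also avoids the copy-paste step entirely, which removes a second opportunity for the credential to end up somewhere it shouldn't.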