7 changes: 5 additions & 2 deletions go-genai/Dockerfile
@@ -23,6 +23,9 @@ FROM alpine:3.18
 
 WORKDIR /app
 
+# Install curl for healthcheck
+RUN apk add --no-cache curl
+
 # Create non-root user
 RUN adduser -D -g '' nomadicmehul
 
@@ -39,12 +42,12 @@ RUN chown -R nomadicmehul:nomadicmehul /app
 # Switch to non-root user
 USER nomadicmehul
 
-# Expose port 8080
+# Expose port
 EXPOSE 8080
 
 # Health check
 HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
-    CMD wget -qO- http://localhost:8080/health || exit 1
+    CMD curl -f http://localhost:8080/health || exit 1
 
 # Run the application
 CMD ["./main"]
11 changes: 9 additions & 2 deletions go-genai/README.md
@@ -17,11 +17,18 @@ A Go-powered GenAI app you can run locally using your favorite LLM — just foll
 
 ## Environment Variables
 
+### Docker Desktop AI Integration (Recommended)
+When using Docker Desktop with AI models:
+- `LLAMA_URL`: Automatically injected by Docker Desktop (AI model endpoint)
+- `LLAMA_MODEL`: Automatically injected by Docker Desktop (model name)
 - `PORT`: The port to run the server on (default: 8080)
-- `LLM_BASE_URL`: The base URL of the LLM API (required)
-- `LLM_MODEL_NAME`: The model name to use for API requests (required)
 - `LOG_LEVEL`: The logging level (default: INFO)
 
+### Legacy Configuration
+For custom LLM endpoints:
+- `LLM_BASE_URL`: The base URL of the LLM API (fallback if LLAMA_URL not set)
+- `LLM_MODEL_NAME`: The model name to use (fallback if LLAMA_MODEL not set)
+
 ## API Endpoints
 
 - `GET /`: Main chat interface
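
The go-genai README above documents a fallback order: prefer the `LLAMA_URL` that Docker Desktop injects, and use `LLM_BASE_URL` only when it is not set. A minimal shell sketch of that resolution (both endpoint values below are hypothetical, not the app's real defaults):

```shell
# Sketch of the documented fallback order; both values are hypothetical examples.
LLAMA_URL="http://model-runner.docker.internal/v1"  # injected by Docker Desktop
LLM_BASE_URL="http://localhost:12434/v1"            # legacy setting

# Prefer LLAMA_URL; use LLM_BASE_URL only when LLAMA_URL is unset or empty.
BASE_URL="${LLAMA_URL:-$LLM_BASE_URL}"
echo "$BASE_URL"
```

Note that `${VAR:-default}` also falls back when the variable is set but empty; the actual apps may distinguish empty from unset differently.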
12 changes: 8 additions & 4 deletions py-genai/Dockerfile
@@ -10,6 +10,9 @@ FROM python:3.11-slim
 
 WORKDIR /app
 
+# Install curl for healthcheck
+RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
+
 # Create non-root user
 RUN adduser --disabled-password --gecos "" nomadicmehul
 
@@ -28,16 +31,17 @@ COPY static/ static/
 # Switch to non-root user
 USER nomadicmehul
 
-# Expose port 8081 (matching docker-compose.yml)
-EXPOSE 8081
+# Expose port 8080
+EXPOSE 8080
 
 # Set environment variables
 ENV PYTHONDONTWRITEBYTECODE=1 \
-    PYTHONUNBUFFERED=1
+    PYTHONUNBUFFERED=1 \
+    PORT=8080
 
 # Health check
 HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
-    CMD curl -f http://localhost:8081/health || exit 1
+    CMD curl -f http://localhost:8080/health || exit 1
 
 # Run the application
 CMD ["python", "app.py"]
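
The py-genai image now pins `PORT=8080` via `ENV`, while the README keeps 8081 as the local default. An illustrative sketch of that resolution (the 8081 default is taken from the README; this is not the actual `app.py` code):

```shell
# Illustrative only: resolve the listen port with 8081 as the local default.
# In the image, ENV PORT=8080 is set, so the default never applies there.
PORT="${PORT:-8081}"
echo "Listening on port ${PORT}"
```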
15 changes: 11 additions & 4 deletions py-genai/README.md
@@ -6,11 +6,18 @@ A Python-powered GenAI app you can run locally using your favorite LLM — just
 
 The application uses the following environment variables:
 
-- `LLM_BASE_URL`: The base URL of the LLM API
-- `LLM_MODEL_NAME`: The model name to use
-- `PORT`: The port to run the application on (default: 8081)
-- `DEBUG`: Set to "true" to enable debug mode (default: "false")
+### Docker Desktop AI Integration (Recommended)
+When using Docker Desktop with AI models:
+- `LLAMA_URL`: Automatically injected by Docker Desktop (AI model endpoint)
+- `LLAMA_MODEL`: Automatically injected by Docker Desktop (model name)
+- `PORT`: The port to run the application on (default: 8081 for local, 8080 in Docker)
+- `LOG_LEVEL`: Set the logging level (default: "INFO")
+- `DEBUG`: Set to "true" to enable debug mode (default: "false")
 
+### Legacy Configuration
+For custom LLM endpoints:
+- `LLM_BASE_URL`: The base URL of the LLM API (fallback if LLAMA_URL not set)
+- `LLM_MODEL_NAME`: The model name to use (fallback if LLAMA_MODEL not set)
 
 ## API Endpoints
 
4 changes: 2 additions & 2 deletions rust-genai/Dockerfile
@@ -20,8 +20,8 @@ RUN useradd -m nomadicmehul
 COPY --from=builder /usr/src/rust-genai/rust-genai/target/release/rust-genai .
 COPY static/ ./static/
 COPY templates/ ./templates/
-EXPOSE 8083
+EXPOSE 8080
 USER nomadicmehul
 HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
-    CMD curl -f http://localhost:8083/health || exit 1
+    CMD curl -f http://localhost:${PORT:-8080}/health || exit 1
 CMD ["./rust-genai"]
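
The rust-genai healthcheck now probes `${PORT:-8080}`. Because this `HEALTHCHECK` uses shell form, the expansion is performed by the container's `/bin/sh` each time the check runs. A quick sketch of how that parameter expansion behaves (the `PORT` values are illustrative):

```shell
# ${PORT:-8080} expands to $PORT when it is set, otherwise to the literal 8080.
PORT=8083
echo "http://localhost:${PORT:-8080}/health"   # -> http://localhost:8083/health
unset PORT
echo "http://localhost:${PORT:-8080}/health"   # -> http://localhost:8080/health
```

This keeps one healthcheck line correct whether the container runs on the Docker default (8080) or an overridden port.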
11 changes: 7 additions & 4 deletions rust-genai/INSTRUCTIONS.md
@@ -21,8 +21,9 @@ cargo run
 ```bash
 docker build -t rust-genai .
 docker run -p 8083:8083 \
-  -e LLM_BASE_URL=http://your-llm-api \
-  -e LLM_MODEL_NAME=your-model \
+  -e PORT=8083 \
+  -e LLAMA_URL=http://your-llm-api \
+  -e LLAMA_MODEL=your-model \
   rust-genai
 ```
 
@@ -38,8 +39,10 @@ docker run -p 8083:8083 \
 ## 4. Configuration
 - Edit `.env` or set environment variables:
   - `PORT` (default: 8083)
-  - `LLM_BASE_URL` (required)
-  - `LLM_MODEL_NAME` (required)
+  - `LLAMA_URL` (recommended, injected by Docker Desktop AI)
+  - `LLAMA_MODEL` (recommended, injected by Docker Desktop AI)
+  - `LLM_BASE_URL` (legacy fallback)
+  - `LLM_MODEL_NAME` (legacy fallback)
   - `LOG_LEVEL` (default: info)
 
 ## 5. Notes
16 changes: 12 additions & 4 deletions rust-genai/README.md
@@ -3,10 +3,18 @@
 This is a Rust implementation of the Hello-GenAI application.
 
 ## Environment Variables
-- `PORT`: The port to run the server on (default: 8083)
-- `LLM_BASE_URL`: The base URL of the LLM API (required)
-- `LLM_MODEL_NAME`: The model name to use for API requests (required)
-- `LOG_LEVEL`: The logging level (default: INFO)
+
+### Docker Desktop AI Integration (Recommended)
+When using Docker Desktop with AI models:
+- `LLAMA_URL`: Automatically injected by Docker Desktop (AI model endpoint)
+- `LLAMA_MODEL`: Automatically injected by Docker Desktop (model name)
+- `PORT`: The port to run the server on (default: 8083 for local, 8080 in Docker)
+- `LOG_LEVEL`: The logging level (default: info)
+
+### Legacy Configuration
+For custom LLM endpoints:
+- `LLM_BASE_URL`: The base URL of the LLM API (fallback if LLAMA_URL not set)
+- `LLM_MODEL_NAME`: The model name to use (fallback if LLAMA_MODEL not set)
 
 ## API Endpoints
 - `GET /`: Main chat interface