91 changes: 43 additions & 48 deletions python/agents/machine-learning-engineering/README.md
@@ -46,14 +46,14 @@ to implement this workflow.

1. **Prerequisites**

* Python 3.12+
* Poetry
* Python 3.11+
* uv
* For dependency management and packaging. Please follow the
instructions on the official
[Poetry website](https://python-poetry.org/docs/) for installation.
[uv website](https://docs.astral.sh/uv/) for installation.

```bash
pip install poetry
curl -LsSf https://astral.sh/uv/install.sh | sh
```
* Git
* Git can be downloaded from https://git-scm.com/. Then follow the [installation guide](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git).
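Once the prerequisites are in place, a quick interpreter check can confirm the Python requirement programmatically — a minimal sketch, where the lower bound mirrors the `requires-python` floor in this project's `pyproject.toml`:

```python
import sys

# Compare the running interpreter against the project's minimum version.
MIN_VERSION = (3, 11)  # lower bound of requires-python in pyproject.toml
meets_minimum = sys.version_info[:2] >= MIN_VERSION
print(f"Python {sys.version_info.major}.{sys.version_info.minor} "
      f"meets minimum {MIN_VERSION}: {meets_minimum}")
```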
@@ -75,40 +75,10 @@ to implement this workflow.
cd adk-samples/python/agents/machine-learning-engineering
```

* Install Poetry
* Install dependencies
```bash
# Install the Poetry package and dependencies.
# Note for Linux users: If you get an error related to `keyring` during the installation, you can disable it by running the following command:
# poetry config keyring.enabled false
# This is a one-time setup.
poetry install
```

This command reads the `pyproject.toml` file and installs all the necessary dependencies into a virtual environment managed by Poetry.

If the above command returns with a `command not found` error, then use:

```bash
python -m poetry install
```

* Activate the shell

```bash
poetry env activate
```

This activates the virtual environment, allowing you to run commands within the project's environment. To make sure the environment is active, use for example
```bash
$> poetry env list
machine-learning-engineering-Gb54hHID-py3.12 (Activated)
```
If the above command did not activate the environment for you, you can also activate it through
```bash
source $(poetry env info --path)/bin/activate
# Install the package and dependencies.
uv sync
```

<a name="configuration"></a>
@@ -154,9 +124,9 @@ You may talk to the agent using the CLI:
adk run machine_learning_engineering
```

Or via the Poetry shell:
Or via `uv run`:
```bash
poetry run adk run machine_learning_engineering
uv run adk run machine_learning_engineering
```

Or on a web interface:
@@ -197,15 +167,15 @@ print(f"Submission file saved successfully to {submission_file_path}")
For running tests and evaluation, install the extra dependencies:

```bash
poetry install --with dev
uv sync --extra dev
```

Then the tests and evaluation can be run from the `machine-learning-engineering` directory using
the `pytest` module:

```bash
python3 -m pytest tests
python3 -m pytest eval
uv run pytest tests
uv run pytest eval
```
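For orientation, the suites follow the standard pytest shape — a minimal, hypothetical sketch (the function name and assertion below are illustrative, not taken from the repo):

```python
# Hypothetical pytest-style test; the real tests in `tests/` run the agent
# on a sample request and check each component's response.
def test_submission_filename():
    submission_file = "submission.csv"  # illustrative value
    assert submission_file.endswith(".csv")

test_submission_filename()  # pytest would discover and run this automatically
```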

`tests` runs the agent on a sample request, and makes sure that every component
@@ -224,8 +194,8 @@ The Machine Learning Engineering Agent can be deployed to Vertex AI Agent Engine
commands:
```bash
poetry install --with deployment
python3 deployment/deploy.py --create
uv sync --extra deployment
uv run deployment/deploy.py --create
```
When the deployment finishes, it will print a line like this:
@@ -237,7 +207,7 @@ Created remote agent: projects/<PROJECT_NUMBER>/locations/<PROJECT_LOCATION>/rea
If you forget the AGENT_ENGINE_ID, you can list the existing agents using:
```bash
python3 deployment/deploy.py --list
uv run deployment/deploy.py --list
```
The output will be like:
@@ -253,7 +223,7 @@ All remote agents:
You may interact with the deployed agent using the `test_deployment.py` script
```bash
$ export USER_ID=<any string>
$ python3 deployment/test_deployment.py --resource_id=${AGENT_ENGINE_ID} --user_id=${USER_ID}
$ uv run deployment/test_deployment.py --resource_id=${AGENT_ENGINE_ID} --user_id=${USER_ID}
Found agent with resource ID: ...
Created session for user ID: ...
Type 'quit' to exit.
@@ -266,9 +236,34 @@ To get started, please provide the task description of the competition.
To delete the deployed agent, you may run the following command:
```bash
python3 deployment/deploy.py --delete --resource_id=${AGENT_ENGINE_ID}
uv run deployment/deploy.py --delete --resource_id=${AGENT_ENGINE_ID}
```
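Taken together, `deploy.py` is driven by action flags (`--create`, `--list`, `--delete`) plus a `--resource_id` for existing agents. A hedged sketch of how such a flag interface is typically wired with `argparse` — only the flag names come from this README; the wiring below is an assumption, not the script's actual internals:

```python
import argparse

# Illustrative reconstruction of the deploy.py flag interface.
parser = argparse.ArgumentParser(description="Manage the remote agent.")
parser.add_argument("--create", action="store_true", help="Deploy a new agent")
parser.add_argument("--list", action="store_true", help="List deployed agents")
parser.add_argument("--delete", action="store_true", help="Delete an agent")
parser.add_argument("--resource_id", help="AGENT_ENGINE_ID of an existing agent")

# Simulate: uv run deployment/deploy.py --delete --resource_id=1234567890
args = parser.parse_args(["--delete", "--resource_id", "1234567890"])
print(args.delete, args.resource_id)  # True 1234567890
```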
### Alternative: Using Agent Starter Pack
You can also use the [Agent Starter Pack](https://goo.gle/agent-starter-pack) to create a production-ready version of this agent with additional deployment options:
```bash
# Create and activate a virtual environment
python -m venv .venv && source .venv/bin/activate # On Windows: .venv\Scripts\activate

# Install the starter pack and create your project
pip install --upgrade agent-starter-pack
agent-starter-pack create my-mle-agent -a adk@machine-learning-engineering
```
<details>
<summary>⚡️ Alternative: Using uv</summary>
If you have [`uv`](https://github.com/astral-sh/uv) installed, you can create and set up your project with a single command:
```bash
uvx agent-starter-pack create my-mle-agent -a adk@machine-learning-engineering
```
This command handles creating the project without needing to pre-install the package into a virtual environment.
</details>
The starter pack will prompt you to select deployment options and provides additional production-ready features including automated CI/CD deployment scripts.
## Appendix
@@ -316,4 +311,4 @@ This document describes the required configuration parameters in the `DefaultCon
#### `agent_model`
- **Description:** Specifies the identifier for the LLM model to be used by the agent. It defaults to the value of the environment variable `ROOT_AGENT_MODEL` or `"gemini-2.0-flash-001"` if the variable is not set.
- **Type:** `str`
- **Default:** `os.environ.get("ROOT_AGENT_MODEL", "gemini-2.0-flash-001")`
- **Default:** `os.environ.get("ROOT_AGENT_MODEL", "gemini-2.0-flash-001")`
@@ -1,3 +1,11 @@
"""Machine Learning Engineer: automate the implementation of ML models."""

import os

import google.auth

# Derive the project ID from the ambient credentials; `project_id` is
# required by the setdefault call below.
_, project_id = google.auth.default()

os.environ.setdefault("GOOGLE_CLOUD_PROJECT", project_id)
os.environ.setdefault("GOOGLE_CLOUD_LOCATION", "us-central1")
os.environ.setdefault("GOOGLE_GENAI_USE_VERTEXAI", "True")

from . import agent
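The `os.environ.setdefault` calls above only fill in variables that are not already set, so any configuration exported before import wins. A small self-contained sketch of that behavior (the variable names here are illustrative, not the ones used by the agent):

```python
import os

os.environ["DEMO_PROJECT"] = "my-project"  # simulates a pre-exported value
os.environ.pop("DEMO_LOCATION", None)      # simulates an unset variable

# setdefault writes only when the key is absent.
os.environ.setdefault("DEMO_PROJECT", "default-project")
os.environ.setdefault("DEMO_LOCATION", "us-central1")

print(os.environ["DEMO_PROJECT"])   # my-project (pre-set value preserved)
print(os.environ["DEMO_LOCATION"])  # us-central1 (default applied)
```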
25 changes: 19 additions & 6 deletions python/agents/machine-learning-engineering/pyproject.toml
@@ -8,9 +8,9 @@ authors = [
{ name = "Jaehyun Nam", email = "jaehyunnam@google.com" },
{ name = "Jinsung Yoon", email = "jinsungyoon@google.com" }
]
license = {text = "Apache License 2.0"}
license = "Apache-2.0"
readme = "README.md"
requires-python = ">=3.12"
requires-python = ">=3.11,<3.13"

dependencies = [
"google-adk>=1.5.0",
@@ -26,12 +26,13 @@ dependencies = [
"torch>=2.7.1",
]

[dependency-groups]
[project.optional-dependencies]
dev = [
"pytest>=8.3.5",
"pytest-asyncio>=0.26.0",
"google-adk[eval]>=1.5.0",
"google-cloud-aiplatform[evaluation]>=1.93.0",
"agent-starter-pack>=0.14.1",
]
deployment = [
"absl-py>=2.2.1",
@@ -50,8 +51,20 @@ ignore = ["C901", "PLR0911", "PLR0912", "PLR0915"]
known-first-party = ["machine_learning_engineering"]

[tool.pytest.ini_options]
asyncio_mode = "auto"
pythonpath = "."
asyncio_default_fixture_loop_scope = "function"

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
requires = ["uv_build>=0.8.14,<0.9.0"]
build-backend = "uv_build"

[tool.uv.build-backend]
module-root = ""

# This configuration file is used by goo.gle/agent-starter-pack to power remote templating.
# It defines the template's properties and settings.
[tool.agent-starter-pack]
example_question = "Can you build a model to classify the iris dataset?"

[tool.agent-starter-pack.settings]
agent_directory = "machine_learning_engineering"