An autonomous AI agent that can research a user's query and write a detailed, well-structured research report.
This project is an autonomous report builder that uses AI agents built with the PydanticAI framework to perform research and synthesize the findings into a comprehensive report.
The application starts with ResearchAgent, an agent that takes a user's prompt, breaks it down into a number of sub-questions, and uses a web search tool to gather information on each. Once the research is complete, a SynthesizerAgent processes the collected data and writes a formal research report, complete with a title, abstract, distinct sections, and a list of references.
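To make the report structure concrete, here is an illustrative Pydantic sketch of what the SynthesizerAgent's output might look like. This is not the project's actual schema — the model and field names (`ResearchReport`, `ReportSection`, and so on) are assumptions for illustration:

```python
from pydantic import BaseModel


class ReportSection(BaseModel):
    """One titled section of the generated report (hypothetical shape)."""
    heading: str
    content: str


class ResearchReport(BaseModel):
    """Sketch of a structured report: title, abstract, sections, references."""
    title: str
    abstract: str
    sections: list[ReportSection]
    references: list[str]
```

With PydanticAI, a model like this would typically be given to the synthesizer agent as its structured output type, so the LLM's answer is validated against the schema rather than returned as free text.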
The application is built with FastAPI and exposes a simple web interface where a user can input a topic and receive a generated report.
Follow these steps to set up and run the project locally.
> **Important**
> You will need an LLM server for the agents to use. The steps below show how to install one locally, but you can use a remote one too. Settings for changing the model used are not currently implemented, but will be soon!

This project uses a local LLM server to run the two agents.
1. **Download and Install Ollama:**
   - Navigate to the Ollama download website.
   - Download and run the installer for your operating system (macOS, Linux, or Windows).

2. **Pull a Model:**
   - Start Ollama with `ollama serve`.
   - Once Ollama is running, pull the models `llama3.1:8b` and `qwen3:14b` by running the following commands:

     ```
     ollama pull llama3.1:8b
     ollama pull qwen3:14b
     ```

3. **Verify Installation:**
   - To ensure Ollama is running correctly, you can list the models you have downloaded:

     ```
     ollama list
     ```
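Beyond `ollama list`, you can also confirm from Python that the server is reachable. A minimal sketch, assuming the default Ollama port and its standard `/api/tags` endpoint (which lists the locally available models):

```python
import urllib.error
import urllib.request


def ollama_is_running(host: str = "localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers on the given host."""
    try:
        # /api/tags returns the models available on this Ollama instance.
        with urllib.request.urlopen(f"http://{host}/api/tags", timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False
```

If this returns `False`, make sure `ollama serve` is running and that the host matches the `OLLAMA_HOST` value described in the configuration section.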
It is recommended to use a virtual environment to manage project dependencies. This project uses `uv`.

1. **Install `uv`:** If you don't have `uv` installed, you can install it with pip:

   ```
   pip install uv
   ```

2. **Create and activate a virtual environment:**

   ```
   uv venv
   source .venv/bin/activate
   ```

3. **Install requirements:** Install all the necessary packages from `requirements.txt`:

   ```
   uv pip install -r requirements.txt
   ```
The application's behavior can be configured either by modifying `config.py` directly or by setting environment variables in a `.env` file, which the application uses to manage secrets and settings.

1. **Create a `.env` file:** Create a file named `.env` in the root of the project directory.

2. **Add required environment variables:** You will need to provide the host for your local Ollama instance. Add the following to your `.env` file:

   ```
   OLLAMA_HOST="localhost:11434"
   ```

3. **Customize behavior (optional):** You can customize the behavior of the web search tool and agents by adding nested environment variables to your `.env` file. The delimiter for nested settings is `__`. For example, to change the number of search results for each sub-question for the `WebSearchTool`, you would add:

   ```
   WEB_SEARCH_TOOL__NUM_SEARCH_RESULTS=5
   ```

   Available settings can be found in `config.py`.
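To illustrate how the `__` delimiter turns flat environment variables into nested settings, here is a small standalone sketch. The project itself most likely delegates this to its settings library; `parse_nested_env` is a hypothetical helper for illustration only:

```python
def parse_nested_env(env: dict[str, str], delimiter: str = "__") -> dict:
    """Split flat environment variable names on the delimiter into nested dicts."""
    settings: dict = {}
    for key, value in env.items():
        node = settings
        # Everything before the last delimiter becomes nesting levels.
        *parents, leaf = key.lower().split(delimiter)
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = value
    return settings


env = {
    "OLLAMA_HOST": "localhost:11434",
    "WEB_SEARCH_TOOL__NUM_SEARCH_RESULTS": "5",
}
print(parse_nested_env(env))
# {'ollama_host': 'localhost:11434', 'web_search_tool': {'num_search_results': '5'}}
```

So `WEB_SEARCH_TOOL__NUM_SEARCH_RESULTS=5` ends up as the `num_search_results` field of the `web_search_tool` settings group.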
Once the project is configured, you can start the application using uvicorn.

- Run the application:

  ```
  python3 main.py
  ```

The synthesizer's prompt (containing the research results) and the final report output will be saved to the file defined in `save_to_file.py`. This utility is mostly for debugging/development purposes, and a setting to turn it on/off will be implemented in the future.
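The exact saving behavior lives in `save_to_file.py`; as a rough idea of what such a debugging utility does, here is a hedged sketch (the function name, file naming, and output directory are invented for illustration):

```python
from datetime import datetime
from pathlib import Path


def save_debug_output(prompt: str, report: str, out_dir: str = "debug_output") -> Path:
    """Write the synthesizer prompt and the final report to a timestamped file."""
    directory = Path(out_dir)
    directory.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    path = directory / f"report-{stamp}.txt"
    path.write_text(
        f"--- PROMPT ---\n{prompt}\n\n--- REPORT ---\n{report}\n",
        encoding="utf-8",
    )
    return path
```

Writing both the prompt and the report makes it easy to see exactly what context the synthesizer received when debugging a bad report.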
After starting the application, you can interact with it through the web interface.
- Open your web browser and navigate to `http://localhost:8000/app/`.
- Enter a topic you want to research in the input box and click "Generate Report".
- The generated report and the raw JSON output will be displayed on the page.
- FastAPI: https://fastapi.tiangolo.com/
- Pydantic: https://docs.pydantic.dev/latest/
- PydanticAI: https://ai.pydantic.dev/
- duckduckgo-search: https://pypi.org/project/duckduckgo-search/
- googlesearch-python: https://pypi.org/project/googlesearch-python/
- crawl4ai: https://docs.crawl4ai.com/