Commit aaeab48 (parent: c0e2549)

docs: update default model example to gemini-2.5-flash in README

1 file changed: README-GEMINI-MCP.md (+30 −12)
````diff
@@ -76,7 +76,7 @@ You can configure the server's behavior via command-line arguments and environme
   - **Note**: Command-line argument `--port` takes precedence over this environment variable.
 - `GEMINI_TOOLS_DEFAULT_MODEL`: Sets a default LLM model specifically for tools hosted by the server (like `google_web_search`).
   - **Purpose**: When a tool needs to invoke an LLM during its execution (e.g., to summarize search results), it will use the model specified by this variable. This allows you to use a different (potentially faster or cheaper) model for tool execution than for the main chat.
-  - **Example**: `GEMINI_TOOLS_DEFAULT_MODEL=gemini-1.5-flash`
+  - **Example**: `GEMINI_TOOLS_DEFAULT_MODEL=gemini-2.5-flash`
 
 ## Usage
 
````
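The `--port` / `GEMINI_MCP_PORT` precedence rule documented in this hunk is easy to verify; a minimal sketch, reusing only the flag and variable named in the diff:

```bash
# Both the environment variable and the CLI flag set a port.
# Per the documented precedence, the --port flag should win
# and the server should listen on 9001, not 9000.
GEMINI_MCP_PORT=9000 npm run start --workspace=@gemini-community/gemini-mcp-server -- --port=9001
```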

````diff
@@ -102,7 +102,7 @@ npm run start --workspace=@gemini-community/gemini-mcp-server
 npm run start --workspace=@gemini-community/gemini-mcp-server -- --port=9000 --debug
 
 # Use a faster model for tool calls
-GEMINI_TOOLS_DEFAULT_MODEL=gemini-1.5-flash npm run start --workspace=@gemini-community/gemini-mcp-server
+GEMINI_TOOLS_DEFAULT_MODEL=gemini-2.5-flash npm run start --workspace=@gemini-community/gemini-mcp-server
 
 # Use environment variable to set the port
 GEMINI_MCP_PORT=9000 npm run start --workspace=@gemini-community/gemini-mcp-server
````
````diff
@@ -111,10 +111,19 @@ GEMINI_MCP_PORT=9000 npm run start --workspace=@gemini-community/gemini-mcp-serv
 When the server starts successfully, you will see output similar to this:
 
 ```
-🚀 Gemini CLI MCP Server and OpenAI Bridge are running on port 8765
-   - MCP transport listening on http://localhost:8765/mcp
-   - OpenAI-compatible endpoints available at http://localhost:8765/v1
-⚙️ Using default model for tools: gemini-2.5-pro
+🚀 Starting Gemini CLI MCP Server...
+🚀 Gemini CLI MCP Server running on port 8765
+```
+
+In debug mode (`--debug`), you will see additional information:
+
+```
+🚀 Starting Gemini CLI MCP Server...
+Using authentication method: USE_GEMINI
+Using default model for tools: gemini-2.5-pro
+🚀 Gemini CLI MCP Server running on port 8765
+   - MCP transport: http://localhost:8765/mcp
+   - OpenAI endpoints: http://localhost:8765/v1
 ```
 
 ### 3. Testing the Endpoints
````
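The updated output advertises OpenAI-compatible endpoints under `http://localhost:8765/v1`. A minimal smoke test, assuming the bridge implements the standard OpenAI routes (`/v1/models`, `/v1/chat/completions`); the diff only shows the `/v1` base path, so the exact routes are an assumption:

```bash
# List available models (standard OpenAI route; assumed to be implemented)
curl http://localhost:8765/v1/models

# Minimal chat completion request; the model name is illustrative
curl http://localhost:8765/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gemini-2.5-flash", "messages": [{"role": "user", "content": "Hello"}]}'
```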
The same changes are mirrored in the README's Chinese section (translated to English below):

````diff
@@ -239,7 +248,7 @@ Please note that the name of this package, `@gemini-community/gemini-mcp-server`
   - **Note**: The command-line argument `--port` takes precedence over this environment variable.
 - `GEMINI_TOOLS_DEFAULT_MODEL`: Sets a default LLM model for tools hosted by the server (such as `google_web_search`).
   - **Purpose**: When a tool needs to invoke an LLM during execution (e.g., to summarize search results), it will use the model specified by this environment variable. This lets you use different models for the main chat and for tool execution, potentially optimizing cost and speed.
-  - **Example**: `GEMINI_TOOLS_DEFAULT_MODEL=gemini-1.5-flash`
+  - **Example**: `GEMINI_TOOLS_DEFAULT_MODEL=gemini-2.5-flash`
 
 ## Usage
 
@@ -265,7 +274,7 @@ npm run start --workspace=@gemini-community/gemini-mcp-server
 npm run start --workspace=@gemini-community/gemini-mcp-server -- --port=9000 --debug
 
 # Use a faster model for tool calls
-GEMINI_TOOLS_DEFAULT_MODEL=gemini-1.5-flash npm run start --workspace=@gemini-community/gemini-mcp-server
+GEMINI_TOOLS_DEFAULT_MODEL=gemini-2.5-flash npm run start --workspace=@gemini-community/gemini-mcp-server
 
 # Use an environment variable to set the port
 GEMINI_MCP_PORT=9000 npm run start --workspace=@gemini-community/gemini-mcp-server
@@ -274,10 +283,19 @@ GEMINI_MCP_PORT=9000 npm run start --workspace=@gemini-community/gemini-mcp-serv
 When the server starts successfully, you will see output similar to the following:
 
 ```
-🚀 Gemini CLI MCP Server and OpenAI Bridge are running on port 8765
-   - MCP transport listening on http://localhost:8765/mcp
-   - OpenAI-compatible endpoints available at http://localhost:8765/v1
-⚙️ Using default model for tools: gemini-2.5-pro
+🚀 Starting Gemini CLI MCP Server...
+🚀 Gemini CLI MCP Server running on port 8765
+```
+
+In debug mode (`--debug`), you will see additional information:
+
+```
+🚀 Starting Gemini CLI MCP Server...
+Using authentication method: USE_GEMINI
+Using default model for tools: gemini-2.5-pro
+🚀 Gemini CLI MCP Server running on port 8765
+   - MCP transport: http://localhost:8765/mcp
+   - OpenAI endpoints: http://localhost:8765/v1
 ```
 
 ### 3. Testing the Endpoints
````
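For the MCP side, the transport at `http://localhost:8765/mcp` speaks JSON-RPC 2.0. A hedged sketch of an `initialize` handshake, assuming the server uses MCP's streamable HTTP transport; the request shape follows the MCP specification, and this server's exact requirements are not shown in the diff:

```bash
# Send a JSON-RPC initialize request to the MCP endpoint.
# The Accept header covers both response modes allowed by the MCP
# streamable HTTP transport (plain JSON or a server-sent event stream).
curl -X POST http://localhost:8765/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
          "protocolVersion": "2025-03-26",
          "capabilities": {},
          "clientInfo": {"name": "curl-smoke-test", "version": "0.0.0"}
        }
      }'
```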
