
Commit 9276d43 (1 parent: fa5c350)

Update port setting

4 files changed: 78246 additions, 2 deletions


README-GEMINI-MCP.md: 14 additions, 0 deletions
```diff
@@ -66,10 +66,14 @@ You can configure the server's behavior via command-line arguments and environme
 
 - `--port=<number>`: Specifies the port for the server to listen on.
   - **Default**: `8765`
+  - **Note**: Can also be set via the `GEMINI_MCP_PORT` environment variable. The command-line argument takes precedence.
 - `--debug`: Enables detailed debug logging to the console.
 
 ### Environment Variables
 
+- `GEMINI_MCP_PORT`: Specifies the port for the server to listen on.
+  - **Default**: `8765`
+  - **Note**: The command-line argument `--port` takes precedence over this environment variable.
 - `GEMINI_TOOLS_DEFAULT_MODEL`: Sets a default LLM model specifically for tools hosted by the server (like `google_web_search`).
   - **Purpose**: When a tool needs to invoke an LLM during its execution (e.g., to summarize search results), it will use the model specified by this variable. This allows you to use a different (potentially faster or cheaper) model for tool execution than for the main chat.
   - **Example**: `GEMINI_TOOLS_DEFAULT_MODEL=gemini-1.5-flash`
```
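The precedence described above (command-line `--port` first, then `GEMINI_MCP_PORT`, then the default `8765`) can be sketched as follows. This is an illustrative sketch only; `resolvePort` is a hypothetical helper, not the package's actual implementation:

```javascript
// Illustrative sketch of the documented port precedence; not the package's code.
const DEFAULT_PORT = 8765;

function resolvePort(argv, env) {
  // 1. A --port=<number> command-line argument takes precedence.
  for (const arg of argv) {
    const match = /^--port=(\d+)$/.exec(arg);
    if (match) return Number(match[1]);
  }
  // 2. Otherwise fall back to the GEMINI_MCP_PORT environment variable.
  if (env.GEMINI_MCP_PORT && /^\d+$/.test(env.GEMINI_MCP_PORT)) {
    return Number(env.GEMINI_MCP_PORT);
  }
  // 3. Finally, use the documented default.
  return DEFAULT_PORT;
}

console.log(resolvePort(['--port=9000'], { GEMINI_MCP_PORT: '8000' })); // 9000
console.log(resolvePort([], { GEMINI_MCP_PORT: '8000' }));              // 8000
console.log(resolvePort([], {}));                                       // 8765
```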
````diff
@@ -99,6 +103,9 @@ npm run start --workspace=@gemini-community/gemini-mcp-server -- --port=9000 --d
 
 # Use a faster model for tool calls
 GEMINI_TOOLS_DEFAULT_MODEL=gemini-1.5-flash npm run start --workspace=@gemini-community/gemini-mcp-server
+
+# Use environment variable to set the port
+GEMINI_MCP_PORT=9000 npm run start --workspace=@gemini-community/gemini-mcp-server
 ```
 
 When the server starts successfully, you will see output similar to this:
````
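The `GEMINI_TOOLS_DEFAULT_MODEL` override shown in the usage examples amounts to a simple fallback: server-hosted tools prefer the dedicated model when it is set and otherwise use the main chat's model. A minimal sketch, assuming a hypothetical `pickToolModel` helper and illustrative model names (not the package's actual API):

```javascript
// Illustrative sketch of the documented tool-model fallback; not the package's code.
function pickToolModel(env, mainChatModel) {
  // A dedicated (often faster or cheaper) tool model wins when configured...
  const toolModel = env.GEMINI_TOOLS_DEFAULT_MODEL;
  // ...otherwise tools fall back to the main chat's model.
  return toolModel || mainChatModel;
}

console.log(pickToolModel({ GEMINI_TOOLS_DEFAULT_MODEL: 'gemini-1.5-flash' }, 'gemini-1.5-pro'));
// gemini-1.5-flash
console.log(pickToolModel({}, 'gemini-1.5-pro'));
// gemini-1.5-pro
```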
```diff
@@ -222,10 +229,14 @@ Please note that the name of this package, `@gemini-community/gemini-mcp-server`
 
 - `--port=<number>`: Specifies the port for the server to listen on.
   - **Default**: `8765`
+  - **Note**: Can also be set via the `GEMINI_MCP_PORT` environment variable. The command-line argument takes precedence.
 - `--debug`: Enables detailed debug log output.
 
 ### Environment Variables
 
+- `GEMINI_MCP_PORT`: Specifies the port for the server to listen on.
+  - **Default**: `8765`
+  - **Note**: The command-line argument `--port` takes precedence over this environment variable.
 - `GEMINI_TOOLS_DEFAULT_MODEL`: Sets a default LLM model for the tools hosted by the server (such as `google_web_search`).
   - **Purpose**: When a tool needs to call an LLM during execution (e.g., to summarize search results), it will use the model specified by this variable. This lets you use different models for the main chat and for tool execution, potentially optimizing cost and speed.
   - **Example**: `GEMINI_TOOLS_DEFAULT_MODEL=gemini-1.5-flash`
```
````diff
@@ -255,6 +266,9 @@ npm run start --workspace=@gemini-community/gemini-mcp-server -- --port=9000 --d
 
 # Use a faster model for tool calls
 GEMINI_TOOLS_DEFAULT_MODEL=gemini-1.5-flash npm run start --workspace=@gemini-community/gemini-mcp-server
+
+# Use an environment variable to set the port
+GEMINI_MCP_PORT=9000 npm run start --workspace=@gemini-community/gemini-mcp-server
 ```
 
 After the server starts successfully, you will see output similar to the following:
````
