Commit c64bef8

refactor: Convert config comments to English and improve model display
- Remove all Chinese comments from mindmap_ai_config.toml
- Always display both prompt and mindmap model configuration at start
- Show model name when generating each language version
- Improve visibility of which models are being used during generation
1 parent 9b7f1d4 commit c64bef8

File tree

1 file changed (+4, -4)

tools/mindmap_ai_config.toml

Lines changed: 4 additions & 4 deletions
@@ -20,10 +20,10 @@
 # - Optimizing existing prompts (when selecting [o] Optimize option)
 # - First-time prompt generation (when selecting [o] Generate prompt with AI option)
 #
-# Recommended: Models good at understanding and optimizing text (e.g., GPT-4 series)
-prompt_model = "gpt-4o"
+# Recommended: Models good at understanding and optimizing text (e.g., gpt-5.2, gpt-5.1)
+prompt_model = "gpt-5.2"
 prompt_temperature = 0.7
-prompt_max_completion_tokens = 8000
+prompt_max_completion_tokens = 10000
 
 # -----------------------------------------------------------------------------
 # Mind Map Generation Model
@@ -32,7 +32,7 @@ prompt_max_completion_tokens = 8000
 # - Generating final mind map content based on prompts
 # - This is the main generation task and consumes more tokens
 #
-# Recommended: Models good at creative generation and long text output (e.g., GPT-5.1-codex, GPT-5.2)
+# Recommended: Models good at creative generation and long text output (e.g., gpt-5.1-codex, gpt-5.2, gpt-5.1)
 mindmap_model = "gpt-5.1-codex"
 mindmap_temperature = 0.7
 # GPT-5.1-codex uses max_completion_tokens (older models use max_tokens)
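
A minimal sketch, assuming the consuming script is Python 3.11+ and that these keys sit at the top level of the TOML file, of how the startup behavior described in the commit message (printing both configured models before generation) could look. The config path, function names, and script structure here are assumptions for illustration, not the tool's actual code.

import tomllib  # stdlib TOML reader, Python 3.11+
from pathlib import Path

CONFIG_PATH = Path("tools/mindmap_ai_config.toml")  # path taken from the commit's file tree

def load_config(path: Path = CONFIG_PATH) -> dict:
    """Read the generator settings from the TOML config file."""
    with path.open("rb") as f:  # tomllib requires a binary file handle
        return tomllib.load(f)

def announce_models(cfg: dict) -> None:
    """Print both model settings up front, mirroring the 'display at start' change."""
    print(
        f"Prompt model:  {cfg['prompt_model']} "
        f"(temperature={cfg['prompt_temperature']}, "
        f"max_completion_tokens={cfg['prompt_max_completion_tokens']})"
    )
    print(
        f"Mindmap model: {cfg['mindmap_model']} "
        f"(temperature={cfg['mindmap_temperature']})"
    )

if __name__ == "__main__":
    announce_models(load_config())

Printing both models before any generation starts makes it obvious which configuration is active, which is the visibility improvement the commit message calls out.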
