
[PZ COMPETITION] Competition jouw #171

Open

jouw wants to merge 5 commits into FlagAI-Open:main from jouw:competition-jouw

Conversation

@jouw commented on Sep 12, 2025

submit the materials

@gemini-code-assist bot left a comment

Summary of Changes

Hello @jouw, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a collection of new shell scripts aimed at supporting advanced training and refinement workflows for large language models within a competition context. The changes provide robust setups for both reinforcement learning-based fine-tuning on mathematical reasoning datasets and a comprehensive pre-training pipeline for a GPT model leveraging a Mixture of Experts architecture, all while integrating with WANDB for effective experiment management.

Highlights

  • New PPO Training Scripts: Two new PPO (Proximal Policy Optimization) training scripts (train-grpo-v4.sh and train-grpo.sh) have been added for fine-tuning the OpenSeek-Small-v1-SFT model on math reasoning tasks, with varying configurations for prompt/response lengths and GPU usage.
  • SGLang Engine Configuration: Both PPO training scripts now include specific sglang engine keyword arguments, enabling the use of flashinfer for the attention backend and explicit tokenizer path and mode settings for improved performance and control.
  • Comprehensive GPT Training Job Submission: A detailed SLURM job submission script (submit-training-job.sh) has been introduced for pre-training a GPT model. This script includes extensive configurations for a Mixture of Experts (MoE) architecture, data paths, optimization, and WANDB integration.
  • WANDB Integration: All new training scripts are configured to integrate with Weights & Biases (WANDB) for experiment tracking, including API key, project names, and run names, facilitating monitoring and reproducibility of training runs.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature              | Command             | Description
---------------------|---------------------|------------
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request adds several shell scripts for running training jobs. My review focuses on improving the security, portability, and maintainability of these scripts. I've found several critical security issues related to hardcoded credentials and API keys that must be addressed. Additionally, there are multiple instances of hardcoded paths that make the scripts difficult to reuse in different environments. I've provided suggestions to replace these with variables and dynamic values to make the scripts more robust and portable.

Comment on lines +38 to +39
export http_proxy=http://u-cEoRwn:EDvFuZTe@172.16.4.9:3128
export https_proxy=http://u-cEoRwn:EDvFuZTe@172.16.4.9:3128

critical

Hardcoded credentials in the proxy URLs are a major security risk. These should be removed from the script and managed securely, for example, by loading them from environment variables which are set outside of version control.

Suggested change
- export http_proxy=http://u-cEoRwn:EDvFuZTe@172.16.4.9:3128
- export https_proxy=http://u-cEoRwn:EDvFuZTe@172.16.4.9:3128
+ export http_proxy=${HTTP_PROXY}
+ export https_proxy=${HTTPS_PROXY}
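
For completeness, a minimal sketch of how the credentials could be supplied from outside version control. The file name proxy.env and the fail-fast check are assumptions for illustration, not part of this PR:

# proxy.env -- kept next to the script but listed in .gitignore,
# so the credentials never enter version control.
export HTTP_PROXY="http://USER:PASSWORD@172.16.4.9:3128"
export HTTPS_PROXY="${HTTP_PROXY}"

# In the training script: source the file if present, fail fast otherwise.
if [ -f ./proxy.env ]; then
    . ./proxy.env
else
    echo "proxy.env not found; set HTTP_PROXY/HTTPS_PROXY first" >&2
    exit 1
fi
export http_proxy="${HTTP_PROXY}"
export https_proxy="${HTTPS_PROXY}"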

--bf16 --attention-softmax-in-fp32 --accumulate-allreduce-grads-in-fp32 \
--log-interval 1 --tensorboard-log-interval 1 \
--wandb-mode online \
--wandb-api-key 2356f969f25a7b0f375f3bcf3aff92e70d912bda \

critical

A hardcoded wandb-api-key is present. This is a significant security risk. The key should not be part of the source code. Please use the WANDB_API_KEY environment variable, which is already being exported (as a placeholder) on line 41. If the training script doesn't automatically pick up the environment variable, you can pass it as an argument like this.

Suggested change
- --wandb-api-key 2356f969f25a7b0f375f3bcf3aff92e70d912bda \
+ --wandb-api-key "$WANDB_API_KEY" \
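
A fail-fast guard near the top of the script would also make the dependency on the environment explicit. A minimal sketch, assuming the caller exports WANDB_API_KEY before launching (the error message is illustrative):

# Abort immediately with a clear message if the key was not exported by
# the caller; ${VAR:?msg} is standard POSIX parameter expansion.
: "${WANDB_API_KEY:?export WANDB_API_KEY in the environment, not in the script}"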

Comment on lines +26 to +32
mkdir -p /home/bingxing2/home/scx7353/workspace/OpenSeek/OpenSeek-Small-v1-Baseline/checkpoints
mkdir -p /home/bingxing2/home/scx7353/workspace/OpenSeek/OpenSeek-Small-v1-Baseline/checkpoints
mkdir -p /home/bingxing2/home/scx7353/workspace/OpenSeek/OpenSeek-Small-v1-Baseline/logs
mkdir -p /home/bingxing2/home/scx7353/workspace/OpenSeek/OpenSeek-Small-v1-Baseline/logs/pids
mkdir -p /home/bingxing2/home/scx7353/workspace/OpenSeek/OpenSeek-Small-v1-Baseline/logs/details
mkdir -p /home/bingxing2/home/scx7353/workspace/OpenSeek/OpenSeek-Small-v1-Baseline/tensorboard
mkdir -p /home/bingxing2/home/scx7353/workspace/OpenSeek/OpenSeek-Small-v1-Baseline/wandb

high

This block has two issues:

  1. Hardcoded absolute paths make the script non-portable.
  2. A duplicated mkdir command for checkpoints.

It's recommended to use a variable for the base path (e.g., BASE_PATH, defined at the top of the script) and remove the duplicate line to improve portability and correctness.

Suggested change
- mkdir -p /home/bingxing2/home/scx7353/workspace/OpenSeek/OpenSeek-Small-v1-Baseline/checkpoints
- mkdir -p /home/bingxing2/home/scx7353/workspace/OpenSeek/OpenSeek-Small-v1-Baseline/checkpoints
- mkdir -p /home/bingxing2/home/scx7353/workspace/OpenSeek/OpenSeek-Small-v1-Baseline/logs
- mkdir -p /home/bingxing2/home/scx7353/workspace/OpenSeek/OpenSeek-Small-v1-Baseline/logs/pids
- mkdir -p /home/bingxing2/home/scx7353/workspace/OpenSeek/OpenSeek-Small-v1-Baseline/logs/details
- mkdir -p /home/bingxing2/home/scx7353/workspace/OpenSeek/OpenSeek-Small-v1-Baseline/tensorboard
- mkdir -p /home/bingxing2/home/scx7353/workspace/OpenSeek/OpenSeek-Small-v1-Baseline/wandb
+ mkdir -p "${BASE_PATH}/checkpoints"
+ # The duplicate line was here
+ mkdir -p "${BASE_PATH}/logs"
+ mkdir -p "${BASE_PATH}/logs/pids"
+ mkdir -p "${BASE_PATH}/logs/details"
+ mkdir -p "${BASE_PATH}/tensorboard"
+ mkdir -p "${BASE_PATH}/wandb"

actor_rollout_ref.rollout.tensor_model_parallel_size=1 \
+actor_rollout_ref.rollout.engine_kwargs.sglang.attention_backend=flashinfer \
+actor_rollout_ref.rollout.engine_kwargs.sglang.use_tiktoken=false \
+actor_rollout_ref.rollout.engine_kwargs.sglang.tokenizer_path=/root/workspace/OpenSeek-Small-v1-SFT \

medium

The tokenizer path is hardcoded. It's better to use the MODEL_PATH variable defined at the top of the script for consistency and easier maintenance.

Suggested change
- +actor_rollout_ref.rollout.engine_kwargs.sglang.tokenizer_path=/root/workspace/OpenSeek-Small-v1-SFT \
+ +actor_rollout_ref.rollout.engine_kwargs.sglang.tokenizer_path=$MODEL_PATH \

actor_rollout_ref.rollout.tensor_model_parallel_size=1 \
+actor_rollout_ref.rollout.engine_kwargs.sglang.attention_backend=flashinfer \
+actor_rollout_ref.rollout.engine_kwargs.sglang.use_tiktoken=false \
+actor_rollout_ref.rollout.engine_kwargs.sglang.tokenizer_path=/root/workspace/OpenSeek-Small-v1-SFT \

medium

The tokenizer path is hardcoded. It's better to use the MODEL_PATH variable defined at the top of the script for consistency and easier maintenance.

Suggested change
- +actor_rollout_ref.rollout.engine_kwargs.sglang.tokenizer_path=/root/workspace/OpenSeek-Small-v1-SFT \
+ +actor_rollout_ref.rollout.engine_kwargs.sglang.tokenizer_path=$MODEL_PATH \
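
Both scripts need the same one-line change. For reference, a minimal sketch of the pattern, assuming MODEL_PATH is defined once in each script's header as the reviews suggest (the variable name comes from the review; the surrounding lines are illustrative):

# Single source of truth for the model/tokenizer location,
# defined once near the top of the script.
MODEL_PATH=/root/workspace/OpenSeek-Small-v1-SFT

# ...then referenced inside the launch command:
#   +actor_rollout_ref.rollout.engine_kwargs.sglang.tokenizer_path="$MODEL_PATH" \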

module load miniforge3/24.1

WANDB_RUN="OpenSeek-Small-v1_tokens-15B-math-exp2"
LOG_DIR="/home/bingxing2/home/scx7353/workspace/OpenSeek/OpenSeek-Small-v1-Baseline/logs/details/host_0_localhost/20250814_2024"

medium

The LOG_DIR contains a hardcoded timestamp. This makes the script less reusable for multiple runs. It's better to generate the timestamp dynamically using the date command. This will create a new log directory for each run.

Suggested change
- LOG_DIR="/home/bingxing2/home/scx7353/workspace/OpenSeek/OpenSeek-Small-v1-Baseline/logs/details/host_0_localhost/20250814_2024"
+ LOG_DIR="/home/bingxing2/home/scx7353/workspace/OpenSeek/OpenSeek-Small-v1-Baseline/logs/details/host_0_localhost/$(date +'%Y%m%d_%H%M%S')"
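
Combined with the BASE_PATH variable proposed in the earlier comment, the whole assignment stays portable. A sketch (BASE_PATH is an assumption carried over from that comment):

# One fresh, timestamped log directory per run.
TIMESTAMP="$(date +'%Y%m%d_%H%M%S')"
LOG_DIR="${BASE_PATH}/logs/details/host_0_localhost/${TIMESTAMP}"
mkdir -p "${LOG_DIR}"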

export WANDB_START_METHOD=thread
export WANDB_GROUP=$WANDB_RUN

VISIBLE_DEVICES=0,1,2,3 DEVICE_MAX_CONNECTIONS=4 torchrun \

medium

The variable VISIBLE_DEVICES is likely a typo for CUDA_VISIBLE_DEVICES. However, CUDA_VISIBLE_DEVICES is already exported on line 15, making this assignment redundant and potentially confusing. It's recommended to remove it, since torchrun does not use VISIBLE_DEVICES.

Suggested change
- VISIBLE_DEVICES=0,1,2,3 DEVICE_MAX_CONNECTIONS=4 torchrun \
+ DEVICE_MAX_CONNECTIONS=4 torchrun \
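
If per-invocation GPU scoping was the intent rather than relying on the earlier export, the conventional spelling would be CUDA_VISIBLE_DEVICES, which the CUDA runtime in the launched workers does read. A sketch, shown only as the alternative to deleting the assignment (the GPU list comes from the original line):

# Scopes this single launch to GPUs 0-3; redundant in this script because
# CUDA_VISIBLE_DEVICES is already exported on line 15.
CUDA_VISIBLE_DEVICES=0,1,2,3 DEVICE_MAX_CONNECTIONS=4 torchrun \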

@ftgreat changed the title from "Competition jouw" to "[PZ COMPETITION] Competition jouw" on Sep 13, 2025