
Conversation


@dzh19990407 commented Sep 17, 2025

What does this PR do?

Add concise overview of what this PR aims to achieve or accomplish. Reference related GitHub issues and PRs that help with the review.

This PR implements Single-stream Policy Optimization (SPO), proposed in the paper https://arxiv.org/abs/2509.13232.

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: [algo] feat: add GSPO-token policy loss computation function #2775
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

dzh19990407 and others added 4 commits September 17, 2025 11:37
- Add SPO algorithm implementation with KL-adaptive value tracker
- Implement single-stream architecture eliminating group synchronization
- Add prioritized sampling and global advantage normalization
- Include comprehensive README with performance results and usage guide
- Add configuration files and training scripts
- Achieve +3.4 pp improvement on math benchmarks vs GRPO
Remove Chinese language comments from spo_ray_trainer.py to improve code readability and maintain English-only codebase standards.
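
For readers skimming the thread, here is a rough, hypothetical sketch of two of the components named in the commits above: a KL-adaptive value tracker and single-stream (group-free) advantages with global normalization. The class name, method names, and the exact adaptation rule are assumptions, not the PR's actual code; see recipe/spo/spo_ray_trainer.py and the paper for the real implementation.

```python
import numpy as np


class KLAdaptiveValueTracker:
    """Hypothetical sketch: a per-prompt value estimate (v_hat) kept as an
    exponential moving average whose step size grows with policy drift."""

    def __init__(self, init_value: float = 0.5, base_lr: float = 0.1):
        self.value = init_value      # current v_hat for this prompt
        self.base_lr = base_lr

    def update(self, reward: float, kl_to_old_policy: float) -> float:
        # Assumed rule: the more the policy has drifted (larger KL), the less
        # we trust the stale estimate, so we adapt faster.
        lr = min(1.0, self.base_lr * (1.0 + kl_to_old_policy))
        self.value += lr * (reward - self.value)
        return self.value


def single_stream_advantages(rewards, values):
    """One sampled response per prompt (no group): advantage = reward - v_hat,
    normalized globally across the batch instead of per prompt group."""
    adv = np.asarray(rewards, dtype=np.float64) - np.asarray(values, dtype=np.float64)
    return (adv - adv.mean()) / (adv.std() + 1e-8)
```
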
@CLAassistant

CLAassistant commented Sep 17, 2025

CLA assistant check
All committers have signed the CLA.

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request introduces the Single-stream Policy Optimization (SPO) algorithm, a novel reinforcement learning method for Large Language Models. The changes primarily consist of new files for the SPO recipe, including configuration, the main training script, a run script, and the core Ray trainer implementation. My review has identified two critical issues. First, the run_spo.sh script uses an undefined variable which will cause the training to fail at launch. Second, the spo_ray_trainer.py contains unsafe exception handling during data resampling, which could lead to silent data corruption and hard-to-debug training failures. Addressing these issues is crucial for the correctness and stability of the new algorithm's implementation.
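
As a side note on the second finding, the pattern being flagged is roughly the one sketched below. This is an illustrative example of a risky versus a safer resampling guard, not the PR's actual code; `sampler.resample` is a hypothetical call.

```python
import logging


# Risky: a bare except silently falls back on any failure, so a broken
# resampling step can corrupt the training batch without any visible error.
def resample_unsafe(sampler, batch):
    try:
        return sampler.resample(batch)
    except Exception:
        return batch


# Safer: catch only the failure mode you expect, log it, and let anything
# else propagate so the run fails loudly instead of training on bad data.
def resample_safe(sampler, batch):
    try:
        return sampler.resample(batch)
    except ValueError as err:
        logging.warning("Resampling failed, keeping original batch: %s", err)
        return batch
```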

@vermouth1992
Collaborator

Could you pin the verl commit in your readme?

```bash
# Enable SPO training mode
export SPO_ENABLE=True
export SPO_OFFLINE_VALUES="/path/to/offline/values.json"
```

what is the purpose of this file?


see Appendix A

Author

This file is an offline value estimate (Appendix A); I have added a link to the Hugging Face dataset in the README.

dzh19990407 and others added 3 commits September 28, 2025 14:54
@dzh19990407
Author

> Could you pin the verl commit in your readme?

I have updated it in the README file.

- Switch offline values from local JSON file to HuggingFace dataset loading
- Update README with offline value generation instructions
- Add debug mode support with RAY_DEBUG flag in config
- Fix config name reference from ppo_trainer to spo_trainer
- Update batch sizes and paths to use environment variables
- Change custom module paths from retool to spo directory
- Switch multi-turn format from retool_paper to hermes
- Adjust offline value threshold from 0 to 0.5 for binary classification

This improves the SPO training pipeline by using centralized dataset storage
and providing better configuration flexibility through environment variables.
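
For reference, loading offline values from a Hugging Face dataset and applying the 0.5 threshold mentioned above might look roughly like the sketch below. The dataset path and column names are placeholders (see the README in this PR for the real ones), and the exact way the trainer uses the threshold may differ.

```python
from datasets import load_dataset

# Placeholder dataset path and column names; substitute the ones from the README.
ds = load_dataset("your-org/spo-offline-values", split="train")

offline_values = {}
for row in ds:
    v_hat = row["offline_value"]  # e.g., empirical success rate from offline sampling
    # One reading of the commit above: binarize the estimate at 0.5.
    offline_values[row["prompt_id"]] = 1.0 if v_hat >= 0.5 else 0.0
```
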
@dzh19990407 requested a review from hustnn September 28, 2025 08:35
@dzh19990407
Author

@wuxibin89 @vermouth1992 @tongyx361 @PeterSH6
Hi team,

Thanks for all the great feedback! I have updated the code based on the review comments and pushed the changes.

Please take a quick look when you have a chance. Thanks!

@zhongwen-xu

@hustnn @wuxibin89 @vermouth1992 @tongyx361 @PeterSH6

Hey folks, can you review this PR?

@hustnn

hustnn commented Oct 16, 2025

Hi @zhongwen-xu, I am not on the verl team, but I am interested in your work and am testing it on my own case (a coding model). If it works, I will let you know.

Do I need to first run it to get offline values before training?

@zhongwen-xu

Hi @hustnn,

Thanks for your interest. For your question, please refer to Appendix E in https://arxiv.org/abs/2509.13232, especially Figure 6 (c), the "Offline Initialization Ablation".

Short answer: the offline initialization helps in the early training steps; in the long run, runs with and without the initialization are similar.

My recommendation is still to get the offline values before training: a good v_hat estimate from offline sampling is relatively cheap (the same cost as the sampling people already do for prompt filtering), and the values can be shared across multiple experiment runs.

Happy to answer any further questions by email!
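
For anyone unfamiliar with the step being described, a minimal sketch of such an offline v_hat estimation pass is shown below. `generate` and `score` stand in for your rollout engine and reward function; they are placeholders, not verl APIs.

```python
def estimate_offline_values(prompts, generate, score, k: int = 8):
    """Estimate v_hat per prompt from k offline rollouts, i.e. the same
    sampling people already run for prompt filtering."""
    values = {}
    for prompt_id, prompt in prompts.items():
        rewards = [score(prompt, generate(prompt)) for _ in range(k)]
        values[prompt_id] = sum(rewards) / k  # empirical success rate as v_hat
    return values
```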

@zsgvivo

zsgvivo commented Oct 18, 2025

Hi @zhongwen-xu, thanks for your great work on SPO. I'm testing your implementation using the training scripts provided in this PR (recipe/spo/run_spo.sh). I've noticed that in both the SPO and GRPO setups, the model uses tools at the beginning, but as training progresses it gradually stops calling them and instead generates direct answers. Is this behavior expected?
[Screenshot: 2025-10-18 15:23:06]

@zhongwen-xu

zhongwen-xu commented Oct 18, 2025

Hi @zsgvivo

Thanks for your interest in our work!
The reason we decided to close this PR is that we initially thought a quick PR with the original ReTool tool protocol might work with Qwen3 8B without SFT, so we submitted the PR before the holiday. However, during the long holiday we did a verification run with this PR's implementation, and it turned out it didn't work, similar to what you just reported. We now suspect that the ReTool protocol requires SFT first, rather than starting RL from it directly.

We are working on a full replication of what we had in the paper, where the tool-call protocol is a fenced code block:

```python
# the generated code goes here
```

rather than the JSON function calls used in the ReTool implementation. (Note that we present all the core code in this PR, so if you modify the tool-call protocol as just described, it should just work, without waiting for our implementation.)

Stay tuned!
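
To make the distinction concrete, the sketch below illustrates the two tool-call formats being contrasted. Both strings are illustrative examples, not verbatim output from either implementation.

```python
import re

FENCE = "`" * 3  # avoid literal backticks inside this snippet

# (a) Paper-style protocol: the model writes a fenced Python block and the
#     trainer extracts and executes its contents.
code_block_style = (
    "Let me compute this with code.\n"
    f"{FENCE}python\n"
    "print(sum(range(10)))\n"
    f"{FENCE}\n"
)

# (b) ReTool-style JSON function call (hermes-like): the model emits a
#     structured tool invocation instead of a raw code block.
json_call_style = (
    "<tool_call>\n"
    '{"name": "code_interpreter", "arguments": {"code": "print(sum(range(10)))"}}\n'
    "</tool_call>\n"
)

# Minimal extractor for protocol (a).
match = re.search(rf"{FENCE}python\n(.*?)\n{FENCE}", code_block_style, re.S)
print(match.group(1))  # -> print(sum(range(10)))
```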

@longls777

Hi @zhongwen-xu and SPO authors,

Thanks a lot for sharing your work and the initial SPO implementation in this PR.

From your last comment, I understand that this PR version didn’t pass your verification run and that you’re working on a full replication with the updated tool-calling protocol (instead of the original ReTool JSON function calls). I’m very interested in reproducing the SPO results and applying the method to my own use cases.

May I ask if there is currently a validated/recommended implementation of SPO available (e.g., a newer branch, updated scripts, or a different repo) that you would suggest users follow?
If there is any WIP code, configs, or guidance you’re comfortable sharing—even if it’s not fully polished yet—it would be extremely helpful.

Thanks again for the great work, and really appreciate any pointers!

@dzh19990407
Author

@longls777 SPO has been merged into https://github.com/verl-project/verl-recipe

@longls777

Thanks so much for sharing this implementation - really appreciate your help!
