[trainer, megatron, rollout, sglang, model] feat: Support Async rl state-machine and add red-moe model #1
What does this PR do?
This PR implements fully asynchronous RL training. The async-RL pipeline is an asynchronous reinforcement-learning training implementation built on a fully decoupled architecture: the actor-train, actor-forward-logp, ref-logp, and rollout-generate components are separated so that each can be scheduled and scaled independently for better performance.
Async-RL workflow:

Async-RL achieves up to a 50-100% performance improvement in our benchmarks while maintaining convergence.
Benchmark Configuration:
Checklist Before Starting
Search for similar PRs. Paste at least one query link here: [trainer, fsdp, vllm, recipe] feat: one step off async training recipe volcengine/verl#2231. PR #2231 implements only one-step-off asynchronous training, and its NCCL-based parameter synchronization does not scale. This PR adds a state-machine mechanism for asynchronous parameter updates, which enables separate deployment and asynchronous execution of any component, e.g. separate parallel pipelines for actor-train, param-update, logp, and rollout.
Format the PR title as [{modules}] {type}: {description} (this will be checked by the CI).
{modules} include fsdp, megatron, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, e.g. [megatron, fsdp, doc].
{type} is one of feat, fix, refactor, chore, test.
For a breaking change, add [BREAKING] to the beginning of the title, e.g. [BREAKING][fsdp, megatron] feat: dynamic batching.
Test
API and Usage Example
Design & Code Changes
State-machine design for async-RL: RL training workflows are inherently complex. A synchronous approach can simply execute tasks sequentially, but async-RL requires explicit state transitions between tasks. To ensure both performance and correctness, the system uses flexible scheduling strategies that bind tasks to resources logically, and each task maintains its own produce/consume loop to prevent ordering errors. In this context, a state machine per task provides a clear and manageable design.
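The per-task produce/consume loop described above can be sketched as follows. This is an illustrative toy, not code from this PR: the class and state names are hypothetical, plain threads and bounded queues stand in for the real resource-bound scheduling, and the work functions are placeholders for real stages such as rollout or logp.

```python
import queue
import threading
from enum import Enum, auto

class TaskState(Enum):
    IDLE = auto()
    RUNNING = auto()
    DONE = auto()

class PipelineTask:
    """A pipeline stage with its own state machine and produce/consume loop."""

    def __init__(self, name, fn, inbox, outbox):
        self.name = name
        self.fn = fn            # the stage's work function (stand-in for e.g. rollout)
        self.inbox = inbox      # upstream queue
        self.outbox = outbox    # downstream queue
        self.state = TaskState.IDLE

    def run(self):
        # Each task loops independently: consume, process, produce.
        while True:
            item = self.inbox.get()
            if item is None:            # sentinel -> shut down
                self.state = TaskState.DONE
                self.outbox.put(None)   # propagate shutdown downstream
                break
            self.state = TaskState.RUNNING
            self.outbox.put(self.fn(item))
            self.state = TaskState.IDLE

# Wire two toy stages, "generate" then "logp"; bounded queues provide backpressure.
q0, q1 = queue.Queue(maxsize=2), queue.Queue(maxsize=2)
q2 = queue.Queue()  # unbounded sink, drained after the pipeline finishes
gen = PipelineTask("generate", lambda x: x + 1, q0, q1)
logp = PipelineTask("logp", lambda x: x * 10, q1, q2)
threads = [threading.Thread(target=t.run) for t in (gen, logp)]
for t in threads:
    t.start()
for batch in (1, 2, 3):
    q0.put(batch)
q0.put(None)  # end of data
for t in threads:
    t.join()

results = []
while True:
    out = q2.get()
    if out is None:
        break
    results.append(out)
print(results)  # [20, 30, 40]
```

Because every stage only touches its own queues, stages can run at different rates and on different resources, which is the property the state-machine design exploits.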
The pipeline implements a sophisticated state machine design where different state transitions correspond to the entire async-RL pipeline workflow:
dataloader → generate → rollout → logp → ref_logp → reward → train → param_update

Asynchronous Parameter Synchronization:
The parameter update process is decomposed into three main components:
1. Gather: uses NCCL for parameter aggregation (must be serial).
2. Send/Recv: asynchronous CPU communication.
3. Load: parameter loading without affecting GPU compute.
Add the red-moe model for GRPO.
Checklist Before Submitting
Important
Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.
Run pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always.
Request CI via the ci-request channel in the verl Slack workspace. (If not accessible, please try the Feishu group (飞书群).)