[rollout, vllm] fix: make LoRA with async vLLM work properly #3821
Conversation
Code Review
This pull request effectively resolves a critical issue with LoRA adapters in async vLLM rollouts, where the base model was incorrectly used instead of the updated LoRA model. The fixes, including standardizing LoRA identifiers, ensuring correct engine initialization, and passing LoRARequest during generation, are well-implemented and directly address the root causes. The introduction of constant identifiers and the explicit removal of old adapters are solid improvements. The new utility for calculating max_lora_rank also enhances configuration robustness. I have one suggestion to improve the robustness of this new utility function.
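To make the `max_lora_rank` point concrete: such a utility typically rounds the configured LoRA rank up to a value that vLLM accepts. The sketch below is illustrative rather than verl's actual implementation; the supported-rank set and the helper name `get_max_lora_rank` are assumptions and should be checked against the installed vLLM version.

```python
# Hedged sketch of a max_lora_rank helper (illustrative; not verl's actual utility).
# Assumption: vLLM accepts only a fixed set of max_lora_rank values; the exact set
# depends on the installed vLLM version.
SUPPORTED_MAX_LORA_RANKS = (8, 16, 32, 64, 128, 256)  # assumed set

def get_max_lora_rank(lora_rank: int) -> int:
    """Round the configured LoRA rank up to the smallest value vLLM accepts."""
    if lora_rank <= 0:
        raise ValueError(f"lora_rank must be positive, got {lora_rank}")
    for candidate in SUPPORTED_MAX_LORA_RANKS:
        if lora_rank <= candidate:
            return candidate
    raise ValueError(
        f"lora_rank={lora_rank} exceeds the largest assumed max_lora_rank "
        f"({SUPPORTED_MAX_LORA_RANKS[-1]})"
    )

# A rank-48 adapter needs max_lora_rank=64; a rank-32 adapter fits exactly.
assert get_max_lora_rank(48) == 64
assert get_max_lora_rank(32) == 32
```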
… initially (#3907)

### What does this PR do?

This PR fixes a remaining issue that was not fully addressed by #3821 and led to downstream errors such as #3882.

The bug occurs when the configuration `rollout.load_format` is set to `"dummy"`, which causes the initial `base_sync_done` flag to be set to `False`:

<https://github.com/volcengine/verl/blob/ecdaa8d9af75ab064bcbda0d797986a198d752b0/verl/workers/fsdp_workers.py#L619-L620>

As a result, during the first invocation of `update_weights`, the `TensorLoRARequest` is **not** successfully added to the engine:

<https://github.com/volcengine/verl/blob/ecdaa8d9af75ab064bcbda0d797986a198d752b0/verl/workers/rollout/vllm_rollout/vllm_rollout_spmd.py#L613-L625>

Later, when `generate()` is called, the code assumes that the LoRA with the internal ID `VLLM_LORA_INT_ID` is already loaded. This assumption fails in the scenario above, and the engine attempts to reload it using the placeholder `VLLM_LORA_PATH`, leading to `FileNotFoundError` exceptions.

> **Note:** The correct LoRA weights should always be loaded from the custom `TensorLoRARequest`. This is made possible because `verl` manually overrides ("hijacks") the `_load_adapter` method in `vllm`’s `WorkerLoraManager`.

---

### Proposed Fix

We add a safety check in the `generate` method to ensure the LoRA is loaded before constructing the request:

```python
# Add LoRA request
lora_request = None
# Ensure the LoRA is already loaded in the engine
if self.model_config.lora_rank > 0:
    lora_loaded = VLLM_LORA_INT_ID in await self.engine.list_loras()  # <-- NEW
    if lora_loaded:
        lora_request = LoRARequest(
            lora_name=VLLM_LORA_NAME,
            lora_int_id=VLLM_LORA_INT_ID,
            lora_path=VLLM_LORA_PATH,
        )

generator = self.engine.generate(
    prompt=prompt,
    sampling_params=sampling_params,
    request_id=request_id,
    lora_request=lora_request,
)
```

### Checklist Before Starting

- [x] Search for similar PRs. Paste at least one query link here: #3821
- [x] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI)
  - `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`
  - If this PR involves multiple modules, separate them with `,` like `[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
  - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

I've manually reproduced the bug mentioned in #3882 and confirmed that the fix above resolves it. For those interested, there are some experiment runs on [wandb](https://wandb.ai/listar2000/solver-judge-workflow) using the code **after the proposed fix** (and with `load_format = "dummy"`), even though this PR has nothing to do with LoRA performance.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s) if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

- [x] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- [ ] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs). **N/A**
- [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: **N/A: see tests above**
- [x] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
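To make the update path described above concrete, here is a minimal sketch of replacing the rollout engine's adapter under a fixed internal ID: check what is loaded, drop the stale adapter, then register the new one. This is an illustration rather than verl's `update_weights`; the constant value is a placeholder, and the `list_loras` / `remove_lora` / `add_lora` coroutines are assumed to follow vLLM's async LoRA-management API (the same `list_loras` the snippet above awaits), whose exact signatures may vary by version.

```python
# Hedged sketch of the "remove the old adapter, add the new one" flow described in
# these PRs (not verl's actual code). The real weights travel inside verl's custom
# TensorLoRARequest; the fixed ID below is an illustrative placeholder.
VLLM_LORA_INT_ID = 1  # illustrative constant; verl defines its own

async def refresh_lora(engine, new_lora_request) -> None:
    """Replace the rollout engine's LoRA adapter with freshly synced weights."""
    loaded = await engine.list_loras()               # IDs of currently loaded adapters
    if VLLM_LORA_INT_ID in loaded:
        await engine.remove_lora(VLLM_LORA_INT_ID)   # drop the stale adapter first
    await engine.add_lora(new_lora_request)          # register the adapter carrying new weights
```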
…ine#3821)

### What does this PR do?

> Add **concise** overview of what this PR aims to achieve or accomplish. Reference related GitHub issues and PRs that help with the review.

The previous volcengine#3639 addressed the **crashing issues** in `update_weights` of `vLLMAsyncRollout`. However, experiments (see **Test** below) reveal an implicit **off-policy issue**: rollout generation still uses the **base model** instead of the updated **LoRA model**, resulting in degraded performance. We traced this to a bug in `vllm_async_server.vLLMHttpServerBase` that causes a mismatch between LoRA updates and rollout generation. Specifically:

* In `vLLMAsyncRollout`, `update_weights` correctly updates LoRA weights from the FSDP actor to the rollout `AsyncLLM` engine. However, the updated adapter is assigned a random `lora_name` and `lora_int_id` (generated from `time.ns()`), which are not stored—making them hard to reuse.
  https://github.com/volcengine/verl/blob/e94366d46a027d38e48e3a859b745387f131b0ad/verl/workers/rollout/vllm_rollout/vllm_rollout_spmd.py#L595-L604
* During rollout generation, the newly added LoRA adapter is **never used**, due to two issues:
  1. The `vllm_config` used to create `AsyncLLM` lacks a `LoRAConfig` (e.g., `max_lora_rank`), so `AsyncLLM` is not prepared for LoRA-based generation requests. See https://github.com/volcengine/verl/blob/e94366d46a027d38e48e3a859b745387f131b0ad/verl/workers/rollout/vllm_rollout/vllm_async_server.py#L299-L304
  2. When calling `generate` in `vLLMHttpServerBase`, the request to `self.engine` (the `AsyncLLM` instance) **omits any `LoRARequest`**, meaning generation always uses the base model. See https://github.com/volcengine/verl/blob/e94366d46a027d38e48e3a859b745387f131b0ad/verl/workers/rollout/vllm_rollout/vllm_async_server.py#L360

#### Proposed Fixes in this PR

* Standardize and persist `VLLM_LORA_INT_ID` and `VLLM_LORA_NAME` across the training process to consistently locate and apply the updated LoRA weights.
* Inject a `LoRAConfig` during `AsyncLLM` initialization and ensure `vLLMHttpServerBase` passes a proper `LoRARequest` (identified via `VLLM_LORA_NAME`) during rollout generation (an illustrative engine-configuration sketch follows this description).
* Add utility methods to automatically validate and set `max_lora_rank` in vLLM from `config.actor_rollout_ref.model.lora_rank`, addressing issues like volcengine#3696.

#### Remarks

Special thanks to @sanxing-chen for inspiring this fix with his prior patches. His PR volcengine#3765, while also tackling an issue that hurts LoRA performance, appears to be orthogonal to the issues addressed here.

### Checklist Before Starting

* [x] Search for similar PRs. Paste at least one query link here: volcengine#3639 volcengine#3765
* [x] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI)
  * `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`
  * If this PR involves multiple modules, separate them with `,`, e.g., `[megatron, fsdp, doc]`
  * `{type}` ∈ {`feat`, `fix`, `refactor`, `chore`, `test`}
  * If this PR breaks any API, prepend `[BREAKING]` to the title.
  * Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate with experiments and include results such as training curves or evaluation metrics.

Controlled experiments based on `examples/grpo_trainer/run_qwen2_5-3b_gsm8k_grpo_lora.sh` (see [adapted script](https://gist.github.com/listar2000/43bb0e1d6f0d3c2503922ca2bfee0a6b)) **clearly demonstrate both the issue and the effectiveness of the fix**.

<img width="2528" height="1328" alt="kl-loss" src="https://github.com/user-attachments/assets/008cdace-fc6d-459a-8493-8ddb440c57ec" />
<img width="2528" height="1328" alt="val-reward" src="https://github.com/user-attachments/assets/aa2e13c7-25cc-41cd-a916-d98f134060e6" />

See the full [W&B training log](https://wandb.ai/listar2000/verl-latest-lora). Summary:

* **sync-lora-32** — baseline (synchronous mode).
* **async-lora-32-before-fix** — async LoRA on the `main` branch, showing degraded performance.
* **async-lora-32-no-remove** — ablation variant with the fixes applied **but without removing old LoRA adapters** between updates (showing the importance of removal).
* **async-lora-32-after-fix** — full fix applied, achieving the expected improvement.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s) if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

- [x] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- [x] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs). **Not Applicable**
- [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: **This PR can hardly be covered by regular CI; I instead ran concrete experiments on the GSM8K dataset.**
- [ ] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
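As a rough illustration of the second fix (the engine must be built with a `LoRAConfig` before it accepts LoRA-bearing requests), the sketch below shows the equivalent knobs on a standalone vLLM engine via `AsyncEngineArgs`. This is an assumption about how one would configure an engine directly, not verl's `vllm_config` wiring, and the model name is illustrative.

```python
# Hedged sketch: unless LoRA is enabled at engine-construction time, generation
# requests that carry a LoRARequest are rejected. verl injects the LoRAConfig through
# its vllm_config; the standalone AsyncEngineArgs route below is the rough equivalent.
from vllm.engine.arg_utils import AsyncEngineArgs

engine_args = AsyncEngineArgs(
    model="Qwen/Qwen2.5-3B-Instruct",  # illustrative model
    enable_lora=True,                  # have the engine build a LoRAConfig
    max_loras=1,                       # a single adapter is swapped in place during training
    max_lora_rank=64,                  # must cover the adapter rank, cf. the rank helper above
)
# An async engine built from these args will accept
# generate(..., lora_request=LoRARequest(...)) calls.
```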
… initially (volcengine#3907) ### What does this PR do? This PR fixes a remaining issue that was not fully addressed by volcengine#3821 and led to downstream errors such as volcengine#3882. The bug occurs when the configuration `rollout.load_format` is set to `"dummy"`, which causes the initial `base_sync_done` flag to be set to `False`: <https://github.com/volcengine/verl/blob/a3e92e124f401c36f1afdbe0026ddb213e2beaa3/verl/workers/fsdp_workers.py#L619-L620> As a result, during the first invocation of `update_weights`, the `TensorLoRARequest` is **not** successfully added to the engine: <https://github.com/volcengine/verl/blob/a3e92e124f401c36f1afdbe0026ddb213e2beaa3/verl/workers/rollout/vllm_rollout/vllm_rollout_spmd.py#L613-L625> Later, when `generate()` is called, the code assumes that the LoRA with the internal ID `VLLM_LORA_INT_ID` is already loaded. This assumption fails in the scenario above, and the engine attempts to reload it using the placeholder `VLLM_LORA_PATH`, leading to `FileNotFoundError` exceptions. > **Note:** The correct LoRA weights should always be loaded from the custom `TensorLoRARequest`. This is made possible because `verl` manually overrides (`"hijacks"`) the `_load_adapter` method in `vllm`’s `WorkerLoraManager`. --- ### Proposed Fix We add a safety check in the `generate` method to ensure the LoRA is loaded before constructing the request: ```python # Add LoRA request lora_request = None # Ensure the LoRA is already loaded in the engine if self.model_config.lora_rank > 0: lora_loaded = VLLM_LORA_INT_ID in await self.engine.list_loras() # <-- NEW if lora_loaded: lora_request = LoRARequest( lora_name=VLLM_LORA_NAME, lora_int_id=VLLM_LORA_INT_ID, lora_path=VLLM_LORA_PATH, ) generator = self.engine.generate( prompt=prompt, sampling_params=sampling_params, request_id=request_id, lora_request=lora_request, ) ``` ### Checklist Before Starting - [x] Search for similar PRs. Paste at least one query link here: volcengine#3821 - [x] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI) - `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data` - If this PR involves multiple modules, separate them with `,` like `[megatron, fsdp, doc]` - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test` - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title. - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching` ### Test > For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc. I've manually reproduced the bug mentioned in volcengine#3882, and confirmed that the above fix resolve this bug. For interested, some experiment runs on [wandb](https://wandb.ai/listar2000/solver-judge-workflow) using the code **after the proposed fix** (and with `load_format = "dummy"`) -- even though this PR has nothing to do with LoRA performances. ### API and Usage Example > Demonstrate how the API changes if any, and provide usage example(s) if possible. ```python # Add code snippet or script demonstrating how to use this ``` ### Design & Code Changes > Demonstrate the high-level design if this PR is complex, and list the specific changes. 
### Checklist Before Submitting > [!IMPORTANT] > Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review. - [x] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md). - [x] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always` - [ ] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs). **N/A** - [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: **N/A: see tests above** - [x] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).) --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
…ine#3821) ### What does this PR do? > Add **concise** overview of what this PR aims to achieve or accomplish. Reference related GitHub issues and PRs that help with the review. The previous volcengine#3639 addressed the **crashing issues** in `update_weights` of `vLLMAsyncRollout`. However, experiments (see **Tests** below) reveal an implicit **off-policy issue**: the rollout generation still uses the **base model** instead of the updated **LoRA model**, resulting in degraded performance. We traced this to a bug in `vllm_async_server.vLLMHttpServerBase` causing a mismatch between LoRA updates and rollout generation. Specifically: * In `vLLMAsyncRollout`, `update_weights` correctly updates LoRA weights from the FSDP actor to the rollout `AsyncLLM` engine. However, the updated adapter is assigned a random `lora_name` and `lora_int_id` (generated from `time.ns()`), which are not stored—making them hard to reuse. https://github.com/volcengine/verl/blob/f209c6f656bb8444e1ecd641c1af04231a5a2dec/verl/workers/rollout/vllm_rollout/vllm_rollout_spmd.py#L595-L604 * During rollout generation, the newly added LoRA adapter is **never used** due to two issues: 1. The `vllm_config` used to create `AsyncLLM` lacks a `LoRAConfig` (e.g., `max_lora_rank`), so `AsyncLLM` is not prepared for LoRA-based generation requests. See https://github.com/volcengine/verl/blob/f209c6f656bb8444e1ecd641c1af04231a5a2dec/verl/workers/rollout/vllm_rollout/vllm_async_server.py#L299-L304 2. When calling `generate` in `vLLMHttpServerBase`, the request to `self.engine` (the `AsyncLLM` instance) **omits any `LoRARequest`**, meaning generation always uses the base model. See https://github.com/volcengine/verl/blob/f209c6f656bb8444e1ecd641c1af04231a5a2dec/verl/workers/rollout/vllm_rollout/vllm_async_server.py#L360 #### Proposed Fixes in this PR * Standardize and persist `VLLM_LORA_INT_ID` and `VLLM_LORA_NAME` across the training process to consistently locate and apply updated LoRA weights. * Inject `LoRAConfig` during `AsyncLLM` initialization and ensure `vLLMHttpServerBase` passes a proper `LoRARequest` (identified via `VLLM_LORA_NAME`) during rollout generation. * Add utility methods to automatically validate and set `max_lora_rank` in vLLM from `config.actor_rollout_ref.model.lora_rank`, addressing issues like volcengine#3696 #### Remarks Special thanks to @sanxing-chen for inspiring this fix with his prior patches. Also his PR volcengine#3765 -- while also tackling an issue hurting LoRA performance -- seems to be orthogonal to the issues addressed here. ### Checklist Before Starting * [x] Search for similar PRs. Paste at least one query link here: volcengine#3639 volcengine#3765 * [x] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI) * `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data` * If this PR involves multiple modules, separate them with `,`, e.g., `[megatron, fsdp, doc]` * `{type}` ∈ {`feat`, `fix`, `refactor`, `chore`, `test`} * If this PR breaks any API, prepend `[BREAKING]` to the title. * Example: `[BREAKING][fsdp, megatron] feat: dynamic batching` ### Test > For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate with experiments and include results such as training curves or evaluation metrics. 
Controlled experiments based on `examples/grpo_trainer/run_qwen2_5-3b_gsm8k_grpo_lora.sh` (see [adapted script](https://gist.github.com/listar2000/43bb0e1d6f0d3c2503922ca2bfee0a6b)) **clearly demonstrate both the issue and the effectiveness of the fix**. <img width="2528" height="1328" alt="kl-loss" src="https://github.com/user-attachments/assets/008cdace-fc6d-459a-8493-8ddb440c57ec" /> <img width="2528" height="1328" alt="val-reward" src="https://github.com/user-attachments/assets/aa2e13c7-25cc-41cd-a916-d98f134060e6" /> See the full [W&B training log](https://wandb.ai/listar2000/verl-latest-lora). Summary: * **sync-lora-32** — baseline (synchronous mode). * **async-lora-32-before-fix** — async LoRA on `main` branch, showing degraded performance. * **async-lora-32-no-remove** — ablation variant with fixes applied **but without removing old LoRA adapters** between updates (showing the importance of removal). * **async-lora-32-after-fix** — full fix applied, achieving expected improvement. ### API and Usage Example > Demonstrate how the API changes if any, and provide usage example(s) if possible. ```python # Add code snippet or script demonstrating how to use this ``` ### Design & Code Changes > Demonstrate the high-level design if this PR is complex, and list the specific changes. ### Checklist Before Submitting > [!IMPORTANT] > Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review. - [x] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md). - [x] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always` - [x] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs). **Not Applicable** - [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: **This PR can hardly be covered by regular CI. I instead run concrete experiments with GSM8K dataset.** - [ ] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).) --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
… initially (volcengine#3907) ### What does this PR do? This PR fixes a remaining issue that was not fully addressed by volcengine#3821 and led to downstream errors such as volcengine#3882. The bug occurs when the configuration `rollout.load_format` is set to `"dummy"`, which causes the initial `base_sync_done` flag to be set to `False`: <https://github.com/volcengine/verl/blob/ecdaa8d9af75ab064bcbda0d797986a198d752b0/verl/workers/fsdp_workers.py#L619-L620> As a result, during the first invocation of `update_weights`, the `TensorLoRARequest` is **not** successfully added to the engine: <https://github.com/volcengine/verl/blob/ecdaa8d9af75ab064bcbda0d797986a198d752b0/verl/workers/rollout/vllm_rollout/vllm_rollout_spmd.py#L613-L625> Later, when `generate()` is called, the code assumes that the LoRA with the internal ID `VLLM_LORA_INT_ID` is already loaded. This assumption fails in the scenario above, and the engine attempts to reload it using the placeholder `VLLM_LORA_PATH`, leading to `FileNotFoundError` exceptions. > **Note:** The correct LoRA weights should always be loaded from the custom `TensorLoRARequest`. This is made possible because `verl` manually overrides (`"hijacks"`) the `_load_adapter` method in `vllm`’s `WorkerLoraManager`. --- ### Proposed Fix We add a safety check in the `generate` method to ensure the LoRA is loaded before constructing the request: ```python # Add LoRA request lora_request = None # Ensure the LoRA is already loaded in the engine if self.model_config.lora_rank > 0: lora_loaded = VLLM_LORA_INT_ID in await self.engine.list_loras() # <-- NEW if lora_loaded: lora_request = LoRARequest( lora_name=VLLM_LORA_NAME, lora_int_id=VLLM_LORA_INT_ID, lora_path=VLLM_LORA_PATH, ) generator = self.engine.generate( prompt=prompt, sampling_params=sampling_params, request_id=request_id, lora_request=lora_request, ) ``` ### Checklist Before Starting - [x] Search for similar PRs. Paste at least one query link here: volcengine#3821 - [x] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI) - `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data` - If this PR involves multiple modules, separate them with `,` like `[megatron, fsdp, doc]` - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test` - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title. - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching` ### Test > For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc. I've manually reproduced the bug mentioned in volcengine#3882, and confirmed that the above fix resolve this bug. For interested, some experiment runs on [wandb](https://wandb.ai/listar2000/solver-judge-workflow) using the code **after the proposed fix** (and with `load_format = "dummy"`) -- even though this PR has nothing to do with LoRA performances. ### API and Usage Example > Demonstrate how the API changes if any, and provide usage example(s) if possible. ```python # Add code snippet or script demonstrating how to use this ``` ### Design & Code Changes > Demonstrate the high-level design if this PR is complex, and list the specific changes. 
### Checklist Before Submitting > [!IMPORTANT] > Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review. - [x] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md). - [x] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always` - [ ] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs). **N/A** - [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: **N/A: see tests above** - [x] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).) --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
…ine#3821) ### What does this PR do? > Add **concise** overview of what this PR aims to achieve or accomplish. Reference related GitHub issues and PRs that help with the review. The previous volcengine#3639 addressed the **crashing issues** in `update_weights` of `vLLMAsyncRollout`. However, experiments (see **Tests** below) reveal an implicit **off-policy issue**: the rollout generation still uses the **base model** instead of the updated **LoRA model**, resulting in degraded performance. We traced this to a bug in `vllm_async_server.vLLMHttpServerBase` causing a mismatch between LoRA updates and rollout generation. Specifically: * In `vLLMAsyncRollout`, `update_weights` correctly updates LoRA weights from the FSDP actor to the rollout `AsyncLLM` engine. However, the updated adapter is assigned a random `lora_name` and `lora_int_id` (generated from `time.ns()`), which are not stored—making them hard to reuse. https://github.com/volcengine/verl/blob/f209c6f656bb8444e1ecd641c1af04231a5a2dec/verl/workers/rollout/vllm_rollout/vllm_rollout_spmd.py#L595-L604 * During rollout generation, the newly added LoRA adapter is **never used** due to two issues: 1. The `vllm_config` used to create `AsyncLLM` lacks a `LoRAConfig` (e.g., `max_lora_rank`), so `AsyncLLM` is not prepared for LoRA-based generation requests. See https://github.com/volcengine/verl/blob/f209c6f656bb8444e1ecd641c1af04231a5a2dec/verl/workers/rollout/vllm_rollout/vllm_async_server.py#L299-L304 2. When calling `generate` in `vLLMHttpServerBase`, the request to `self.engine` (the `AsyncLLM` instance) **omits any `LoRARequest`**, meaning generation always uses the base model. See https://github.com/volcengine/verl/blob/f209c6f656bb8444e1ecd641c1af04231a5a2dec/verl/workers/rollout/vllm_rollout/vllm_async_server.py#L360 #### Proposed Fixes in this PR * Standardize and persist `VLLM_LORA_INT_ID` and `VLLM_LORA_NAME` across the training process to consistently locate and apply updated LoRA weights. * Inject `LoRAConfig` during `AsyncLLM` initialization and ensure `vLLMHttpServerBase` passes a proper `LoRARequest` (identified via `VLLM_LORA_NAME`) during rollout generation. * Add utility methods to automatically validate and set `max_lora_rank` in vLLM from `config.actor_rollout_ref.model.lora_rank`, addressing issues like volcengine#3696 #### Remarks Special thanks to @sanxing-chen for inspiring this fix with his prior patches. Also his PR volcengine#3765 -- while also tackling an issue hurting LoRA performance -- seems to be orthogonal to the issues addressed here. ### Checklist Before Starting * [x] Search for similar PRs. Paste at least one query link here: volcengine#3639 volcengine#3765 * [x] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI) * `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data` * If this PR involves multiple modules, separate them with `,`, e.g., `[megatron, fsdp, doc]` * `{type}` ∈ {`feat`, `fix`, `refactor`, `chore`, `test`} * If this PR breaks any API, prepend `[BREAKING]` to the title. * Example: `[BREAKING][fsdp, megatron] feat: dynamic batching` ### Test > For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate with experiments and include results such as training curves or evaluation metrics. 
Controlled experiments based on `examples/grpo_trainer/run_qwen2_5-3b_gsm8k_grpo_lora.sh` (see [adapted script](https://gist.github.com/listar2000/43bb0e1d6f0d3c2503922ca2bfee0a6b)) **clearly demonstrate both the issue and the effectiveness of the fix**. <img width="2528" height="1328" alt="kl-loss" src="https://github.com/user-attachments/assets/008cdace-fc6d-459a-8493-8ddb440c57ec" /> <img width="2528" height="1328" alt="val-reward" src="https://github.com/user-attachments/assets/aa2e13c7-25cc-41cd-a916-d98f134060e6" /> See the full [W&B training log](https://wandb.ai/listar2000/verl-latest-lora). Summary: * **sync-lora-32** — baseline (synchronous mode). * **async-lora-32-before-fix** — async LoRA on `main` branch, showing degraded performance. * **async-lora-32-no-remove** — ablation variant with fixes applied **but without removing old LoRA adapters** between updates (showing the importance of removal). * **async-lora-32-after-fix** — full fix applied, achieving expected improvement. ### API and Usage Example > Demonstrate how the API changes if any, and provide usage example(s) if possible. ```python # Add code snippet or script demonstrating how to use this ``` ### Design & Code Changes > Demonstrate the high-level design if this PR is complex, and list the specific changes. ### Checklist Before Submitting > [!IMPORTANT] > Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review. - [x] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md). - [x] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always` - [x] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs). **Not Applicable** - [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: **This PR can hardly be covered by regular CI. I instead run concrete experiments with GSM8K dataset.** - [ ] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).) --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
…ine#3821) ### What does this PR do? > Add **concise** overview of what this PR aims to achieve or accomplish. Reference related GitHub issues and PRs that help with the review. The previous volcengine#3639 addressed the **crashing issues** in `update_weights` of `vLLMAsyncRollout`. However, experiments (see **Tests** below) reveal an implicit **off-policy issue**: the rollout generation still uses the **base model** instead of the updated **LoRA model**, resulting in degraded performance. We traced this to a bug in `vllm_async_server.vLLMHttpServerBase` causing a mismatch between LoRA updates and rollout generation. Specifically: * In `vLLMAsyncRollout`, `update_weights` correctly updates LoRA weights from the FSDP actor to the rollout `AsyncLLM` engine. However, the updated adapter is assigned a random `lora_name` and `lora_int_id` (generated from `time.ns()`), which are not stored—making them hard to reuse. https://github.com/volcengine/verl/blob/8044584e32516719811c16523eb10cb0bcb906e6/verl/workers/rollout/vllm_rollout/vllm_rollout_spmd.py#L595-L604 * During rollout generation, the newly added LoRA adapter is **never used** due to two issues: 1. The `vllm_config` used to create `AsyncLLM` lacks a `LoRAConfig` (e.g., `max_lora_rank`), so `AsyncLLM` is not prepared for LoRA-based generation requests. See https://github.com/volcengine/verl/blob/8044584e32516719811c16523eb10cb0bcb906e6/verl/workers/rollout/vllm_rollout/vllm_async_server.py#L299-L304 2. When calling `generate` in `vLLMHttpServerBase`, the request to `self.engine` (the `AsyncLLM` instance) **omits any `LoRARequest`**, meaning generation always uses the base model. See https://github.com/volcengine/verl/blob/8044584e32516719811c16523eb10cb0bcb906e6/verl/workers/rollout/vllm_rollout/vllm_async_server.py#L360 #### Proposed Fixes in this PR * Standardize and persist `VLLM_LORA_INT_ID` and `VLLM_LORA_NAME` across the training process to consistently locate and apply updated LoRA weights. * Inject `LoRAConfig` during `AsyncLLM` initialization and ensure `vLLMHttpServerBase` passes a proper `LoRARequest` (identified via `VLLM_LORA_NAME`) during rollout generation. * Add utility methods to automatically validate and set `max_lora_rank` in vLLM from `config.actor_rollout_ref.model.lora_rank`, addressing issues like volcengine#3696 #### Remarks Special thanks to @sanxing-chen for inspiring this fix with his prior patches. Also his PR volcengine#3765 -- while also tackling an issue hurting LoRA performance -- seems to be orthogonal to the issues addressed here. ### Checklist Before Starting * [x] Search for similar PRs. Paste at least one query link here: volcengine#3639 volcengine#3765 * [x] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI) * `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data` * If this PR involves multiple modules, separate them with `,`, e.g., `[megatron, fsdp, doc]` * `{type}` ∈ {`feat`, `fix`, `refactor`, `chore`, `test`} * If this PR breaks any API, prepend `[BREAKING]` to the title. * Example: `[BREAKING][fsdp, megatron] feat: dynamic batching` ### Test > For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate with experiments and include results such as training curves or evaluation metrics. 
Controlled experiments based on `examples/grpo_trainer/run_qwen2_5-3b_gsm8k_grpo_lora.sh` (see [adapted script](https://gist.github.com/listar2000/43bb0e1d6f0d3c2503922ca2bfee0a6b)) **clearly demonstrate both the issue and the effectiveness of the fix**. <img width="2528" height="1328" alt="kl-loss" src="https://github.com/user-attachments/assets/008cdace-fc6d-459a-8493-8ddb440c57ec" /> <img width="2528" height="1328" alt="val-reward" src="https://github.com/user-attachments/assets/aa2e13c7-25cc-41cd-a916-d98f134060e6" /> See the full [W&B training log](https://wandb.ai/listar2000/verl-latest-lora). Summary: * **sync-lora-32** — baseline (synchronous mode). * **async-lora-32-before-fix** — async LoRA on `main` branch, showing degraded performance. * **async-lora-32-no-remove** — ablation variant with fixes applied **but without removing old LoRA adapters** between updates (showing the importance of removal). * **async-lora-32-after-fix** — full fix applied, achieving expected improvement. ### API and Usage Example > Demonstrate how the API changes if any, and provide usage example(s) if possible. ```python # Add code snippet or script demonstrating how to use this ``` ### Design & Code Changes > Demonstrate the high-level design if this PR is complex, and list the specific changes. ### Checklist Before Submitting > [!IMPORTANT] > Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review. - [x] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md). - [x] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always` - [x] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs). **Not Applicable** - [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: **This PR can hardly be covered by regular CI. I instead run concrete experiments with GSM8K dataset.** - [ ] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).) --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
… initially (volcengine#3907) ### What does this PR do? This PR fixes a remaining issue that was not fully addressed by volcengine#3821 and led to downstream errors such as volcengine#3882. The bug occurs when the configuration `rollout.load_format` is set to `"dummy"`, which causes the initial `base_sync_done` flag to be set to `False`: <https://github.com/volcengine/verl/blob/04dc5f438b0e35f20dd3fdda7df06970ea754cb4/verl/workers/fsdp_workers.py#L619-L620> As a result, during the first invocation of `update_weights`, the `TensorLoRARequest` is **not** successfully added to the engine: <https://github.com/volcengine/verl/blob/04dc5f438b0e35f20dd3fdda7df06970ea754cb4/verl/workers/rollout/vllm_rollout/vllm_rollout_spmd.py#L613-L625> Later, when `generate()` is called, the code assumes that the LoRA with the internal ID `VLLM_LORA_INT_ID` is already loaded. This assumption fails in the scenario above, and the engine attempts to reload it using the placeholder `VLLM_LORA_PATH`, leading to `FileNotFoundError` exceptions. > **Note:** The correct LoRA weights should always be loaded from the custom `TensorLoRARequest`. This is made possible because `verl` manually overrides (`"hijacks"`) the `_load_adapter` method in `vllm`’s `WorkerLoraManager`. --- ### Proposed Fix We add a safety check in the `generate` method to ensure the LoRA is loaded before constructing the request: ```python # Add LoRA request lora_request = None # Ensure the LoRA is already loaded in the engine if self.model_config.lora_rank > 0: lora_loaded = VLLM_LORA_INT_ID in await self.engine.list_loras() # <-- NEW if lora_loaded: lora_request = LoRARequest( lora_name=VLLM_LORA_NAME, lora_int_id=VLLM_LORA_INT_ID, lora_path=VLLM_LORA_PATH, ) generator = self.engine.generate( prompt=prompt, sampling_params=sampling_params, request_id=request_id, lora_request=lora_request, ) ``` ### Checklist Before Starting - [x] Search for similar PRs. Paste at least one query link here: volcengine#3821 - [x] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI) - `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data` - If this PR involves multiple modules, separate them with `,` like `[megatron, fsdp, doc]` - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test` - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title. - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching` ### Test > For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc. I've manually reproduced the bug mentioned in volcengine#3882, and confirmed that the above fix resolve this bug. For interested, some experiment runs on [wandb](https://wandb.ai/listar2000/solver-judge-workflow) using the code **after the proposed fix** (and with `load_format = "dummy"`) -- even though this PR has nothing to do with LoRA performances. ### API and Usage Example > Demonstrate how the API changes if any, and provide usage example(s) if possible. ```python # Add code snippet or script demonstrating how to use this ``` ### Design & Code Changes > Demonstrate the high-level design if this PR is complex, and list the specific changes. 
### Checklist Before Submitting > [!IMPORTANT] > Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review. - [x] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md). - [x] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always` - [ ] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs). **N/A** - [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: **N/A: see tests above** - [x] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).) --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
…ine#3821) ### What does this PR do? > Add **concise** overview of what this PR aims to achieve or accomplish. Reference related GitHub issues and PRs that help with the review. The previous volcengine#3639 addressed the **crashing issues** in `update_weights` of `vLLMAsyncRollout`. However, experiments (see **Tests** below) reveal an implicit **off-policy issue**: the rollout generation still uses the **base model** instead of the updated **LoRA model**, resulting in degraded performance. We traced this to a bug in `vllm_async_server.vLLMHttpServerBase` causing a mismatch between LoRA updates and rollout generation. Specifically: * In `vLLMAsyncRollout`, `update_weights` correctly updates LoRA weights from the FSDP actor to the rollout `AsyncLLM` engine. However, the updated adapter is assigned a random `lora_name` and `lora_int_id` (generated from `time.ns()`), which are not stored—making them hard to reuse. https://github.com/volcengine/verl/blob/f209c6f656bb8444e1ecd641c1af04231a5a2dec/verl/workers/rollout/vllm_rollout/vllm_rollout_spmd.py#L595-L604 * During rollout generation, the newly added LoRA adapter is **never used** due to two issues: 1. The `vllm_config` used to create `AsyncLLM` lacks a `LoRAConfig` (e.g., `max_lora_rank`), so `AsyncLLM` is not prepared for LoRA-based generation requests. See https://github.com/volcengine/verl/blob/f209c6f656bb8444e1ecd641c1af04231a5a2dec/verl/workers/rollout/vllm_rollout/vllm_async_server.py#L299-L304 2. When calling `generate` in `vLLMHttpServerBase`, the request to `self.engine` (the `AsyncLLM` instance) **omits any `LoRARequest`**, meaning generation always uses the base model. See https://github.com/volcengine/verl/blob/f209c6f656bb8444e1ecd641c1af04231a5a2dec/verl/workers/rollout/vllm_rollout/vllm_async_server.py#L360 #### Proposed Fixes in this PR * Standardize and persist `VLLM_LORA_INT_ID` and `VLLM_LORA_NAME` across the training process to consistently locate and apply updated LoRA weights. * Inject `LoRAConfig` during `AsyncLLM` initialization and ensure `vLLMHttpServerBase` passes a proper `LoRARequest` (identified via `VLLM_LORA_NAME`) during rollout generation. * Add utility methods to automatically validate and set `max_lora_rank` in vLLM from `config.actor_rollout_ref.model.lora_rank`, addressing issues like volcengine#3696 #### Remarks Special thanks to @sanxing-chen for inspiring this fix with his prior patches. Also his PR volcengine#3765 -- while also tackling an issue hurting LoRA performance -- seems to be orthogonal to the issues addressed here. ### Checklist Before Starting * [x] Search for similar PRs. Paste at least one query link here: volcengine#3639 volcengine#3765 * [x] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI) * `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data` * If this PR involves multiple modules, separate them with `,`, e.g., `[megatron, fsdp, doc]` * `{type}` ∈ {`feat`, `fix`, `refactor`, `chore`, `test`} * If this PR breaks any API, prepend `[BREAKING]` to the title. * Example: `[BREAKING][fsdp, megatron] feat: dynamic batching` ### Test > For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate with experiments and include results such as training curves or evaluation metrics. 
Controlled experiments based on `examples/grpo_trainer/run_qwen2_5-3b_gsm8k_grpo_lora.sh` (see [adapted script](https://gist.github.com/listar2000/43bb0e1d6f0d3c2503922ca2bfee0a6b)) **clearly demonstrate both the issue and the effectiveness of the fix**. <img width="2528" height="1328" alt="kl-loss" src="https://github.com/user-attachments/assets/008cdace-fc6d-459a-8493-8ddb440c57ec" /> <img width="2528" height="1328" alt="val-reward" src="https://github.com/user-attachments/assets/aa2e13c7-25cc-41cd-a916-d98f134060e6" /> See the full [W&B training log](https://wandb.ai/listar2000/verl-latest-lora). Summary: * **sync-lora-32** — baseline (synchronous mode). * **async-lora-32-before-fix** — async LoRA on `main` branch, showing degraded performance. * **async-lora-32-no-remove** — ablation variant with fixes applied **but without removing old LoRA adapters** between updates (showing the importance of removal). * **async-lora-32-after-fix** — full fix applied, achieving expected improvement. ### API and Usage Example > Demonstrate how the API changes if any, and provide usage example(s) if possible. ```python # Add code snippet or script demonstrating how to use this ``` ### Design & Code Changes > Demonstrate the high-level design if this PR is complex, and list the specific changes. ### Checklist Before Submitting > [!IMPORTANT] > Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review. - [x] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md). - [x] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always` - [x] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs). **Not Applicable** - [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: **This PR can hardly be covered by regular CI. I instead run concrete experiments with GSM8K dataset.** - [ ] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).) --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
… initially (volcengine#3907) ### What does this PR do? This PR fixes a remaining issue that was not fully addressed by volcengine#3821 and led to downstream errors such as volcengine#3882. The bug occurs when the configuration `rollout.load_format` is set to `"dummy"`, which causes the initial `base_sync_done` flag to be set to `False`: <https://github.com/volcengine/verl/blob/ecdaa8d9af75ab064bcbda0d797986a198d752b0/verl/workers/fsdp_workers.py#L619-L620> As a result, during the first invocation of `update_weights`, the `TensorLoRARequest` is **not** successfully added to the engine: <https://github.com/volcengine/verl/blob/ecdaa8d9af75ab064bcbda0d797986a198d752b0/verl/workers/rollout/vllm_rollout/vllm_rollout_spmd.py#L613-L625> Later, when `generate()` is called, the code assumes that the LoRA with the internal ID `VLLM_LORA_INT_ID` is already loaded. This assumption fails in the scenario above, and the engine attempts to reload it using the placeholder `VLLM_LORA_PATH`, leading to `FileNotFoundError` exceptions. > **Note:** The correct LoRA weights should always be loaded from the custom `TensorLoRARequest`. This is made possible because `verl` manually overrides (`"hijacks"`) the `_load_adapter` method in `vllm`’s `WorkerLoraManager`. --- ### Proposed Fix We add a safety check in the `generate` method to ensure the LoRA is loaded before constructing the request: ```python # Add LoRA request lora_request = None # Ensure the LoRA is already loaded in the engine if self.model_config.lora_rank > 0: lora_loaded = VLLM_LORA_INT_ID in await self.engine.list_loras() # <-- NEW if lora_loaded: lora_request = LoRARequest( lora_name=VLLM_LORA_NAME, lora_int_id=VLLM_LORA_INT_ID, lora_path=VLLM_LORA_PATH, ) generator = self.engine.generate( prompt=prompt, sampling_params=sampling_params, request_id=request_id, lora_request=lora_request, ) ``` ### Checklist Before Starting - [x] Search for similar PRs. Paste at least one query link here: volcengine#3821 - [x] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI) - `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data` - If this PR involves multiple modules, separate them with `,` like `[megatron, fsdp, doc]` - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test` - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title. - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching` ### Test > For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc. I've manually reproduced the bug mentioned in volcengine#3882, and confirmed that the above fix resolve this bug. For interested, some experiment runs on [wandb](https://wandb.ai/listar2000/solver-judge-workflow) using the code **after the proposed fix** (and with `load_format = "dummy"`) -- even though this PR has nothing to do with LoRA performances. ### API and Usage Example > Demonstrate how the API changes if any, and provide usage example(s) if possible. ```python # Add code snippet or script demonstrating how to use this ``` ### Design & Code Changes > Demonstrate the high-level design if this PR is complex, and list the specific changes. 
### Checklist Before Submitting > [!IMPORTANT] > Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review. - [x] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md). - [x] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always` - [ ] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs). **N/A** - [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: **N/A: see tests above** - [x] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).) --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
What does this PR do?
The previous #3639 addressed the crashing issues in `update_weights` of `vLLMAsyncRollout`. However, experiments (see the Test section below) reveal an implicit off-policy issue: rollout generation still uses the base model instead of the updated LoRA model, resulting in degraded performance. We traced this to a bug in `vllm_async_server.vLLMHttpServerBase` that causes a mismatch between LoRA updates and rollout generation. Specifically:
In `vLLMAsyncRollout`, `update_weights` correctly updates LoRA weights from the FSDP actor to the rollout `AsyncLLM` engine. However, the updated adapter is assigned a random `lora_name` and `lora_int_id` (generated from `time.ns()`), which are never stored, so they cannot be referenced again later (see `verl/workers/rollout/vllm_rollout/vllm_rollout_spmd.py`, lines 595 to 604 at commit f209c6f; a rough illustration follows below).
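To make the identity problem concrete, here is a tiny, purely illustrative Python snippet. It is not verl's actual code; the helper name and the modulo are made up. The point is that a clock-derived, unstored adapter ID cannot be matched by a later request, whereas a fixed constant can.

```python
# Purely illustrative: a clock-derived adapter ID changes on every weight sync
# and is never persisted, so a later generate() call cannot name the adapter.
import time


def make_throwaway_lora_id() -> int:
    # Roughly the pre-fix pattern: derive a "unique" int ID from the clock.
    return int(time.time_ns() % 0x7FFFFFFF)


first_sync_id = make_throwaway_lora_id()
time.sleep(0.001)
second_sync_id = make_throwaway_lora_id()

# Every sync registers the adapter under a different, unrecorded ID ...
assert first_sync_id != second_sync_id

# ... while a module-level constant such as VLLM_LORA_INT_ID (introduced by
# this PR) gives the update path and the generate path one shared handle.
VLLM_LORA_INT_ID = 1  # assumed value for illustration
```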
During rollout generation, the newly added LoRA adapter is never used due to two issues:
1. The `vllm_config` used to create `AsyncLLM` lacks a `LoRAConfig` (e.g., `max_lora_rank`), so `AsyncLLM` is not prepared to serve LoRA-based generation requests (see `verl/workers/rollout/vllm_rollout/vllm_async_server.py`, lines 299 to 304 at commit f209c6f).
2. When calling `generate` in `vLLMHttpServerBase`, the request to `self.engine` (the `AsyncLLM` instance) omits any `LoRARequest`, so generation always uses the base model (see `verl/workers/rollout/vllm_rollout/vllm_async_server.py`, line 360 at commit f209c6f). A minimal sketch of these two missing pieces follows below.
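For readers less familiar with vLLM's LoRA API, the sketch below shows what the two missing pieces look like with vLLM's public entry points: LoRA must be enabled when the engine is constructed, and each generation call must carry a `LoRARequest`. It uses the synchronous `LLM` class for brevity rather than verl's `AsyncLLM` wiring, and the model name and adapter path are placeholders.

```python
# Minimal sketch (not verl's code): the two pieces that async rollout was missing.
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

BASE_MODEL = "Qwen/Qwen2.5-3B-Instruct"   # placeholder base model
ADAPTER_PATH = "/path/to/lora_adapter"    # placeholder adapter directory

# Piece 1: the engine must be built with LoRA support; this corresponds to the
# LoRAConfig / max_lora_rank that was absent from vllm_config for AsyncLLM.
llm = LLM(model=BASE_MODEL, enable_lora=True, max_lora_rank=32)

# Piece 2: every generation request must reference the adapter explicitly;
# without a LoRARequest, vLLM silently serves the base model.
lora_request = LoRARequest(lora_name="my_adapter", lora_int_id=1, lora_path=ADAPTER_PATH)
outputs = llm.generate(
    ["Natalia sold clips to 48 of her friends in April ..."],
    SamplingParams(temperature=0.7, max_tokens=128),
    lora_request=lora_request,
)
print(outputs[0].outputs[0].text)
```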
Proposed Fixes in this PR
* Standardize and persist `VLLM_LORA_INT_ID` and `VLLM_LORA_NAME` across the training process to consistently locate and apply the updated LoRA weights.
* Inject a `LoRAConfig` during `AsyncLLM` initialization and ensure `vLLMHttpServerBase` passes a proper `LoRARequest` (identified via `VLLM_LORA_NAME`) during rollout generation.
* Add utility methods to automatically validate and set `max_lora_rank` in vLLM from `config.actor_rollout_ref.model.lora_rank`, addressing issues like "Unable to Create Loras of Rank 1" (#3696); a hedged sketch of these pieces follows below.
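To make the fix list concrete, here is a hedged sketch of how the standardized identifiers, the adapter-refresh flow, and the `max_lora_rank` utility could fit together. The constant names match those mentioned above, but their values, the helper names, the allowed-rank set, and the `engine` handle (assumed to expose the async `add_lora`/`remove_lora`/`list_loras` interface used elsewhere in this PR) are illustrative assumptions, not verl's exact implementation.

```python
# Hedged sketch: a fixed LoRA identity shared by weight sync and rollout generation.
from vllm.lora.request import LoRARequest

VLLM_LORA_NAME = "verl_lora"        # assumed value; what matters is that it is fixed
VLLM_LORA_INT_ID = 1                # assumed value; reused on every weight sync
# Placeholder path: verl overrides the adapter-loading hook so real weights come
# from the in-memory TensorLoRARequest, never from disk.
VLLM_LORA_PATH = "/dummy/lora/path"

# vLLM only accepts particular max_lora_rank values; the set below is an
# assumption based on current releases, so the helper rounds the configured
# rank up to the nearest supported value (cf. issue #3696, rank 1).
_SUPPORTED_MAX_LORA_RANKS = (8, 16, 32, 64, 128, 256)


def resolve_max_lora_rank(lora_rank: int) -> int:
    for supported in _SUPPORTED_MAX_LORA_RANKS:
        if lora_rank <= supported:
            return supported
    raise ValueError(f"lora_rank={lora_rank} exceeds vLLM's supported maximum")


async def refresh_lora(engine, tensor_lora_request) -> None:
    """Drop the stale adapter, then register freshly synced weights under the fixed ID."""
    # The "async-lora-32-no-remove" ablation in the Test section shows why the
    # explicit removal matters.
    if VLLM_LORA_INT_ID in await engine.list_loras():
        await engine.remove_lora(VLLM_LORA_INT_ID)
    await engine.add_lora(tensor_lora_request)


def rollout_lora_request() -> LoRARequest:
    """The request each rollout generation call passes so it uses the adapter."""
    return LoRARequest(
        lora_name=VLLM_LORA_NAME,
        lora_int_id=VLLM_LORA_INT_ID,
        lora_path=VLLM_LORA_PATH,
    )
```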
Remarks
Special thanks to @sanxing-chen for inspiring this fix with his prior patches. His PR #3765, while also tackling an issue that hurts LoRA performance, appears orthogonal to the issues addressed here.
Checklist Before Starting
- [x] Search for similar PRs. Paste at least one query link here: "[rollout,vllm] fix: Add LoRA Loading to Async vLLM" (#3639), "fix: gradient vanish in FusedLinearForPPO" (#3765)
- [x] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI)
  * `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`
  * If this PR involves multiple modules, separate them with `,`, e.g., `[megatron, fsdp, doc]`
  * `{type}` ∈ {`feat`, `fix`, `refactor`, `chore`, `test`}
  * If this PR breaks any API, prepend `[BREAKING]` to the title.
  * Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`
Test
Controlled experiments based on `examples/grpo_trainer/run_qwen2_5-3b_gsm8k_grpo_lora.sh` (see the [adapted script](https://gist.github.com/listar2000/43bb0e1d6f0d3c2503922ca2bfee0a6b)) clearly demonstrate both the issue and the effectiveness of the fix; the attached kl-loss and val-reward plots and the full [W&B training log](https://wandb.ai/listar2000/verl-latest-lora) show the comparison.
Summary:
* **sync-lora-32**: baseline (synchronous mode).
* **async-lora-32-before-fix**: async LoRA on the `main` branch, showing degraded performance.
* **async-lora-32-no-remove**: ablation variant with the fixes applied but without removing old LoRA adapters between updates (showing the importance of removal).
* **async-lora-32-after-fix**: full fix applied, achieving the expected improvement.
API and Usage Example
`# Add code snippet or script demonstrating how to use this`
Design & Code Changes
Checklist Before Submitting
Important
Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.
- [x] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- [x] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs). Not applicable.
- [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: this PR can hardly be covered by regular CI, so concrete experiments on the GSM8K dataset were run instead.
- [ ] Once your PR is ready for CI, send a message in the [`ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in the [`verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)