Conversation

@listar2000 (Contributor) commented Oct 19, 2025

### What does this PR do?

> Add **concise** overview of what this PR aims to achieve or accomplish. Reference related GitHub issues and PRs that help with the review.

The previous #3639 addressed the **crashing issues** in `update_weights` of `vLLMAsyncRollout`. However, experiments (see **Tests** below) reveal an implicit **off-policy issue**: the rollout generation still uses the **base model** instead of the updated **LoRA model**, resulting in degraded performance. We traced this to a bug in `vllm_async_server.vLLMHttpServerBase` causing a mismatch between LoRA updates and rollout generation. Specifically:

* In `vLLMAsyncRollout`, `update_weights` correctly updates LoRA weights from the FSDP actor to the rollout `AsyncLLM` engine. However, the updated adapter is assigned a random `lora_name` and `lora_int_id` (generated from `time.time_ns()`), which are not stored, making them hard to reuse:

  ```python
  if peft_config and base_sync_done:
      lora_int_id = int(time.time_ns() % 0x7FFFFFFF)
      lora_reqest = TensorLoRARequest(
          lora_name=f"{lora_int_id}",
          lora_int_id=lora_int_id,
          lora_path="simon_lora_path",
          peft_config=asdict(peft_config),
          lora_tensors=dict(weights),
      )
      self.inference_engine.worker.add_lora(lora_reqest)
  ```

* During rollout generation, the newly added LoRA adapter is **never used** due to two issues (a short sketch of the missing `LoRAConfig` wiring follows this list):

  1. The `vllm_config` used to create `AsyncLLM` lacks a `LoRAConfig` (e.g., `max_lora_rank`), so `AsyncLLM` is not prepared for LoRA-based generation requests. See

     ```python
     engine_client = AsyncLLM.from_vllm_config(
         vllm_config=vllm_config,
         usage_context=usage_context,
         disable_log_requests=engine_args.disable_log_requests,
         disable_log_stats=engine_args.disable_log_stats,
     )
     ```

  2. When calling `generate` in `vLLMHttpServerBase`, the request to `self.engine` (the `AsyncLLM` instance) **omits any `LoRARequest`**, meaning generation always uses the base model. See

     ```python
     generator = self.engine.generate(prompt=prompt, sampling_params=sampling_params, request_id=request_id)
     ```
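
For context on the first issue: vLLM only accepts LoRA generation requests when the engine config carries a `LoRAConfig`. The snippet below is a minimal sketch (not this PR's literal code) of how such a config could be attached before building the engine; it reuses `AsyncLLM`, `vllm_config`, `usage_context`, and `engine_args` from the quoted snippet above, and `lora_rank` is a placeholder for the value taken from `config.actor_rollout_ref.model.lora_rank`.

```python
# Hedged sketch (not the PR's exact code): attach a LoRAConfig so AsyncLLM
# can serve LoRA-based generation requests. `lora_rank` is a placeholder for
# the rank derived from config.actor_rollout_ref.model.lora_rank.
from vllm.config import LoRAConfig

if lora_rank > 0:
    vllm_config.lora_config = LoRAConfig(
        max_loras=1,              # a single adapter is swapped in place each update
        max_lora_rank=lora_rank,  # may need rounding to a value vLLM accepts;
                                  # this PR adds a helper to validate/set it
    )

engine_client = AsyncLLM.from_vllm_config(
    vllm_config=vllm_config,
    usage_context=usage_context,
    disable_log_requests=engine_args.disable_log_requests,
    disable_log_stats=engine_args.disable_log_stats,
)
```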

#### Proposed Fixes in this PR

* Standardize and persist `VLLM_LORA_INT_ID` and `VLLM_LORA_NAME` across the training process to consistently locate and apply updated LoRA weights.
* Inject `LoRAConfig` during `AsyncLLM` initialization and ensure `vLLMHttpServerBase` passes a proper `LoRARequest` (identified via `VLLM_LORA_NAME`) during rollout generation.
* Add utility methods to automatically validate and set `max_lora_rank` in vLLM from `config.actor_rollout_ref.model.lora_rank`, addressing issues like #3696 ("Unable to Create Loras of Rank 1"). A hedged sketch of the resulting update/generation flow follows this list.
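
To make the above concrete, here is a hedged sketch of the overall flow under these fixes: fixed adapter identifiers shared between weight updates and generation, removal of the stale adapter before the refreshed weights are registered (the ablation in the Test section shows why removal matters), and a `LoRARequest` built from the same identifiers at generation time. This is a simplified illustration rather than the PR's literal code; `TensorLoRARequest` is verl's custom request type quoted earlier, and the constant values are placeholders.

```python
from vllm.lora.request import LoRARequest

# Hedged sketch: fixed, process-wide identifiers (placeholder values).
VLLM_LORA_NAME = "verl_lora_adapter"
VLLM_LORA_INT_ID = 1
VLLM_LORA_PATH = "dummy_lora_path"  # placeholder; the weights travel in lora_tensors

# In update_weights: drop the previous adapter (if present), then register the
# refreshed weights under the same identifiers so generation can always find them.
if peft_config and base_sync_done:
    worker = self.inference_engine.worker
    if VLLM_LORA_INT_ID in worker.list_loras():
        worker.remove_lora(VLLM_LORA_INT_ID)
    worker.add_lora(
        TensorLoRARequest(
            lora_name=VLLM_LORA_NAME,
            lora_int_id=VLLM_LORA_INT_ID,
            lora_path=VLLM_LORA_PATH,
            peft_config=asdict(peft_config),
            lora_tensors=dict(weights),
        )
    )

# In vLLMHttpServerBase.generate: reference the same adapter explicitly.
lora_request = LoRARequest(
    lora_name=VLLM_LORA_NAME,
    lora_int_id=VLLM_LORA_INT_ID,
    lora_path=VLLM_LORA_PATH,
)
generator = self.engine.generate(
    prompt=prompt,
    sampling_params=sampling_params,
    request_id=request_id,
    lora_request=lora_request,
)
```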

#### Remarks

Special thanks to @sanxing-chen for inspiring this fix with his prior patches. His PR #3765, while also tackling an issue that hurts LoRA performance, appears to be orthogonal to the issues addressed here.

### Checklist Before Starting

* [x] Search for similar PRs. Paste at least one query link here: #3639 ([rollout,vllm] fix: Add LoRA Loading to Async vLLM), #3765 (fix: gradient vanish in FusedLinearForPPO)
* [x] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI)
  * `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`
  * If this PR involves multiple modules, separate them with `,`, e.g., `[megatron, fsdp, doc]`
  * `{type}` ∈ {`feat`, `fix`, `refactor`, `chore`, `test`}
  * If this PR breaks any API, prepend `[BREAKING]` to the title.
  * Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate with experiments and include results such as training curves or evaluation metrics.

Controlled experiments based on `examples/grpo_trainer/run_qwen2_5-3b_gsm8k_grpo_lora.sh` (see [adapted script](https://gist.github.com/listar2000/43bb0e1d6f0d3c2503922ca2bfee0a6b)) **clearly demonstrate both the issue and the effectiveness of the fix**.

<img width="2528" height="1328" alt="kl-loss" src="https://github.com/user-attachments/assets/008cdace-fc6d-459a-8493-8ddb440c57ec" />
<img width="2528" height="1328" alt="val-reward" src="https://github.com/user-attachments/assets/aa2e13c7-25cc-41cd-a916-d98f134060e6" />

See the full [W&B training log](https://wandb.ai/listar2000/verl-latest-lora).
Summary:

* **sync-lora-32** — baseline (synchronous mode).
* **async-lora-32-before-fix** — async LoRA on the `main` branch, showing degraded performance.
* **async-lora-32-no-remove** — ablation variant with fixes applied **but without removing old LoRA adapters** between updates (showing the importance of removal).
* **async-lora-32-after-fix** — full fix applied, achieving the expected improvement.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s) if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

- [x] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- [x] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs). **Not Applicable**
- [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: **This PR can hardly be covered by regular CI. I instead run concrete experiments with the GSM8K dataset.**
- [ ] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request effectively resolves a critical issue with LoRA adapters in async vLLM rollouts, where the base model was incorrectly used instead of the updated LoRA model. The fixes, including standardizing LoRA identifiers, ensuring correct engine initialization, and passing LoRARequest during generation, are well-implemented and directly address the root causes. The introduction of constant identifiers and the explicit removal of old adapters are solid improvements. The new utility for calculating max_lora_rank also enhances configuration robustness. I have one suggestion to improve the robustness of this new utility function.

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@listar2000 listar2000 changed the title [rollout,vllm] fix: make LoRA with async vLLM work properly [rollout, vllm] fix: make LoRA with async vLLM work properly Oct 20, 2025
@vermouth1992 vermouth1992 merged commit 1546ce2 into volcengine:main Oct 20, 2025
70 of 72 checks passed
@listar2000 listar2000 deleted the async-lora-fix branch October 20, 2025 17:33
wuxibin89 pushed a commit that referenced this pull request Oct 28, 2025
… initially (#3907)

### What does this PR do?

This PR fixes a remaining issue that was not fully addressed by #3821
and led to downstream errors such as #3882.

The bug occurs when the configuration `rollout.load_format` is set to
`"dummy"`, which causes the initial `base_sync_done` flag to be set to
`False`:


<https://github.com/volcengine/verl/blob/ecdaa8d9af75ab064bcbda0d797986a198d752b0/verl/workers/fsdp_workers.py#L619-L620>

As a result, during the first invocation of `update_weights`, the
`TensorLoRARequest` is **not** successfully added to the engine:


<https://github.com/volcengine/verl/blob/ecdaa8d9af75ab064bcbda0d797986a198d752b0/verl/workers/rollout/vllm_rollout/vllm_rollout_spmd.py#L613-L625>

Later, when `generate()` is called, the code assumes that the LoRA with
the internal ID `VLLM_LORA_INT_ID` is already loaded. This assumption
fails in the scenario above, and the engine attempts to reload it using
the placeholder `VLLM_LORA_PATH`, leading to `FileNotFoundError`
exceptions.

> **Note:** The correct LoRA weights should always be loaded from the
custom `TensorLoRARequest`. This is made possible because `verl`
manually overrides (`"hijacks"`) the `_load_adapter` method in `vllm`’s
`WorkerLoraManager`.
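
As a rough illustration of the "hijack" mentioned in the note above, the sketch below shows the general shape of overriding `_load_adapter` so that adapters carried in-memory by a `TensorLoRARequest` are used instead of reading files from `lora_path`. This is not verl's actual implementation: `build_lora_model_from_tensors` is a hypothetical helper standing in for verl's conversion logic, and the import path is assumed from vLLM's `WorkerLoRAManager`.

```python
# Hedged sketch only: override WorkerLoRAManager._load_adapter so adapters with
# in-memory tensors skip the disk-loading path. `build_lora_model_from_tensors`
# is a hypothetical placeholder, not a real verl/vLLM function.
from vllm.lora.worker_manager import WorkerLoRAManager

_original_load_adapter = WorkerLoRAManager._load_adapter

def _patched_load_adapter(self, lora_request):
    tensors = getattr(lora_request, "lora_tensors", None)
    if tensors is not None:
        # Build the adapter directly from the tensors shipped with the request.
        return build_lora_model_from_tensors(self, lora_request, tensors)
    # Fall back to the stock behavior (load adapter files from lora_path).
    return _original_load_adapter(self, lora_request)

WorkerLoRAManager._load_adapter = _patched_load_adapter
```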

---

### Proposed Fix

We add a safety check in the `generate` method to ensure the LoRA is
loaded before constructing the request:

```python
# Add LoRA request
lora_request = None
# Ensure the LoRA is already loaded in the engine
if self.model_config.lora_rank > 0:
    lora_loaded = VLLM_LORA_INT_ID in await self.engine.list_loras()  # <-- NEW
    if lora_loaded:
        lora_request = LoRARequest(
            lora_name=VLLM_LORA_NAME,
            lora_int_id=VLLM_LORA_INT_ID,
            lora_path=VLLM_LORA_PATH,
        )

generator = self.engine.generate(
    prompt=prompt,
    sampling_params=sampling_params,
    request_id=request_id,
    lora_request=lora_request,
)
```

### Checklist Before Starting

- [x] Search for similar PRs. Paste at least one query link here: #3821 
- [x] Format the PR title as `[{modules}] {type}: {description}` (This
will be checked by the CI)
- `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`,
`trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`,
`ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`,
`env`, `tool`, `ckpt`, `doc`, `data`
- If this PR involves multiple modules, separate them with `,` like
`[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
- If this PR breaks any API (CLI arguments, config, function signature,
etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that can not be tested by CI (e.g., algorithm
implementation, new model support), validate by experiment(s) and show
results like training curve plots, evaluation results, etc.

I've manually reproduced the bug mentioned in #3882, and confirmed that
the above fix resolves it.

For those interested, some experiment runs on
[wandb](https://wandb.ai/listar2000/solver-judge-workflow) use the
code **after the proposed fix** (and with `load_format = "dummy"`), even
though this PR has nothing to do with LoRA performance.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s)
if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the
specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review,
otherwise the reviewer might deprioritize this PR for review.

- [x] Read the [Contribute
Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit
checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting):
`pre-commit install && pre-commit run --all-files --show-diff-on-failure
--color=always`
- [ ] Add / Update [the
documentation](https://github.com/volcengine/verl/tree/main/docs).
**N/A**
- [ ] Add unit or end-to-end test(s) to [the CI
workflow](https://github.com/volcengine/verl/tree/main/.github/workflows)
to cover all the code. If not feasible, explain why: **N/A: see tests
above**
- [x] Once your PR is ready for CI, send a message in [the `ci-request`
channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the
`verl` Slack
workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ).
(If not accessible, please try [the Feishu group
(飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
wangboxiong320 pushed a commit to wangboxiong320/verl that referenced this pull request Nov 1, 2025
wangboxiong320 pushed a commit to wangboxiong320/verl that referenced this pull request Nov 1, 2025
NenoL2001 pushed a commit to NenoL2001/verl that referenced this pull request Nov 3, 2025
NenoL2001 pushed a commit to NenoL2001/verl that referenced this pull request Nov 3, 2025
AlexJJ009 pushed a commit to AlexJJ009/verl that referenced this pull request Nov 5, 2025
chenjiaoAngel added a commit to chenjiaoAngel/verl that referenced this pull request Nov 14, 2025
chenjiaoAngel added a commit to chenjiaoAngel/verl that referenced this pull request Nov 14, 2025
chenhaiq pushed a commit to The-Hierophant/verl-1 that referenced this pull request Nov 18, 2025
chenhaiq pushed a commit to The-Hierophant/verl-1 that referenced this pull request Nov 18, 2025
… initially (volcengine#3907)

### What does this PR do?

This PR fixes a remaining issue that was not fully addressed by volcengine#3821
and led to downstream errors such as volcengine#3882.

The bug occurs when the configuration `rollout.load_format` is set to
`"dummy"`, which causes the initial `base_sync_done` flag to be set to
`False`:


<https://github.com/volcengine/verl/blob/ecdaa8d9af75ab064bcbda0d797986a198d752b0/verl/workers/fsdp_workers.py#L619-L620>

As a result, during the first invocation of `update_weights`, the
`TensorLoRARequest` is **not** successfully added to the engine:


<https://github.com/volcengine/verl/blob/ecdaa8d9af75ab064bcbda0d797986a198d752b0/verl/workers/rollout/vllm_rollout/vllm_rollout_spmd.py#L613-L625>

Later, when `generate()` is called, the code assumes that the LoRA with
the internal ID `VLLM_LORA_INT_ID` is already loaded. This assumption
fails in the scenario above, and the engine attempts to reload it using
the placeholder `VLLM_LORA_PATH`, leading to `FileNotFoundError`
exceptions.

> **Note:** The correct LoRA weights should always be loaded from the
custom `TensorLoRARequest`. This is made possible because `verl`
manually overrides (`"hijacks"`) the `_load_adapter` method in `vllm`’s
`WorkerLoraManager`.
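
To make this concrete, the override pattern can be sketched as below. This is **not** verl's actual patch: the vLLM import path and `_load_adapter` method follow vLLM's public module layout, the `TensorLoRARequest` import path is assumed, and `_build_lora_from_tensors` is a hypothetical stand-in for the tensor-loading logic.

```python
# Simplified illustration of the "hijack" pattern described above, not verl's code.
from vllm.lora.worker_manager import WorkerLoRAManager

# Assumed import path for verl's in-memory LoRA request type.
from verl.workers.rollout.vllm_rollout.vllm_rollout_spmd import TensorLoRARequest

_original_load_adapter = WorkerLoRAManager._load_adapter


def _build_lora_from_tensors(manager, lora_request):
    # Hypothetical placeholder: in verl this turns lora_request.lora_tensors and
    # lora_request.peft_config into a vLLM LoRA model object.
    raise NotImplementedError


def _patched_load_adapter(self, lora_request):
    if isinstance(lora_request, TensorLoRARequest):
        # Weights come from memory, so the placeholder lora_path is never read.
        return _build_lora_from_tensors(self, lora_request)
    return _original_load_adapter(self, lora_request)


WorkerLoRAManager._load_adapter = _patched_load_adapter
```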

---

### Proposed Fix

We add a safety check in the `generate` method to ensure the LoRA is
loaded before constructing the request:

```python
# Add LoRA request
lora_request = None
# Ensure the LoRA is already loaded in the engine
if self.model_config.lora_rank > 0:
    lora_loaded = VLLM_LORA_INT_ID in await self.engine.list_loras()  # <-- NEW
    if lora_loaded:
        lora_request = LoRARequest(
            lora_name=VLLM_LORA_NAME,
            lora_int_id=VLLM_LORA_INT_ID,
            lora_path=VLLM_LORA_PATH,
        )

generator = self.engine.generate(
    prompt=prompt,
    sampling_params=sampling_params,
    request_id=request_id,
    lora_request=lora_request,
)
```
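
With this check in place, a request issued before the adapter has been registered simply falls back to the base model rather than hitting the `FileNotFoundError` path described above; once a later `update_weights` call adds the LoRA, subsequent requests should pick it up.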

### Checklist Before Starting

- [x] Search for similar PRs. Paste at least one query link here: volcengine#3821 
- [x] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI)
  - `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`
  - If this PR involves multiple modules, separate them with `,` like `[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
  - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that can not be tested by CI (e.g., algorithm
implementation, new model support), validate by experiment(s) and show
results like training curve plots, evaluation results, etc.

I've manually reproduced the bug mentioned in volcengine#3882 and confirmed that
the above fix resolves it.

For those interested, some experiment runs on
[wandb](https://wandb.ai/listar2000/solver-judge-workflow) use the code
**after the proposed fix** (with `load_format = "dummy"`), even though
this PR is unrelated to LoRA performance.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s)
if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the
specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review,
otherwise the reviewer might deprioritize this PR for review.

- [x] Read the [Contribute
Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit
checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting):
`pre-commit install && pre-commit run --all-files --show-diff-on-failure
--color=always`
- [ ] Add / Update [the
documentation](https://github.com/volcengine/verl/tree/main/docs).
**N/A**
- [ ] Add unit or end-to-end test(s) to [the CI
workflow](https://github.com/volcengine/verl/tree/main/.github/workflows)
to cover all the code. If not feasible, explain why: **N/A: see tests
above**
- [x] Once your PR is ready for CI, send a message in [the `ci-request`
channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the
`verl` Slack
workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ).
(If not accessible, please try [the Feishu group
(飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>