Added sample_kind to ModelOutput, without breaking changes (#168)
Force-pushed from 4b3c32d to 87dc377
This is ready for a final look. I checked that one can take the conda
Force-pushed from dff557b to ea5e7fd
Do not merge before #186: we need at least one release of metatomic-ase that works with an already released version of metatomic-torch.
Force-pushed from 9353d37 to 4df02e3
…lity Co-Authored-By: Guillaume Fraux <guillaume.fraux@epfl.ch>
Force-pushed from a24c33e to b387342
@HaoZeke I don't understand your change: we wanted it to keep working (not make engines fail) but deprecate it, issuing a warning so that the engines get changed before it gets removed. What's the problem with issuing a warning? It doesn't make the code fail, does it?
yeah that's true, but there's also the PR description where I thought we don't even want deprecation warnings, but OK, rebasing on #199 and then extending it |
Aaah ok, no, the goal was to keep it compatible until metatomic introduces compatibility breaks (probably when multiple backends are supported), and at that point just remove it.
…nd migrate call sites

Pol's fca4e05 kept the existing `TORCH_WARN_DEPRECATION` in `set_per_atom` / `get_per_atom` intentionally: the new `sample_kind` API is preferred and `per_atom` is the deprecated shim. This commit mirrors the conftest/filterwarnings pattern the morfixes branch uses for `ModelOutput.quantity`, applied only to `per_atom` (no overlap with #199; this PR does not depend on morfixes).

Tests / packaging

- `tests/conftest.py` in all three subpackages: regex-strip the C++ `TORCH_WARN_DEPRECATION` line for `per_atom` from stderr in the autouse `fail_test_with_output` fixture. The regex tolerates both Torch 2.3 (`[W model.cpp:N]`) and Torch 2.11+ (`[W<MMDD> HH:MM:SS.fractZ model.cpp:N]`) prefixes. Any other unexpected stderr still fails the test.
- `pyproject.toml` in all three subpackages: ignore the matching Python `DeprecationWarning` so tests exercising the field directly do not blow up under `-W error`.
- `tests/outputs.py`: drop the two `assert message in captured.err` sections from `test_sample_kind`. On Torch 2.3 the C++ warning lands on stderr directly and the conftest regex scrubs it; on Torch 2.11 `PyErr_WarnEx` plus the filterwarnings ignore leaves stderr empty. Asserting a specific stderr shape mid-test was torch-version-brittle. The `pytest.warns(match=...)` guards remain, so the shim is still verified. Drop the now-unused `capfd` parameter.

Internal call site migration

- `metatomic_torch/metatomic/torch/heat_flux.py`: the four `ModelOutput(..., per_atom=...)` call sites for the masses/velocities/energies/heat_flux outputs all use `sample_kind="atom"/"system"` now, so the wrapper does not emit warnings from its own construction path.
- `metatomic_torch/metatomic/torch/model.py`: the dedup check in `_get_requested_inputs` compares `sample_kind` instead of reading the deprecated `per_atom` property.
- `metatomic_torch/tests/heat_flux.py`, `metatomic_ase/tests/heat_flux.py`, `metatomic_ase/tests/calculator.py::test_inputs_different_units`: migrate all `ModelOutput(..., per_atom=...)` to `sample_kind`.
```python
# C++ TORCH_WARN_DEPRECATION writes to stderr; filter the known per_atom warning.
# Torch 2.3 emits "[W model.cpp:N] ..." (no timestamp); Torch 2.11+ emits
# "[W<MMDD> HH:MM:SS.fractZ model.cpp:N] ...". The [^\]]* tolerates both.
_PER_ATOM_DEPRECATION_RE = re.compile(
    r"\[W[^\]]*model\.cpp:\d+\] Warning: `per_atom` is deprecated, "
    r"please use `sample_kind` instead.*?\n"
)
```
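As a quick sanity check of the scrub described above, here is a hedged sketch (the helper name and the timestamp are illustrative, not the repo's actual fixture code) showing the regex matching both Torch prefix shapes while still flagging any other stderr output:

```python
import re

# Same pattern as in the diff above; reproduced to keep the sketch self-contained.
_PER_ATOM_DEPRECATION_RE = re.compile(
    r"\[W[^\]]*model\.cpp:\d+\] Warning: `per_atom` is deprecated, "
    r"please use `sample_kind` instead.*?\n"
)

def check_stderr_clean(captured_err: str) -> None:
    # Illustrative stand-in for what the fixture does: strip the one known
    # deprecation line, then fail on anything left over.
    leftover = _PER_ATOM_DEPRECATION_RE.sub("", captured_err)
    if leftover.strip():
        raise AssertionError(f"unexpected stderr output: {leftover!r}")

msg = "Warning: `per_atom` is deprecated, please use `sample_kind` instead\n"
check_stderr_clean("[W model.cpp:57] " + msg)                       # Torch 2.3 shape
check_stderr_clean("[W1105 10:22:33.123456Z model.cpp:57] " + msg)  # Torch 2.11+ shape
```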
Like for #199, strong disagree from me on adding an exception for this. Any warning should be handled as close as possible to where it is emitted.
```toml
# ModelOutput.per_atom deprecation (use sample_kind instead)
"ignore:`per_atom` is deprecated, please use `sample_kind` instead:DeprecationWarning",
```
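The pytest filter string has the shape `action:message:category`. A rough Python-level equivalent of what that line does under `-W error` (an illustration of the mechanism, not code from this repo) is:

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("error")  # mimic pytest's -W error baseline
    # Equivalent of the "ignore:...:DeprecationWarning" line above; the
    # message argument is a regex matched against the start of the warning text.
    warnings.filterwarnings(
        "ignore",
        message=r"`per_atom` is deprecated, please use `sample_kind` instead",
        category=DeprecationWarning,
    )
    warnings.warn(
        "`per_atom` is deprecated, please use `sample_kind` instead",
        DeprecationWarning,
    )

# The specific ignore wins over the blanket "error" policy: nothing escalates,
# and nothing is recorded.
```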
Why is this needed? Ideally none of the code in this repo should use the old style, so we should not have to ignore this warning
Review feedback: warnings should be handled as close to their emission point as possible, not via a broad `filterwarnings` ignore or a conftest regex scrub. No code in this repo (outside the shim test itself) uses `ModelOutput.per_atom` after the earlier call-site migrations, so the pyproject filter and the conftest `re.sub` are unnecessary.

- `python/metatomic_*/tests/conftest.py`: revert all three to the original `assert captured.err == ""` form. No regex pre-processing.
- `python/metatomic_*/pyproject.toml`: drop the ``ignore:`per_atom` is deprecated...:DeprecationWarning`` line. Under `-W error` the only place a `per_atom` `DeprecationWarning` can fire is inside the `test_sample_kind` shim test, which wraps every trigger in `pytest.warns(...)`.
- `python/metatomic_torch/tests/outputs.py::test_sample_kind`: wrap the `ModelOutput(per_atom=True)` constructor in `pytest.warns` (the only remaining unwrapped emission); reintroduce the `capfd` parameter and consume the Torch 2.3 C++ `TORCH_WARN_DEPRECATION` stderr spillover at the end of the test, asserting every stray line is the known per_atom message. Torch 2.11+ routes through `PyErr_WarnEx`, so stderr stays empty and the loop is a no-op there.
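The `pytest.warns` wrapping described above amounts to the following plain-warnings check (a sketch with a hypothetical stand-in trigger, not the real `ModelOutput` constructor):

```python
import warnings

def construct_with_per_atom():
    # Hypothetical stand-in for ModelOutput(per_atom=True), the one
    # remaining unwrapped emission mentioned above.
    warnings.warn(
        "`per_atom` is deprecated, please use `sample_kind` instead",
        DeprecationWarning,
        stacklevel=2,
    )

# What pytest.warns(DeprecationWarning, match="per_atom") asserts, spelled
# out with the warnings module: the call emits exactly that warning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    construct_with_per_atom()

assert len(caught) == 1
assert issubclass(caught[0].category, DeprecationWarning)
assert "per_atom" in str(caught[0].message)
```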
The `metatensor/lj-test` repo at the currently-pinned SHAs still constructs every `ModelOutput` with `per_atom=True/False`, which now emits the deprecation warning. Under `-W error` that escalates to a `DeprecationWarning` and kills `tests/heat_flux.py::test_wrap` plus the other heat-flux tests that load the LJ reference model.

Migrate the pin to `HaoZeke/lj-test@f56f6b4`, a direct port of the two previous upstream SHAs with every `ModelOutput(quantity=..., per_atom=...)` rewritten to `ModelOutput(..., sample_kind=...)` and every `outputs[...].per_atom` read rewritten to `outputs[...].sample_kind == "atom"`. See `HaoZeke/lj-test:chore/issue-17-suppress-deprecations` (metatensor/lj-test#17). Bump this back to `metatensor/lj-test@<SHA>` once that branch lands on lj-test main.

Both `install_lj_tests` and the inline `pip install` in the `torchsim-torch-tests` environment are bumped together.
My earlier switch to `HaoZeke/lj-test@f56f6b4` broke every test that computes an energy via the reference LJ model: the fork drops `quantity="energy"` from every `ModelOutput(...)` (forward-looking for the morfixes #199 `quantity` deprecation), which causes `AtomisticModel.forward()` to hit the early `continue` in `if declared.quantity == "" or requested.quantity == "": continue` and skip the `unit_conversion_factor(declared.unit, requested.unit)` step that the engine needs when asking for energy in `"ev"` while the model declares `"eV"`. Downstream ASE tests see ~100x-wrong energies.

`metatensor/lj-test@eea52c6` is the existing upstream sample_kind branch commit: it already migrates every `ModelOutput(per_atom=...)` to `ModelOutput(sample_kind=...)` and every `outputs[...].per_atom` read to `outputs[...].sample_kind == "atom"`, so the runtime deprecation path is not hit, but it keeps `quantity=...` intact so unit conversion still runs. That is the right pin for this PR's scope (per_atom only, no quantity overlap).

Both `install_lj_tests` and the inline `pip install` in the `torch-tests` env are bumped to `eea52c6`. Bump again once that branch lands on lj-test main and gets a stable SHA.
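To make the failure mode concrete, here is a toy sketch (made-up factor table and function shapes, not metatomic's real API) of why an empty quantity bypasses unit conversion entirely:

```python
def unit_conversion_factor(declared_unit: str, requested_unit: str) -> float:
    # Toy table; the real lookup (including alias handling such as
    # "ev" vs "eV") lives in metatomic.
    table = {("ev", "ev"): 1.0, ("ev", "mev"): 1000.0}
    return table[(declared_unit.lower(), requested_unit.lower())]

def convert_output(value, declared_quantity, declared_unit,
                   requested_quantity, requested_unit):
    # Mirrors the early `continue` quoted above: with an empty quantity on
    # either side, the conversion step never runs.
    if declared_quantity == "" or requested_quantity == "":
        return value  # silently returned in the model's native unit
    return value * unit_conversion_factor(declared_unit, requested_unit)

convert_output(2.0, "energy", "eV", "energy", "meV")  # converted: 2000.0
convert_output(2.0, "", "eV", "energy", "meV")        # skipped: 2.0
```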
This PR supersedes #165.
It also implements `sample_kind`, with the short-term goal of enabling per-pair targets, but in this case in a way that doesn't break backward compatibility (new code using the old `ModelOutput` might fail, but old code using the new `ModelOutput` won't). This is achieved by keeping the possibility to pass `per_atom` as an argument, and setting/getting it as a property, even though from now on the only thing `ModelOutput` stores is `sample_kind`. Thanks to @Luthaf for letting me know that one can set optional arguments in torch exportable classes :)
If you agree with merging this one I will finalize it with some more tests ✌️
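The property-shim idea can be sketched in plain Python (illustrative only; the real class is a TorchScript-exportable type and the exact warning plumbing differs):

```python
import warnings

_DEPRECATION_MSG = "`per_atom` is deprecated, please use `sample_kind` instead"

class ModelOutputSketch:
    # Only sample_kind is stored; per_atom survives as a deprecated
    # constructor argument and property mapped onto it.
    def __init__(self, sample_kind=None, per_atom=None):
        if per_atom is not None:
            warnings.warn(_DEPRECATION_MSG, DeprecationWarning, stacklevel=2)
            sample_kind = "atom" if per_atom else "system"
        self.sample_kind = sample_kind if sample_kind is not None else "system"

    @property
    def per_atom(self):
        warnings.warn(_DEPRECATION_MSG, DeprecationWarning, stacklevel=2)
        return self.sample_kind == "atom"
```

Old code passing `per_atom=True` still works (with a warning) and reads back `per_atom == True`; new code sets and reads `sample_kind` directly.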
📚 Download documentation for this pull-request
⚙️ Download Python wheels for this pull-request (you can install these with pip)