
Added sample_kind to ModelOutput, without breaking changes #168

Open

pfebrer wants to merge 5 commits into main from sample_kind

Conversation


@pfebrer pfebrer commented Feb 26, 2026

This PR supersedes #165.

It also implements sample_kind, with the short-term goal of enabling per-pair targets, but this time in a way that doesn't break backward compatibility (new code using an old ModelOutput might fail, but old code using the new ModelOutput won't). This is achieved by keeping the possibility of passing per_atom as an argument and exposing it as a settable/gettable property, even though from now on the only thing ModelOutput stores is sample_kind.
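The shim described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual metatomic implementation: the class name `ModelOutputSketch` and the exact warning behavior are assumptions; only the `per_atom`/`sample_kind` relationship mirrors what the PR describes.

```python
from typing import Optional
import warnings


class ModelOutputSketch:
    """Hypothetical stand-in for ModelOutput: stores only `sample_kind`,
    keeping `per_atom` alive as a derived property for old callers."""

    def __init__(self, sample_kind: str = "system", per_atom: Optional[bool] = None):
        if per_atom is not None:
            # old callers can still pass per_atom; translate it and warn
            warnings.warn(
                "`per_atom` is deprecated, please use `sample_kind` instead",
                DeprecationWarning,
                stacklevel=2,
            )
            sample_kind = "atom" if per_atom else "system"
        self.sample_kind = sample_kind

    @property
    def per_atom(self) -> bool:
        # old readers keep working: the value is derived from sample_kind
        return self.sample_kind == "atom"

    @per_atom.setter
    def per_atom(self, value: bool) -> None:
        self.sample_kind = "atom" if value else "system"
```

With this shape, old code reading or writing `per_atom` keeps working unchanged, while the object itself only ever stores `sample_kind`.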

Thanks to @Luthaf for letting me know that one can set optional arguments in torch exportable classes :)

If you agree with merging this one, I will finalize it with some more tests ✌️

📚 Download documentation for this pull-request

⚙️ Download Python wheels for this pull-request (you can install these with pip)


Comment thread metatomic-torch/include/metatomic/torch/model.hpp Outdated
Comment thread metatomic-torch/src/model.cpp Outdated
Comment thread metatomic-torch/src/outputs.cpp Outdated
@pfebrer pfebrer force-pushed the sample_kind branch 3 times, most recently from 4b3c32d to 87dc377 (March 13, 2026 21:37)
@pfebrer
Author

pfebrer commented Mar 13, 2026

This is ready for a final look. I checked that one can take the conda lammps-metatomic package and use this modified metatomic version without problems to run an MD simulation with a pet-mad.pt model exported using the latest metatomic release. The MD results are exactly the same as when running without the modified version.

Comment thread metatomic-torch/include/metatomic/torch/model.hpp
Comment thread metatomic-torch/include/metatomic/torch/model.hpp Outdated
Comment thread metatomic-torch/src/model.cpp Outdated
Comment thread metatomic-torch/src/outputs.cpp Outdated
@Luthaf
Member

Luthaf commented Mar 25, 2026

Do not merge before #186; we need at least one release of metatomic-ase that works with an already-released version of metatomic-torch.

@Luthaf Luthaf force-pushed the sample_kind branch 3 times, most recently from 9353d37 to 4df02e3 (April 8, 2026 08:42)
…lity

Co-Authored-By: Guillaume Fraux <guillaume.fraux@epfl.ch>
@HaoZeke HaoZeke force-pushed the sample_kind branch 2 times, most recently from a24c33e to b387342 (April 17, 2026 10:45)
@HaoZeke HaoZeke requested a review from Luthaf April 17, 2026 10:56
@pfebrer
Author

pfebrer commented Apr 17, 2026

@HaoZeke I don't understand your change. We wanted it to work (not make engines fail) but deprecate it, issuing a warning so that engines get updated before it is removed. What's the problem with issuing a warning? It does not make the code fail, no?

@HaoZeke
Member

HaoZeke commented Apr 17, 2026

> @HaoZeke I don't understand your change. We wanted it to work (not make engines fail) but deprecate it, issuing a warning so that engines get updated before it is removed. What's the problem with issuing a warning? It does not make the code fail, no?

Yeah, that's true, but there's also the PR description, where I thought we didn't even want deprecation warnings. But OK, rebasing on #199 and then extending it.

@pfebrer
Author

pfebrer commented Apr 17, 2026

Aaah ok, no: the goal was to keep it compatible until metatomic introduces compatibility breaks (probably when multiple backends are supported), and at that point just remove per_atom.

…nd migrate call sites

Pol's fca4e05 kept the existing `TORCH_WARN_DEPRECATION` in
`set_per_atom` / `get_per_atom` intentionally: the new `sample_kind`
API is preferred and `per_atom` is the deprecated shim. This commit
mirrors the conftest/filterwarnings pattern the morfixes branch uses
for `ModelOutput.quantity`, applied only to `per_atom` (no overlap
with #199 -- this PR does not depend on morfixes).

Tests / packaging
- `tests/conftest.py` in all three subpackages: regex-strip the C++
  `TORCH_WARN_DEPRECATION` line for `per_atom` from stderr in the
  autouse `fail_test_with_output` fixture. The regex tolerates both
  Torch 2.3 (`[W model.cpp:N]`) and Torch 2.11+
  (`[W<MMDD> HH:MM:SS.fractZ model.cpp:N]`) prefixes. Any other
  unexpected stderr still fails the test.
- `pyproject.toml` in all three subpackages: ignore the matching
  Python `DeprecationWarning` so tests exercising the field directly
  do not blow up under `-W error`.
- `tests/outputs.py`: drop the two `assert message in captured.err`
  sections from `test_sample_kind`. On Torch 2.3 the C++ warning
  lands on stderr directly and the conftest regex scrubs it; on
  Torch 2.11 `PyErr_WarnEx` + the filterwarnings ignore leaves
  stderr empty. Asserting a specific stderr shape mid-test was
  torch-version-brittle. The `pytest.warns(match=...)` guards remain,
  so the shim is still verified. Drop the now-unused `capfd`
  parameter.

Internal call site migration
- `metatomic_torch/metatomic/torch/heat_flux.py`: the four
  `ModelOutput(..., per_atom=...)` call sites for masses/velocities/
  energies/heat_flux output all use `sample_kind="atom"/"system"`
  now, so the wrapper does not emit warnings from its own
  construction path.
- `metatomic_torch/metatomic/torch/model.py`: the dedup check in
  `_get_requested_inputs` compares `sample_kind` instead of reading
  the deprecated `per_atom` property.
- `metatomic_torch/tests/heat_flux.py`, `metatomic_ase/tests/heat_flux.py`,
  `metatomic_ase/tests/calculator.py::test_inputs_different_units`:
  migrate all `ModelOutput(..., per_atom=...)` to `sample_kind`.
Comment thread python/metatomic_ase/tests/conftest.py Outdated
Comment on lines +6 to +14
# C++ TORCH_WARN_DEPRECATION writes to stderr; filter the known per_atom warning.
# Torch 2.3 emits "[W model.cpp:N] ..." (no timestamp); Torch 2.11+ emits
# "[W<MMDD> HH:MM:SS.fractZ model.cpp:N] ...". The [^\]]* tolerates both.
_PER_ATOM_DEPRECATION_RE = re.compile(
    r"\[W[^\]]*model\.cpp:\d+\] Warning: `per_atom` is deprecated, "
    r"please use `sample_kind` instead.*?\n"
)
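For context, a self-contained sketch of how a regex like the one above could be used to scrub the known deprecation line from captured stderr before failing on leftover output. The helper name `scrub_per_atom_warning` is illustrative; the actual conftest applies the substitution inside its `fail_test_with_output` fixture.

```python
import re

# Tolerates both Torch 2.3 ("[W model.cpp:N] ...") and Torch 2.11+
# ("[W<MMDD> HH:MM:SS.fractZ model.cpp:N] ...") warning prefixes.
_PER_ATOM_DEPRECATION_RE = re.compile(
    r"\[W[^\]]*model\.cpp:\d+\] Warning: `per_atom` is deprecated, "
    r"please use `sample_kind` instead.*?\n"
)


def scrub_per_atom_warning(stderr: str) -> str:
    # Remove every occurrence of the known per_atom deprecation line;
    # anything left over is unexpected output and should fail the test.
    return _PER_ATOM_DEPRECATION_RE.sub("", stderr)
```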


Member


Like for #199, I strongly disagree with adding an exception for this. Any warning should be handled as close as possible to where it is emitted.

Comment thread python/metatomic_ase/pyproject.toml Outdated
Comment on lines +47 to +48
# ModelOutput.per_atom deprecation (use sample_kind instead)
"ignore:`per_atom` is deprecated, please use `sample_kind` instead:DeprecationWarning",
Member


Why is this needed? Ideally none of the code in this repo should use the old style, so we should not have to ignore this warning.

HaoZeke added 2 commits April 17, 2026 15:53
Review feedback: warnings should be handled as close to their emission
point as possible, not via a broad `filterwarnings` ignore or a
conftest regex scrub. No code in this repo (outside the shim test
itself) uses `ModelOutput.per_atom` after the earlier call-site
migrations, so the pyproject filter and the conftest `re.sub` are
unnecessary.

- `python/metatomic_*/tests/conftest.py`: revert all three to the
  original `assert captured.err == ""` form. No regex pre-processing.
- `python/metatomic_*/pyproject.toml`: drop the
  `ignore:`per_atom` is deprecated...:DeprecationWarning` line. Under
  `-W error` the only place a `per_atom` `DeprecationWarning` can fire
  is inside the `test_sample_kind` shim test, which wraps every
  trigger in `pytest.warns(...)`.
- `python/metatomic_torch/tests/outputs.py::test_sample_kind`: wrap
  the `ModelOutput(per_atom=True)` constructor in `pytest.warns` (the
  only remaining unwrapped emission); reintroduce the `capfd`
  parameter and consume the Torch 2.3 C++ `TORCH_WARN_DEPRECATION`
  stderr spillover at the end of the test, asserting every stray line
  is the known per_atom message. Torch 2.11+ routes through
  `PyErr_WarnEx` so stderr stays empty and the loop is a no-op there.
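The `pytest.warns(...)` wrapping mentioned above can be sketched as follows. The warning text matches the one used in this PR, but the emitting function here is a stand-in (the real test constructs `ModelOutput(per_atom=True)`), so this is illustrative rather than the actual test body.

```python
import warnings

import pytest


def emit_per_atom_warning():
    # Stand-in for constructing ModelOutput(per_atom=True), which emits
    # the per_atom deprecation warning through the shim.
    warnings.warn(
        "`per_atom` is deprecated, please use `sample_kind` instead",
        DeprecationWarning,
    )


def test_shim_warns():
    # pytest.warns fails the test if no matching warning is emitted,
    # and swallows the warning so it never escalates under -W error.
    with pytest.warns(DeprecationWarning, match="please use `sample_kind`"):
        emit_per_atom_warning()
```

Because every trigger is wrapped this way, `-W error` never sees an unhandled `DeprecationWarning` from the shim test.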
The `metatensor/lj-test` repo at the currently-pinned SHAs still constructs
every `ModelOutput` with `per_atom=True/False`, which now emits the
deprecation warning. Under `-W error` that escalates to a
`DeprecationWarning` and kills `tests/heat_flux.py::test_wrap` plus the
other heat-flux tests that load the LJ reference model.

Migrate the pin to `HaoZeke/lj-test@f56f6b4`, a direct port of the two
previous upstream SHAs with every `ModelOutput(quantity=..., per_atom=...)`
rewritten to `ModelOutput(..., sample_kind=...)` and every
`outputs[...].per_atom` read rewritten to
`outputs[...].sample_kind == "atom"`. See
`HaoZeke/lj-test:chore/issue-17-suppress-deprecations`
(metatensor/lj-test#17).

Bump this back to `metatensor/lj-test@<SHA>` once that branch lands on
lj-test main. Both `install_lj_tests` and the inline `pip install` in
the `torchsim-torch-tests` environment are bumped together.
My earlier switch to `HaoZeke/lj-test@f56f6b4` broke every test that
computes an energy via the reference LJ model: the fork drops
`quantity="energy"` from every `ModelOutput(...)` (forward-looking for
the morfixes #199 `quantity` deprecation), which causes
`AtomisticModel.forward()` to hit the early `continue` in

    if declared.quantity == "" or requested.quantity == "":
        continue

and skip the `unit_conversion_factor(declared.unit, requested.unit)`
step that the engine needs when asking for energy in `"ev"` while the
model declares `"eV"`. Downstream ASE tests see ~100x-wrong energies.

`metatensor/lj-test@eea52c6` is the existing upstream sample_kind
branch commit: it already migrates every `ModelOutput(per_atom=...)`
to `ModelOutput(sample_kind=...)` and every `outputs[...].per_atom`
read to `outputs[...].sample_kind == "atom"` so the runtime
deprecation path is not hit, but it keeps `quantity=...` intact so
unit conversion still runs. That is the right pin for this PR's
scope (per_atom only, no quantity overlap).

Both `install_lj_tests` and the inline `pip install` in the
`torch-tests` env are bumped to `eea52c6`. Bump again once that
branch lands on lj-test main and gets a stable SHA.
