dsl: Move SubDimension thickness evaluation to the thicknesses themselves #2470
Conversation
FabioLuporini left a comment:
Looks like a very nice clean up!
```python
args.update(d._arg_values(self._dspace[d], grid, **kwargs))

# Process SubDimensionThicknesses
for p in self.parameters:
```
I think this should be triggered from within `SubDimension._arg_values` itself
No, the issue is that the SubDimension isn't necessarily available to the Operator. You can craft legal Operators which contain only the thickness, not the parent SubDimension, hence the reworking in this PR.
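For illustration, a minimal sketch of such an Operator (an assumed example in the spirit of the MFE this PR describes; the names `ix` and `f` are illustrative, and `SubDimension.left` is the standard devito constructor):

```python
from devito import Grid, Function, Eq, Operator, SubDimension

grid = Grid(shape=(11,))
x = grid.dimensions[0]
ix = SubDimension.left(name='ix', parent=x, thickness=3)
f = Function(name='f', grid=grid)

# ix.symbolic_max is an expression over the thickness symbol (x_ltkn),
# so the Operator ends up containing the thickness but never ix itself
op = Operator(Eq(f, ix.symbolic_max))
```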
Why isn't it caught by `self.input`'s default at line 487?
Assuming "it" means the subdomain: you can craft a legal Operator, as in the MFE at the top of this PR, where the Operator never sees the SubDimension, only its thickness symbols.
I don't mean the subdomain, no; I mean the thickness. You're saying you can create an Operator where the thickness (`p` here) is in `op.parameters` but not in `op.input`?
If that's the case, i.e. if it is not in the inputs, then it should not need to be in `args`, since it's not an input.
If it's not the case, then line 487 processes all inputs and should cover that case already.
I tried it, and this is not the case
How is that possible? It's clearly in `parameters`, since you added this; so if you add `is_Input` and it doesn't catch it, then something is broken.
There's no grid in the kwargs at the point where `_arg_values` is called on the inputs, which makes it a no-op. It also needs to be done before objects get `_arg_values` called on them, iirc; hence the current position of this loop.
You could reorder `_prepare_arguments` to process the grid earlier, but my functions-on-subdomains branch has a bunch of reworking here and fixing the merge conflicts would be a pain, so best to stick a pin in it and fix it down the line, imo.
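To make the ordering concrete, here is a rough, hypothetical sketch of the flow being described (illustrative pseudocode, not the actual `Operator._prepare_arguments` source):

```python
def _prepare_arguments(self, **kwargs):
    args = {}

    # Inputs are processed first. At this point there is no 'grid' in
    # kwargs, so a thickness's _arg_values has nothing from which to
    # derive its default (or MPI-adjusted) value: it is a no-op here.
    for p in self.input:
        args.update(p._arg_values(**kwargs))

    # The grid only becomes known later, once the discretised functions
    # have been processed; hence the dedicated loop over self.parameters
    # for thicknesses sits after that point.
```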
Can you open an issue about it, then?
I also don't understand this, honestly. We need an issue, and ideally a comment on top of this loop explaining what's going on and what should actually happen instead. You can do it in another branch, as it's a minor thing.
```python
# the user

class SubDimensionThickness(DataSymbol):
    """A DataSymbol to represent a thickness of a SubDimension"""
```
full stop
Thought we didn't do those on single-line docstrings?
Codecov Report
Attention: Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master    #2470      +/-   ##
==========================================
+ Coverage   87.22%   87.26%   +0.04%
==========================================
  Files         238      238
  Lines       45258    45278      +20
  Branches     4019     4022       +3
==========================================
+ Hits        39475    39512      +37
+ Misses       5103     5085      -18
- Partials      680      681       +1
```

View full report in Codecov by Sentry.
mloubout left a comment:
Some minor comments.
```python
assert len(_SymbolCache) == init_cache_size + 4
# Delete the grid and check that all symbols are subsequently garbage collected
del grid
for n in (10, 3, 0):
```
Is 10 `init_cache_size`? If so, just:

```python
del grid
assert len(_SymbolCache) == init_cache_size
```

and that checks that everything which needed deleting was indeed deleted.
`init_cache_size` is 11 (and consists of a different set of symbols).
```python
args.update(d._arg_values(self._dspace[d], grid, **kwargs))

# Process SubDimensionThicknesses
for p in self.parameters:
```
I think if you add `is_Input = True` to class `Thickness`, then you don't need this extra loop.
For example: https://github.com/devitocodes/devito/blob/master/devito/types/constant.py#L42
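A sketch of that suggestion (hedged; it mirrors how `Constant` declares itself an input in devito/types/constant.py, but is not code from this PR):

```python
class SubDimensionThickness(DataSymbol):
    """A DataSymbol to represent a thickness of a SubDimension"""

    # Marking the symbol as an input lets the generic loop over
    # self.input in _prepare_arguments pick it up, as Constant does
    is_Input = True
```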
```python
def _arg_finalize(self, *args, **kwargs):
    return {}

def _arg_apply(self, *args, **kwargs):
```
drop this, irrelevant
No-op `_arg_check`, `_arg_finalize`, and `_arg_apply` are required for `_prepare_arguments`.
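That is, something along these lines (a sketch assembled from the hunks quoted above; the `_arg_check` body is assumed to look like its siblings):

```python
def _arg_check(self, *args, **kwargs):
    # Nothing to validate for a thickness
    pass

def _arg_finalize(self, *args, **kwargs):
    # Nothing to finalize; return an empty mapping
    return {}

def _arg_apply(self, *args, **kwargs):
    # Nothing to write back after the Operator runs
    pass
```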
This should be fixed at the superclass level then, somehow, but fine.
```python
    return {self.name: tkn}

def _arg_finalize(self, *args, **kwargs):
```
drop this, irrelevant
Merged, thanks.
Fixes issues with standalone use of `SubDimension.symbolic_min` or `SubDimension.symbolic_max` in an operator. Previously, variants on the MFE at the top of this PR would suffer from two issues:

- The thickness (e.g. in `ix.symbolic_max`) would not be concretised from `x_ltkn` to `x_ltkn0`
- It would not receive a value from `_arg_values` unless the user explicitly passed it to the operator (and even then, it would not be adjusted to reflect any MPI decomposition)

These are rectified by this PR.
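A hedged sketch of the kind of `_arg_values` logic this PR moves onto the thickness itself (illustrative only; apart from the final return, which appears in the diff above, the details are assumptions rather than the exact implementation):

```python
def _arg_values(self, grid=None, **kwargs):
    if self.name in kwargs:
        # An explicitly supplied thickness takes precedence
        tkn = kwargs.pop(self.name)
    else:
        # Otherwise fall back to the default carried by the symbol
        tkn = self.value

    # Under MPI, the thickness would additionally be adjusted to the
    # local decomposition of the parent dimension (details elided here)

    return {self.name: tkn}
```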