[CUDA] Fixed the call of the min function in the schedule for cuda #14751
Conversation
tvm-bot:
Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.
Generated by tvm-bot
echuraev left a comment:
Could you please add a unit test?
echuraev left a comment:
In general LGTM. Please remove the redundant print.
tests/python/relay/test_any.py (outdated):
        continue
    if kind == "debug" and (only_vm or dev.device_type != tvm.cpu().device_type):
        continue
    print(tgt)
Do you really need this print?
cc: @junrushao, @masahi

@tvm-bot rerun

1 similar comment

@tvm-bot rerun
When using a scatter layer with the CUDA target, compilation fails with the following error:

ValueError: Cannot use and / or / not operator to Expr, hint: use tvm.tir.all / tvm.tir.any instead

This is caused by a call to Python's built-in min function here:
tvm/python/tvm/topi/cuda/scatter.py
Lines 228 to 231 in d32dea8

This PR fixes this bug.
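To illustrate why the built-in min fails on symbolic expressions, here is a minimal self-contained sketch that does not require TVM: the Expr class below is a hypothetical mock mimicking how TVM's TIR expressions overload comparison operators and refuse implicit boolean conversion, and tir_min is a stand-in for the symbolic tvm.tir.min the PR switches to. The class and function names are illustrative assumptions, not TVM's actual implementation.

```python
class Expr:
    """Mock of a TVM TIR expression (illustrative, not the real class)."""

    def __init__(self, name):
        self.name = name

    def __lt__(self, other):
        # Comparing two symbolic expressions builds a new expression
        # rather than returning a Python bool.
        return Expr(f"({self.name} < {other.name})")

    def __bool__(self):
        # TVM raises this error when a symbolic expression is
        # truth-tested, e.g. by `and` / `or` / `if` / builtin min().
        raise ValueError(
            "Cannot use and / or / not operator to Expr, "
            "hint: use tvm.tir.all / tvm.tir.any instead"
        )


def tir_min(a, b):
    """Stand-in for tvm.tir.min: emits a min node instead of branching."""
    return Expr(f"min({a.name}, {b.name})")


a, b = Expr("a"), Expr("b")

# Python's builtin min() evaluates `b < a` and then truth-tests the
# resulting Expr, which is exactly what triggers the error in the PR:
try:
    min(a, b)
except ValueError as e:
    print("builtin min failed:", e)

# The fix keeps min symbolic, deferring the choice to generated code:
print(tir_min(a, b).name)  # -> min(a, b)
```

The design point is that builtin min must decide the winner at trace time in Python, while tvm.tir.min records the operation so the comparison happens at kernel runtime on the device.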