
Conversation

@valmat07 (Contributor) commented May 2, 2023

When using a scatter layer with the CUDA target, the following error is raised:

ValueError: Cannot use and / or / not operator to Expr, hint: use tvm.tir.all / tvm.tir.any instead

This is due to the call to the min function here:

max_threads = int(tvm.target.Target.current(allow_none=False).max_num_threads)
tdim = min(max_threads, fused_updates_dimension)

This PR fixes this bug.
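
For context, here is a minimal reproduction sketch of the failure, assuming fused_updates_dimension is a symbolic TIR expression coming from a dynamic shape (the names mirror the snippet above; this is an illustration, not the exact change made in this PR):

import tvm
from tvm import te, tir

max_threads = 1024  # plain Python int taken from the target
fused_updates_dimension = te.var("n")  # symbolic extent, a tir.Var

# Python's built-in min() compares the two operands, which yields a
# tir.PrimExpr; forcing that comparison into a Python bool is what raises
# "Cannot use and / or / not operator to Expr".
try:
    tdim = min(max_threads, fused_updates_dimension)
except ValueError as err:
    print("builtin min failed:", err)

# tir.min builds a Min node instead, so the bound stays symbolic and is
# resolved when the kernel is lowered.
tdim = tir.min(max_threads, fused_updates_dimension)
print(tdim)  # a tir.Min expression, e.g. min(1024, n)

Using tir.min defers the comparison to lowering time instead of forcing it into a Python bool while the operator is being constructed.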

@tvm-bot (Collaborator) commented May 2, 2023

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

  • No users to tag found in teams: cuda. See #10317 for details.

Generated by tvm-bot

@echuraev (Contributor) left a comment

Could you please add a unit test?
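
A hedged sketch of what such a regression test could look like, assuming the failing path is topi.cuda.scatter_nd and that a dynamic trailing dimension is what makes fused_updates_dimension symbolic (the shapes, names, and use of topi.cuda.scatter_nd are illustrative assumptions, not the test added in this PR):

import tvm
from tvm import te, topi

def test_scatter_nd_dynamic_shape_cuda():
    # Symbolic extents so the per-index update slice size is a TIR expression.
    n = te.var("n")  # number of update rows
    m = te.var("m")  # dynamic trailing dimension shared by data and updates
    data = te.placeholder((16, m), name="data", dtype="float32")
    indices = te.placeholder((1, n), name="indices", dtype="int64")
    updates = te.placeholder((n, m), name="updates", dtype="float32")

    # The CUDA implementation queries Target.current(), so construct the op
    # inside a target scope. Before the fix this construction raised:
    #   ValueError: Cannot use and / or / not operator to Expr ...
    with tvm.target.Target("cuda"):
        out = topi.cuda.scatter_nd(data, indices, updates, "update")

    # Sanity-check that a schedule can still be created for the extern op; a
    # complete test would also build and run the kernel on a CUDA device.
    te.create_schedule(out.op)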

@echuraev (Contributor) left a comment

In general LGTM. Please remove the redundant print.

    continue
if kind == "debug" and (only_vm or dev.device_type != tvm.cpu().device_type):
    continue
print(tgt)

Do you really need this print?

@echuraev (Contributor)

cc: @junrushao, @masahi

@echuraev (Contributor)

@tvm-bot rerun


@vinx13 merged commit b4c1c38 into apache:main May 15, 2023