test: connect inference/eval tests to algorithms #181
Conversation
force-pushed from d177d4f to 534e48b
johannaSommer
left a comment
I can't believe you did this and it's almost flawless on the first take. I really appreciate the effort you put into this, it's so key! Just a few comments and logic changes and then we're already GTG.
@nifleisch @johnrachwan123 if we require `fullgraph` this often, should we consider making this the new default?
Agreed. `fullgraph=False` is also the default that `torch.compile` uses, so it would be more convenient to match this default, especially since `fullgraph=True` can lead to errors on graph breaks for certain models and is less stable.
force-pushed from 534e48b to f991924
force-pushed from feb2379 to 0064b4f
force-pushed from b400123 to 52ac709
nifleisch
left a comment
I like the PR, approved! 🙌
force-pushed from 52ac709 to bd53d1c
force-pushed from bd53d1c to c5ee1a9
force-pushed from 1d679b9 to c2db893
force-pushed from b5eb3de to 9530ab9
force-pushed from 8c8d10c to 5eca98d
Description
Adds simple evaluation runs to the algorithm tests to check whether the model is capable of running inference and whether the inference outputs are compatible with the evaluation agent.
Includes small bug fixes here and there to get the tests up and running.
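The shape of such a smoke test can be sketched as follows. All names here (`apply_algorithm`, `run_inference`, `EvaluationAgent`) are hypothetical stand-ins for illustration, not the project's actual API:

```python
# Illustrative sketch only: apply_algorithm, run_inference, and
# EvaluationAgent are hypothetical stand-ins, not the real API.

def apply_algorithm(model):
    # Stand-in for an optimization algorithm: identity transform here.
    return model

def run_inference(model, batch):
    # Check the model can actually produce outputs for a batch.
    return [model(x) for x in batch]

class EvaluationAgent:
    def evaluate(self, outputs):
        # The agent only accepts numeric outputs it can aggregate;
        # incompatible outputs surface as an explicit error.
        if not all(isinstance(o, (int, float)) for o in outputs):
            raise TypeError("outputs incompatible with evaluation agent")
        return sum(outputs) / len(outputs)

def test_inference_and_eval_compatibility():
    model = apply_algorithm(lambda x: x * 2)
    outputs = run_inference(model, [1, 2, 3])
    score = EvaluationAgent().evaluate(outputs)
    assert score == 4.0

test_inference_and_eval_compatibility()
print("ok")
```

The point is that each algorithm test exercises the full path from optimized model through inference to evaluation, so incompatibilities fail loudly in CI rather than at evaluation time.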