[Bench] Add TorchSlmSize benchmark #20937
base: sycl
Conversation
    "max",
    batchSize=512,
    slmNum=-1,
    warmupIterations=2,
I'm not sure we are interested in scenarios with varying warmup iterations. I would go for one value that minimizes result variance.
heh, well, I was even planning on removing this param from Slm benchmark in CB repo 😉 thx!
Force-pushed from 5e6ae8f to bbadbcb
Force-pushed from bbadbcb to 0037a75
    "small",
    batchSize=512,
    slmNum=1,
    warmupIterations=1,
Move warmupIterations into createTorchSlmSizeBench, since it no longer varies per scenario, and add a verbose option to the bench's integration tests.
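A minimal sketch of the suggested refactor, assuming a Python benchmark-definition style like the diff snippets above. Everything here except the name createTorchSlmSizeBench and the parameters shown in the diff (batchSize, slmNum, warmupIterations) is a hypothetical stand-in, not the actual code from the PR:

```python
# Hypothetical sketch: since warmupIterations no longer differs between
# scenarios, it is fixed inside the factory instead of being passed by
# every caller. The TorchSlmSizeBench class below is an invented stand-in.

class TorchSlmSizeBench:
    def __init__(self, name, batchSize, slmNum, warmupIterations):
        self.name = name
        self.batchSize = batchSize
        self.slmNum = slmNum
        self.warmupIterations = warmupIterations


def createTorchSlmSizeBench(name, batchSize, slmNum):
    # warmupIterations is now constant across scenarios, so it lives here
    # rather than in each scenario definition.
    WARMUP_ITERATIONS = 1
    return TorchSlmSizeBench(name, batchSize, slmNum, WARMUP_ITERATIONS)


# Scenario definitions shrink to only the parameters that actually vary.
benches = [
    createTorchSlmSizeBench("max", batchSize=512, slmNum=-1),
    createTorchSlmSizeBench("small", batchSize=512, slmNum=1),
]
```

With this shape, changing the warmup count later means editing one line in the factory instead of every scenario.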