Add accuracy metric to results stats summary #31
Conversation
Agree with this approach; otherwise there is too much chance of users just relying on whatever the default is.
One Q for printing: right now we're printing accuracy with NAs for each type when it isn't being calculated. Do you like that, or should we remove the accuracy column from printing when it isn't calculated? To hide it, we just need to add a select statement:

```r
library(dplyr)

tibble(x = rep(NA, times = 5), y = 1:5) |>
  select(where(\(.x) all(!is.na(.x))))
#> # A tibble: 5 × 1
#>       y
#>   <int>
#> 1     1
#> 2     2
#> 3     3
#> 4     4
#> 5     5
```
```r
nrmse = c(0.411, 0.232, 0.199, 0.121, 0.486, 0.232),
mpe = c(-0.339, -0.004, -0.045, -0.002, -0.415, -0.004),
mape = c(0.445, 0.246, 0.166, 0.118, 0.558, 0.246),
accuracy = c(NA_real_, NA_real_, NA_real_, NA_real_, NA_real_, NA_real_)
```
Would include at least one end-to-end test for `run_eval()` where we do calculate it.
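A minimal sketch of what such a test might look like, assuming a `testthat` setup; the margin argument names (`accuracy_abs`, `accuracy_rel`), the input object `example_data`, and the `stats_summary` accessor are all hypothetical, not the package's actual API:

```r
library(testthat)

test_that("run_eval() calculates accuracy when error margins are supplied", {
  # accuracy_abs / accuracy_rel are hypothetical names for the absolute and
  # relative error margins described in the PR; example_data is a placeholder
  # input object.
  res <- run_eval(
    example_data,
    .stats_summ_options = stats_summ_options(
      accuracy_abs = 0.1,
      accuracy_rel = 0.2
    )
  )
  summ <- res$stats_summary  # hypothetical accessor for the stats summary

  # accuracy should actually be computed, not left as NA
  expect_true(all(!is.na(summ$accuracy)))
  # if accuracy is reported as a proportion, it should lie in [0, 1]
  expect_true(all(summ$accuracy >= 0 & summ$accuracy <= 1))
})
```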
Added here: 070a7bb
roninsightrx left a comment:
lgtm!
This PR adds an (optional) accuracy metric to the results stats summary of `run_eval()` (#30).

The main challenge for this one was defining the API: unlike the other error metrics, the user needs to define the absolute and relative error margins within which a prediction is considered accurate. I opted to only calculate accuracy if the user has supplied those margins (via a new `.stats_summ_options` argument and a corresponding `stats_summ_options()` function). This lets us avoid defining universally suitable defaults, which would be difficult to do appropriately.

The API looks like this:
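A minimal sketch of the call, assuming the `.stats_summ_options` argument and `stats_summ_options()` function described above; the margin argument names (`accuracy_abs`, `accuracy_rel`), the input object `example_data`, and the `stats_summary` accessor are hypothetical:

```r
# Accuracy is only calculated when the error margins are supplied via
# stats_summ_options(); accuracy_abs / accuracy_rel are hypothetical names
# for the absolute and relative margins, and example_data is a placeholder
# input object.
res <- run_eval(
  example_data,
  .stats_summ_options = stats_summ_options(
    accuracy_abs = 0.1,  # absolute error margin for a prediction to count as accurate
    accuracy_rel = 0.2   # relative error margin for a prediction to count as accurate
  )
)
res$stats_summary  # hypothetical accessor: the summary now includes an accuracy column
```

When the margins are omitted, the accuracy column stays `NA_real_`, as in the snippet above.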