Evaluating
If you select Tools -> Evaluator in the main window, you will get a window that allows you to quickly score your own (or others') robots. This can help you judge whether a change in your code is beneficial in the grand scheme of things, or verify the scores other people have claimed to achieve with their code.
The interface is similar to the Genetics one, and all parameters are explained over there. By default, the parameters are set in a way very similar to what the actual competition will use (i.e. the score is only affected by gold collected). When you click Start, the current ROM of the sandbox (see Sharing for more details) is scored. The resulting score shown is the average of the scores across all simulations run.
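The averaging step can be sketched as follows. This is a minimal illustration, not the tool's actual code: `run_simulation` and the world dictionaries are hypothetical stand-ins for running a robot ROM in one generated world and reporting the gold it collected.

```python
def run_simulation(rom, world):
    # Placeholder: in the real tool this would run the robot ROM in the
    # given world; here we just return the gold "collected" in that world.
    return world["gold"]

def evaluate(rom, worlds):
    # The score shown after an evaluation is the arithmetic mean of the
    # per-simulation scores.
    scores = [run_simulation(rom, world) for world in worlds]
    return sum(scores) / len(scores)

worlds = [{"gold": 10}, {"gold": 30}, {"gold": 20}]
print(evaluate("my_rom", worlds))  # 20.0
```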
The random worlds used for evaluation are generated from a random seed calculated from all spawns (set in the Spawn Manager). By default, no spawns are set (unlike Genetics, where the current robot position is used), so the random worlds remain consistent across program restarts.
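One way such a seed could be derived is by hashing the spawn coordinates, so the same set of spawns always produces the same worlds. This is purely a hypothetical sketch of the idea; the actual seed calculation in the tool may differ.

```python
import hashlib

def world_seed(spawns):
    # Hypothetical: build a deterministic seed from the spawn coordinates.
    # Sorting makes the seed independent of the order spawns were added in.
    data = ",".join(f"{x}:{y}" for x, y in sorted(spawns)).encode()
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

# The same spawns yield the same seed, in any order and across restarts.
print(world_seed([(3, 4), (1, 2)]) == world_seed([(1, 2), (3, 4)]))  # True
```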
To ensure that the score you get is meaningful and representative, make sure to set an appropriate number of random worlds (or manual spawns). Be aware, though, that even user-created (i.e. "intelligent") bots without any particular world-dependent optimization may be susceptible to large variations in score depending on the random world seed(s): we have observed that even with 100 random worlds per evaluation, robots may achieve 50% higher average scores for some seeds than for others!
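The effect above is ordinary sampling noise: when per-world scores are skewed, even an average over 100 worlds varies noticeably from seed to seed. The toy simulation below illustrates this with a skewed synthetic score distribution; it is not the tool's scoring code, just a demonstration of the statistics.

```python
import random

def average_score(seed, n_worlds=100):
    # Toy stand-in for one evaluation run: draw 100 per-world scores from a
    # skewed (Pareto) distribution and average them, seeding the RNG the way
    # an evaluation is seeded.
    rng = random.Random(seed)
    return sum(rng.paretovariate(3) for _ in range(n_worlds)) / n_worlds

# Even with 100 "worlds" per run, the averages differ across seeds.
averages = [average_score(seed) for seed in range(20)]
spread = max(averages) / min(averages)
print(f"max/min ratio over 20 seeds: {spread:.2f}")
```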