|`policy-gradient`| Yes | PPO-based agent. The main subject of the experiments. |[here](dynamicalgorithmselection/agents/policy_gradient_agent.py)|
|`random`| Yes | Baseline for agents that use Checkpoint division. Randomly selects actions with equal probability. |[here](dynamicalgorithmselection/agents/random_agent.py)|
|`RL-DAS`| No | Implementation of [Deep Reinforcement Learning for Dynamic Algorithm Selection: A Proof-of-Principle Study on Differential Evolution](https://doi.org/10.48550/arXiv.2403.02131). |[here](dynamicalgorithmselection/agents/RLDAS_agent.py)|
|`RL-DAS-random`| No | Implementation of the baseline proposed by the authors of the `RL-DAS` algorithm. Randomly selects actions with equal probability. |[here](dynamicalgorithmselection/agents/RLDAS_random_agent.py)|
---
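Both random baselines in the table reduce to uniform sampling over the available optimizers. A minimal sketch (the `UniformRandomAgent` name and `select_action` signature are illustrative assumptions, not the repository's actual API):

```python
import random

# Hypothetical sketch of the uniform-random baselines (`random` and
# `RL-DAS-random`); names are illustrative, not the repository's API.
class UniformRandomAgent:
    def __init__(self, n_actions, seed=None):
        self.n_actions = n_actions
        self.rng = random.Random(seed)

    def select_action(self, state=None):
        # The state is ignored: every action has probability 1 / n_actions.
        return self.rng.randrange(self.n_actions)

agent = UniformRandomAgent(n_actions=4, seed=0)
choices = [agent.select_action() for _ in range(1000)]
```

Because such an agent ignores the state entirely, any learned agent should outperform it for the learning to be considered useful.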
## 📊 Train-Test Split Strategies
## 🧠 State Representation
There are two options for representing the optimization state (`-r` flag):
1. **`ELA` (Exploratory Landscape Analysis):**
   Implemented using [pflacco](https://pflacco.readthedocs.io/en/latest/index.html). Uses a subset of features to manage computational complexity.
2. **`custom`:**
   A proposed feature extraction method implemented [here](dynamicalgorithmselection/agents/agent_state.py). This can be