
Commit 4ee98bb

Merge pull request #24 from wniec/experiments_cleanup
Experiments cleanup, drop NeurELA and Neuroevolution

2 parents 1c6302a + b0814a8

File tree

13 files changed: +12 / -849 lines changed

README.md

Lines changed: 9 additions & 14 deletions
@@ -68,7 +68,7 @@ uv run das <name> [options]
 | `-c`, `--compare` / `--no-compare` | `bool` | `False` | Whether to compare results against standalone optimizers. |
 | `-e`, `--wandb_entity` | `str` | `None` | Weights and Biases (WandB) entity name. |
 | `-w`, `--wandb_project` | `str` | `None` | Weights and Biases (WandB) project name. |
-| `-a`, `--agent` | `str` | `policy-gradient` | Agent type. Options: `neuroevolution`, `policy-gradient`, `random`, `RL-DAS`, `RL-DAS-random`. |
+| `-a`, `--agent` | `str` | `policy-gradient` | Agent type. Options: `policy-gradient`, `random`, `RL-DAS`, `RL-DAS-random`. |
 | `-l`, `--mode` | `str` | `LOIO` | Train/Test split mode (see [Split Strategies](#-train-test-split-strategies)). |
 | `-x`, `--cdb` | `float` | `1.0` | **Checkpoint Division Exponent**; determines how quickly checkpoint length increases. |
 | `-r`, `--state-representation` | `str` | `ELA` | Method used to extract features from the algorithm population. |
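As a rough sketch of the option surface described in the table above (the actual `das` entry point may wire these flags differently), the defaults and choices can be mirrored with `argparse`:

```python
import argparse

# Hypothetical mirror of the documented CLI flags; illustrative only,
# not the repository's actual argument parser.
parser = argparse.ArgumentParser(prog="das")
parser.add_argument("-a", "--agent", default="policy-gradient",
                    choices=["policy-gradient", "random", "RL-DAS", "RL-DAS-random"],
                    help="Agent type")
parser.add_argument("-l", "--mode", default="LOIO",
                    help="Train/Test split mode")
parser.add_argument("-x", "--cdb", type=float, default=1.0,
                    help="Checkpoint Division Exponent")
parser.add_argument("-r", "--state-representation", default="ELA",
                    choices=["ELA", "custom"],
                    help="State feature extraction method")

# Flags not given on the command line fall back to the table's defaults.
args = parser.parse_args(["-a", "random", "-x", "2.0"])
print(args.agent, args.cdb, args.state_representation)  # random 2.0 ELA
```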
@@ -83,13 +83,12 @@ uv run das <name> [options]

 The following agent options are available in this project:

-| Agent | Uses CDB? | Description | Implementation |
-|---|---|---|---|
-| `neuroevolution` | Yes | Neuroevolution-based agent; trained with the NEAT algorithm. | [here](dynamicalgorithmselection/agents/neuroevolution_agent.py) |
-| `policy-gradient` | Yes | PPO-based agent. Main subject of experiments. | [here](dynamicalgorithmselection/agents/policy_gradient_agent.py) |
-| `random` | Yes | Baseline for agents that use checkpoint division. Selects actions uniformly at random. | [here](dynamicalgorithmselection/agents/random_agent.py) |
-| `RL-DAS` | No | Implementation of [Deep Reinforcement Learning for Dynamic Algorithm Selection: A Proof-of-Principle Study on Differential Evolution](https://doi.org/10.48550/arXiv.2403.02131). | [here](dynamicalgorithmselection/agents/RLDAS_agent.py) |
-| `RL-DAS-random` | No | Implementation of the baseline proposed by the authors of the `RL-DAS` algorithm. Selects actions uniformly at random. | [here](dynamicalgorithmselection/agents/RLDAS_random_agent.py) |
+| Agent | Uses CDB? | Description | Implementation |
+|---|---|---|---|
+| `policy-gradient` | Yes | PPO-based agent. Main subject of experiments. | [here](dynamicalgorithmselection/agents/policy_gradient_agent.py) |
+| `random` | Yes | Baseline for agents that use checkpoint division. Selects actions uniformly at random. | [here](dynamicalgorithmselection/agents/random_agent.py) |
+| `RL-DAS` | No | Implementation of [Deep Reinforcement Learning for Dynamic Algorithm Selection: A Proof-of-Principle Study on Differential Evolution](https://doi.org/10.48550/arXiv.2403.02131). | [here](dynamicalgorithmselection/agents/RLDAS_agent.py) |
+| `RL-DAS-random` | No | Implementation of the baseline proposed by the authors of the `RL-DAS` algorithm. Selects actions uniformly at random. | [here](dynamicalgorithmselection/agents/RLDAS_random_agent.py) |

 ---

 ## 📊 Train-Test Split Strategies
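As a rough illustration (not the repository's code), the `random` and `RL-DAS-random` baselines in the agent table above amount to uniform action selection over the optimizer portfolio:

```python
import random

def select_action(n_actions: int, rng: random.Random) -> int:
    """Pick an optimizer index uniformly at random, as the baseline agents do."""
    return rng.randrange(n_actions)

# With a seeded RNG, 1000 selections over 4 actions should be roughly balanced.
rng = random.Random(0)
choices = [select_action(4, rng) for _ in range(1000)]
counts = [choices.count(a) for a in range(4)]
```

Such a baseline isolates the value added by a learned policy: any agent that cannot beat uniform selection is not exploiting the state representation.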
@@ -151,16 +150,12 @@ Calculates the scaled improvement between checkpoints and provides a binary outc

 ## 🧠 State Representation

-There are three options for representing the optimization state (`-r` flag):
+There are two options for representing the optimization state (`-r` flag):

 1. **`ELA` (Exploratory Landscape Analysis):**
    Implemented using [pflacco](https://pflacco.readthedocs.io/en/latest/index.html). Uses a subset of features to manage computational complexity.
-2. **`NeurELA`:**
-   Uses a pre-trained model for feature extraction. The implementation is available [here](https://github.com/MetaEvo/Neur-ELA) and described in this [paper](https://arxiv.org/pdf/2408.10672).
-3. **`custom`:**
+2. **`custom`:**
    A proposed feature extraction method implemented [here](dynamicalgorithmselection/agents/agent_state.py). This can be modified to include additional features.
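The repository's `custom` extractor lives in `agent_state.py`; a minimal stand-in for the general idea, summary statistics computed from the current population's fitness values, might look like this (illustrative only, with minimization assumed):

```python
from statistics import mean, pstdev

def population_features(fitness: list[float]) -> dict[str, float]:
    """Toy state features from a population's fitness values.

    Hypothetical sketch, not the repository's implementation; assumes the
    optimizer is minimizing, so the best individual has the lowest fitness.
    """
    best = min(fitness)
    return {
        "best": best,                      # best fitness seen in the population
        "mean": mean(fitness),             # central tendency
        "std": pstdev(fitness),            # population spread (diversity proxy)
        "range": max(fitness) - best,      # worst-to-best gap
    }

feats = population_features([3.0, 1.0, 2.0, 6.0])
```

Features of this kind are cheap to compute every checkpoint, in contrast to ELA features, which is one reason a hand-rolled extractor can be attractive.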
Binary file changed (-25.9 KB); not shown.

dynamicalgorithmselection/NeurELA/NeurELA.py

Lines changed: 0 additions & 35 deletions
This file was deleted.

dynamicalgorithmselection/NeurELA/__init__.py

Whitespace-only changes.

0 commit comments
