diff --git a/simopt/gui/new_experiment_window.py b/simopt/gui/new_experiment_window.py
index 3580fb640..e6c484f20 100644
--- a/simopt/gui/new_experiment_window.py
+++ b/simopt/gui/new_experiment_window.py
@@ -2306,7 +2306,10 @@ def __enable_exp_buttons(self, experiment_name: str) -> None:
             self.tk_buttons[button].configure(state="normal")
 
     def __update_action_button(
-        self, experiment_name: str, text: str, command: Callable | None = None
+        self,
+        experiment_name: str,
+        text: str,
+        command: Union[Callable, None] = None,
     ) -> None:
         name_base: Final[str] = "exp." + experiment_name
         action_bttn_name: Final[str] = name_base + ".action"
diff --git a/simoptlib.egg-info/PKG-INFO b/simoptlib.egg-info/PKG-INFO
index f9343b14e..42443ca97 100644
--- a/simoptlib.egg-info/PKG-INFO
+++ b/simoptlib.egg-info/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: simoptlib
-Version: 1.1.0
+Version: 1.1.1
 Summary: A testbed for simulation-optimization experiments.
 Author-email: David Eckman , Shane Henderson , Sara Shashaani , William Grochocinski 
 License: MIT License
@@ -76,7 +76,7 @@ Full documentation for the source code can be found on our **[readthedocs page](
 - Python >= 3.8
   - To check your Python version, open a terminal and run `python --version`. If you see a message along the lines of `Command not found`, then you likely don't have Python installed. If you know you have it installed but are getting a `Command not found` error, then you may need to [add Python to your PATH](https://realpython.com/add-python-to-path/).
   - For new installs, [Miniconda or Anaconda](https://www.anaconda.com/download) is recommended ([read about the differences between Miniconda and Anaconda](https://docs.anaconda.com/distro-or-miniconda/)). If you already have a compatible IDE (such as VS Code), we've found that Miniconda will work fine at 1/10 of the size of Anaconda. It is ***highly recommended*** to check the box during installation to add Python/Miniconda/Anaconda to your system PATH.
-- Ruby >= 3.0 (required for datafarming)
+- Ruby >= 2.5 (required for datafarming)
   - Included on MacOS, but Windows users will need to grab it from [here](https://rubyinstaller.org/).
 - `datafarming` gem < 2.0 (required for datafarming)
   - This can be installed via `gem install datafarming -v 1.4` once Ruby is installed/configured.
diff --git a/workshop/README.md b/workshop/README.md
index 8f8e12859..ab81b3d5a 100644
--- a/workshop/README.md
+++ b/workshop/README.md
@@ -11,7 +11,7 @@ The most-up-to-date publication about this library is [Eckman et al. (2023)](htt
 
 ## Before Workshop
 
 Before attending the workshop please follow the instructions below:
-1. Install Python, Ruby, and required dependencies [as detailed in the README](https://github.com/simopt-admin/simopt/blob/master/README.md#getting-started)
+1. Install Python, Ruby, and required dependencies [as detailed in the README](https://github.com/simopt-admin/simopt/blob/master/README.md#getting-started). (Please note that the Ruby installation is only needed for a small portion of the workshop; if you encounter issues with installing Ruby, you can still fully follow along.)
 3. Install Microsoft's [Visual Studio Code (VS Code) IDE](https://code.visualstudio.com).
diff --git a/workshop/workshop.ipynb b/workshop/workshop.ipynb
index b8c359bff..825f6a53f 100644
--- a/workshop/workshop.ipynb
+++ b/workshop/workshop.ipynb
@@ -140,11 +140,11 @@
     "### Exercise \#2\n",
     "\n",
     "1. Open the file simopt/model/example.py in the VS Code editor.\n",
-    "2. Let's change how random search randomly samples solutions in R^2. For starters, uncomment Line 355\n",
+    "2. Let's change how random search randomly samples solutions in R^2. For starters, uncomment Line 430\n",
     "\n",
     "    `x = tuple([rand_sol_rng.uniform(-2, 2) for _ in range(self.dim)])`\n",
     "\n",
-    "    and comment out Line 356\n",
+    "    and comment out Lines 431-437\n",
     "    \n",
     "    `x = tuple(rand_sol_rng.mvnormalvariate(mean_vec=np.zeros(self.dim), cov=np.eye(self.dim), factorized=False))`\n",
     "\n",
@@ -225,31 +225,31 @@
     "### Exercise \#3\n",
     "\n",
     "1. Open simopt/model/example.py again.\n",
-    "2. Change the noise in the objective function evaluations to create a slightly different 2D optimization problem. This can be done by changing Line 85: \n",
+    "2. Change the noise in the objective function evaluations to create a slightly different 2D optimization problem. This can be done by changing Line 99: \n",
     "    \n",
     "    `fn_eval_at_x = np.linalg.norm(x) ** 2 + noise_rng.normalvariate()`\n",
     "\n",
     "    where `x` is a numpy array of length two. For starters, try passing the argument `sigma=10` into the function call `noise_rng.normalvariate()`. The default value is `sigma=1`, so this has the effect of increasing the common variance of the noise from 1 to 100.\n",
     "3. Restart the kernel and run COMBO CODE CELL [0 + 1 + 3] below. *How have the plots changed? Why haven't they changed more?*\n",
     "\n",
-    "4. Next, change the underlying objective function by replacing `np.linalg.norm(x) ** 2` in Line 85 with some other two-dimensional function of `x`, e.g., `1 - np.exp(-np.linalg.norm(x) ** 2)`. (This objective function looks like an upside-down standard bivariate normal pdf, rescaled.)\n",
+    "4. Next, change the underlying objective function by replacing `np.linalg.norm(x) ** 2` in Line 99 with some other two-dimensional function of `x`, e.g., `1 - np.exp(-np.linalg.norm(x) ** 2)`. (This objective function looks like an upside-down standard bivariate normal pdf, rescaled.)\n",
     "5. Depending of your choice of new objective function, you MAY need to change other parts of the code, including:\n",
-    "    * The gradient of `f(x)` in Line 89. For the example given above, this would need to be changed from\n",
+    "    * The gradient of `f(x)` in Line 103. For the example given above, this would need to be changed from\n",
     "    \n",
     "    `gradients = {\"est_f(x)\": {\"x\": tuple(2 * x)}}`\n",
     "\n",
     "    to\n",
     "\n",
     "    `gradients = {\"est_f(x)\": {\"x\": tuple(2 * x * np.exp(-np.linalg.norm(x) ** 2))}}`\n",
-    "    * If you change the problem to a maxmization problem, you will need to change Line 173 from\n",
+    "    * If you change the problem to a maxmization problem, you will need to change Line 190 from\n",
     "    \n",
-    "    `self.minmax = (-1,)`\n",
+    "    `return (-1,)`\n",
     "    \n",
     "    to\n",
     "    \n",
-    "    `self.minmax = (1,)`.\n",
+    "    `return (1,)`.\n",
-    "    * The optimal solution in Line 204. (For the running example, this will not be necessary.)\n",
+    "    * The optimal solution in Line 214. (For the running example, this will not be necessary.)\n",
-    "    * The optimal objective function value in Line 203. (For the running example, this will not be necessary.)\n",
+    "    * The optimal objective function value in Line 208. (For the running example, this will not be necessary.)\n",
     "6. Restart the kernel and run COMBO CODE CELL [0 + 1 + 3] below. *How have the plots changed?*\n",
     "\n",
     "**Extra for Experts:** Change the dimension of the problem. To do this, you will need to change the dimension of the default initial solution, defined in Line 185."
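Note on the new_experiment_window.py hunk: the PEP 604 annotation `Callable | None` is replaced with `Union[Callable, None]`, presumably so the annotation stays valid at runtime on the Python >= 3.8 floor listed in PKG-INFO (the `X | Y` annotation syntax requires Python 3.10 unless `from __future__ import annotations` is in effect). A minimal, self-contained sketch of the two spellings; the function below is a hypothetical stand-in, not the SimOpt method:

from typing import Callable, Union


def update_action_button(
    text: str,
    command: Union[Callable, None] = None,  # pre-3.10-compatible spelling, as in the hunk above
) -> None:
    # Toy stand-in for a GUI callback setter, not the SimOpt implementation.
    # Union[Callable, None] (equivalently typing.Optional[Callable]) parses on
    # Python 3.8/3.9; `Callable | None` only evaluates as a runtime annotation
    # on Python >= 3.10 or with `from __future__ import annotations`.
    if command is not None:
        command()
    print(text)


update_action_button("Run", command=lambda: print("clicked"))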
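Note on Exercise #3: the suggested replacement objective is `1 - np.exp(-np.linalg.norm(x) ** 2)` with gradient `2 * x * np.exp(-np.linalg.norm(x) ** 2)`. A short standalone check of that gradient against a central finite difference; this is scratch code, not part of simopt/model/example.py:

import numpy as np


def f(x: np.ndarray) -> float:
    # Replacement objective suggested in Exercise #3.
    return 1 - np.exp(-np.linalg.norm(x) ** 2)


def grad_f(x: np.ndarray) -> np.ndarray:
    # Gradient quoted in the notebook: 2 * x * exp(-||x||^2).
    return 2 * x * np.exp(-np.linalg.norm(x) ** 2)


x = np.array([0.3, -0.7])
h = 1e-6
fd = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(len(x))])
print(grad_f(x))  # analytic gradient
print(fd)         # central finite-difference estimate; the two should agree to ~1e-9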