
Commit 04aea8b

Merge pull request #419 from leandrobbraga/fix-typos
Fix some typos
2 parents af1adc6 + 08c7bdc commit 04aea8b

13 files changed, +38 −29 lines changed

README.md

Lines changed: 1 addition & 1 deletion

@@ -253,7 +253,7 @@ optimizer.maximize(
 )
 ```
 
-By default the previous data in the json file is removed. If you want to keep working with the same logger, the `reset` paremeter in `JSONLogger` should be set to False.
+By default the previous data in the json file is removed. If you want to keep working with the same logger, the `reset` parameter in `JSONLogger` should be set to False.
 
 ### 4.2 Loading progress
 
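For context on the line being fixed, here is a minimal sketch of keeping a logger's history across runs. The objective, bounds, and log path are illustrative; `reset` is assumed to default to True, as the README describes:

```python
from bayes_opt import BayesianOptimization
from bayes_opt.logger import JSONLogger
from bayes_opt.event import Events

def black_box_function(x, y):
    return -x ** 2 - (y - 1) ** 2 + 1

optimizer = BayesianOptimization(f=black_box_function,
                                 pbounds={"x": (-2, 2), "y": (-3, 3)})

# reset=False appends to an existing log instead of wiping it first.
logger = JSONLogger(path="./logs.log", reset=False)
optimizer.subscribe(Events.OPTIMIZATION_STEP, logger)
optimizer.maximize(init_points=2, n_iter=3)
```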

bayes_opt/domain_reduction.py

Lines changed: 4 additions & 4 deletions

@@ -19,7 +19,7 @@ def transform(self, target_space: TargetSpace):
 
 class SequentialDomainReductionTransformer(DomainTransformer):
     """
-    A sequential domain reduction transformer bassed on the work by Stander, N. and Craig, K:
+    A sequential domain reduction transformer based on the work by Stander, N. and Craig, K:
     "On the robustness of a simple domain reduction scheme for simulation‐based optimization"
     """
 
@@ -68,8 +68,8 @@ def initialize(self, target_space: TargetSpace) -> None:
 
         self.r = self.contraction_rate * self.r
 
-        # check if the minimum window fits in the orignal bounds
-        self._window_bounds_compatiblity(self.original_bounds)
+        # check if the minimum window fits in the original bounds
+        self._window_bounds_compatibility(self.original_bounds)
 
     def _update(self, target_space: TargetSpace) -> None:
 
@@ -121,7 +121,7 @@ def _trim(self, new_bounds: np.array, global_bounds: np.array) -> np.array:
             new_bounds[i, 1] += ddw_r
         return new_bounds
 
-    def _window_bounds_compatiblity(self, global_bounds: np.array) -> bool:
+    def _window_bounds_compatibility(self, global_bounds: np.array) -> bool:
         """Checks if global bounds are compatible with the minimum window sizes."""
         for i, entry in enumerate(global_bounds):
             global_window_width = abs(entry[1] - entry[0])
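To make the renamed helper concrete, here is a standalone sketch of the kind of check it performs. The function body and error message are illustrations inferred from the visible context, not the library's exact code:

```python
import numpy as np

def window_bounds_compatibility(global_bounds, minimum_window):
    """Check that every global bound is at least as wide as its minimum window."""
    for i, entry in enumerate(global_bounds):
        global_window_width = abs(entry[1] - entry[0])
        if global_window_width < minimum_window[i]:
            raise ValueError("Global bounds are not compatible with the minimum window size.")

# A 12-unit-wide bound easily fits a 0.5-unit minimum window.
window_bounds_compatibility(np.array([[-2.0, 10.0]]), minimum_window=[0.5])
```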

bayes_opt/util.py

Lines changed: 1 addition & 1 deletion

@@ -51,7 +51,7 @@ def acq_max(ac, gp, y_max, bounds, random_state, constraint=None, n_warmup=10000
     x_max = x_tries[ys.argmax()]
     max_acq = ys.max()
 
-    # Explore the parameter space more throughly
+    # Explore the parameter space more thoroughly
     x_seeds = random_state.uniform(bounds[:, 0], bounds[:, 1],
                                    size=(n_iter, bounds.shape[0]))
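The comment being fixed sits in `acq_max`'s two-stage search: a cheap random warmup followed by local polishing from a few seeds. A simplified, self-contained sketch of that pattern (the real function also takes `gp`, `y_max`, and constraint handling, all omitted here):

```python
import numpy as np
from scipy.optimize import minimize

def acq_max_sketch(ac, bounds, random_state, n_warmup=10000, n_iter=10):
    # Stage 1: evaluate the acquisition on many random points, keep the best.
    x_tries = random_state.uniform(bounds[:, 0], bounds[:, 1],
                                   size=(n_warmup, bounds.shape[0]))
    ys = ac(x_tries)
    x_max = x_tries[ys.argmax()]
    max_acq = ys.max()

    # Stage 2: explore the parameter space more thoroughly by polishing
    # a handful of random seeds with L-BFGS-B.
    x_seeds = random_state.uniform(bounds[:, 0], bounds[:, 1],
                                   size=(n_iter, bounds.shape[0]))
    for x_try in x_seeds:
        res = minimize(lambda x: -ac(x.reshape(1, -1))[0],
                       x_try, bounds=bounds, method="L-BFGS-B")
        if res.success and -res.fun >= max_acq:
            x_max, max_acq = res.x, -res.fun
    return np.clip(x_max, bounds[:, 0], bounds[:, 1])

# Toy acquisition peaked at x = 2 on the interval [-2, 10].
rs = np.random.RandomState(0)
print(acq_max_sketch(lambda X: -np.sum((X - 2.0) ** 2, axis=1),
                     np.array([[-2.0, 10.0]]), rs))
```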

examples/advanced-tour.ipynb

Lines changed: 3 additions & 2 deletions

@@ -33,7 +33,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "# Let's start by defining our function, bounds, and instanciating an optimization object.\n",
+    "# Let's start by defining our function, bounds, and instantiating an optimization object.\n",
     "def black_box_function(x, y):\n",
     "    return -x ** 2 - (y - 1) ** 2 + 1"
    ]

@@ -347,12 +347,13 @@
    ]
   },
   {
+   "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "### 3.3 Changing kernels\n",
     "\n",
-    "By default this package uses the Mattern 2.5 kernel. Depending on your use case you may find that tunning the GP kernel could be beneficial. You're on your own here since these are very specific solutions to very specific problems."
+    "By default this package uses the Matern 2.5 kernel. Depending on your use case you may find that tunning the GP kernel could be beneficial. You're on your own here since these are very specific solutions to very specific problems."
    ]
   },
   {
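The markdown cell touched here refers to the package's default Matern 2.5 kernel. For readers wanting to act on that hint, a hedged sketch of swapping the kernel; this assumes `set_gp_params` forwards keyword arguments to the underlying scikit-learn `GaussianProcessRegressor`, as the advanced tour describes:

```python
from sklearn.gaussian_process.kernels import Matern
from bayes_opt import BayesianOptimization

def black_box_function(x, y):
    return -x ** 2 - (y - 1) ** 2 + 1

optimizer = BayesianOptimization(f=black_box_function,
                                 pbounds={"x": (-2, 2), "y": (-3, 3)},
                                 random_state=1)

# Replace the default Matern(nu=2.5) with a rougher Matern(nu=1.5).
optimizer.set_gp_params(kernel=Matern(nu=1.5))
optimizer.maximize(init_points=2, n_iter=5)
```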

examples/basic-tour.ipynb

Lines changed: 8 additions & 4 deletions

@@ -16,12 +16,13 @@
    ]
   },
   {
+   "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "## 1. Specifying the function to be optimized\n",
     "\n",
-    "This is a function optimization package, therefore the first and most important ingreedient is, of course, the function to be optimized.\n",
+    "This is a function optimization package, therefore the first and most important ingredient is, of course, the function to be optimized.\n",
     "\n",
     "**DISCLAIMER:** We know exactly how the output of the function below depends on its parameter. Obviously this is just an example, and you shouldn't expect to know it in a real scenario. However, it should be clear that you don't need to. All you need in order to use this package (and more generally, this technique) is a function `f` that takes a known set of parameters and outputs a real number."
    ]
@@ -43,12 +44,13 @@
    ]
   },
   {
+   "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "## 2. Getting Started\n",
     "\n",
-    "All we need to get started is to instanciate a `BayesianOptimization` object specifying a function to be optimized `f`, and its parameters with their corresponding bounds, `pbounds`. This is a constrained optimization technique, so you must specify the minimum and maximum values that can be probed for each parameter in order for it to work"
+    "All we need to get started is to instantiate a `BayesianOptimization` object specifying a function to be optimized `f`, and its parameters with their corresponding bounds, `pbounds`. This is a constrained optimization technique, so you must specify the minimum and maximum values that can be probed for each parameter in order for it to work"
    ]
   },
   {
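As a quick reminder of what that instantiation looks like in practice, a minimal sketch using the tour's own toy function (the bounds, seed, and iteration counts are illustrative):

```python
from bayes_opt import BayesianOptimization

def black_box_function(x, y):
    return -x ** 2 - (y - 1) ** 2 + 1

optimizer = BayesianOptimization(
    f=black_box_function,
    pbounds={"x": (2, 4), "y": (-3, 3)},  # min/max that can be probed per parameter
    random_state=1,
)
optimizer.maximize(init_points=2, n_iter=3)
print(optimizer.max)  # best parameter-target pair found so far
```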
@@ -306,12 +308,13 @@
    ]
   },
   {
+   "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "## 4. Saving, loading and restarting\n",
     "\n",
-    "By default you can follow the progress of your optimization by setting `verbose>0` when instanciating the `BayesianOptimization` object. If you need more control over logging/alerting you will need to use an observer. For more information about observers checkout the advanced tour notebook. Here we will only see how to use the native `JSONLogger` object to save to and load progress from files.\n",
+    "By default you can follow the progress of your optimization by setting `verbose>0` when instantiating the `BayesianOptimization` object. If you need more control over logging/alerting you will need to use an observer. For more information about observers checkout the advanced tour notebook. Here we will only see how to use the native `JSONLogger` object to save to and load progress from files.\n",
     "\n",
     "### 4.1 Saving progress"
    ]

@@ -327,14 +330,15 @@
    ]
   },
   {
+   "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The observer paradigm works by:\n",
     "1. Instantiating an observer object.\n",
     "2. Tying the observer object to a particular event fired by an optimizer.\n",
     "\n",
-    "The `BayesianOptimization` object fires a number of internal events during optimization, in particular, everytime it probes the function and obtains a new parameter-target combination it will fire an `Events.OPTIMIZATION_STEP` event, which our logger will listen to.\n",
+    "The `BayesianOptimization` object fires a number of internal events during optimization, in particular, every time it probes the function and obtains a new parameter-target combination it will fire an `Events.OPTIMIZATION_STEP` event, which our logger will listen to.\n",
     "\n",
     "**Caveat:** The logger will not look back at previously probed points."
    ]
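To illustrate the observer paradigm that cell describes, a hedged sketch of a custom observer tied to `Events.OPTIMIZATION_STEP`. The observer class and its printout are illustrative; `subscribe` is assumed to default to calling the subscriber's `update` method:

```python
from bayes_opt import BayesianOptimization
from bayes_opt.event import Events

def black_box_function(x, y):
    return -x ** 2 - (y - 1) ** 2 + 1

class StepObserver:
    """Called on every Events.OPTIMIZATION_STEP, i.e. each new parameter-target pair."""
    def update(self, event, instance):
        print(event, instance.res[-1])  # most recent probe result

optimizer = BayesianOptimization(f=black_box_function,
                                 pbounds={"x": (2, 4), "y": (-3, 3)},
                                 verbose=0, random_state=1)
optimizer.subscribe(Events.OPTIMIZATION_STEP, StepObserver())
optimizer.maximize(init_points=2, n_iter=3)
```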

examples/constraints.ipynb

Lines changed: 1 addition & 1 deletion

@@ -330,7 +330,7 @@
     "    else:\n",
     "        c_labels = [f\"constraint {i+1}\" for i in range(n_constraints)]\n",
     "    labels_top = [\"target\"] + c_labels + [\"masked target\"]\n",
-    "    labels_bot = [\"target estimate\"] + [c + \" estimate\" for c in c_labels] + [\"acqusition function\"]\n",
+    "    labels_bot = [\"target estimate\"] + [c + \" estimate\" for c in c_labels] + [\"acquisition function\"]\n",
     "    labels = [labels_top, labels_bot]\n",
     "\n",
     "    # Setup the grid to plot on\n",

examples/domain_reduction.ipynb

Lines changed: 7 additions & 4 deletions

@@ -1,17 +1,18 @@
 {
  "cells": [
   {
+   "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "# Sequential Domain Reduction\n",
     "\n",
     "## Background\n",
-    "Sequential domain reduction is a process where the bounds of the optimization problem are mutated (typically contracted) to reduce the time required to converge to an optimal value. The advantage of this method is typically seen when a cost function is particularly expensive to calculate, or if the optimization routine oscilates heavily. \n",
+    "Sequential domain reduction is a process where the bounds of the optimization problem are mutated (typically contracted) to reduce the time required to converge to an optimal value. The advantage of this method is typically seen when a cost function is particularly expensive to calculate, or if the optimization routine oscillates heavily. \n",
     "\n",
     "## Basics\n",
     "\n",
-    "The basic steps are a *pan* and a *zoom*. These two steps are applied at one time, therefore updating the problem search space evey iteration.\n",
+    "The basic steps are a *pan* and a *zoom*. These two steps are applied at one time, therefore updating the problem search space every iteration.\n",
     "\n",
     "**Pan**: recentering the region of interest around the most optimal point found.\n",
     "\n",

@@ -122,10 +123,11 @@
    ]
   },
   {
+   "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Now we can set up two idential optimization problems, except one has the `bound_transformer` variable set."
+    "Now we can set up two identical optimization problems, except one has the `bound_transformer` variable set."
    ]
   },
   {
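For reference, a hedged sketch of that two-optimizer setup. The objective, bounds, `minimum_window`, and iteration counts here are illustrative rather than the notebook's exact values:

```python
from bayes_opt import BayesianOptimization, SequentialDomainReductionTransformer

def f(x, y):
    return -x ** 2 - (y - 1) ** 2 + 1

pbounds = {"x": (-10, 10), "y": (-10, 10)}
bounds_transformer = SequentialDomainReductionTransformer(minimum_window=0.5)

# Identical problems; only the second mutates its search space each iteration.
standard_optimizer = BayesianOptimization(f=f, pbounds=pbounds,
                                          verbose=0, random_state=1)
mutating_optimizer = BayesianOptimization(f=f, pbounds=pbounds,
                                          verbose=0, random_state=1,
                                          bounds_transformer=bounds_transformer)

standard_optimizer.maximize(init_points=2, n_iter=20)
mutating_optimizer.maximize(init_points=2, n_iter=20)
print(standard_optimizer.max, mutating_optimizer.max)
```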
@@ -182,10 +184,11 @@
    ]
   },
   {
+   "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "After both have completed we can plot to see how the objectives performed. It's quite obvious to see that the Sequential Domain Reduction technique contracted onto the optimal point relativly quickly."
+    "After both have completed we can plot to see how the objectives performed. It's quite obvious to see that the Sequential Domain Reduction technique contracted onto the optimal point relatively quick."
    ]
   },
   {

examples/sklearn_example.py

Lines changed: 1 addition & 1 deletion

@@ -42,7 +42,7 @@ def rfc_cv(n_estimators, min_samples_split, max_features, data, targets):
     of cross validation is returned.
 
     Our goal is to find combinations of n_estimators, min_samples_split, and
-    max_features that minimzes the log loss.
+    max_features that minimizes the log loss.
     """
     estimator = RFC(
         n_estimators=n_estimators,
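The docstring being fixed belongs to a cross-validation objective. Since log loss is minimized by maximizing its negative, the surrounding function plausibly looks like this sketch; the scoring choice, fold count, and integer casts are assumptions based on the visible context:

```python
from sklearn.ensemble import RandomForestClassifier as RFC
from sklearn.model_selection import cross_val_score

def rfc_cv(n_estimators, min_samples_split, max_features, data, targets):
    """Mean negative log loss over CV folds; maximizing it minimizes log loss."""
    estimator = RFC(
        n_estimators=int(n_estimators),
        min_samples_split=int(min_samples_split),
        max_features=max_features,
        random_state=2,
    )
    cval = cross_val_score(estimator, data, targets,
                           scoring="neg_log_loss", cv=4)
    return cval.mean()
```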

examples/visualization.ipynb

Lines changed: 2 additions & 1 deletion

@@ -16,6 +16,7 @@
    ]
   },
   {
+   "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [

@@ -25,7 +26,7 @@
     "\n",
     "$$f(x) = e^{-(x - 2)^2} + e^{-\\frac{(x - 6)^2}{10}} + \\frac{1}{x^2 + 1}, $$ its maximum is at $x = 2$ and we will restrict the interval of interest to $x \\in (-2, 10)$.\n",
     "\n",
-    "Notice that, in practice, this function is unknown, the only information we have is obtained by sequentialy probing it at different points. Bayesian Optimization works by contructing a posterior distribution of functions that best fit the data observed and chosing the next probing point by balancing exploration and exploitation."
+    "Notice that, in practice, this function is unknown, the only information we have is obtained by sequentially probing it at different points. Bayesian Optimization works by constructing a posterior distribution of functions that best fit the data observed and choosing the next probing point by balancing exploration and exploitation."
    ]
   },
   {
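As a quick sanity check of the formula in that cell, a short snippet evaluating it on the stated interval (the grid resolution is arbitrary):

```python
import numpy as np

def target(x):
    # f(x) = exp(-(x - 2)^2) + exp(-((x - 6)^2) / 10) + 1 / (x^2 + 1)
    return (np.exp(-(x - 2) ** 2)
            + np.exp(-(x - 6) ** 2 / 10)
            + 1 / (x ** 2 + 1))

x = np.linspace(-2, 10, 100000)
print(x[np.argmax(target(x))])  # close to 2, the stated maximum
```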

tests/test_bayesian_optimization.py

Lines changed: 1 addition & 1 deletion

@@ -180,7 +180,7 @@ def test_callback(event, instance):
         optimizer._events[Events.OPTIMIZATION_START].values()
     ])
 
-    # Check that prime subscriptions won't overight manual subscriptions
+    # Check that prime subscriptions won't overwrite manual subscriptions
     optimizer._prime_subscriptions()
     assert all([
         k == test_subscriber for k in
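For readers unfamiliar with the internals being tested, a hedged sketch of the invariant: priming the default subscriptions should leave a manually registered subscriber in place. The objective, subscriber name, and final assertion below are illustrative, not the test's exact code:

```python
from bayes_opt import BayesianOptimization
from bayes_opt.event import Events

def black_box_function(x, y):
    return -x ** 2 - (y - 1) ** 2 + 1

optimizer = BayesianOptimization(f=black_box_function,
                                 pbounds={"x": (-2, 2), "y": (-3, 3)})

def test_callback(event, instance):
    pass

# Manual subscription registered first...
optimizer.subscribe(Events.OPTIMIZATION_START, "test_subscriber", test_callback)
# ...then priming the default subscriptions must not overwrite it.
optimizer._prime_subscriptions()
assert "test_subscriber" in optimizer._events[Events.OPTIMIZATION_START]
```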
