@@ -46,7 +46,7 @@ The cleaning network will also require a Neutron allocation pool.
4646 OpenStack Config
4747================
4848
49- Overcloud Ironic will be deployed with a listening TFTP server on the
49+ Overcloud Ironic is deployed with a listening TFTP server on the
5050control plane which will provide baremetal nodes that PXE boot with the
5151Ironic Python Agent (IPA) kernel and ramdisk. Since the TFTP server is
5252listening exclusively on the internal API network, it's necessary for a
@@ -55,13 +55,13 @@ API network, we can achieve this is by defining a Neutron router using
5555`OpenStack Config <https://github.com/stackhpc/openstack-config>`_.
5656
5757It is not necessary to define the provision and cleaning networks in this
58- configuration as they will be generated during
58+ configuration as these are generated during
5959
6060.. code-block :: console
6161
6262 kayobe overcloud post configure
6363
64- The openstack config file could resemble the network, subnet and router
64+ The OpenStack config file could resemble the network, subnet and router
6565configuration shown below:
6666
6767.. code-block :: yaml
@@ -129,10 +129,10 @@ configuring the baremetal-compute inventory.
129129 Enabling conntrack (ML2/OVS only)
130130=================================
131131
132- Conntrack_helper will be required when UEFI booting on a cloud with ML2/OVS
132+ Conntrack_helper is required when UEFI booting on a cloud with ML2/OVS
133133and using the iptables firewall_driver, otherwise TFTP traffic is dropped due
134134to it being UDP. You will need to define some extension drivers in ``neutron.yml``
135- to ensure conntrack is enabled in neutron server.
135+ to ensure conntrack is enabled in the Neutron server.
136136
137137.. code-block :: yaml
138138
@@ -141,20 +141,20 @@ to ensure conntrack is enabled in neutron server.
141141 conntrack_helper
142142 dns_domain_ports
143143
144- The neutron l3 agent also requires conntrack to be set as an extension in
144+ The Neutron L3 agent also requires conntrack to be set as an extension in
145145``kolla/config/neutron/l3_agent.ini``:
146146
147147.. code-block :: ini
148148
149149 [agent]
150150 extensions = conntrack_helper
151151
152- It is also required to load the conntrack kernel module ``nf_nat_tftp ``,
153- `` nf_conntrack `` and ``nf_conntrack_tftp `` on network nodes. You can load these
154- modules using modprobe or define these in /etc/module-load.
152+ The conntrack kernel modules ``nf_nat_tftp``, ``nf_conntrack``,
153+ and ``nf_conntrack_tftp`` must also be loaded on network nodes. You
154+ can load these modules with modprobe or list them in ``/etc/modules-load.d/``.
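
For example, the modules can be loaded immediately on a network node as shown
below (a minimal sketch; the file name used to persist them across reboots is
an assumption):

.. code-block:: console

   modprobe nf_nat_tftp
   modprobe nf_conntrack
   modprobe nf_conntrack_tftp
   printf '%s\n' nf_nat_tftp nf_conntrack nf_conntrack_tftp > /etc/modules-load.d/conntrack-tftp.conf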
155155
156- The Ironic neutron router will also need to be configured to use
157- conntrack_helper.
156+ The Ironic Neutron router will also need to be configured to use
157+ ``conntrack_helper``.
158158
159159.. code-block :: json
160160
@@ -164,7 +164,7 @@ conntrack_helper.
164164 "helper": "tftp"
165165 }
166166
167- To add the conntrack_helper to the neutron router, you can use the openstack
167+ To add the ``conntrack_helper`` to the Neutron router, you can use the OpenStack
168168CLI:
169169
170170.. code-block :: console
@@ -180,15 +180,15 @@ Baremetal inventory
180180
181181The baremetal inventory is constructed with three different group types.
182182The first group is the default baremetal compute group for Kayobe called
183- [baremetal-compute] and will contain all baremetal nodes including tenant
184- and hypervisor nodes. This group acts as a parent for all baremetal nodes
185- and config that can be shared between all baremetal nodes will be defined
186- here.
183+ ``[baremetal-compute]``, which contains all baremetal nodes, including
184+ baremetal-compute (tenant) nodes and hypervisor nodes. This group acts as
185+ a parent for all baremetal nodes, and config that is shared between them
186+ is defined here.
187187
188188We will need to create a Kayobe group_vars file for the baremetal-compute
189189group that contains all the variables we want to define for the group. We
190190can put all these variables in the inventory in
191- ‘inventory/group_vars/baremetal-compute/ironic-vars’ The ironic_driver_info
191+ ``inventory/group_vars/baremetal-compute/ironic-vars``. The ``ironic_driver_info``
192192template dict contains all variables to be templated into the driver_info
193193property in Ironic. This includes the BMC address, username, password,
194194IPA configuration etc. We also currently define the ironic_driver here as
@@ -214,21 +214,21 @@ all nodes currently use the Redfish driver.
214214 ironic_redfish_password: "{{ inspector_redfish_password }}"
215215 ironic_capabilities: "boot_option:local,boot_mode:uefi"
216216
217- The second group type will be the hardware type that a baremetal node belongs
218- to, These variables will be in the inventory too in ‘inventory/group_vars/
217+ The second group type is the hardware type that a baremetal node belongs
218+ to. These variables are also in the inventory, in ``inventory/group_vars/
219219baremetal-<YOUR_BAREMETAL_HARDWARE_TYPE>``.
220220
221221 Variables specific to the hardware type include the ``resource_class``, which is
222222used to associate the hardware type with the Nova flavor we defined earlier
223- in Openstack Config.
223+ in OpenStack Config.
224224
225225.. code-block :: yaml
226226
227227 ironic_resource_class: "example_resource_class"
228228 ironic_redfish_system_id: "example_system_id"
229229 ironic_redfish_verify_ca: "{{ inspector_rule_var_redfish_verify_ca }}"
230230
231- The third group type will be the rack where the node is installed. This is the
231+ The third group type is the rack where the node is installed. This is the
232232group in which the rack-specific networking configuration is defined and
233233where the BMC address is entered as a host variable for each baremetal node.
234234Nodes can now be entered directly into the hosts file as part of this group.
@@ -262,34 +262,34 @@ invoking the Kayobe command
262262
263263.. code-block :: console
264264
265- (kayobe) $ kayobe baremetal compute register
265+ kayobe baremetal compute register
266266
267267 All nodes that were not previously defined in Ironic should have been enrolled
268268by this playbook and should now be in ‘manageable’ state, provided Ironic was
269269able to reach the BMC of each node. We will need to inspect the baremetal nodes
270270to gather information about their hardware to prepare for deployment. Kayobe
271- provides an inspection workflow and can be run using:
271+ provides an inspection command, which can be run using:
272272
273273.. code-block :: console
274274
275- (kayobe) $ kayobe baremetal compute inspect
275+ kayobe baremetal compute inspect
276276
277277 Inspection requires PXE booting the nodes into IPA. If the nodes were able
278278to PXE boot properly, they will return to ‘manageable’ state. If an error
279279occurred during PXE booting, the nodes will be in ‘inspect failed’ state
280280and issues preventing the node from booting or returning introspection data
281281will need to be resolved before continuing. If the nodes did inspect properly,
282- they can be cleaned and made available to Nova by running the provide workflow .
282+ they can be cleaned and made available to Nova by running the provide command .
283283
284284.. code-block :: console
285285
286- (kayobe) $ kayobe baremetal compute provide
286+ kayobe baremetal compute provide
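
Once cleaning has finished, the nodes should show up as ``available``. A quick
way to confirm this (a sketch using the bare OpenStack CLI rather than Kayobe):

.. code-block:: console

   openstack baremetal node list --provision-state available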
287287
288288 Baremetal hypervisors
289289=====================
290290
291291Nodes that will not be dedicated as baremetal tenant nodes can be converted
292- into hypervisors as required. StackHPC Kayobe configuration provides a workflow
292+ into hypervisors as required. StackHPC Kayobe configuration provides a command
293293to provision baremetal tenants with the purpose of converting these nodes to
294294hypervisors. To begin the process of converting nodes we will need to define a
295295child group of the rack which will contain baremetal nodes dedicated to compute
@@ -314,10 +314,10 @@ hosts.
314314 rack1-compute
315315
316316 The rack1-compute group as shown above is also associated with the Kayobe
317- compute group in order for Kayobe to run the compute Kolla workflows on these
318- nodes during service deployment.
317+ compute group in order for Kayobe to deploy compute services during Kolla
318+ service deployment.
319319
320- You will also need to setup the Kayobe network configuration for the rack1
320+ You will also need to set up the Kayobe network configuration for the rack1
321321group. In ``networks.yml`` you should create an admin network for the rack1 group
322322with the correct CIDR for the rack being deployed.
323323The configuration in ``networks.yml`` should resemble the following:
@@ -328,7 +328,7 @@ The configuration should resemble below in networks.yml:
328328 physical_rack1_admin_oc_net_gateway: "172.16.208.129"
329329 physical_rack1_admin_net_defroute: true
330330
331- You will also need to configure a neutron network for racks to deploy instances
331+ You will also need to configure a Neutron network for racks to deploy instances
332332on; we can configure this in openstack-config as before. We will need to define
333333this network and associate a subnet with it for each rack we want to enroll in
334334Ironic.
@@ -356,8 +356,8 @@ Ironic.
356356 allocation_pool_end: "172.16.208.130"
357357
358358 The subnet configuration largely resembles the Kayobe network configuration,
359- however we do not need to define an allocation pool or enable dhcp as we will
360- be associating neutron ports with our hypervisor instances per IP address to
359+ however we do not need to define an allocation pool or enable DHCP as we will
360+ be associating Neutron ports with our hypervisor instances by IP address to
361361ensure they match up properly.
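
For illustration, this is roughly what such a port association looks like when
done by hand with the OpenStack CLI; the network, subnet, port name and address
below are placeholders, and the deployment playbook normally creates these
ports for you.

.. code-block:: console

   openstack port create --network rack1-network \
     --fixed-ip subnet=rack1-subnet,ip-address=172.16.208.135 \
     rack1-hypervisor1-admin-port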
362362
363363Now we should ensure the network interfaces are properly configured for the
@@ -379,9 +379,9 @@ for rack1 and the kayobe internal API network and be defined in the group_vars.
379379 internal_net_interface: "br0.{{ internal_net_vlan }}"
380380
381381 We should also ensure some variables are configured properly for our group,
382- such as the hypervisor image. These variables can be defined anywhere in
383- group_vars, we can place them in the ironic-vars file we used before for
384- baremetal node registration.
382+ such as the hypervisor image. These variables can be defined in group_vars;
383+ we can place them in the ``ironic-vars`` file we used before for baremetal node
384+ registration.
385385
386386.. code-block :: yaml
387387
@@ -397,7 +397,7 @@ baremetal node registration.
397397 project_name : " {{ lookup('env', 'OS_PROJECT_NAME') }}"
398398
399399 With these variables defined we can now begin deploying the baremetal nodes as
400- instances, to begin we invoke the deploy-baremetal-instance ansible playbook.
400+ instances. To begin, we invoke the ``deploy-baremetal-instance`` Ansible playbook.
401401
402402.. code-block :: console
403403
@@ -418,48 +418,43 @@ Neutron port configured with the address of the baremetal node admin network.
418418The baremetal hypervisors will then be imaged and deployed, associated with that
419419Neutron port. You should ensure that all nodes are correctly associated with
420420the right baremetal instance; you can do this by running ``openstack baremetal node show``
421- on any given hypervisor node and comparing the server uuid to the metadata on
421+ on any given hypervisor node and comparing the server UUID to the metadata on
422422the Nova instance.
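
For example, assuming hypothetical node and instance names, the two can be
cross-checked with:

.. code-block:: console

   openstack baremetal node show rack1-node-01 -f value -c instance_uuid
   openstack server show rack1-hypervisor-01 -f value -c id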
423423
424424Once the nodes are deployed, we can use Kayobe to configure them as compute
425- hosts, running kayobe overcloud host configure on these nodes will ensure that
426- all networking, package and various other host configurations are setup
425+ hosts. More information about Kayobe host configuration is available in the
426+ :kayobe-doc:`upstream Kayobe documentation <configuration/reference/hosts.html>`.
427427
428428.. code-block :: console
429429
430430 kayobe overcloud host configure --limit baremetal-<YOUR_BAREMETAL_HARDWARE_TYPE>
431431
432432 Following host configuration we can begin deploying OpenStack services to the
433- baremetal hypervisors by invoking kayobe overcloud service deploy. Nova
434- services will be deployed to the baremetal hosts.
435-
436- .. code-block :: console
437-
438- kayobe overcloud service deploy --kolla-limit baremetal-<YOUR_BAREMETAL_HARDWARE_TYPE>
433+ baremetal hypervisors by invoking ``kayobe overcloud service deploy``.
439434
440435Un-enrolling hypervisors
441436========================
442437
443438To convert baremetal hypervisors into regular baremetal compute instances you
444- will need to drain the hypervisor of all running compute instances, you should
445- first invoke the nova-compute-disable playbook to ensure all Nova services on
439+ will need to drain the hypervisor of all running compute instances. First, invoke
440+ the ``nova-compute-disable.yml`` Ansible playbook to ensure all Nova services on
446441the baremetal node are disabled and compute instances will not be allocated to
447442this node.
448443
449444.. code-block :: console
450445
451- (kayobe) $ kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/nova-compute-disable.yml
446+ kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/nova-compute-disable.yml
452447
453448 Now that the Nova services are disabled, you should also ensure any existing compute
454449instances are moved elsewhere by invoking the ``nova-compute-drain`` playbook:
455450
456451.. code-block :: console
457452
458- (kayobe) $ kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/nova-compute-drain.yml
453+ kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/nova-compute-drain.yml
459454
460- Now the node has no instances allocated to it you can delete the instance using
461- the OpenStack CLI and the node will be moved back to ``available `` state.
455+ Now that the node has no instances allocated to it, you can delete the baremetal instance
456+ using the OpenStack CLI, and the node will be moved back to the ``available`` state.
462457
463458.. code-block :: console
464459
465- (os-venv) $ openstack server delete ...
460+ openstack server delete ...
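
Once any automated cleaning has completed, the node's return to the
``available`` state can be confirmed with (the node name is a placeholder):

.. code-block:: console

   openstack baremetal node show rack1-node-01 -f value -c provision_state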