* doc updates
* run tools on Linux
* CloudTrail difference
* notes on sentinel role
* doc to move alb forwarding rule to new table
* add note about enableOptInRegions
**src/mkdocs/docs/lza-upgrade/comparison/feature-specific-considerations.md** (20 additions, 3 deletions)
@@ -86,10 +86,21 @@ If you are using ALB IP Forwarding in ASEA, (`"alb-forwarding": true` is set for…

```
- ca-central-1
```

Once the Customizations stage of the pipeline has been successfully run with the configuration file above, a new DynamoDB table will be generated in the `deploymentTargets` account and region specified. This table should be named `Alb-Ip-Forwarding-<VPC_NAME>`. In the same region and account, a DynamoDB table named `<ASEA-Prefix>-Alb-Ip-Forwarding-<VPC-ID>` should exist. You will need to copy all of the entries from the old ALB IP Forwarding table to the new one.
For more details about ALB Forwarding in LZA, refer to the [post-deployment instructions of LZA CCCS Medium reference architecture](https://github.com/aws-samples/landing-zone-accelerator-on-aws-for-cccs-medium/blob/main/post-deployment.md#44-configure-application-load-balancer-forwarding).

#### Steps to copy the entries from the old to new Alb-Ip-Forwarding DynamoDB table
To move the ALB forwarding entries from the ASEA table to the LZA table, use the following procedure for each rule:

1. Retrieve the JSON content from the ASEA table and copy it to a text editor.
2. Edit the content to remove the `metadata` property. The metadata is added by the automation.
3. Edit the content to update the `priority` of the rule. Select a priority that is not already in use by other rules on the same Application Load Balancer. For example, if the priority was 10, you can change it to 11, assuming no existing rule uses priority 11. (Note: this is necessary because both automations are running in parallel at this time.)
4. Add the edited entry to the LZA table.
5. Wait 1-2 minutes and refresh the content of the table. The `metadata` property should have been added. In the EC2 console, go to the ALB Listener Rules and confirm that a new rule with the new priority was added. This rule should be an exact copy of the previous one.
6. Once you confirm the new rule was added, remove the entry from the ASEA table.
7. Repeat the process for the remaining entries until the ASEA table is empty.
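Steps 2 and 3 above can be sketched as a small helper that prepares an ASEA entry for the LZA table. This is an illustrative sketch only: the `metadata` and `priority` attribute names come from the steps above, but the rest of the entry shape (and the sample values) is a hypothetical example, not the actual table schema.

```python
def prepare_lza_entry(asea_entry: dict, new_priority: int) -> dict:
    """Prepare an ASEA Alb-Ip-Forwarding entry for the LZA table.

    Drops the automation-managed `metadata` property and assigns a
    priority that is not already in use on the same ALB.
    """
    entry = {key: value for key, value in asea_entry.items() if key != "metadata"}
    entry["priority"] = new_priority
    return entry

# Hypothetical entry copied from the ASEA table (attribute names assumed).
old_entry = {
    "id": "rule-1",
    "priority": 10,
    "targetGroupArn": "arn:aws:elasticloadbalancing:...",  # placeholder
    "metadata": {"managedBy": "ASEA"},  # added by the automation; must be removed
}
new_entry = prepare_lza_entry(old_entry, new_priority=11)
# new_entry keeps every attribute except `metadata`, with priority 11,
# and is ready to be added to the LZA table (step 4).
```

The actual copy (steps 1 and 4) is done through the DynamoDB console or `aws dynamodb get-item` / `put-item`; the helper only captures the edits made in the text editor.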
### Managed Active Directory

!!! note "convert-config warning message"
@@ -337,6 +348,12 @@ If an assume role policy is needed outside of the scope of what's natively supported…

- Create your own CloudFormation template and add it to the `customizations-config.yaml` file, which will be generated in the LZA Configuration CodeCommit repository in the root directory.
#### Microsoft Sentinel Role

If you created a role for the Microsoft Sentinel S3 Connector [using the ASEA documentation](https://aws-samples.github.io/aws-secure-environment-accelerator/latest/faq/#how-do-i-create-a-role-for-use-by-azure-sentinel-using-the-new-s3-connector-method), the trust policy format is not supported by LZA and won't be converted properly. Microsoft now recommends that you [Create an Open ID Connect (OIDC) web identity provider and an AWS assumed role](https://learn.microsoft.com/en-us/azure/sentinel/connect-aws?tabs=s3#create-an-open-id-connect-oidc-web-identity-provider-and-an-aws-assumed-role) instead of the previous trust policy, which trusted an external Microsoft-managed AWS account.

We recommend creating a new role (outside ASEA/LZA) based on the latest recommendation and updating your Microsoft Sentinel S3 Connector with the new role ARN. Once you confirm the connector works as expected with the new role, you can decommission the previous role that was deployed by ASEA to avoid any issues during the upgrade.
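For reference, an OIDC-based web identity trust policy generally takes the following shape. This is an illustrative sketch of the standard AWS `AssumeRoleWithWebIdentity` trust policy format, not the exact policy from the Microsoft documentation: the `<ACCOUNT_ID>`, `<SENTINEL_TENANT_ID>`, and `<SENTINEL_AUDIENCE>` placeholders must be taken from your own environment and from the Sentinel connector setup instructions linked above.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/sts.windows.net/<SENTINEL_TENANT_ID>/"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "sts.windows.net/<SENTINEL_TENANT_ID>/:aud": "<SENTINEL_AUDIENCE>"
        }
      }
    }
  ]
}
```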
### Public and Private Hosted Zones

!!! note "convert-config warning message"

    _The VPC ${vpcItem.name} in account ${accountKey} utilizes a public Route53 zone: ${zone}. Please refer to documentation on how to manage these resources._ or _The VPC ${vpcItem.name} in OU ${ouKey} utilizes a public Route53 zone: ${zone}. Please refer to documentation on how to manage these resources._
**src/mkdocs/docs/lza-upgrade/comparison/index.md** (9 additions, 0 deletions)
@@ -78,6 +78,15 @@ In LZA, the forwarding rule and CloudWatch Log Groups are created in every account…

### ELB Access Logs

LZA creates new S3 buckets to store ELB access logs in every enabled region in the central logs account (e.g. `asea-elb-access-logs-<account>-<region>`). ASEA stored the ELB access logs in the `asea-logarchive-phase0-aes<region>-<suffix>` bucket. After the upgrade, the `ASEA-LZA-ELB_LOGGING_ENABLED` AWS Config Rule will update the logging destination of all existing ELBs to use the new LZA buckets.
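The new per-account, per-region bucket names follow the pattern shown above; a trivial helper makes the naming convention explicit. Illustrative only: the prefix comes from the example in the paragraph above.

```python
def lza_elb_logs_bucket(account_id: str, region: str) -> str:
    """Build the LZA ELB access logs bucket name for an account/region,
    following the pattern asea-elb-access-logs-<account>-<region>."""
    return f"asea-elb-access-logs-{account_id}-{region}"

# e.g. lza_elb_logs_bucket("111122223333", "ca-central-1")
# -> "asea-elb-access-logs-111122223333-ca-central-1"
```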
### CloudTrail Logs

LZA creates a new Trail with a configuration similar to the one used by ASEA. The ASEA Trail is removed during the finalization step after running `yarn run post-migration remove-org-cloudtrail`.

- The LZA trail uses the same S3 destination (central log bucket) but a different prefix (LZA: `cloudtrail-organization`; ASEA: the orgId, e.g. `0-a1a1a1aa1`)
- The LZA trail uses a different CloudWatch Log Group (LZA: `ASEA-cloudtrail-logs`; ASEA: `/ASEA/CloudTrail`)

!!! Important
    When using AWS Control Tower, the main management event trail is managed by Control Tower and is not affected by the upgrade. In that case, the above comments only apply to the ASEA-managed Trail for S3 data events. When NOT using AWS Control Tower, ASEA manages a single Trail with management events and S3 data events.
## Customer Managed Keys

There are differences between how ASEA and LZA manage AWS KMS keys to provide encryption at rest capabilities for resources deployed by the solution. Detailed documentation is available in the [Customer Managed Keys - Comparison of ASEA and LZA](./kms.md) document.
**src/mkdocs/docs/lza-upgrade/preparation/prereq-config.md** (3 additions, 1 deletion)
@@ -20,7 +20,9 @@ Before running the upgrade tools, ensure you meet the following requirements:

!!! note "Environment Requirements"
    ✅ **Recommended Environment:** Linux or MacOS with a Bash-like shell

    ⚠️ **Important Note:** Windows compatibility is limited, as the tools have not been extensively tested on this platform. All the upgrade tools SHOULD be run in a Unix-based shell.

You can use an EC2 instance in your AWS accounts to run the tools.
**src/mkdocs/docs/lza-upgrade/upgrade/finalize.md** (10 additions, 1 deletion)
@@ -22,14 +22,15 @@ This step will perform post upgrade actions, which include the following:

- Marks duplicate CloudWatch Metrics resources for removal. `remove-cloudwatch-metrics`
- Marks duplicate Budget resources for removal. `remove-budgets`
- Marks duplicate logging resources for removal. `remove-logging`
- Marks duplicate CloudTrail configurations for removal. `remove-org-cloudtrail`

Each of the above steps has a corresponding flag that can be set during the post-migration step. These flags determine which actions are performed by the post-migration step.
After the commands have been run, go to the CodePipeline console and release the `ASEA-Pipeline`. Resources that have been flagged for removal will be deleted in the `ImportAseaResources` stage.
@@ -43,6 +44,14 @@ Change the setting in the `global-config.yaml` file and run the LZA pipeline.

```
terminationProtection: true
```

## Use of Opt-in regions

If you have AWS Opt-in regions, such as `ca-west-1`, enabled in your landing zone, you should set the [enableOptInRegions](https://awslabs.github.io/landing-zone-accelerator-on-aws/latest/typedocs/interfaces/___packages__aws_accelerator_config_lib_models_global_config.IGlobalConfig.html#enableOptInRegions) option by adding the following line to your `global-config.yaml` file. This will ensure the opt-in regions are automatically enabled when you create new accounts.

```
enableOptInRegions: true
```
## Upgrade complete

At this point the upgrade to LZA is complete. Further updates to the environment will require updating the LZA configuration and then executing the LZA pipeline.