diff --git a/content/authors.en.md b/content/authors.en.md
index 4dfc92bd..4a3aa8fa 100644
--- a/content/authors.en.md
+++ b/content/authors.en.md
@@ -13,6 +13,15 @@ weight: 100
 1. Sean Shriver ([switch180](https://github.com/switch180)) - Ported the whole lab to amazon-dynamodb-labs.com with a custom Hugo theme. Made the "bullet-proof" CloudFormation template for the lab. Updated the hands on lab to Python3
 1. Daniel Yoder ([danielsyoder](https://github.com/danielsyoder)) - The brains behind amazon-dynamodb-labs.com and the co-creator of the design scenarios
+### 2025 additions
+Cloud9 was removed from all workshops when it reached end of life (October 2025):
+1. Esteban Serna ([@tebanieo](https://github.com/tebanieo)) - Primary author and merger
+
+The database modernizer workshop was released in August 2025:
+1. Esteban Serna ([@tebanieo](https://github.com/tebanieo)) - Primary author and merger
+2. John Terhune - ([@terhunej](https://github.com/terhunej)) - Editor and tech reviewer
+3. Sean Shriver - ([@switch180](https://github.com/switch180)) - Tech reviewer
+
 ### 2024 additions
 The Generative AI workshop LBED was released in early 2024:
 1. John Terhune - ([@terhunej](https://github.com/terhunej)) - Primary author
diff --git a/content/design-patterns/ex1capacity/Step2.en.md b/content/design-patterns/ex1capacity/Step2.en.md
index fc935755..d99bb8a4 100644
--- a/content/design-patterns/ex1capacity/Step2.en.md
+++ b/content/design-patterns/ex1capacity/Step2.en.md
@@ -7,7 +7,7 @@ weight: 3
 Now that you have created the table, you can load some sample data into the table by running the following Python script.
 ```bash
-cd /home/ubuntu/workshop
+cd /home/participant/workshop/LADV
 python load_logfile.py logfile ./data/logfile_small1.csv
 ```
 The parameters in the preceding command: 1) Table name = `logfile` 2) File name = `logfile_small1.csv`
diff --git a/content/design-patterns/setup/Step1.en.md b/content/design-patterns/setup/Step1.en.md
index 4d842fd9..054e8db2 100644
--- a/content/design-patterns/setup/Step1.en.md
+++ b/content/design-patterns/setup/Step1.en.md
@@ -4,32 +4,30 @@ date: 2019-12-02T10:07:45-08:00
 weight: 10
 ---
 
-1. Once you've gained access to the AWS Management Console for the lab, double check the region is correct and the role name **WSParticipantRole** appears on the top right of the console.
-1. In the services search bar, search for **Systems Manager** and click on it to open the AWS Systems Manager section of the AWS Management Console.
-1. In the AWS Systems Manager console, locate the menu in the left, identify the section **Node Management** and select **Session Manager** from the list.
-1. Choose **Start session** to launch a shell session.
-1. Click the radio button to select the EC2 instance for the lab. If you see no instance, wait a few minutes and then click refresh. Wait until an ec2 instance with name of `DynamoDBC9` is available before continuing. Select the instance.
-1. Click the **Start Session** button (This action will open a new tab in your browser with a new black shell).
-1. In the new black shell, switch to the ubuntu account by running `sudo su - ubuntu`
-    ```bash
-    sudo su - ubuntu
-    ```
-1. run `shopt login_shell` and be sure it says `login_shell on` and then change into the workshop directory.
-    ```bash
-    #Verify login_shell is 'on'
-    shopt login_shell
-    #Change into the workshop directory
-    cd ~/workshop/
-    ```
-
-
-The output of your commands in the Session Manager session should look like the following:
-    ```bash
-    $ sudo su - ubuntu
-    :~ $ #Verify login_shell is 'on'
-    shopt login_shell
-    #Change into the workshop directory
-    cd ~/workshop/
-    login_shell on
-    :~/workshop $
-    ```
+During the first 60 seconds, the environment will automatically update extensions and plugins. Any startup notification can be safely dismissed.
+
+![VS Code Setup](/static/images/common/common-vs-code-01.png)
+
+If a terminal is not already open at the bottom of your screen, open a new one as shown in the following picture.
+
+![VS Code Setup](/static/images/common/common-vs-code-02.png)
+
+Then run the command `aws sts get-caller-identity` to verify that your AWS credentials have been properly configured.
+
+![VS Code Setup](/static/images/common/common-vs-code-03.png)
+
+Change into the LADV directory and browse its contents:
+
+```shell
+cd LADV
+```
+
+```shell
+participant:~/workshop/LADV$ ls
+data iam-trust-relationship.json load_logfile_parallel.py query_responsecode.py scan_logfile_simple.py
+ddbreplica_lambda.py lab_config.py query_city_dept.py requirements.txt
+gsi_city_dept.json load_employees.py query_employees.py scan_for_managers.py
+gsi_manager.json load_invoice.py query_index_invoiceandbilling.py scan_for_managers_gsi.py
+iam-role-policy.json load_logfile.py query_invoiceandbilling.py scan_logfile_parallel.py
+participant:~/workshop/LADV$
+```
diff --git a/content/design-patterns/setup/Step2.en.md b/content/design-patterns/setup/Step2.en.md
index 61fc7345..0334ffdb 100644
--- a/content/design-patterns/setup/Step2.en.md
+++ b/content/design-patterns/setup/Step2.en.md
@@ -14,7 +14,7 @@ python --version
 Output:
 ```plain
-Python 3.10.12
+Python 3.13.9
 ```
 
 **Note: The major and minor version of Python may vary from what you see above**
@@ -29,7 +29,7 @@ Sample output:
 ```bash
 #Note that your linux kernel version may differ from the example.
-aws-cli/2.13.26 Python/3.11.6 Linux/6.2.0-1013-aws exe/x86_64.ubuntu.22 prompt/off
+aws-cli/2.31.24 Python/3.13.7 Linux/6.1.155-176.282.amzn2023.aarch64 exe/aarch64.amzn.2023
 ```
 
 ::alert[_Make sure you have AWS CLI version 2.x or higher and python 3.10 or higher before proceeding. If you do not have these versions, you may have difficulty successfully completing the lab._]
diff --git a/content/design-patterns/setup/Step4.en.md b/content/design-patterns/setup/Step4.en.md
index fbb821d6..96de0953 100644
--- a/content/design-patterns/setup/Step4.en.md
+++ b/content/design-patterns/setup/Step4.en.md
@@ -7,43 +7,38 @@ weight: 40
 On the EC2 instance, go to the workshop folder and run the ls command:
 ```bash
-cd /home/ubuntu/workshop
+cd /home/participant/workshop/LADV
 ls -l .
 ```
 
 The following list indicates the folder structure and the files that will be used during the workshop:
 ```bash
-. 
-├── data -│ ├── employees.csv -│ ├── invoice-data2.csv -│ ├── invoice-data.csv -│ ├── logfile_medium1.csv -│ ├── logfile_medium2.csv -│ ├── logfile_small1.csv -│ └── logfile_stream.csv -├── ddbreplica_lambda.py -├── ddb-replication-role-arn.txt -├── gsi_city_dept.json -├── gsi_manager.json -├── iam-role-policy.json -├── iam-trust-relationship.json -├── lab_config.py -├── load_employees.py -├── load_invoice.py -├── load_logfile_parallel.py -├── load_logfile.py -├── query_city_dept.py -├── query_employees.py -├── query_index_invoiceandbilling.py -├── query_invoiceandbilling.py -├── query_responsecode.py -├── requirements.txt -├── scan_for_managers_gsi.py -├── scan_for_managers.py -├── scan_logfile_parallel.py -└── scan_logfile_simple.py + +participant:~/workshop/LADV$ ls -l . +total 80 +drwxr-xr-x. 2 participant participant 182 Oct 29 16:30 data +-rw-r--r--. 1 participant participant 1275 Sep 10 22:37 ddbreplica_lambda.py +-rw-r--r--. 1 participant participant 438 Sep 10 22:37 gsi_city_dept.json +-rw-r--r--. 1 participant participant 438 Sep 10 22:37 gsi_manager.json +-rw-r--r--. 1 participant participant 865 Sep 10 22:37 iam-role-policy.json +-rw-r--r--. 1 participant participant 205 Sep 10 22:37 iam-trust-relationship.json +-rw-r--r--. 1 participant participant 94 Sep 10 22:37 lab_config.py +-rw-r--r--. 1 participant participant 3845 Sep 10 22:37 load_employees.py +-rw-r--r--. 1 participant participant 2198 Sep 10 22:37 load_invoice.py +-rw-r--r--. 1 participant participant 1763 Sep 10 22:37 load_logfile.py +-rw-r--r--. 1 participant participant 3101 Sep 10 22:37 load_logfile_parallel.py +-rw-r--r--. 1 participant participant 1466 Sep 10 22:37 query_city_dept.py +-rw-r--r--. 1 participant participant 1071 Sep 10 22:37 query_employees.py +-rw-r--r--. 1 participant participant 2547 Sep 10 22:37 query_index_invoiceandbilling.py +-rw-r--r--. 1 participant participant 2341 Sep 10 22:37 query_invoiceandbilling.py +-rw-r--r--. 1 participant participant 1887 Sep 10 22:37 query_responsecode.py +-rw-r--r--. 1 participant participant 32 Sep 10 22:37 requirements.txt +-rw-r--r--. 1 participant participant 1287 Sep 10 22:37 scan_for_managers.py +-rw-r--r--. 1 participant participant 1157 Sep 10 22:37 scan_for_managers_gsi.py +-rw-r--r--. 1 participant participant 2019 Sep 10 22:37 scan_logfile_parallel.py +-rw-r--r--. 
1 participant participant 1278 Sep 10 22:37 scan_logfile_simple.py
+participant:~/workshop/LADV$
 ```
 
 Python code:
diff --git a/content/design-patterns/setup/Step5.en.md b/content/design-patterns/setup/Step5.en.md
index 06bbf0f7..61801cb5 100644
--- a/content/design-patterns/setup/Step5.en.md
+++ b/content/design-patterns/setup/Step5.en.md
@@ -24,10 +24,11 @@ The Server Logs file has the following structure:
 - bytessent (number)
 - useragent (string)
 
-To view a sample record in the file, execute:
-```bash
-head -n1 ./data/logfile_small1.csv
-```
+To view a sample record in the file, click the file in the panel on the left side:
+
+![Small file](/static/images/ladv-small-file.png)
+
+
 Sample log record:
 ```csv
 1,66.249.67.3,2017-07-20,20,GMT-0700,GET,"/gallery/main.php?g2_controller=exif.SwitchDetailMode&g2_mode=detailed&g2_return=%2Fgallery%2Fmain.php%3Fg2_itemId%3D15741&g2_returnName=photo",302,5,"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
@@ -47,9 +48,7 @@ The Employees data file has the following structure:
 - is a manager (string), 1 for manager employees, non-existent for others
 
-To view a sample record in the file, execute:
-```bash
-head -n1 ./data/employees.csv
-```
+To view a sample record in the file, open `./data/employees.csv` from the panel on the left side:
+
 Sample employee record:
 ```csv
 1,Onfroi Greeno,Systems Administrator,Operation,Portland,OR,1992-03-31,2014-10-24,Application Support Analyst,2014-04-12
diff --git a/content/design-patterns/setup/aws-ws-event.en.md b/content/design-patterns/setup/aws-ws-event.en.md
index 1bebb5e6..2e8b664c 100644
--- a/content/design-patterns/setup/aws-ws-event.en.md
+++ b/content/design-patterns/setup/aws-ws-event.en.md
@@ -1,7 +1,7 @@
 ---
 title: "Start: At an AWS Hosted Event"
 date: 2019-12-02T07:05:12-08:00
-weight: 3
+weight: 4
 chapter: true
 ---
 
@@ -26,6 +26,10 @@ chapter: true
 7. Select on **I agree with the Terms and Conditions** on the bottom of the next page and click **Join event** to continue to the event dashboard.
 8. On the event dashboard, click on **Open AWS console** to federate into AWS Management Console in a new tab. On the same page, click **Get started** to open the workshop instructions.
-![Event dashboard](/static/images/aws-ws-event5.png)
+![Event dashboard](/static/images/common/workshop-studio-01.png)
 
-9. Now that you are connected continue on to: :link[Step 1]{href="/design-patterns/setup/Step1"}.
+9. In addition to the AWS console, open your Visual Studio Code server by clicking the `VSCodeServerURL` link in the "Event Outputs" section. When prompted for a password, use the value of `VSCodeServerPassword`.
+
+![Event dashboard](/static/images/common/workshop-studio-02.png)
+
+10. Continue with the steps as listed in the section :link[Launch Visual Studio Code]{href="/design-patterns/setup/step1"}.
\ No newline at end of file
diff --git a/content/design-patterns/setup/user-account.en.md b/content/design-patterns/setup/user-account.en.md
index 10d4e548..a53d6d8b 100644
--- a/content/design-patterns/setup/user-account.en.md
+++ b/content/design-patterns/setup/user-account.en.md
@@ -5,25 +5,30 @@ chapter: true
 ---
 
-::alert[These setup instructions are identitical for LADV, LHOL, LMR, LBED, and LGME - all of which use the same Cloud9 template. Only complete this section once, and only if you're running it on your own account.]{type="warning"}
-::alert[Only complete this section if you are running the workshop on your own. 
If you are at an AWS hosted event (such as re\:Invent, Immersion Day, etc), go to :link[At an AWS hosted Event]{href="/design-patterns/setup/aws-ws-event"}]
+::alert[These setup instructions are identical for LADV, LHOL, LBED, LMR, and LGME - all of which use the same Visual Studio Code template. Only complete this section once, and only if you're running it on your own account.]{type="warning"}
+
+::alert[Only complete this section if you are running the workshop on your own. If you are at an AWS hosted event (such as re\:Invent, Immersion Day, etc), go to :link[At an AWS hosted Event]{href="/hands-on-labs/setup/aws-ws-event"}]
 
 ## Launch the CloudFormation stack
 
 ::alert[During the course of the lab, you will make DynamoDB tables that will incur a cost that could approach tens or hundreds of dollars per day. Ensure you delete the DynamoDB tables using the DynamoDB console, and make sure you delete the CloudFormation stack as soon as the lab is complete.]
 
-1. Launch the CloudFormation template in US West 2 to deploy the resources in your account: [![CloudFormation](/static/images/cloudformation-launch-stack.png)](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=DynamoDBID&templateURL=:param{key="design_patterns_s3_lab_yaml"})
-    1. *Optionally, download [the YAML template](:param{key="design_patterns_s3_lab_yaml"}) and launch it your own way*
+1. **[Deprecated]** - Launch the CloudFormation template in US West 2 to deploy the resources in your account: [![CloudFormation](/static/images/cloudformation-launch-stack.png)](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=DynamoDBID&templateURL=:param{key="design_patterns_s3_lab_yaml"})
+
+1. *Optionally, download [the YAML template](https://github.com/aws-samples/aws-dynamodb-examples/blob/master/workshops/modernizer/modernizer-db.yaml) from our GitHub repository and launch it your own way*
 1. Click *Next* on the first dialog.
-1. In the Parameters section, note the *Timeout* is set to zero. This means the Cloud9 instance will not sleep; you may want to change this manually to a value such as 60 to protect against unexpected charges if you forget to delete the stack at the end.
-    Leave the *WorkshopZIP* parameter unchanged and click *Next*
-![CloudFormation parameters](/static/images/awsconsole1.png)
+1. Provide a CloudFormation stack name.
+
+1. In the Parameters section, note that *AllowedIP* contains a default IP address. If you want to access the instance via SSH, replace it with your own public IP address, making sure to add the `/32` network mask at the end. Do not modify any other parameter, and click *Next*.
+
+![CloudFormation parameters](/static/images/common/on-your-own-cf-01.png)
 
-1. Scroll to the bottom and click *Next*, and then review the *Template* and *Parameters*. When you are ready to create the stack, scroll to the bottom, check the box acknowledging the creation of IAM resources, and click *Create stack*.
-![CloudFormation parameters](/static/images/awsconsole2.png)
-    The stack will create a Cloud9 lab instance, a role for the instance, and a role for the AWS Lambda function used later on in the lab. It will use Systems Manager to configure the Cloud9 instance.
+6. Scroll to the bottom and click *Next*, and then review the *Template* and *Parameters*. When you are ready to create the stack, scroll to the bottom, check the box acknowledging the creation of IAM resources, and click *Create stack*.
+![CloudFormation parameters](/static/images/common/on-your-own-cf-02.png)
+
+    The stack will create a Visual Studio Code EC2 instance, a role for the instance, and a role for the AWS Lambda function used later on in the lab. The CloudFormation template also creates a set of folders that can be used to run the lab modules in this guide individually.
 
-1. After the CloudFormation stack is `CREATE_COMPLETE`, :link[continue onto Step 1]{href="/design-patterns/setup/Step1"}.
+7. After the CloudFormation stack is `CREATE_COMPLETE`, :link[Launch Visual Studio Code]{href="/design-patterns/setup/step1"}.
diff --git a/content/event-driven-architecture/setup/aws-ws-event.en.md b/content/event-driven-architecture/setup/aws-ws-event.en.md
index 044101a6..e5f76275 100644
--- a/content/event-driven-architecture/setup/aws-ws-event.en.md
+++ b/content/event-driven-architecture/setup/aws-ws-event.en.md
@@ -7,7 +7,7 @@
 ### Login to AWS Workshop Studio Portal
 
-1. If you are provided a one-click join link, skip to step 3.
+1. If you are provided a one-click join link, use it and skip to step 3.
 
 2. Visit [https://catalog.us-east-1.prod.workshops.aws](https://catalog.us-east-1.prod.workshops.aws). If you attended any other workshop earlier on this portal, please logout first. Click on **Get Started** on the right hand side of the window.
 ![Workshop Studio Landing Page](/static/images/aws-ws-event1.png)
@@ -26,6 +26,10 @@ chapter: true
 7. Select on **I agree with the Terms and Conditions** on the bottom of the next page and click **Join event** to continue to the event dashboard.
 8. On the event dashboard, click on **Open AWS console** to federate into AWS Management Console in a new tab. On the same page, click **Get started** to open the workshop instructions.
-![Event dashboard](/static/images/aws-ws-event5.png)
+![Event dashboard](/static/images/common/workshop-studio-01.png)
+
+9. In addition to the AWS console, open your Visual Studio Code server by clicking the `VSCodeServerURL` link in the "Event Outputs" section. When prompted for a password, use the value of `VSCodeServerPassword`.
+
+![Event dashboard](/static/images/common/workshop-studio-02.png)
 
 Now that you are set up, continue on to: :link[Exercise 1: Overview]{href="/event-driven-architecture/ex1overview"}.
diff --git a/content/event-driven-architecture/setup/user-account.en.md b/content/event-driven-architecture/setup/user-account.en.md
index 13a3e2bd..6e93ec7c 100644
--- a/content/event-driven-architecture/setup/user-account.en.md
+++ b/content/event-driven-architecture/setup/user-account.en.md
@@ -5,19 +5,48 @@ chapter: true
 ---
 
+::alert[These setup instructions are identical for LADV, LHOL, LBED, LMR, and LGME - all of which use the same Visual Studio Code template. Only complete this section once, and only if you're running it on your own account.]{type="warning"}
+
+::alert[Only complete this section if you are running the workshop on your own. If you are at an AWS hosted event (such as re\:Invent, Immersion Day, etc), go to :link[At an AWS hosted Event]{href="/hands-on-labs/setup/aws-ws-event"}]
 
-::alert[Only complete this section if you are running the workshop on your own. 
If you are at an AWS hosted event (such as re\:Invent, Immersion Day, etc), go to :link[At an AWS hosted Event]{href="/event-driven-architecture/setup/start-here/aws-ws-event"}]
 
 ## Launch the CloudFormation stack
 
-::alert[During the course of the lab, you will make DynamoDB tables that will incur a cost that could approach tens or hundreds of dollars per day. Ensure you delete the DynamoDB tables using the DynamoDB console, and make sure you delete the CloudFormation stack as soon as the lab is complete.]{type="warning"}
+::alert[During the course of the lab, you will make DynamoDB tables that will incur a cost that could approach tens or hundreds of dollars per day. Ensure you delete the DynamoDB tables using the DynamoDB console, and make sure you delete the CloudFormation stack as soon as the lab is complete.]
+
+1. **[Deprecated]** - Launch the CloudFormation template in US West 2 to deploy the resources in your account: [![CloudFormation](/static/images/cloudformation-launch-stack.png)](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=DynamoDBID&templateURL=:param{key="design_patterns_s3_lab_yaml"})
 
-1. Launch the CloudFormation template in US West 2 to deploy the resources in your account: [![CloudFormation](/static/images/cloudformation-launch-stack.png)](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=amazon-dynamodb-labs&templateURL=:param{key="event_driven_architecture_lab_yaml"})
-    1. *Optionally, download [the YAML template](:param{key="event_driven_architecture_lab_yaml"}) and launch it your own way*
+1. *Optionally, download [the YAML template](https://github.com/aws-samples/aws-dynamodb-examples/blob/master/workshops/modernizer/modernizer-db.yaml) from our GitHub repository and launch it your own way*
 1. Click *Next* on the first dialog.
-1. Scroll to the bottom and click *Next*, and then review the *Template*. When you are ready to create the stack, scroll to the bottom, check the box acknowledging the creation of IAM resources, and click *Create stack*.
-![CloudFormation parameters](/static/images/awsconsole2.png)
-    The stack will create DynamoDB tables, Lambda functions, Kinesis streams, and IAM roles and policies which will be used later on in the lab.
+1. Provide a CloudFormation stack name.
+
+1. In the Parameters section, note that *AllowedIP* contains a default IP address. If you want to access the instance via SSH, replace it with your own public IP address, making sure to add the `/32` network mask at the end. Do not modify any other parameter, and click *Next*.
+
+![CloudFormation parameters](/static/images/common/on-your-own-cf-01.png)
+
+6. Scroll to the bottom and click *Next*, and then review the *Template* and *Parameters*. When you are ready to create the stack, scroll to the bottom, check the box acknowledging the creation of IAM resources, and click *Create stack*.
+
+![CloudFormation parameters](/static/images/common/on-your-own-cf-02.png)
+
+    The stack will create a Visual Studio Code EC2 instance, a role for the instance, and a role for the AWS Lambda function used later on in the lab. The CloudFormation template also creates a set of folders that can be used to run the lab modules in this guide individually.
 
 1. After the CloudFormation stack is `CREATE_COMPLETE`, :link[continue onto the overview]{href="/event-driven-architecture/ex1overview"}.
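+
+If you prefer to script the launch instead of using the console wizard, the same stack can be created with the AWS CLI. The following is a minimal sketch, not the official procedure: `TEMPLATE_URL` is a placeholder for an S3 URL of the template above, and `203.0.113.10/32` is an example `AllowedIP` value that you must replace with your own public IP address plus a `/32` mask.
+
+```bash
+# Launch the workshop stack in us-west-2, acknowledging IAM resource creation
+aws cloudformation create-stack \
+  --region us-west-2 \
+  --stack-name DynamoDBID \
+  --template-url "$TEMPLATE_URL" \
+  --parameters ParameterKey=AllowedIP,ParameterValue=203.0.113.10/32 \
+  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM
+
+# Block until the stack reaches CREATE_COMPLETE
+aws cloudformation wait stack-create-complete --region us-west-2 --stack-name DynamoDBID
+
+# When the lab is finished, delete the stack to stop incurring charges
+aws cloudformation delete-stack --region us-west-2 --stack-name DynamoDBID
+```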
diff --git a/content/global-serverless-application/getting_started/aws-ws-event.en.md b/content/global-serverless-application/getting_started/aws-ws-event.en.md
index 672bd3a6..a23aa1d3 100644
--- a/content/global-serverless-application/getting_started/aws-ws-event.en.md
+++ b/content/global-serverless-application/getting_started/aws-ws-event.en.md
@@ -7,30 +7,29 @@
 ### Login to AWS Workshop Studio Portal
 
-1. If you are provided a one-click join link, skip to step 3.
+1. If you are provided a one-click join link, use it and skip to step 3.
 
 2. Visit [https://catalog.us-east-1.prod.workshops.aws](https://catalog.us-east-1.prod.workshops.aws). If you attended any other workshop earlier on this portal, please logout first. Click on **Get Started** on the right hand side of the window.
-
 ![Workshop Studio Landing Page](/static/images/aws-ws-event1.png)
 
 3. On the next, **Sign in** page, choose **Email One-Time Passcode (OTP)** to sign in to your workshop page.
-
 ![Sign in page](/static/images/aws-ws-event2.png)
 
 4. Provide an email address to receive a one-time passcode.
-
 ![Email address input](/static/images/aws-ws-event3.png)
 
 5. Enter the passcode that you received in the provided email address, and click **Sign in**.
 
 6. Next, in the textbox, enter the event access code (eg: abcd-012345-ef) that you received from the event facilitators. If you are provided a one-click join link, you will be redirected to the next step automatically.
-
 ![Event access code](/static/images/aws-ws-event4.png)
 
 7. Select on **I agree with the Terms and Conditions** on the bottom of the next page and click **Join event** to continue to the event dashboard.
 
 8. On the event dashboard, click on **Open AWS console** to federate into AWS Management Console in a new tab. On the same page, click **Get started** to open the workshop instructions.
+![Event dashboard](/static/images/common/workshop-studio-01.png)
+
+9. In addition to the AWS console, open your Visual Studio Code server by clicking the `VSCodeServerURL` link in the "Event Outputs" section. When prompted for a password, use the value of `VSCodeServerPassword`.
 
-![Event dashboard](/static/images/aws-ws-event5.png)
+![Event dashboard](/static/images/common/workshop-studio-02.png)
 
-9. Now that you are connected continue on to: :link[Module 1]{href="/global-serverless-application/module_1"}.
+10. Now that you are connected continue on to: :link[Module 1]{href="/global-serverless-application/module_1"}.
diff --git a/content/global-serverless-application/getting_started/on-your-own.en.md b/content/global-serverless-application/getting_started/on-your-own.en.md
index cb5c67d8..03f95c6e 100644
--- a/content/global-serverless-application/getting_started/on-your-own.en.md
+++ b/content/global-serverless-application/getting_started/on-your-own.en.md
@@ -5,26 +5,29 @@ chapter: true
 ---
 
-::alert[These setup instructions are identitical for LADV, LHOL, LBED, LMR, and LGME - all of which use the same Cloud9 template. Only complete this section once, and only if you're running it on your own account.]{type="warning"}
+::alert[These setup instructions are identical for LADV, LHOL, LBED, LMR, and LGME - all of which use the same Visual Studio Code template. Only complete this section once, and only if you're running it on your own account.]{type="warning"}
 
-::alert[Only complete this section if you are running the workshop on your own. 
If you are at an AWS hosted event (such as re\:Invent, Immersion Day, etc), go to :link[At an AWS hosted Event]{href="/global-serverless-application/getting_started/aws-ws-event"}]
+::alert[Only complete this section if you are running the workshop on your own. If you are at an AWS hosted event (such as re\:Invent, Immersion Day, etc), go to :link[At an AWS hosted Event]{href="/hands-on-labs/setup/aws-ws-event"}]
 
 ## Launch the CloudFormation stack
 
 ::alert[During the course of the lab, you will make DynamoDB tables that will incur a cost that could approach tens or hundreds of dollars per day. Ensure you delete the DynamoDB tables using the DynamoDB console, and make sure you delete the CloudFormation stack as soon as the lab is complete.]
 
-1. Launch the CloudFormation template in US West 2 to deploy the resources in your account: [![CloudFormation](/static/images/cloudformation-launch-stack.png)](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=DynamoDBID&templateURL=:param{key="design_patterns_s3_lab_yaml"})
-    *Optionally, download [the YAML template](:param{key="design_patterns_s3_lab_yaml"}) and launch it your own way*
+1. **[Deprecated]** - Launch the CloudFormation template in US West 2 to deploy the resources in your account: [![CloudFormation](/static/images/cloudformation-launch-stack.png)](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=DynamoDBID&templateURL=:param{key="design_patterns_s3_lab_yaml"})
 
-2. Click *Next* on the first dialog.
+1. *Optionally, download [the YAML template](https://github.com/aws-samples/aws-dynamodb-examples/blob/master/workshops/modernizer/modernizer-db.yaml) from our GitHub repository and launch it your own way*
 
-3. In the Parameters section, note the *Timeout* is set to zero. This means the Cloud9 instance will not sleep; you may want to change this manually to a value such as 60 to protect against unexpected charges if you forget to delete the stack at the end.
-Leave the *WorkshopZIP* parameter unchanged and click *Next*
+1. Click *Next* on the first dialog.
 
-![CloudFormation parameters](/static/images/awsconsole1.png)
+1. Provide a CloudFormation stack name.
 
-4. Scroll to the bottom and click *Next*, and then review the *Template* and *Parameters*. When you are ready to create the stack, scroll to the bottom, check the box acknowledging the creation of IAM resources, and click *Create stack*.
-
-![Acknowledge IAM role capabilities](/static/images/awsconsole2.png)
-    The stack will create a Cloud9 lab instance, a role for the instance, and a role for the AWS Lambda function used later on in the lab. It will use Systems Manager to configure the Cloud9 instance.
+1. In the Parameters section, note that *AllowedIP* contains a default IP address. If you want to access the instance via SSH, replace it with your own public IP address, making sure to add the `/32` network mask at the end. Do not modify any other parameter, and click *Next*.
+
+![CloudFormation parameters](/static/images/common/on-your-own-cf-01.png)
+
+6. Scroll to the bottom and click *Next*, and then review the *Template* and *Parameters*. When you are ready to create the stack, scroll to the bottom, check the box acknowledging the creation of IAM resources, and click *Create stack*.
+
+![CloudFormation parameters](/static/images/common/on-your-own-cf-02.png)
+
+    The stack will create a Visual Studio Code EC2 instance, a role for the instance, and a role for the AWS Lambda function used later on in the lab. 
The CloudFormation template also creates a set of folders that can be used to run the lab modules in this guide individually.
 
 5. After the CloudFormation stack is `CREATE_COMPLETE`, :link[continue onto Module 1]{href="/global-serverless-application/module_1"}.
diff --git a/content/global-serverless-application/module_1/index.en.md b/content/global-serverless-application/module_1/index.en.md
index 99d4d274..ade033e7 100644
--- a/content/global-serverless-application/module_1/index.en.md
+++ b/content/global-serverless-application/module_1/index.en.md
@@ -3,41 +3,54 @@ title : "Module 1: Deploy the backend resources"
+### Login to AWS Workshop Studio Portal
 
-## Setup Steps
-This lab requires a terminal shell with Python3 and the AWS Command Line Interface (CLI) installed and configured with admin credentials.
+On the event dashboard, click on **Open AWS console** to federate into AWS Management Console in a new tab. On the same page, click **Get started** to open the workshop instructions.
+![Event dashboard](/static/images/common/workshop-studio-01.png)
 
-We will use AWS Cloud9 for this event. [AWS Cloud9](https://aws.amazon.com/cloud9/) is a cloud-based integrated development environment (IDE) that lets you write, run, and debug code with just a browser. AWS Cloud9 includes a code editor, debugger, and terminal. It also comes prepackaged with essential tools for popular programming languages and the AWS Command Line Interface (CLI) preinstalled so that you don’t have to install files or configure your laptop for this lab. Your AWS Cloud9 environment will have access to the same AWS resources as the user with which you signed in to the AWS Management Console.
+In addition to the AWS console, open your Visual Studio Code server by clicking the `VSCodeServerURL` link in the "Event Outputs" section. When prompted for a password, use the value of `VSCodeServerPassword`.
 
-### To set up your AWS Cloud9 development environment:
+![Event dashboard](/static/images/common/workshop-studio-02.png)
 
-1. Choose **Services** at the top of the page, and then choose **Cloud9** under **Developer Tools**.
- 
-2. There will be an environment ready to use under **My environments**.
+During the first 60 seconds, the environment will automatically update extensions and plugins. Any startup notification can be safely dismissed.
+
+![VS Code Setup](/static/images/common/common-vs-code-01.png)
 
-3. Click on **Open** under **Cloud9 IDE**, and your IDE should open with a welcome note.
+If a terminal is not already open at the bottom of your screen, open a new one as shown in the following picture.
 
-You should now see your AWS Cloud9 environment. You need to be familiar with the three areas of the AWS Cloud9 console shown in the following screenshot:
+![VS Code Setup](/static/images/common/common-vs-code-02.png)
 
-![Cloud9 Environment](/static/images/global-serverless-application/module_1/cloud9-environment.png)
+Then run the command `aws sts get-caller-identity` to verify that your AWS credentials have been properly configured.
 
-- **File explorer**: On the left side of the IDE, the file explorer shows a list of the files in your directory.
- 
-- **File editor**: On the upper right area of the IDE, the file editor is where you view and edit files that you’ve selected in the file explorer.
- 
-- **Terminal**: On the lower right area of the IDE, this is where you run commands to execute code samples.
+![VS Code Setup](/static/images/common/common-vs-code-03.png)
 
-### Verify Environment
-1. Run ```aws sts get-caller-identity``` to verify the AWS CLI is functioning
-2. Run ```python3 --version``` to verify that python3 is installed
-3. Your Cloud9 environment is already configured with boto3, but for this lab we will also need AWS Chalice.
-Run ```sudo python3 -m pip install chalice``` to install [AWS Chalice](https://github.com/aws/chalice).
+
+From within the terminal:
+
+To keep our Python files and dependencies organized, let's create a Python virtual environment in the LMR folder:
+
+```bash
+cd LMR
+python -m venv .venv
+source .venv/bin/activate
+```
+
+Your VS Code environment is already configured with boto3, but for this lab we will also need [AWS Chalice](https://github.com/aws/chalice).
+
+```bash
+pip install chalice
+```
 
 ::alert[You may see a couple of WARNING lines near the bottom of the command output, these are safely ignored.]{type="info"}
 
-4. Run ```curl -O https://amazon-dynamodb-labs.com/assets/global-serverless.zip```
-5. Run ```unzip global-serverless.zip && cd global-serverless```
-6. To see what application resources we will be deploying you can open the **app.py** file by navigating to "global-serverless/app.py" in the file explorer. This code defines Lambda function and API Gateway routes.
+Download the global serverless workshop:
+
+```bash
+curl -O https://amazon-dynamodb-labs.com/assets/global-serverless.zip
+unzip global-serverless.zip && cd global-serverless
+```
+
+To see what application resources we will be deploying you can open the **app.py** file by navigating to "global-serverless/app.py" in the file explorer. This code defines the Lambda function and API Gateway routes.
 
 ### Deploy a new DynamoDB table
 1. In your terminal, run:
@@ -121,10 +134,18 @@ aws dynamodb get-item \
 
 ### Deploy the backend API service to the first region
 
-1. Run ```export AWS_DEFAULT_REGION=us-west-2``` to instruct Chalice to deploy into us-west-2 for our first region
-2. Run ```chalice deploy``` and wait for the infrastructure to be created. Chalice is a Python based serverless framework.
+1. Run the following command to instruct Chalice to deploy into us-west-2 for our first region:
+```bash
+export AWS_DEFAULT_REGION=us-west-2
+```
+
+2. Run the following command and wait for the infrastructure to be created. Chalice is a Python-based serverless framework.
+```bash
+chalice deploy
+```
+
 3. When the script completes, it reports a list of resources deployed. **Copy and paste the Rest API URL into a note as you will need it later.**
-4. Copy that REST API URL and paste it into a new browser tab to test it. You should see a JSON response of {ping: "ok"}
+4. Copy that REST API URL and paste it into a new browser tab to test it. You should see a JSON response of `{ping: "ok"}`
 5. You can type in certain paths to the end of the URL. Add the word scan so that the URL now ends with ```/api/scan``` You should see a JSON response representing the results of a table scan.
@@ -149,13 +170,20 @@ Click Ping again and check the latency.
 
 You now have a test harness where you can perform reads and writes to a DynamoDB record via the custom API.
 
 ### Deploy the service stack to the second region, Ireland
-1. Run ```export AWS_DEFAULT_REGION=eu-west-1``` to instruct Chalice to deploy into eu-west-1 for our second region.
-2. Run ```chalice deploy``` and wait for the infrastructure to be created in eu-west-1.
+1. 
Run the following command to instruct Chalice to deploy into eu-west-1 for our second region:
+```bash
+export AWS_DEFAULT_REGION=eu-west-1
+```
+
+2. Run the following command and wait for the infrastructure to be created in eu-west-1.
+```bash
+chalice deploy
+```
 3. When the script completes, it reports a list of resources deployed. Again, copy down the new REST API URL to a note for later use.
 4. Return to the web app.
 5. Click **Add API** again and paste in the new API URL. A second row of buttons appears in an alternate color.
 
-Note: In this workshop you have permissions for Global Tables in us-west-2 and eu-west-1.
+Note: In this workshop you have permissions for Global Tables in `us-west-2` and `eu-west-1`.
 In your own account you could add any number of replicas in any regions.
 
 Note 2: If you make any changes to the code in ```app.py```, you can push the updates to your Lambda function
diff --git a/content/hands-on-labs/explore-cli/index.en.md b/content/hands-on-labs/explore-cli/index.en.md
index bee8a755..71ebda19 100644
--- a/content/hands-on-labs/explore-cli/index.en.md
+++ b/content/hands-on-labs/explore-cli/index.en.md
@@ -5,7 +5,7 @@ chapter: true
 ---
 
-We will be exploring DynamoDB with the AWS CLI using the [AWS cloud9 management Console](https://console.aws.amazon.com/cloud9/home). If you haven't already, choose *open IDE* to launch AWS Cloud9 environment. You can close the Welcome screen and adjust your terminal to increase screen area, or close all the windows and navigate to *Window* -> *New Terminal* to open a new terminal window.
+We will explore Amazon DynamoDB tables using the AWS CLI from the environment we set up during the :link[environment setup]{href="/hands-on-labs/setup/create-tables"}.
 
 The highest level of abstraction in DynamoDB is a *Table* (there isn't a concept of a "Database" that has a bunch of tables inside of it like in other NOSQL or RDBMS services). Inside of a Table you will insert *Items*, which are analogous to what you might think of as a row in other services. Items are a collection of *Attributes*, which are analogous to columns. Every item must have a *Primary Key* which will uniquely identify that row (two items may not contain the same Primary Key). At a minimum when you create a table you must choose an attribute to be the *Partition Key* (aka the Hash Key) and you can optionally specify another attribute to be the *Sort Key*.
diff --git a/content/hands-on-labs/index.en.md b/content/hands-on-labs/index.en.md
index e705a17a..7f2776eb 100644
--- a/content/hands-on-labs/index.en.md
+++ b/content/hands-on-labs/index.en.md
@@ -24,3 +24,4 @@
 
 ### Recommended study before taking the lab
 If you're not part of an AWS event and you haven't recently reviewed DynamoDB design concepts, we suggest you watch this video on [Advanced Design Patterns for DynamoDB](:param{key="latest_rh_design_pattern_yt"}), which is about an hour in duration.
+
diff --git a/content/hands-on-labs/setup/aws-ws-event.en.md b/content/hands-on-labs/setup/aws-ws-event.en.md
index 72602ca1..082589f4 100644
--- a/content/hands-on-labs/setup/aws-ws-event.en.md
+++ b/content/hands-on-labs/setup/aws-ws-event.en.md
@@ -26,6 +26,10 @@ chapter: true
 7. Select on **I agree with the Terms and Conditions** on the bottom of the next page and click **Join event** to continue to the event dashboard.
 8. 
On the event dashboard, click on **Open AWS console** to federate into AWS Management Console in a new tab. On the same page, click **Get started** to open the workshop instructions.
-![Event dashboard](/static/images/aws-ws-event5.png)
+![Event dashboard](/static/images/common/workshop-studio-01.png)
 
-9. Now that you are connected continue on to: :link[Step 1]{href="/design-patterns/setup/Step1"}.
+9. In addition to the AWS console, open your Visual Studio Code server by clicking the `VSCodeServerURL` link in the "Event Outputs" section. When prompted for a password, use the value of `VSCodeServerPassword`.
+
+![Event dashboard](/static/images/common/workshop-studio-02.png)
+
+10. Continue with the steps as listed in the section :link[Launch Visual Studio Code]{href="/hands-on-labs/setup/vscode"}.
\ No newline at end of file
diff --git a/content/hands-on-labs/setup/cloud9.en.md b/content/hands-on-labs/setup/cloud9.en.md
deleted file mode 100644
index 0ebc1ebe..00000000
--- a/content/hands-on-labs/setup/cloud9.en.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: "Launch Cloud9 IDE"
-date: 2021-04-21T07:33:04-05:00
-weight: 13
----
-
-Let's begin by navigating to [AWS cloud9 management Console](https://console.aws.amazon.com/cloud9/home) and choose *open IDE* on the *DynamoDBLabsIDE* instance to launch AWS Cloud9 environment. You can close the Welcome screen and adjust your terminal to increase screen area, or close all the windows and navigate to *Window* -> *New Terminal* to open a new terminal window.
-
-Then run the command `aws sts get-caller-identity` just to verify that your AWS credentials have been properly configured.
-
-![Cloud9 Setup](/static/images/hands-on-labs/setup/cloud9_setup.png)
diff --git a/content/hands-on-labs/setup/create-tables.en.md b/content/hands-on-labs/setup/create-tables.en.md
index 47fd20d2..a643592e 100644
--- a/content/hands-on-labs/setup/create-tables.en.md
+++ b/content/hands-on-labs/setup/create-tables.en.md
@@ -15,8 +15,7 @@ aws dynamodb create-table \
     AttributeName=Id,AttributeType=N \
     --key-schema \
     AttributeName=Id,KeyType=HASH \
-    --provisioned-throughput \
-    ReadCapacityUnits=10,WriteCapacityUnits=5 \
+    --billing-mode PAY_PER_REQUEST \
     --query "TableDescription.TableStatus"
 aws dynamodb create-table \
     --table-name Forum \
     AttributeName=Name,AttributeType=S \
     --key-schema \
     AttributeName=Name,KeyType=HASH \
-    --provisioned-throughput \
-    ReadCapacityUnits=10,WriteCapacityUnits=5 \
+    --billing-mode PAY_PER_REQUEST \
     --query "TableDescription.TableStatus"
 aws dynamodb create-table \
     --table-name Thread \
     --key-schema \
     AttributeName=ForumName,KeyType=HASH \
     AttributeName=Subject,KeyType=RANGE \
-    --provisioned-throughput \
-    ReadCapacityUnits=10,WriteCapacityUnits=5 \
+    --billing-mode PAY_PER_REQUEST \
     --query "TableDescription.TableStatus"
 aws dynamodb create-table \
     --table-name Reply \
     --key-schema \
     AttributeName=Id,KeyType=HASH \
     AttributeName=ReplyDateTime,KeyType=RANGE \
-    --provisioned-throughput \
-    ReadCapacityUnits=10,WriteCapacityUnits=5 \
+    --billing-mode PAY_PER_REQUEST \
     --query "TableDescription.TableStatus"
 ```
diff --git a/content/hands-on-labs/setup/index.en.md b/content/hands-on-labs/setup/index.en.md
index 42524e3f..88eb22cd 100644
--- a/content/hands-on-labs/setup/index.en.md
+++ b/content/hands-on-labs/setup/index.en.md
@@ -11,3 +11,26 @@
 In this chapter, we'll 
cover the prerequisites needed to get started with [Amazo
 
 The deployment architecture that you will be building in this lab will look like the below.
 
 ![Final Deployment Architecture](/static/images/hands-on-labs/setup/dynamodb_lab_architecture.png)
+
+## Prerequisites
+
+To run this lab, you'll need an AWS account, and a user identity with access to the following services:
+
+* Amazon DynamoDB
+* Visual Studio Code Web Environment
+
+You can use your own account, or an account provided through Workshop Studio Event Delivery as part of an AWS organized workshop. Using an account provided by Workshop Studio is the easier path, as you will have full access to all AWS services, and the account will terminate automatically when the event is over.
+
+### Account setup
+
+#### Using an account provided to you by your lab instructor
+
+If you are running this workshop using a link provided to you by your AWS instructor, please use that link and enter the access-code provided to you as part of the workshop. In the lab AWS account, the Visual Studio Code instance should already be provisioned. This should be available in the "Event Outputs" section of your Workshop Studio URL.
+
+![CloudFormation parameters](/static/images/common/on-your-own-cf-03.png)
+
+#### Using your own AWS account
+
+If you are using your own AWS account, be sure you have access to create and manage resources in Amazon DynamoDB and Amazon EC2.
+
+*After completing the workshop, remember to complete the [cleanup](/hands-on-labs/cleanup.html) section to remove any unnecessary AWS resources.*
\ No newline at end of file
diff --git a/content/hands-on-labs/setup/load-sample-data.en.md b/content/hands-on-labs/setup/load-sample-data.en.md
index 77c7d180..16dcdb82 100644
--- a/content/hands-on-labs/setup/load-sample-data.en.md
+++ b/content/hands-on-labs/setup/load-sample-data.en.md
@@ -7,7 +7,8 @@
 Download and unzip the sample data:
 ```bash
+cd LHOL
+
 curl -O https://amazon-dynamodb-labs.com/static/hands-on-labs/sampledata.zip
 
 unzip sampledata.zip
 
@@ -16,11 +18,17 @@
 Load the sample data using the `batch-write-item` CLI:
 ```bash
 aws dynamodb batch-write-item --request-items file://ProductCatalog.json
+```
 
+```bash
 aws dynamodb batch-write-item --request-items file://Forum.json
+```
 
+```bash
 aws dynamodb batch-write-item --request-items file://Thread.json
+```
 
+```bash
 aws dynamodb batch-write-item --request-items file://Reply.json
 ```
 
@@ -33,4 +41,6 @@
 After each data load you should get this message saying that there were no Unprocessed Items:
 ```
 
 #### Sample output
-![Cloud9 Setup](/static/images/hands-on-labs/setup/load_data.png)
+![Processed Items](/static/images/hands-on-labs/load-sample-data.png)
+
+You can now continue with the section :link[Explore DynamoDB with the CLI]{href="/hands-on-labs/explore-cli"}.
\ No newline at end of file
diff --git a/content/hands-on-labs/setup/prerequisites.en.md b/content/hands-on-labs/setup/prerequisites.en.md
deleted file mode 100644
index ad04ff39..00000000
--- a/content/hands-on-labs/setup/prerequisites.en.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title: "Prerequisites"
-date: 2020-08-06T07:38:58-05:00
-weight: 11
----
-
-To run this lab, you'll need an AWS account, and a user identity with access to the following services:
-
-* Amazon DynamoDB
-* AWS Cloud9 Environment
-
-You can use your own account, or an account provided through Workshop Studio Event Delivery as part of an AWS organized workshop. 
Using an account provided by Workshop Studio is the easier path, as you will have full access to all AWS services, and the account will terminate automatically when the event is over.
-
-### Account setup
-
-#### Using an account provided to you by your lab instructor
-
-If you are running this workshop using a link provided to you by your AWS instructor, please use that link and enter the access-code provided to you as part of the workshop. In the lab AWS account, the Cloud9 instance should already be provisioned. Please open the "AWS Cloud9" section of the AWS Management Console in the correct region and look for a lab instance called **DynamoDBC9**.
-
-#### Using your own AWS account
-
-If you are using your own AWS account, be sure you have access to create and manage resources in Amazon DynamoDB and AWS Cloud9 environment
-
-*After completing the workshop, remember to complete the [cleanup](/hands-on-labs/cleanup.html) section to remove any unnecessary AWS resources.*
diff --git a/content/hands-on-labs/setup/setup.en.md b/content/hands-on-labs/setup/setup.en.md
index 6e5503c2..1a8315d3 100644
--- a/content/hands-on-labs/setup/setup.en.md
+++ b/content/hands-on-labs/setup/setup.en.md
@@ -5,24 +5,29 @@ chapter: true
 ---
 
-::alert[These setup instructions are identitical for LADV, LHOL, LBED, LMR, and LGME - all of which use the same Cloud9 template. Only complete this section once, and only if you're running it on your own account.]{type="warning"}
+::alert[These setup instructions are identical for LADV, LHOL, LBED, LMR, and LGME - all of which use the same Visual Studio Code template. Only complete this section once, and only if you're running it on your own account.]{type="warning"}
 
 ::alert[Only complete this section if you are running the workshop on your own. If you are at an AWS hosted event (such as re\:Invent, Immersion Day, etc), go to :link[At an AWS hosted Event]{href="/hands-on-labs/setup/aws-ws-event"}]
 
 ## Launch the CloudFormation stack
 ::alert[During the course of the lab, you will make DynamoDB tables that will incur a cost that could approach tens or hundreds of dollars per day. Ensure you delete the DynamoDB tables using the DynamoDB console, and make sure you delete the CloudFormation stack as soon as the lab is complete.]
 
-1. Launch the CloudFormation template in US West 2 to deploy the resources in your account: [![CloudFormation](/static/images/cloudformation-launch-stack.png)](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=DynamoDBID&templateURL=:param{key="design_patterns_s3_lab_yaml"})
-    1. *Optionally, download [the YAML template](:param{key="design_patterns_s3_lab_yaml"}) and launch it your own way*
+1. **[Deprecated]** - Launch the CloudFormation template in US West 2 to deploy the resources in your account: [![CloudFormation](/static/images/cloudformation-launch-stack.png)](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=DynamoDBID&templateURL=:param{key="design_patterns_s3_lab_yaml"})
+
+1. *Optionally, download [the YAML template](https://github.com/aws-samples/aws-dynamodb-examples/blob/master/workshops/modernizer/modernizer-db.yaml) from our GitHub repository and launch it your own way*
 1. Click *Next* on the first dialog.
-1. In the Parameters section, note the *Timeout* is set to zero. 
This means the Cloud9 instance will not sleep; you may want to change this manually to a value such as 60 to protect against unexpected charges if you forget to delete the stack at the end.
-    Leave the *WorkshopZIP* parameter unchanged and click *Next*
-![CloudFormation parameters](/static/images/awsconsole1.png)
+1. Provide a CloudFormation stack name.
+
+1. In the Parameters section, note that *AllowedIP* contains a default IP address. If you want to access the instance via SSH, replace it with your own public IP address, making sure to add the `/32` network mask at the end. Do not modify any other parameter, and click *Next*.
+
+![CloudFormation parameters](/static/images/common/on-your-own-cf-01.png)
+
+6. Scroll to the bottom and click *Next*, and then review the *Template* and *Parameters*. When you are ready to create the stack, scroll to the bottom, check the box acknowledging the creation of IAM resources, and click *Create stack*.
 
-1. Scroll to the bottom and click *Next*, and then review the *Template* and *Parameters*. When you are ready to create the stack, scroll to the bottom, check the box acknowledging the creation of IAM resources, and click *Create stack*.
-![CloudFormation parameters](/static/images/awsconsole2.png)
-    The stack will create a Cloud9 lab instance, a role for the instance, and a role for the AWS Lambda function used later on in the lab. It will use Systems Manager to configure the Cloud9 instance.
+![CloudFormation parameters](/static/images/common/on-your-own-cf-02.png)
+
+    The stack will create a Visual Studio Code EC2 instance, a role for the instance, and a role for the AWS Lambda function used later on in the lab. The CloudFormation template also creates a set of folders that can be used to run the lab modules in this guide individually.
 
-1. After the CloudFormation stack is `CREATE_COMPLETE`, :link[continue onto Prerequisites]{href="/hands-on-labs/setup/prerequisites"}.
+7. After the CloudFormation stack is `CREATE_COMPLETE`, :link[continue onto launching your IDE]{href="/hands-on-labs/setup/vscode"}.
diff --git a/content/hands-on-labs/setup/vscode.en.md b/content/hands-on-labs/setup/vscode.en.md
new file mode 100644
index 00000000..afda9e4f
--- /dev/null
+++ b/content/hands-on-labs/setup/vscode.en.md
@@ -0,0 +1,17 @@
+---
+title: "Launch Visual Studio Code"
+date: 2021-04-21T07:33:04-05:00
+weight: 13
+---
+
+During the first 60 seconds, the environment will automatically update extensions and plugins. Any startup notification can be safely dismissed.
+
+![VS Code Setup](/static/images/common/common-vs-code-01.png)
+
+If a terminal is not already open at the bottom of your screen, open a new one as shown in the following picture.
+
+![VS Code Setup](/static/images/common/common-vs-code-02.png)
+
+Then run the command `aws sts get-caller-identity` to verify that your AWS credentials have been properly configured.
+
+![VS Code Setup](/static/images/common/common-vs-code-03.png)
diff --git a/content/relational-migration/schema refactoring/index4.en.md b/content/relational-migration/schema refactoring/index4.en.md
index ed6fce2d..17538144 100644
--- a/content/relational-migration/schema refactoring/index4.en.md
+++ b/content/relational-migration/schema refactoring/index4.en.md
@@ -14,13 +14,28 @@
 so that it could be included in a script and automated.
 
 3. Run:
 ```bash
-python3 mysql_desc_ddb.py Customers
+python mysql_desc_ddb.py Customers
 ```
 
 The script should output a table definition in JSON format, like we saw within the web app.
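+
+Your exact output depends on the Customers table, but it takes the general shape of a DynamoDB `create-table` definition. The snippet below is illustrative only; the attribute name `cust_id` and its type are placeholders, not necessarily what the script emits:
+
+```json
+{
+    "TableName": "Customers",
+    "KeySchema": [
+        {"AttributeName": "cust_id", "KeyType": "HASH"}
+    ],
+    "AttributeDefinitions": [
+        {"AttributeName": "cust_id", "AttributeType": "S"}
+    ],
+    "BillingMode": "PAY_PER_REQUEST"
+}
+```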
3. Next, let's pipe the output to a new file so we can more easily review it:
 ```bash
-python3 mysql_desc_ddb.py Customers > Customers.json
+python mysql_desc_ddb.py Customers > Customers.json
 ```
 
 4. Within the left nav, find ```Customers.json``` and double click to open it in the editor.
diff --git a/content/relational-migration/schema refactoring/index5.en.md b/content/relational-migration/schema refactoring/index5.en.md
index cf8443a6..d4d3b0bd 100644
--- a/content/relational-migration/schema refactoring/index5.en.md
+++ b/content/relational-migration/schema refactoring/index5.en.md
@@ -48,7 +48,7 @@
 Now, let's generate a DynamoDB table definition based on this view's output.
 
 8. Run:
 ```bash
-python3 mysql_desc_ddb.py vCustOrders
+python mysql_desc_ddb.py vCustOrders
 ```
 
 The script returns a new table definition based on the name of the view, with the first
@@ -59,7 +59,7 @@
 first TWO column names as the Partition Key and Sort Key.
 
 9. Run:
 ```bash
-python3 mysql_desc_ddb.py vCustOrders 2
+python mysql_desc_ddb.py vCustOrders 2
 ```
 
 Now we can see that the DynamoDB table's Key Schema includes both columns.
diff --git a/content/relational-migration/setup/index1.en.md b/content/relational-migration/setup/index1.en.md
index d715fbc7..90c8625d 100644
--- a/content/relational-migration/setup/index1.en.md
+++ b/content/relational-migration/setup/index1.en.md
@@ -2,65 +2,57 @@
 title : "Dev Environment"
 weight : 16
 ---
+### Login to AWS Workshop Studio Portal
 
-[AWS Cloud9](https://aws.amazon.com/cloud9/) is a cloud-based integrated development environment (IDE) that lets you write, run, and debug code with just a browser. AWS Cloud9 includes a code editor, debugger, and terminal. It also comes prepackaged with essential tools for popular programming languages and the AWS Command Line Interface (CLI) preinstalled so that you don’t have to install files or configure your laptop for this lab. Your AWS Cloud9 environment will have access to the same AWS resources as the user with which you signed in to the AWS Management Console.
+On the event dashboard, click on **Open AWS console** to federate into AWS Management Console in a new tab. On the same page, click **Get started** to open the workshop instructions.
+![Event dashboard](/static/images/common/workshop-studio-01.png)
 
-### To set up your AWS Cloud9 development environment:
+In addition to the AWS console, open your Visual Studio Code server by clicking the `VSCodeServerURL` link in the "Event Outputs" section. When prompted for a password, use the value of `VSCodeServerPassword`.
 
-1. Choose **Services** at the top of the page, and then choose **Cloud9** under **Developer Tools**.
+![Event dashboard](/static/images/common/workshop-studio-02.png)
 
-2. There would be an environment ready to use under **Your environments**.
+During the first 60 seconds, the environment will automatically update extensions and plugins. Any startup notification can be safely dismissed.
+
+![VS Code Setup](/static/images/common/common-vs-code-01.png)
 
-3. Click on **Open IDE**, your IDE should open with a welcome note.
+If a terminal is not already open at the bottom of your screen, open a new one as shown in the following picture.
 
-You should now see your AWS Cloud9 environment. 
You need to be familiar with the three areas of the AWS Cloud9 console shown in the following screenshot:
+![VS Code Setup](/static/images/common/common-vs-code-02.png)

-![Cloud9 Environment](/static/images/zetl-cloud9-environment.png)
+Then run the command `aws sts get-caller-identity` to verify that your AWS credentials have been properly configured.

-- **File explorer**: On the left side of the IDE, the file explorer shows a list of the files in your directory.
-
-- **File editor**: On the upper right area of the IDE, the file editor is where you view and edit files that you’ve selected in the file explorer.
-
-- **Terminal**: On the lower right area of the IDE, this is where you run commands to execute code samples.
+![VS Code Setup](/static/images/common/common-vs-code-03.png)

 From within the terminal:

-2. Run the command ```aws sts get-caller-identity``` just to verify that your AWS credentials have been properly configured.
+To keep our Python files and dependencies organized, let's create a Python virtual environment:
+
+```bash
+python -m venv .venv
+source .venv/bin/activate
+```

-3. Clone the repository containing the Chalice code and migration scripts. Run:
+Clone the repository containing the Chalice code and migration scripts. Run:

 ```bash
-cd ~/environment
+cd /home/participant/workshop/LSQL
 git clone https://github.com/aws-samples/aws-dynamodb-examples.git
 cd aws-dynamodb-examples
-git checkout :param{key="lsql_git_commit"}
-```
-
-
-*This checkout command ensures you are using a specific, tested version of the repository*
-
-```bash
 cd workshops/relational-migration
 ```

-4. Next, run this to install three components: Boto3 (AWS SDK for Python), Chalice, and the MySQL connector for Python.
+Next, run this to install Chalice and the MySQL connector for Python (Boto3, the AWS SDK for Python, is already installed on the instance).

 ```bash
 sudo pip3 install chalice mysql-connector-python
 ```

-5. From the left navigation panel, locate our project folder by
-   clicking into ```aws-dynamodb-examples / workshops / relational-migration```
+From the left navigation panel, locate our project folder by clicking into ```LSQL / aws-dynamodb-examples / workshops / relational-migration```

-6. Find the gear icon near the top of the left nav panel, and click "show hidden files" .
-   You may now see a folder called ```.chalice``` under the main **relational-migration** folder.
-   Within this folder is the ```config.json``` file that holds the MySQL connection details.
-   A script will automatically update this file in the next step.
+Navigate to the `.chalice` folder under the main **relational-migration** folder. Within this folder is the ```config.json``` file that holds the MySQL connection details. A script will automatically update this file in the next step.

-7. Return to the terminal prompt window. Run this file which
-   uses AWS CLI commands to find the MySQL host's IP address and S3 bucket name, then sets them as
-   environment variables, while also updating the Chalice config.json file:
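+
+That script (`setenv.sh`, which you run next) performs AWS CLI lookups roughly like the following sketch. This is illustrative only; the filters and variable names in the actual script may differ:
+
+```bash
+# Illustrative sketch: find the MySQL instance's private IP by its Name tag
+MYSQL_HOST=$(aws ec2 describe-instances \
+  --filters "Name=tag:Name,Values=MySQL-Instance" "Name=instance-state-name,Values=running" \
+  --query "Reservations[0].Instances[0].PrivateIpAddress" --output text)
+export MYSQL_HOST
+```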
+Return to the terminal prompt window. Run this script, which uses AWS CLI commands to find the MySQL host's IP address and S3 bucket name, sets them as environment variables, and updates the Chalice config.json file:

 ```bash
 source ./setenv.sh
diff --git a/content/scenarios/reference-materials/index.en.md b/content/scenarios/reference-materials/index.en.md
index 5ef11f3b..f5f5eece 100644
--- a/content/scenarios/reference-materials/index.en.md
+++ b/content/scenarios/reference-materials/index.en.md
@@ -20,6 +20,7 @@
 Understanding Distributed Systems and DynamoDB:
 - **[Amazon DynamoDB: How It Works](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.html)**

 DynamoDB Related Tools:
+- **[Amazon DynamoDB learning resources and tools](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AdditionalResources.html)**
 - **[NoSQL Workbench for Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/workbench.html)**
 - **[EMR-DynamoDB-Connector: Access data stored in Amazon DynamoDB with Apache Hadoop, Apache Hive, and Apache Spark](https://github.com/awslabs/emr-dynamodb-connector)**
diff --git a/design-patterns/cloudformation/C9.yaml b/design-patterns/cloudformation/C9.yaml
index d34d4802..a5bff7ec 100644
--- a/design-patterns/cloudformation/C9.yaml
+++ b/design-patterns/cloudformation/C9.yaml
@@ -1,45 +1,19 @@
 #Source: https://tiny.amazon.com/1dbfklsd7
-Description: Provides a Cloud9 instance, resizes the instance volume size, and installs required components.
+Description: Provides a VS code-server instance, resizes the instance volume, and installs required components.

 Parameters:
   EnvironmentName:
     Description: An environment name that is tagged to the resources.
     Type: String
     Default: DynamoDBID
-  InstanceName:
-    Description: Cloud9 instance name.
-    Type: String
-    Default: DynamoDBC9
-  InstanceType:
-    Description: The memory and CPU of the EC2 instance that will be created for Cloud9 to run on.
-    Type: String
-    Default: t3.medium
-    AllowedValues:
-      - t2.micro
-      - t3.micro
-      - t3.small
-      - t3.medium
-      - t2.medium
-      - m5.large
-    ConstraintDescription: Must be a valid Cloud9 instance type
-  InstanceVolumeSize:
-    Description: The size in GB of the Cloud9 instance volume
-    Type: Number
-    Default: 16
-  InstanceOwner:
-    Type: String
-    Description: Assumed role username of Cloud9 owner, in the format 'Role/username'. Leave blank to assign leave the instance assigned to the role running the CloudFormation template.
-  AutomaticStopTimeMinutes:
-    Description: How long Cloud9 can be inactive (no user input) before auto-hibernating. This helps prevent unnecessary charges.
- Type: Number - Default: 0 + WorkshopZIP: Type: String Description: Location of LADV code ZIP Default: https://amazon-dynamodb-labs.com/assets/workshop.zip DBLatestAmiId: Type: 'AWS::SSM::Parameter::Value' - Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2' + Default: '/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-arm64' DbMasterUsername: Description: The datbase master user name Type: String @@ -48,6 +22,35 @@ Parameters: Description: The database master password Type: String Default: m7de4uwt2eG# + ################## VSCode Server ################# + VSCodeUser: + Type: String + Description: Username for VS code-server + Default: participant + VSCodeInstanceName: + Type: String + Description: EC2 Instance name for VS code-server + Default: VSCodeServer + VSCodeInstanceVolumeSize: + Type: Number + Description: VS code-server EC2 instance volume size in GB + Default: 40 + VSCodeInstanceType: + Description: VS code-server EC2 instance type + Type: String + Default: t4g.large + AllowedPattern: ^(t4g|m6g|m7g|m8g|c6g|c7g)\.[0-9a-z]+$ + ConstraintDescription: Must be a valid t, c or m series Graviton EC2 instance type + VSCodeHomeFolder: + Type: String + Description: Folder to open in VS Code server + Default: /home/participant/workshop + PythonMajorMinor: + Type: String + Default: "3.13" + Description: "Python major.minor version (e.g., 3.13) for the Code instance. Latest patch version will be installed automatically." + AllowedPattern: "^[0-9]+\\.[0-9]+$" + ConstraintDescription: "Must be in format X.Y (e.g., 3.13)" Metadata: AWS::CloudFormation::Interface: @@ -56,30 +59,13 @@ Metadata: default: General configuration Parameters: - EnvironmentName - - Label: - default: Cloud9 configuration - Parameters: - - InstanceName - - InstanceType - - InstanceVolumeSize - - InstanceOwner - - AutomaticStopTimeMinutes + ParameterLabels: EnvironmentName: default: Environment name - InstanceName: - default: Name - InstanceType: - default: Instance type - InstanceVolumeSize: - default: Attached volume size - InstanceOwner: - default: Role and username - AutomaticStopTimeMinutes: - default: Timeout - -Conditions: - AssignCloud9Owner: !Not [!Equals [!Ref InstanceOwner, ""]] + + + Mappings: DesignPatterns: options: @@ -122,6 +108,42 @@ Mappings: us-west-2: PrefixList: pl-047d464325e7bf465 + AWSRegionsPrefixListID: + # aws ec2 describe-managed-prefix-lists --region | jq -r '.PrefixLists[] | select (.PrefixListName == "com.amazonaws.global.cloudfront.origin-facing") | .PrefixListId' + ap-northeast-1: + PrefixList: pl-58a04531 + ap-northeast-2: + PrefixList: pl-22a6434b + ap-south-1: + PrefixList: pl-9aa247f3 + ap-southeast-1: + PrefixList: pl-31a34658 + ap-southeast-2: + PrefixList: pl-b8a742d1 + ca-central-1: + PrefixList: pl-38a64351 + eu-central-1: + PrefixList: pl-a3a144ca + eu-north-1: + PrefixList: pl-fab65393 + eu-west-1: + PrefixList: pl-4fa04526 + eu-west-2: + PrefixList: pl-93a247fa + eu-west-3: + PrefixList: pl-75b1541c + sa-east-1: + PrefixList: pl-5da64334 + us-east-1: + PrefixList: pl-3b927c52 + us-east-2: + PrefixList: pl-b6a144df + us-west-1: + PrefixList: pl-4ea04527 + us-west-2: + PrefixList: pl-82a045eb + + Resources: #LADV Role DDBReplicationRole: @@ -164,7 +186,7 @@ Resources: Resource: - '*' ################## PERMISSIONS AND ROLES ################# - Cloud9Role: + CodeInstanceRole: Type: AWS::IAM::Role Properties: Tags: @@ -195,7 +217,9 @@ Resources: - cloud9:UpdateEnvironment Resource: '*' - Cloud9LambdaExecutionRole: + + ################ LAMBDA 
INSTANCE TYPE FINDER ################ + VSCodeLambdaExecutionRole: Type: AWS::IAM::Role Metadata: cfn_nag: @@ -216,7 +240,7 @@ Resources: ManagedPolicyArns: - !Sub arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole Policies: - - PolicyName: !Sub Cloud9LambdaPolicy-${AWS::Region} + - PolicyName: !Sub VSCodeLambdaPolicy-${AWS::Region} PolicyDocument: Version: 2012-10-17 Statement: @@ -259,43 +283,83 @@ Resources: - s3:ListBucket - s3:DeleteObject Resource: - - !Sub arn:${AWS::Partition}:s3:::${Cloud9LogBucket} - - !Sub arn:${AWS::Partition}:s3:::${Cloud9LogBucket}/* + - !Sub arn:${AWS::Partition}:s3:::${VSCodeLogBucket} + - !Sub arn:${AWS::Partition}:s3:::${VSCodeLogBucket}/* - Effect: Allow Action: - iam:PassRole Resource: Fn::GetAtt: - - Cloud9Role + - CodeInstanceRole - Arn - ################ LAMBDA INSTANCE TYPE FINDER ################ - Cloud9FindTheInstanceTypeLambda: - Type: Custom::Cloud9FindTheInstanceTypeLambda + + VSCodeLogBucket: + Type: AWS::S3::Bucket + Metadata: + cfn_nag: + rules_to_suppress: + - id: W35 + reason: Access logs aren't needed for this bucket + DeletionPolicy: Delete + Properties: + AccessControl: Private + BucketEncryption: + ServerSideEncryptionConfiguration: + - ServerSideEncryptionByDefault: + SSEAlgorithm: AES256 + PublicAccessBlockConfiguration: + BlockPublicAcls: true + BlockPublicPolicy: true + IgnorePublicAcls: true + RestrictPublicBuckets: true + + VSCodeLogBucketPolicy: + Type: AWS::S3::BucketPolicy + Properties: + Bucket: !Ref VSCodeLogBucket + PolicyDocument: + Version: 2012-10-17 + Statement: + - Action: + - s3:GetObject + - s3:PutObject + - s3:PutObjectAcl + Effect: Allow + Resource: + - !Sub arn:${AWS::Partition}:s3:::${VSCodeLogBucket} + - !Sub arn:${AWS::Partition}:s3:::${VSCodeLogBucket}/* + Principal: + AWS: + Fn::GetAtt: + - VSCodeLambdaExecutionRole + - Arn + VSCodeFindTheInstanceTypeLambda: + Type: Custom::VSCodeFindTheInstanceTypeLambda DependsOn: - - Cloud9LambdaExecutionRole + - VSCodeLambdaExecutionRole Properties: Tags: - Key: Environment Value: !Sub ${EnvironmentName} ServiceToken: Fn::GetAtt: - - Cloud9FindTheInstanceTypeLambdaFunction + - VSCodeFindTheInstanceTypeLambdaFunction - Arn Region: Ref: AWS::Region StackName: Ref: AWS::StackName InstanceType: - Ref: InstanceType + Ref: VSCodeInstanceType LogBucket: - Ref: Cloud9LogBucket - Cloud9FindTheInstanceTypeLambdaFunction: + Ref: VSCodeLogBucket + VSCodeFindTheInstanceTypeLambdaFunction: Type: AWS::Lambda::Function Metadata: cfn_nag: rules_to_suppress: - id: W58 - reason: Cloud9LambdaExecutionRole has the AWSLambdaBasicExecutionRole managed policy attached, allowing writing to CloudWatch logs + reason: VSCodeLambdaExecutionRole has the AWSLambdaBasicExecutionRole managed policy attached, allowing writing to CloudWatch logs - id: W89 reason: Bootstrap function does not need the scaffolding of a VPC or provisioned concurrency - id: W92 @@ -307,9 +371,9 @@ Resources: Handler: index.lambda_handler Role: Fn::GetAtt: - - Cloud9LambdaExecutionRole + - VSCodeLambdaExecutionRole - Arn - Runtime: python3.9 + Runtime: python3.13 MemorySize: 1024 Timeout: 400 Code: @@ -390,385 +454,502 @@ Resources: # TODO implement return offerings + ############ RELATIONAL MIGRATION STAGING BUCKET ######### + MigrationS3Bucket: + Type: AWS::S3::Bucket - ################## LAMBDA BOOTSTRAP FUNCTION ################ - Cloud9BootstrapInstanceLambda: - Type: Custom::Cloud9BootstrapInstanceLambda - DependsOn: - - Cloud9LambdaExecutionRole + ############## AWS GLUE SETUP FOR MYSQL TO 
DYNAMODB MIGRATION ############## + + # Glue Service Role for MySQL to DynamoDB Migration + GlueServiceRole: + Type: AWS::IAM::Role Properties: + AssumeRolePolicyDocument: + Version: '2012-10-17' + Statement: + - Effect: Allow + Principal: + Service: + - glue.amazonaws.com + Action: + - sts:AssumeRole + ManagedPolicyArns: + - !Sub arn:${AWS::Partition}:iam::aws:policy/service-role/AWSGlueServiceRole + - !Sub arn:${AWS::Partition}:iam::aws:policy/AmazonDynamoDBFullAccess + Policies: + - PolicyName: S3MigrationBucketAccess + PolicyDocument: + Version: '2012-10-17' + Statement: + - Effect: Allow + Action: + - s3:GetObject + - s3:PutObject + - s3:DeleteObject + - s3:ListBucket + Resource: + - !Sub ${MigrationS3Bucket.Arn}/* + - Effect: Allow + Action: + - s3:ListBucket + Resource: + - !GetAtt MigrationS3Bucket.Arn + - PolicyName: CloudWatchLogsAccess + PolicyDocument: + Version: '2012-10-17' + Statement: + - Effect: Allow + Action: + - logs:CreateLogGroup + - logs:CreateLogStream + - logs:PutLogEvents + - logs:DescribeLogGroups + - logs:DescribeLogStreams + Resource: + - !Sub arn:${AWS::Partition}:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws-glue/* + - !Sub arn:${AWS::Partition}:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws-glue/*:log-stream:* + - PolicyName: VPCAccess + PolicyDocument: + Version: '2012-10-17' + Statement: + - Effect: Allow + Action: + - ec2:CreateNetworkInterface + - ec2:DeleteNetworkInterface + - ec2:DescribeNetworkInterfaces + - ec2:DescribeVpcs + - ec2:DescribeSubnets + - ec2:DescribeSecurityGroups + - ec2:DescribeDhcpOptions + - ec2:CreateTags + - ec2:DeleteTags + Resource: '*' Tags: - Key: Environment - Value: !Sub ${EnvironmentName} - ServiceToken: - Fn::GetAtt: - - Cloud9BootstrapInstanceLambdaFunction - - Arn - Region: - Ref: AWS::Region - StackName: - Ref: AWS::StackName - Cloud9Name: !GetAtt Cloud9Instance.Name - EnvironmentId: - Ref: Cloud9Instance - SsmDocument: - Ref: Cloud9BootStrapSSMDocument - LabIdeInstanceProfileName: - Ref: Cloud9InstanceProfile - LabIdeInstanceProfileArn: - Fn::GetAtt: - - Cloud9InstanceProfile - - Arn - LogBucket: - Ref: Cloud9LogBucket - Cloud9BootstrapInstanceLambdaFunction: + Value: !Ref EnvironmentName + - Key: Purpose + Value: GlueETLMigration + + # AWS Glue Data Catalog Database + GlueDatabase: + Type: AWS::Glue::Database + Properties: + CatalogId: !Ref AWS::AccountId + DatabaseInput: + Name: modernizer-migration-db-noc9 + Description: Database for MySQL to DynamoDB modernization migration + + # CloudWatch Log Group for Glue Jobs + GlueLogGroup: + Type: AWS::Logs::LogGroup + Properties: + LogGroupName: /aws-glue/jobs/modernizer-migration + RetentionInDays: 14 + Tags: + - Key: Environment + Value: !Ref EnvironmentName + - Key: Purpose + Value: GlueMigration + + # AWS Glue Connection for MySQL Database (uses VSCode instance with MySQL) + MySQLGlueConnection: + Type: AWS::Glue::Connection + DependsOn: + - VSCodeInstance + Properties: + CatalogId: !Ref AWS::AccountId + ConnectionInput: + Name: mysql-modernizer-connection + Description: MySQL connection for DynamoDB modernization workshop + ConnectionType: JDBC + ConnectionProperties: + JDBC_CONNECTION_URL: !Sub "jdbc:mysql://${VSCodeInstance.PrivateIp}:3306/online_shopping_store" + USERNAME: !Ref DbMasterUsername + PASSWORD: !Ref DbMasterPassword + PhysicalConnectionRequirements: + AvailabilityZone: !GetAtt VSCodeInstance.AvailabilityZone + SecurityGroupIdList: + - !GetAtt SecurityGroup.GroupId + SubnetId: !GetAtt VSCodeInstance.SubnetId + + # Sample AWS Glue ETL Job for 
MySQL to DynamoDB Migration + SampleGlueETLJob: + Type: AWS::Glue::Job + DependsOn: + - MySQLGlueConnection + - GlueDatabase + Properties: + Name: !Sub ${AWS::StackName}-mysql-to-dynamodb-etl + Role: !GetAtt GlueServiceRole.Arn + Description: Sample ETL job for migrating data from MySQL to DynamoDB + GlueVersion: "3.0" + MaxRetries: 0 + Timeout: 60 + WorkerType: G.1X + NumberOfWorkers: 2 + DefaultArguments: + "--TempDir": !Sub s3://${MigrationS3Bucket}/glue-temp/ + "--enable-metrics": "" + "--enable-continuous-cloudwatch-log": "true" + "--job-language": "python" + "--job-bookmark-option": "job-bookmark-disable" + Command: + Name: glueetl + ScriptLocation: !Sub s3://${MigrationS3Bucket}/scripts/mysql-to-dynamodb-etl.py + PythonVersion: "3" + Connections: + Connections: + - !Ref MySQLGlueConnection + Tags: + Environment: !Ref EnvironmentName + Purpose: MySQLToDynamoDBMigration + + # Lambda function to create ETL script in S3 + ETLScriptCreatorRole: + Type: AWS::IAM::Role + Properties: + AssumeRolePolicyDocument: + Version: 2012-10-17 + Statement: + - Effect: Allow + Principal: + Service: lambda.amazonaws.com + Action: sts:AssumeRole + ManagedPolicyArns: + - !Sub arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole + Policies: + - PolicyName: S3ScriptAccess + PolicyDocument: + Version: 2012-10-17 + Statement: + - Effect: Allow + Action: + - s3:PutObject + - s3:GetObject + - s3:ListBucket + Resource: + - !Sub ${MigrationS3Bucket.Arn} + - !Sub ${MigrationS3Bucket.Arn}/* + + ETLScriptCreatorFunction: Type: AWS::Lambda::Function Metadata: cfn_nag: rules_to_suppress: - id: W58 - reason: Cloud9LambdaExecutionRole has the AWSLambdaBasicExecutionRole managed policy attached, allowing writing to CloudWatch logs + reason: Lambda execution role has basic execution permissions - id: W89 - reason: Bootstrap function does not need the scaffolding of a VPC or provisioned concurrency + reason: Lambda function does not need VPC configuration - id: W92 - reason: Bootstrap function does not need provisioned concurrency + reason: Lambda function does not need provisioned concurrency Properties: - Tags: - - Key: Environment - Value: !Sub ${EnvironmentName} - Handler: index.lambda_handler - Role: - Fn::GetAtt: - - Cloud9LambdaExecutionRole - - Arn - Runtime: python3.9 - MemorySize: 1024 - Environment: - Variables: - DiskSize: - Ref: InstanceVolumeSize - LogS3Bucket: - Fn::GetAtt: - - Cloud9LogBucket - - Arn - Timeout: 400 + Handler: index.handler + Role: !GetAtt ETLScriptCreatorRole.Arn + Runtime: python3.13 + MemorySize: 128 + Timeout: 60 Code: ZipFile: | - from __future__ import print_function import boto3 - import json - import os - import time - import traceback import cfnresponse import logging - logger = logging.getLogger(__name__) - - def lambda_handler(event, context): - print(event.values()) - print('context: {}'.format(context)) - responseData = {} - - status = cfnresponse.SUCCESS - - if event['RequestType'] == 'Delete': - logger.info("Emptying the S3 bucket to allow for successful bucket delete.") - s3 = boto3.resource('s3') - bucket_name = os.getenv('LogS3Bucket', None) - bucket_name = bucket_name.split(':::')[1] - try: - bucket = s3.Bucket(bucket_name) - bucket.objects.all().delete() - logger.info("Successfully deleted all objects in bucket '{}'".format(bucket_name)) - except err as err: - logger.error(err) - pass - responseData = {'Success': 'Custom Resource removed'} - cfnresponse.send(event, context, status, responseData, 'CustomResourcePhysicalID') - else: - try: - # Open 
AWS clients - ec2 = boto3.client('ec2') - ssm = boto3.client('ssm') - - # Get the InstanceId of the Cloud9 IDE - instance = ec2.describe_instances(Filters=[{'Name': 'tag:Name','Values': ['aws-cloud9-'+event['ResourceProperties']['Cloud9Name']+'-'+event['ResourceProperties']['EnvironmentId']]}])['Reservations'][0]['Instances'][0] - print('instance: {}'.format(instance)) - instance_id = instance['InstanceId'] - - # Create the IamInstanceProfile request object - iam_instance_profile = { - 'Arn': event['ResourceProperties']['LabIdeInstanceProfileArn'], - 'Name': event['ResourceProperties']['LabIdeInstanceProfileName'] - } - print('Found IAM instance profile: {}'.format(iam_instance_profile)) - time.sleep(10) + logger = logging.getLogger() + logger.setLevel(logging.INFO) - print('Waiting for the instance to be ready...') - - # Wait for Instance to become ready before adding Role - instance_state = instance['State']['Name'] - print('instance_state: {}'.format(instance_state)) - while instance_state != 'running': - time.sleep(5) - instance_state = ec2.describe_instances(InstanceIds=[instance_id]) - print('instance_state: {}'.format(instance_state)) - - print('Instance is ready') + def handler(event, context): + try: + if event['RequestType'] == 'Delete': + cfnresponse.send(event, context, cfnresponse.SUCCESS, {}) + return - associations = ec2.describe_iam_instance_profile_associations( - Filters=[ - { - 'Name': 'instance-id', - 'Values': [instance_id], - }, - ], - ) + bucket_name = event['ResourceProperties']['BucketName'] + + s3 = boto3.client('s3') + + # Create sample ETL script for IMDB data migration + etl_script = """ + import sys + from awsglue.transforms import * + from awsglue.utils import getResolvedOptions + from pyspark.context import SparkContext + from awsglue.context import GlueContext + from awsglue.job import Job + import boto3 - if len(associations['IamInstanceProfileAssociations']) > 0: - print('Replacing existing IAM profile...') - for association in associations['IamInstanceProfileAssociations']: - if association['State'] == 'associated': - print("{} is active with state {}".format(association['AssociationId'], association['State'])) - ec2.replace_iam_instance_profile_association(AssociationId=association['AssociationId'], IamInstanceProfile=iam_instance_profile) - else: - print('Associating IAM profile...') - ec2.associate_iam_instance_profile(IamInstanceProfile=iam_instance_profile, InstanceId=instance_id) + # Get job parameters + args = getResolvedOptions(sys.argv, ['JOB_NAME']) - block_volume_id = instance['BlockDeviceMappings'][0]['Ebs']['VolumeId'] + # Initialize Glue context + sc = SparkContext() + glueContext = GlueContext(sc) + spark = glueContext.spark_session + job = Job(glueContext) + job.init(args['JOB_NAME'], args) - block_device = ec2.describe_volumes(VolumeIds=[block_volume_id])['Volumes'][0] + # Create DynamoDB resource + dynamodb = boto3.resource('dynamodb') - DiskSize = int(os.environ['DiskSize']) - if block_device['Size'] < DiskSize: - ec2.modify_volume(VolumeId=block_volume_id,Size=DiskSize) - print('Modifying block volume: {}'.format(block_volume_id)) - time.sleep(10) + try: + # Read from MySQL using Glue Data Catalog + # This assumes the crawler has run and discovered the schema + + # Example: Read title_basics table + mysql_data = glueContext.create_dynamic_frame.from_catalog( + database="modernizer-migration-db", + table_name="imdb_title_basics" + ) + + # Convert to Spark DataFrame for processing + df = mysql_data.toDF() + + # Example transformation: 
prepare data for DynamoDB
+                    # Filter for movies and TV shows, clean up data
+                    filtered_df = df.filter(
+                        (df.titleType.isin(['movie', 'tvSeries', 'tvMovie'])) &
+                        (df.startYear.isNotNull()) &
+                        (df.startYear != '\\\\N') &
+                        (df.runtimeMinutes.isNotNull()) &
+                        (df.runtimeMinutes != '\\\\N')
+                    )
+
+                    # Select and rename columns for DynamoDB
+                    transformed_df = filtered_df.select(
+                        df.tconst.alias('title_id'),
+                        df.titleType.alias('title_type'),
+                        df.primaryTitle.alias('primary_title'),
+                        df.originalTitle.alias('original_title'),
+                        df.startYear.alias('start_year'),
+                        df.runtimeMinutes.alias('runtime_minutes'),
+                        df.genres.alias('genres')
+                    )
+
+                    # Convert back to DynamicFrame (import is required for fromDF)
+                    from awsglue.dynamicframe import DynamicFrame
+                    transformed_data = DynamicFrame.fromDF(transformed_df, glueContext, "transformed_data")
+
+                    # Resolve the staging bucket from the job's --TempDir argument (s3://<bucket>/glue-temp/),
+                    # since bucket_name is not otherwise defined inside the generated script
+                    temp_args = getResolvedOptions(sys.argv, ['TempDir'])
+                    bucket_name = temp_args['TempDir'].split('/')[2]
+
+                    # Write to S3 in JSON format (can be imported to DynamoDB later)
+                    output_path = f"s3://{bucket_name}/output/title_basics/"
+
+                    glueContext.write_dynamic_frame.from_options(
+                        frame=transformed_data,
+                        connection_type="s3",
+                        connection_options={"path": output_path},
+                        format="json"
+                    )
+
+                    print(f"ETL job completed successfully. Data written to {output_path}")
+
+                    # Optional: Write directly to DynamoDB table if it exists
+                    # This would require creating a DynamoDB table first
+                    # glueContext.write_dynamic_frame_from_options(
+                    #     frame=transformed_data,
+                    #     connection_type="dynamodb",
+                    #     connection_options={
+                    #         "dynamodb.region": "us-west-2",
+                    #         "dynamodb.output.tableName": "imdb_titles"
+                    #     }
+                    # )
+
+                except Exception as e:
+                    print(f"Error in ETL job: {str(e)}")
+                    raise
+
+                finally:
+                    job.commit()
+                """
+
+                # Upload ETL script to S3
+                s3.put_object(
+                    Bucket=bucket_name,
+                    Key='scripts/mysql-to-dynamodb-etl.py',
+                    Body=etl_script,
+                    ContentType='text/x-python-script'
+                )
+
+                # Create directory structure
+                for prefix in ['scripts/', 'glue-temp/', 'logs/', 'output/']:
+                    try:
+                        s3.put_object(
+                            Bucket=bucket_name,
+                            Key=f'{prefix}.gitkeep',
+                            Body=b''
+                        )
-                    modify_state = response['VolumesModifications'][0]['ModificationState']
-                    if modify_state != 'modifying':
-                        print('Volume has been resized')
-                        break
-                    time.sleep(10)
-                else:
-                    print('Volume is already sized')
+                    except Exception as e:
+                        logger.warning(f'Could not create {prefix}: {str(e)}')
+
+                logger.info('ETL script and directory structure created successfully')
+                cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
+
+            except Exception as e:
+                logger.error(f'Error: {str(e)}')
+                cfnresponse.send(event, context, cfnresponse.FAILED, {}, str(e))

-                print('Rebooting instance')
+  ETLScriptCreator:
+    Type: Custom::ETLScriptCreator
+    DependsOn: MigrationS3Bucket
+    Properties:
+      ServiceToken: !GetAtt ETLScriptCreatorFunction.Arn
+      BucketName: !Ref MigrationS3Bucket

-                ec2.reboot_instances(
-                    InstanceIds=[
-                        instance_id,
-                    ],
-                )
+  ############## VPC ENDPOINTS FOR GLUE NETWORKING ##############
+
+  # Self-referencing security group rule for Glue job communication
+  DbSecurityGroupSelfIngress:
+    Type: AWS::EC2::SecurityGroupIngress
+    Properties:
+      Description: Allow all traffic from same security group (required for AWS Glue)
+      GroupId: !GetAtt DbSecurityGroup.GroupId
+      IpProtocol: -1
+      SourceSecurityGroupId: !GetAtt DbSecurityGroup.GroupId

-                time.sleep(60)
+  # Self-referencing security group rule for VSCode security group (required for AWS 
Glue) + VSCodeSecurityGroupSelfIngress: + Type: AWS::EC2::SecurityGroupIngress + Properties: + Description: Allow all traffic from same security group (required for AWS Glue) + GroupId: !GetAtt SecurityGroup.GroupId + IpProtocol: -1 + SourceSecurityGroupId: !GetAtt SecurityGroup.GroupId - print('Waiting for instance to come online in SSM...') + # VPC Endpoints for Glue to access AWS services + # AWS Glue requires Gateway endpoints for S3 and DynamoDB + RouteTableLookupRole: + Type: AWS::IAM::Role + Properties: + AssumeRolePolicyDocument: + Version: 2012-10-17 + Statement: + - Effect: Allow + Principal: + Service: lambda.amazonaws.com + Action: sts:AssumeRole + ManagedPolicyArns: + - !Sub arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole + Policies: + - PolicyName: EC2RouteTableAccess + PolicyDocument: + Version: 2012-10-17 + Statement: + - Effect: Allow + Action: + - ec2:DescribeRouteTables + - ec2:DescribeSubnets + Resource: '*' - for i in range(1, 60): - response = ssm.describe_instance_information(Filters=[{'Key': 'InstanceIds', 'Values': [instance_id]}]) - if len(response["InstanceInformationList"]) == 0: - print('No instances in SSM') - elif len(response["InstanceInformationList"]) > 0 and \ - response["InstanceInformationList"][0]["PingStatus"] == "Online" and \ - response["InstanceInformationList"][0]["InstanceId"] == instance_id: - print('Instance is online in SSM') - break - time.sleep(10) + RouteTableLookupFunction: + Type: AWS::Lambda::Function + Metadata: + cfn_nag: + rules_to_suppress: + - id: W58 + reason: Lambda execution role has basic execution permissions + - id: W89 + reason: Lambda function does not need VPC configuration + - id: W92 + reason: Lambda function does not need provisioned concurrency + Properties: + Handler: index.handler + Role: !GetAtt RouteTableLookupRole.Arn + Runtime: python3.13 + MemorySize: 128 + Timeout: 60 + Code: + ZipFile: | + import boto3 + import cfnresponse + import logging - ssm_document = event['ResourceProperties']['SsmDocument'] + logger = logging.getLogger() + logger.setLevel(logging.INFO) - print('Sending SSM command...') + def handler(event, context): + try: + if event['RequestType'] == 'Delete': + cfnresponse.send(event, context, cfnresponse.SUCCESS, {}) + return - response = ssm.send_command( - InstanceIds=[instance_id], - DocumentName=ssm_document) + subnet_id = event['ResourceProperties']['SubnetId'] + vpc_id = event['ResourceProperties']['VpcId'] + + ec2 = boto3.client('ec2') + + # First check if subnet has explicit route table association + response = ec2.describe_route_tables( + Filters=[ + {'Name': 'association.subnet-id', 'Values': [subnet_id]} + ] + ) + + if response['RouteTables']: + route_table_id = response['RouteTables'][0]['RouteTableId'] + logger.info(f'Found explicit route table: {route_table_id}') + else: + # If no explicit association, find main route table for VPC + response = ec2.describe_route_tables( + Filters=[ + {'Name': 'vpc-id', 'Values': [vpc_id]}, + {'Name': 'association.main', 'Values': ['true']} + ] + ) + if response['RouteTables']: + route_table_id = response['RouteTables'][0]['RouteTableId'] + logger.info(f'Using main route table: {route_table_id}') + else: + raise Exception(f'No route table found for VPC {vpc_id}') + + cfnresponse.send(event, context, cfnresponse.SUCCESS, {'RouteTableId': route_table_id}) + + except Exception as e: + logger.error(f'Error: {str(e)}') + cfnresponse.send(event, context, cfnresponse.FAILED, {}, str(e)) - command_id = 
response['Command']['CommandId'] + RouteTableLookup: + Type: Custom::RouteTableLookup + DependsOn: VSCodeInstance + Properties: + ServiceToken: !GetAtt RouteTableLookupFunction.Arn + SubnetId: !GetAtt VSCodeInstance.SubnetId + VpcId: !GetAtt VSCodeInstance.VpcId - waiter = ssm.get_waiter('command_executed') + S3VPCEndpoint: + Type: AWS::EC2::VPCEndpoint + Properties: + VpcId: !GetAtt VSCodeInstance.VpcId + ServiceName: !Sub com.amazonaws.${AWS::Region}.s3 + VpcEndpointType: Gateway + RouteTableIds: + - !GetAtt RouteTableLookup.RouteTableId - waiter.wait( - CommandId=command_id, - InstanceId=instance_id, - WaiterConfig={ - 'Delay': 10, - 'MaxAttempts': 30 - } - ) - - responseData = {'Success': 'Started bootstrapping for instance: '+instance_id} - cfnresponse.send(event, context, status, responseData, 'CustomResourcePhysicalID') - - except Exception as e: - status = cfnresponse.FAILED - print(traceback.format_exc()) - responseData = {'Error': traceback.format_exc(e)} - finally: - cfnresponse.send(event, context, status, responseData, 'CustomResourcePhysicalID') - LambdaLogGroup: - Type: AWS::Logs::LogGroup - DeletionPolicy: Delete - UpdateReplacePolicy: Delete + DynamoDBVPCEndpoint: + Type: AWS::EC2::VPCEndpoint Properties: - LogGroupName: !Sub /aws/lambda/${Cloud9BootstrapInstanceLambdaFunction} - RetentionInDays: 7 + VpcId: !GetAtt VSCodeInstance.VpcId + ServiceName: !Sub com.amazonaws.${AWS::Region}.dynamodb + VpcEndpointType: Gateway + RouteTableIds: + - !GetAtt RouteTableLookup.RouteTableId - ################## SSM BOOTSTRAP HANDLER ############### - Cloud9LogBucket: - Type: AWS::S3::Bucket - Metadata: - cfn_nag: - rules_to_suppress: - - id: W35 - reason: Access logs aren't needed for this bucket - DeletionPolicy: Delete + # VPC Endpoint for AWS Secrets Manager (required for Glue connections with stored credentials) + SecretsManagerVPCEndpoint: + Type: AWS::EC2::VPCEndpoint Properties: - AccessControl: Private - BucketEncryption: - ServerSideEncryptionConfiguration: - - ServerSideEncryptionByDefault: - SSEAlgorithm: AES256 - PublicAccessBlockConfiguration: - BlockPublicAcls: true - BlockPublicPolicy: true - IgnorePublicAcls: true - RestrictPublicBuckets: true - Cloud9LogBucketPolicy: - Type: AWS::S3::BucketPolicy - Properties: - Bucket: !Ref Cloud9LogBucket + VpcId: !GetAtt VSCodeInstance.VpcId + ServiceName: !Sub com.amazonaws.${AWS::Region}.secretsmanager + VpcEndpointType: Interface + SubnetIds: + - !GetAtt VSCodeInstance.SubnetId + SecurityGroupIds: + - !GetAtt SecurityGroup.GroupId PolicyDocument: - Version: 2012-10-17 + Version: '2012-10-17' Statement: - - Action: - - s3:GetObject - - s3:PutObject - - s3:PutObjectAcl - Effect: Allow - Resource: - - !Sub arn:${AWS::Partition}:s3:::${Cloud9LogBucket} - - !Sub arn:${AWS::Partition}:s3:::${Cloud9LogBucket}/* - Principal: - AWS: - Fn::GetAtt: - - Cloud9LambdaExecutionRole - - Arn + - Effect: Allow + Principal: '*' + Action: + - secretsmanager:GetSecretValue + - secretsmanager:DescribeSecret + Resource: '*' - Cloud9BootStrapSSMDocument: - Type: AWS::SSM::Document - Properties: - Tags: - - Key: Environment - Value: !Sub ${EnvironmentName} - Content: !Sub - - |+ - { - "schemaVersion": "1.2", - "description": "RunDaShellScript", - "parameters": {}, - "runtimeConfig": { - "aws:runShellScript": { - "properties": [ - { - "id": "0.aws:runShellScript", - "runCommand": [ - "#!/bin/bash", - "echo \"`date -u +\"%Y-%m-%dT%H:%M:%SZ\"` Started DynamoDB Workshop User Data\"", - "set -x", - - "function sleep_delay", - "{", - " if (( $SLEEP_TIME < 
$SLEEP_TIME_MAX )); then", - " echo Sleeping $SLEEP_TIME", - " sleep $SLEEP_TIME", - " SLEEP_TIME=$(($SLEEP_TIME * 2))", - " else", - " echo Sleeping $SLEEP_TIME_MAX", - " sleep $SLEEP_TIME_MAX", - " fi", - "}", - "# Executing bootstrap script", - "SLEEP_TIME=10", - "SLEEP_TIME_MAX=3600", - "while true; do", - " curl \"${SUB_USERDATA_URL}\" > /tmp/dynamodbworkshop.sh", - " RESULT=$?", - " if [[ \"$RESULT\" -ne 0 ]]; then", - " sleep_delay", - " else", - " /bin/bash /tmp/dynamodbworkshop.sh ${SUB_VERSION} ${AWS::AccountId} ${AWS::Region} \"${WorkshopZIP}\" \"${SUB_REPL_ROLE}\" \"${SUB_DB_USER}\" \"${SUB_DB_PASSWORD}\" &&", - " exit 0", - " fi", - "done" - ] - } - ] - } - } - } - - { - SUB_USERDATA_URL: !FindInMap [DesignPatterns, options, UserDataURL], - SUB_VERSION: !FindInMap [DesignPatterns, options, version], - SUB_REPL_ROLE: !GetAtt ['DDBReplicationRole', 'Arn'], - SUB_DB_USER: !Ref 'DbMasterUsername', - SUB_DB_PASSWORD: !Ref 'DbMasterPassword', - } - Cloud9BootstrapAssociation: - Type: AWS::SSM::Association - Properties: - Name: !Ref Cloud9BootStrapSSMDocument - OutputLocation: - S3Location: - OutputS3BucketName: !Ref Cloud9LogBucket - OutputS3KeyPrefix: bootstrap - Targets: - - Key: tag:SSMBootstrap - Values: - - Active - - ################## INSTANCE ##################### - Cloud9InstanceProfile: - Type: AWS::IAM::InstanceProfile - Properties: - Path: '/' - Roles: - - Ref: Cloud9Role - - Cloud9Instance: - DependsOn: Cloud9BootstrapAssociation - Type: AWS::Cloud9::EnvironmentEC2 - Properties: - Description: !Sub AWS Cloud9 instance for ${EnvironmentName} - AutomaticStopTimeMinutes: !Ref AutomaticStopTimeMinutes - InstanceType: !GetAtt Cloud9FindTheInstanceTypeLambda.InstanceType - ImageId: ubuntu-22.04-x86_64 - SubnetId: !GetAtt Cloud9FindTheInstanceTypeLambda.SubnetId - Name: !Ref InstanceName - OwnerArn: - Fn::If: - - AssignCloud9Owner - - !Sub arn:${AWS::Partition}:iam::${AWS::AccountId}:assumed-role/${InstanceOwner} - - Ref: AWS::NoValue - Tags: - - Key: SSMBootstrap - Value: Active - - Key: SSMInstallFiles - Value: Active - - Key: Environment - Value: !Ref EnvironmentName - ############ RELATIONAL MIGRATION STAGING BUCKET ######### - MigrationS3Bucket: - Type: AWS::S3::Bucket ###### RELATIONAL MIGRATION MYSQL EC2 PUBLIC INSTANCE ###### DbSecurityGroup: Type: AWS::EC2::SecurityGroup @@ -816,10 +997,10 @@ Resources: Type: AWS::EC2::Instance Properties: ImageId: !Ref DBLatestAmiId - InstanceType: !GetAtt Cloud9FindTheInstanceTypeLambda.InstanceType + InstanceType: !GetAtt VSCodeFindTheInstanceTypeLambda.InstanceType SecurityGroupIds: - !GetAtt DbSecurityGroup.GroupId - SubnetId: !GetAtt Cloud9FindTheInstanceTypeLambda.SubnetId + SubnetId: !GetAtt VSCodeFindTheInstanceTypeLambda.SubnetId IamInstanceProfile: !Ref DBInstanceProfile BlockDeviceMappings: - DeviceName: /dev/xvda @@ -831,64 +1012,1981 @@ Resources: UserData: Fn::Base64: !Sub | #!/bin/bash -ex - sudo su + + # Enable logging + exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1 + echo "Starting MySQL setup at $(date)" + + # Set variables + export DbMasterPassword='${DbMasterPassword}' + export DbMasterUsername='${DbMasterUsername}' + + # Function to retry commands + retry_command() { + local max_attempts=3 + local delay=5 + local attempt=1 + + while [ $attempt -le $max_attempts ]; do + echo "Attempt $attempt of $max_attempts: $*" + if "$@"; then + echo "Command succeeded on attempt $attempt" + return 0 + else + echo "Command failed on attempt $attempt" + if [ $attempt -lt $max_attempts ]; then + 
echo "Waiting $delay seconds before retry..." + sleep $delay + fi + ((attempt++)) + fi + done + + echo "Command failed after $max_attempts attempts: $*" + return 1 + } + + # Install MySQL + echo "Installing MySQL..." rpm --import https://repo.mysql.com/RPM-GPG-KEY-mysql-2023 rpm -Uvh https://repo.mysql.com/mysql80-community-release-el7-3.noarch.rpm yum install -y mysql-community-server + + # Start MySQL service + echo "Starting MySQL service..." systemctl enable mysqld systemctl start mysqld - export DbMasterPassword=${DbMasterPassword} - export DbMasterUsername=${DbMasterUsername} - mysql -u root "-p$(grep -oP '(?<=root@localhost\: )\S+' /var/log/mysqld.log)" -e "ALTER USER 'root'@'localhost' IDENTIFIED BY '${DbMasterPassword}'" --connect-expired-password - mysql -u root "-p${DbMasterPassword}" -e "CREATE USER '${DbMasterUsername}' IDENTIFIED BY '${DbMasterPassword}'" - mysql -u root "-p${DbMasterPassword}" -e "GRANT ALL PRIVILEGES ON *.* TO '${DbMasterUsername}'" - mysql -u root "-p${DbMasterPassword}" -e "FLUSH PRIVILEGES" - mysql -u root "-p${DbMasterPassword}" -e "CREATE DATABASE app_db;" - ## Setup MySQL Tables - cd /var/lib/mysql-files/ - curl -O https://www.amazondynamodblabs.com/static/rdbms-migration/rdbms-migration.zip - unzip -q rdbms-migration.zip - chmod 775 *.* - mysql -u root "-p${DbMasterPassword}" -e "CREATE DATABASE imdb;" - mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.title_akas (titleId VARCHAR(200), ordering VARCHAR(200),title VARCHAR(1000), region VARCHAR(1000), language VARCHAR(1000), types VARCHAR(1000),attributes VARCHAR(1000),isOriginalTitle VARCHAR(5),primary key (titleId, ordering));" - mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.title_basics (tconst VARCHAR(200), titleType VARCHAR(1000),primaryTitle VARCHAR(1000), originalTitle VARCHAR(1000), isAdult VARCHAR(1000), startYear VARCHAR(1000),endYear VARCHAR(1000),runtimeMinutes VARCHAR(1000),genres VARCHAR(1000),primary key (tconst));" - mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.title_crew (tconst VARCHAR(200), directors VARCHAR(1000),writers VARCHAR(1000),primary key (tconst));" - mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.title_principals (tconst VARCHAR(200), ordering VARCHAR(200),nconst VARCHAR(200), category VARCHAR(1000), job VARCHAR(1000), characters VARCHAR(1000),primary key (tconst,ordering,nconst));" - mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.title_ratings (tconst VARCHAR(200), averageRating float,numVotes integer,primary key (tconst));" - mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.name_basics (nconst VARCHAR(200), primaryName VARCHAR(1000),birthYear VARCHAR(1000), deathYear VARCHAR(1000), primaryProfession VARCHAR(1000), knownForTitles VARCHAR(1000),primary key (nconst));" - mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/title_ratings.tsv' IGNORE INTO TABLE imdb.title_ratings FIELDS TERMINATED BY '\t';" - mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/title_basics.tsv' IGNORE INTO TABLE imdb.title_basics FIELDS TERMINATED BY '\t';" - mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/title_crew.tsv' IGNORE INTO TABLE imdb.title_crew FIELDS TERMINATED BY '\t';" - mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/title_principals.tsv' IGNORE INTO TABLE imdb.title_principals FIELDS TERMINATED BY '\t';" - mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE 
'/var/lib/mysql-files/name_basics.tsv' IGNORE INTO TABLE imdb.name_basics FIELDS TERMINATED BY '\t';" - mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/title_akas.tsv' IGNORE INTO TABLE imdb.title_akas FIELDS TERMINATED BY '\t';" + + # Wait for MySQL to be ready + echo "Waiting for MySQL to be ready..." + for i in {1..30}; do + if systemctl is-active --quiet mysqld; then + echo "MySQL service is active" + break + fi + echo "Waiting for MySQL service... ($i/30)" + sleep 2 + done + + # Get temporary root password + echo "Getting temporary root password..." + TEMP_PASSWORD=$(grep 'temporary password' /var/log/mysqld.log | awk '{print $NF}' | tail -1) + if [ -z "$TEMP_PASSWORD" ]; then + echo "ERROR: Could not find temporary password in MySQL log" + exit 1 + fi + echo "Found temporary password" + + # Set password validation to LOW to allow simpler passwords + echo "Configuring password validation..." + mysql -u root -p"$TEMP_PASSWORD" --connect-expired-password -e "SET GLOBAL validate_password.policy=LOW;" || { + echo "Failed to set password policy, trying alternative method..." + mysql -u root -p"$TEMP_PASSWORD" --connect-expired-password -e "SET GLOBAL validate_password_policy=LOW;" || { + echo "Warning: Could not set password validation policy" + } + } + + # Change root password + echo "Changing root password..." + mysql -u root -p"$TEMP_PASSWORD" --connect-expired-password -e "ALTER USER 'root'@'localhost' IDENTIFIED BY '$DbMasterPassword';" || { + echo "ERROR: Failed to change root password" + exit 1 + } + + # Create database user + echo "Creating database user..." + mysql -u root -p"$DbMasterPassword" -e "CREATE USER IF NOT EXISTS '$DbMasterUsername'@'%' IDENTIFIED BY '$DbMasterPassword';" || { + echo "ERROR: Failed to create user" + exit 1 + } + + mysql -u root -p"$DbMasterPassword" -e "GRANT ALL PRIVILEGES ON *.* TO '$DbMasterUsername'@'%';" || { + echo "ERROR: Failed to grant privileges" + exit 1 + } + + mysql -u root -p"$DbMasterPassword" -e "FLUSH PRIVILEGES;" || { + echo "ERROR: Failed to flush privileges" + exit 1 + } + + # Create app database + echo "Creating app_db database..." + mysql -u root -p"$DbMasterPassword" -e "CREATE DATABASE IF NOT EXISTS app_db;" || { + echo "ERROR: Failed to create app_db" + exit 1 + } + + # Setup IMDB database and tables + echo "Setting up IMDB database..." + mysql -u root -p"$DbMasterPassword" -e "CREATE DATABASE IF NOT EXISTS imdb;" || { + echo "ERROR: Failed to create imdb database" + exit 1 + } + + # Download and extract data files + echo "Downloading IMDB data files..." + cd /var/lib/mysql-files/ || { + echo "ERROR: Could not change to mysql-files directory" + exit 1 + } + + # Download with retry + retry_command curl -L -o rdbms-migration.zip https://www.amazondynamodblabs.com/static/rdbms-migration/rdbms-migration.zip || { + echo "ERROR: Failed to download data files" + exit 1 + } + + # Extract files + echo "Extracting data files..." + unzip -q rdbms-migration.zip || { + echo "ERROR: Failed to extract data files" + exit 1 + } + + # Set proper permissions + chmod 644 *.tsv 2>/dev/null || echo "Warning: Could not set permissions on TSV files" + chown mysql:mysql *.tsv 2>/dev/null || echo "Warning: Could not change ownership of TSV files" + + # Create tables + echo "Creating IMDB tables..." 
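+          # The CREATE TABLE statements below mirror the IMDB TSV column layouts; IF NOT EXISTS keeps re-runs of this setup script idempotent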
+ mysql -u root -p"$DbMasterPassword" -e " + CREATE TABLE IF NOT EXISTS imdb.title_akas ( + titleId VARCHAR(200), + ordering VARCHAR(200), + title VARCHAR(1000), + region VARCHAR(1000), + language VARCHAR(1000), + types VARCHAR(1000), + attributes VARCHAR(1000), + isOriginalTitle VARCHAR(5), + PRIMARY KEY (titleId, ordering) + );" || echo "Warning: Failed to create title_akas table" + + mysql -u root -p"$DbMasterPassword" -e " + CREATE TABLE IF NOT EXISTS imdb.title_basics ( + tconst VARCHAR(200), + titleType VARCHAR(1000), + primaryTitle VARCHAR(1000), + originalTitle VARCHAR(1000), + isAdult VARCHAR(1000), + startYear VARCHAR(1000), + endYear VARCHAR(1000), + runtimeMinutes VARCHAR(1000), + genres VARCHAR(1000), + PRIMARY KEY (tconst) + );" || echo "Warning: Failed to create title_basics table" + + mysql -u root -p"$DbMasterPassword" -e " + CREATE TABLE IF NOT EXISTS imdb.title_crew ( + tconst VARCHAR(200), + directors VARCHAR(1000), + writers VARCHAR(1000), + PRIMARY KEY (tconst) + );" || echo "Warning: Failed to create title_crew table" + + mysql -u root -p"$DbMasterPassword" -e " + CREATE TABLE IF NOT EXISTS imdb.title_principals ( + tconst VARCHAR(200), + ordering VARCHAR(200), + nconst VARCHAR(200), + category VARCHAR(1000), + job VARCHAR(1000), + characters VARCHAR(1000), + PRIMARY KEY (tconst,ordering,nconst) + );" || echo "Warning: Failed to create title_principals table" + + mysql -u root -p"$DbMasterPassword" -e " + CREATE TABLE IF NOT EXISTS imdb.title_ratings ( + tconst VARCHAR(200), + averageRating FLOAT, + numVotes INTEGER, + PRIMARY KEY (tconst) + );" || echo "Warning: Failed to create title_ratings table" + + mysql -u root -p"$DbMasterPassword" -e " + CREATE TABLE IF NOT EXISTS imdb.name_basics ( + nconst VARCHAR(200), + primaryName VARCHAR(1000), + birthYear VARCHAR(1000), + deathYear VARCHAR(1000), + primaryProfession VARCHAR(1000), + knownForTitles VARCHAR(1000), + PRIMARY KEY (nconst) + );" || echo "Warning: Failed to create name_basics table" + + # Load data with error handling + echo "Loading data into tables..." + + # Function to load data with error handling + load_data() { + local file=$1 + local table=$2 + + if [ -f "$file" ]; then + echo "Loading data from $file into $table..." + mysql -u root -p"$DbMasterPassword" -e " + LOAD DATA INFILE '/var/lib/mysql-files/$file' + IGNORE INTO TABLE imdb.$table + FIELDS TERMINATED BY '\t' + LINES TERMINATED BY '\n' + IGNORE 1 LINES;" || echo "Warning: Failed to load data from $file" + else + echo "Warning: File $file not found" + fi + } + + # Load all data files + load_data "title_ratings.tsv" "title_ratings" + load_data "title_basics.tsv" "title_basics" + load_data "title_crew.tsv" "title_crew" + load_data "title_principals.tsv" "title_principals" + load_data "name_basics.tsv" "name_basics" + load_data "title_akas.tsv" "title_akas" + + # Verify setup + echo "Verifying database setup..." + mysql -u root -p"$DbMasterPassword" -e "SHOW DATABASES;" || echo "Warning: Could not show databases" + mysql -u root -p"$DbMasterPassword" -e "USE imdb; SHOW TABLES;" || echo "Warning: Could not show imdb tables" + + echo "MySQL setup completed at $(date)" Tags: - Key: Name Value: MySQL-Instance +################ VSCode Server ################ + VSCodeSecret: + Metadata: + cfn_nag: + rules_to_suppress: + - id: W77 + reason: The default KMS Key used by Secrets Manager is appropriate for this password which will be used to log into VSCodeServer, which has very limited permissions. 
In addition this secret will not be required to be shared across accounts + Type: AWS::SecretsManager::Secret + DeletionPolicy: Delete + UpdateReplacePolicy: Delete + Properties: + Name: !Ref VSCodeInstanceName + Description: VS code-server user details + GenerateSecretString: + PasswordLength: 16 + SecretStringTemplate: !Sub '{"username":"${VSCodeUser}"}' + GenerateStringKey: "password" + ExcludePunctuation: true + + SecretPlaintextLambdaRole: + Type: AWS::IAM::Role + Properties: + AssumeRolePolicyDocument: + Version: 2012-10-17 + Statement: + - Effect: Allow + Principal: + Service: !Sub lambda.${AWS::URLSuffix} + Action: sts:AssumeRole + ManagedPolicyArns: + - !Sub arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole + Policies: + - PolicyName: AwsSecretsManager + PolicyDocument: + Version: 2012-10-17 + Statement: + - Effect: Allow + Action: + - secretsmanager:GetSecretValue + Resource: !Ref VSCodeSecret + + SecretPlaintextLambda: + Type: AWS::Lambda::Function + Metadata: + cfn_nag: + rules_to_suppress: + - id: W58 + reason: Warning incorrectly reported. The role associated with the Lambda function has the AWSLambdaBasicExecutionRole managed policy attached, which includes permission to write CloudWatch Logs. See https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSLambdaBasicExecutionRole.html + - id: W89 + reason: CloudFormation custom function does not need the scaffolding of a VPC, to do so would add unnecessary complexity + - id: W92 + reason: CloudFormation custom function does not need reserved concurrent executions, to do so would add unnecessary complexity + Properties: + Description: Return the value of the secret + Handler: index.lambda_handler + Runtime: python3.13 + MemorySize: 128 + Timeout: 10 + Architectures: + - arm64 + Role: !GetAtt SecretPlaintextLambdaRole.Arn + Code: + ZipFile: | + import boto3 + import json + import cfnresponse + import logging + + logger = logging.getLogger() + logger.setLevel(logging.INFO) + + def is_valid_json(json_string): + logger.debug(f'Calling is_valid_jason:{json_string}') + try: + json.loads(json_string) + logger.info('Secret is in json format') + return True + except json.JSONDecodeError: + logger.info('Secret is in string format') + return False + + def lambda_handler(event, context): + logger.debug(f'event: {event}') + logger.debug(f'context: {context}') + try: + if event['RequestType'] == 'Delete': + cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData={}, reason='No action to take') + else: + resource_properties = event['ResourceProperties'] + secret_name = resource_properties['SecretArn'] + secrets_mgr = boto3.client('secretsmanager') + + logger.info('Getting secret from %s', secret_name) + + secret = secrets_mgr.get_secret_value(SecretId = secret_name) + logger.debug(f'secret: {secret}') + secret_value = secret['SecretString'] + + responseData = {} + if is_valid_json(secret_value): + responseData = secret_value + else: + responseData = {'secret': secret_value} + logger.debug(f'responseData: {responseData}') + logger.debug(f'type(responseData): {type(responseData)}') + cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData=json.loads(responseData), reason='OK', noEcho=True) + except Exception as e: + logger.error(e) + cfnresponse.send(event, context, cfnresponse.FAILED, responseData={}, reason=str(e)) + + SecretPlaintext: + Type: Custom::SecretPlaintextLambda + Properties: + ServiceToken: !GetAtt SecretPlaintextLambda.Arn + ServiceTimeout: 15 + SecretArn: !Ref 
VSCodeSecret + + VSCodeSSMDoc: + Type: AWS::SSM::Document + Properties: + DocumentType: Command + Content: + schemaVersion: "2.2" + description: Bootstrap VS code-server instance + parameters: + LinuxFlavor: + type: String + default: "al2023" + VSCodePassword: + type: String + default: !Ref AWS::StackId + PythonMajorMinor: + type: String + default: "3.13" + # all mainSteps scripts are in in /var/lib/amazon/ssm//document/orchestration///_script.sh + mainSteps: + # This step was needed to avoid "Can't create transaction lock" error likely due to competing install + - name: RemoveTransactionLock + action: aws:runShellScript + inputs: + timeoutSeconds: 300 + runCommand: + - "#!/bin/bash" + - sudo rm -f /var/lib/rpm/.rpm.lock + - name: InstallCloudWatchAgent + action: aws:configurePackage + inputs: + name: AmazonCloudWatchAgent + action: Install + - name: ConfigureCloudWatchAgent + action: aws:runDocument + inputs: + documentType: SSMDocument + documentPath: AmazonCloudWatch-ManageAgent + documentParameters: + action: configure + mode: ec2 + optionalConfigurationSource: default + optionalRestart: "yes" + - name: InstallBasePackagesDnf + action: aws:runShellScript + precondition: + StringEquals: + - "{{ LinuxFlavor }}" + - al2023 + inputs: + timeoutSeconds: 300 + runCommand: + - "#!/bin/bash" + - dnf install -y --allowerasing curl gnupg whois argon2 unzip nginx openssl + - name: AddUserDnf + action: aws:runShellScript + precondition: + StringEquals: + - "{{ LinuxFlavor }}" + - al2023 + inputs: + timeoutSeconds: 300 + runCommand: + - "#!/bin/bash" + - !Sub | + echo 'Adding user: ${VSCodeUser}' + adduser -c '' ${VSCodeUser} + passwd -l ${VSCodeUser} + echo "${VSCodeUser}:{{ VSCodePassword }}" | chpasswd + usermod -aG wheel ${VSCodeUser} + echo "participant ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/participant + sudo chmod 440 /etc/sudoers.d/participant + - echo "User added. Checking configuration" + - !Sub getent passwd ${VSCodeUser} + - name: UpdateProfile + action: aws:runShellScript + inputs: + timeoutSeconds: 300 + runCommand: + - "#!/bin/bash" + - echo LANG=en_US.utf-8 >> /etc/environment + - echo LC_ALL=en_US.UTF-8 >> /etc/environment + - !Sub echo 'PATH=$PATH:/home/${VSCodeUser}/.local/bin' >> /home/${VSCodeUser}/.bashrc + - !Sub echo 'export PATH' >> /home/${VSCodeUser}/.bashrc + - !Sub echo 'export AWS_REGION=${AWS::Region}' >> /home/${VSCodeUser}/.bashrc + - !Sub echo 'export AWS_ACCOUNTID=${AWS::AccountId}' >> /home/${VSCodeUser}/.bashrc + - !Sub echo 'export NEXT_TELEMETRY_DISABLED=1' >> /home/${VSCodeUser}/.bashrc + - !Sub echo "export PS1='\[\033[01;32m\]\u:\[\033[01;34m\]\w\[\033[00m\]\$ '" >> /home/${VSCodeUser}/.bashrc + - !Sub chown -R ${VSCodeUser}:${VSCodeUser} /home/${VSCodeUser} + - name: InstallAWSCLI + action: aws:runShellScript + inputs: + timeoutSeconds: 300 + runCommand: + - "#!/bin/bash" + - mkdir -p /tmp + - curl -fsSL https://awscli.amazonaws.com/awscli-exe-linux-$(uname -m).zip -o /tmp/aws-cli.zip + - !Sub chown -R ${VSCodeUser}:${VSCodeUser} /tmp/aws-cli.zip + - unzip -q -d /tmp /tmp/aws-cli.zip + - sudo /tmp/aws/install + - rm -rf /tmp/aws + - echo "AWS CLI installed. 
Checking configuration" + - aws --version + - name: InstallGitDnf + action: aws:runShellScript + precondition: + StringEquals: + - "{{ LinuxFlavor }}" + - al2023 + inputs: + timeoutSeconds: 300 + runCommand: + - "#!/bin/bash" + - dnf install -y git + - !Sub sudo -u ${VSCodeUser} git config --global user.email "participant@example.com" + - !Sub sudo -u ${VSCodeUser} git config --global user.name "Workshop Participant" + - !Sub sudo -u ${VSCodeUser} git config --global init.defaultBranch "main" + - echo "Git installed. Checking configuration" + - git --version + - name: ConfigureCodeServer + action: aws:runShellScript + inputs: + timeoutSeconds: 600 + runCommand: + - "#!/bin/bash" + - !Sub export HOME=/home/${VSCodeUser} + - curl -fsSL https://code-server.dev/install.sh | sh -s -- --version 4.100.3 2>&1 + - !Sub | + # Create systemd service file for code-server + tee /etc/systemd/system/code-server@${VSCodeUser}.service <&1 + - systemctl status nginx --no-pager + - echo "CodeServer installed. Checking configuration" + - code-server -v + - !Sub systemctl status code-server@${VSCodeUser} --no-pager + - echo "Checking if code-server is listening on port 8080..." + - netstat -tlnp | grep :8080 || echo "Warning code-server not yet listening on port 8080" + - name: InstallLADVDepsf + action: aws:runShellScript + inputs: + timeoutSeconds: 1200 + runCommand: + - "#!/bin/bash" + - !Sub "mkdir -p ${VSCodeHomeFolder}/{LHOL,LBED,LADV,LSQL,LMR,LEDA,LGME,LGAM,LDMS,LDC,LCDC}" + - mkdir -p /tmp + - !Sub curl -o /tmp/workshop.zip "${WorkshopZIP}" + - !Sub unzip -o /tmp/workshop.zip -d ${VSCodeHomeFolder}/LADV + - rm /tmp/workshop.zip + - !Sub echo "${DDBReplicationRole.Arn}" > ${VSCodeHomeFolder}/ddb-replication-role-arn.txt + - !Sub chown -R ${VSCodeUser}:${VSCodeUser} ${VSCodeHomeFolder} + - echo "Installing pyenv dependencies..." + - dnf install -y make gcc zlib-devel bzip2 bzip2-devel readline-devel sqlite sqlite-devel openssl-devel tk-devel libffi-devel xz-devel > /dev/null + - echo "Installing pyenv for VSCode user..." + - !Sub sudo -u ${VSCodeUser} bash -c 'curl https://pyenv.run | bash' + - echo "Configuring pyenv in shell profiles..." + - !Sub echo 'export PYENV_ROOT="$HOME/.pyenv"' >> /home/${VSCodeUser}/.bashrc + - !Sub echo 'command -v pyenv >/dev/null || export PATH="$PYENV_ROOT/bin:$PATH"' >> /home/${VSCodeUser}/.bashrc + - !Sub echo 'eval "$(pyenv init -)"' >> /home/${VSCodeUser}/.bashrc + - echo "Installing Python {{ PythonMajorMinor }}:latest using pyenv..." + - !Sub sudo -u ${VSCodeUser} bash -c 'source ~/.bashrc && pyenv install {{ PythonMajorMinor }}:latest' + - echo "Getting installed Python version and setting global..." + - !Sub sudo -u ${VSCodeUser} bash -c 'source ~/.bashrc && PYTHON_VERSION=$(pyenv versions --bare | grep "^{{ PythonMajorMinor }}" | tail -1) && echo "Setting global Python version to $PYTHON_VERSION" && pyenv global $PYTHON_VERSION' + - echo "Installing required Python packages..." + - !Sub sudo -u ${VSCodeUser} bash -c 'source ~/.bashrc && pip install boto3 opensearch-py' + - echo "Creating symlink for backward compatibility..." + - !Sub sudo -u ${VSCodeUser} bash -c 'source ~/.bashrc && sudo ln -sf $(pyenv which python) /usr/local/bin/python' + - !Sub chown -R ${VSCodeUser}:${VSCodeUser} /home/${VSCodeUser}/.pyenv + - echo "Python installation completed. 
Version:" + - !Sub sudo -u ${VSCodeUser} bash -c 'source ~/.bashrc && python --version' + - name: InstallNode + action: aws:runShellScript + inputs: + timeoutSeconds: 600 + runCommand: + - "#!/bin/bash" + - echo "Installing Node.js using nvm..." + - !Sub | + # Install nvm as participant user + sudo -u ${VSCodeUser} bash -c 'curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash' + - !Sub | + # Install Node.js 18 as participant user and set as default + sudo -u ${VSCodeUser} bash -c 'export NVM_DIR="$HOME/.nvm" && [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" && nvm install 18 && nvm use 18 && nvm alias default 18' + - echo "Adding nvm configuration to shell profiles..." + - !Sub | + # Add to .bashrc for interactive bash shells + cat >> /home/${VSCodeUser}/.bashrc <> /home/${VSCodeUser}/.zshrc <> /home/${VSCodeUser}/.profile <> /etc/mysql/mysql.conf.d/mysqld.cnf <> /home/${VSCodeUser}/.bashrc < /home/${VSCodeUser}/.aws/config < /home/${VSCodeUser}/.aws/amazonq/mcp.json <<'\''EOF'\'' + { + "mcpServers": { + "awslabs.dynamodb-mcp-server": { + "command": "uvx", + "args": ["awslabs.dynamodb-mcp-server@latest"], + "env": { + "DDB-MCP-READONLY": "true", + "AWS_PROFILE": "default", + "AWS_REGION": "us-west-2", + "FASTMCP_LOG_LEVEL": "ERROR" + }, + "disabled": false, + "autoApprove": [] + }, + "mysql": { + "type": "stdio", + "command": "uvx", + "args": [ + "--from", + "mysql-mcp-server", + "mysql_mcp_server" + ], + "env": { + "MYSQL_HOST": "127.0.0.1", + "MYSQL_PORT": "3306", + "MYSQL_USER": "${DbMasterUsername}", + "MYSQL_PASSWORD": "${DbMasterPassword}", + "MYSQL_DATABASE": "online_shopping_store" + } + } + } + } + EOF' + - !Sub | + # Create MCP config file as participant user + sudo -u ${VSCodeUser} mkdir -p /home/${VSCodeUser}/.local/share/code-server/User/globalStorage/saoudrizwan.claude-dev/settings + sudo -u ${VSCodeUser} bash -c 'cat > /home/${VSCodeUser}/.local/share/code-server/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json <<'\''EOF'\'' + { + "mcpServers": { + "dynamodb-server": { + "autoApprove": [], + "disabled": false, + "timeout": 60, + "type": "stdio", + "command": "uvx", + "args": [ + "awslabs.dynamodb-mcp-server@latest" + ], + "env": { + "DDB-MCP-READONLY": "false", + "AWS_PROFILE": "default", + "AWS_REGION": "us-west-2", + "FASTMCP_LOG_LEVEL": "ERROR" + } + }, + "data-processing-mcp": { + "autoApprove": [], + "disabled": false, + "timeout": 60, + "type": "stdio", + "command": "uvx", + "args": [ + "awslabs.aws-dataprocessing-mcp-server@latest", + "--allow-write" + ], + "env": { + "FASTMCP_LOG_LEVEL": "ERROR", + "AWS_REGION": "us-west-2" + } + }, + "modernizer-mysql-mcp-server": { + "timeout": 60, + "type": "stdio", + "command": "uvx", + "args": [ + "--from", + "mysql-mcp-server", + "mysql_mcp_server" + ], + "env": { + "MYSQL_HOST": "${VSCodeInstance.PrivateIp}", + "MYSQL_PORT": "3306", + "MYSQL_USER": "${DbMasterUsername}", + "MYSQL_PASSWORD": "${DbMasterPassword}", + "MYSQL_DATABASE": "online_shopping_store" + } + } + } + } + EOF' + - echo "modernizer setup completed successfully." + - name: InstallDocker + action: aws:runShellScript + inputs: + timeoutSeconds: 1200 + runCommand: + - "#!/bin/bash" + - "set -euo pipefail" + - echo "Installing Docker..." + - yum install docker -y + - systemctl start docker + - systemctl enable docker + - !Sub "usermod -aG docker ${VSCodeUser}" + - echo "Installing Docker Compose..." 
+ - "curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose" + - "chmod +x /usr/local/bin/docker-compose" + - echo "Verifying Docker installation..." + - "docker --version" + - "docker-compose --version" + - echo "Docker installation completed successfully." + - name: CloneWorkshop + action: aws:runShellScript + inputs: + timeoutSeconds: 1200 + runCommand: + - "#!/bin/bash" + - "set -euo pipefail" + - echo "Cloning workshop repository..." + - !Sub | + # Clone repository as participant user + sudo -u ${VSCodeUser} bash -c 'cd /home/${VSCodeUser} && git clone https://github.com/aws-samples/aws-dynamodb-examples.git' + - !Sub | + # Copy files as participant user + sudo -u ${VSCodeUser} bash -c 'cd /home/${VSCodeUser}/aws-dynamodb-examples/workshops/modernizer && cp -R * ${VSCodeHomeFolder}/LGAM/' + - echo "Workshop repository cloned successfully." + - name: ConfigureBackendEnv + action: aws:runShellScript + inputs: + timeoutSeconds: 300 + runCommand: + - "#!/bin/bash" + - "set -euo pipefail" + - echo "Configuring backend .env file with database credentials..." + - !Sub | + # Update .env file with correct database credentials as participant user + if [ -f "${VSCodeHomeFolder}/LGAM/backend/.env" ]; then + sudo -u ${VSCodeUser} sed -i "s/^DB_USER=.*/DB_USER=\"${DbMasterUsername}\"/" ${VSCodeHomeFolder}/LGAM/backend/.env + sudo -u ${VSCodeUser} sed -i "s/^DB_PASSWORD=.*/DB_PASSWORD=\"${DbMasterPassword}\"/" ${VSCodeHomeFolder}/LGAM/backend/.env + sudo -u ${VSCodeUser} sed -i "s/^JWT_SECRET=.*/JWT_SECRET=63de917288d776db7e6761b183bc1fd8ffc5905565d30c635294c25cc574adc496062bc59cc4370479ecbf1e826fff3c12abe4a6ecbc5203a4d58ca24a86e6fa/" ${VSCodeHomeFolder}/LGAM/backend/.env + echo "Updated .env file with database credentials and JWT secret" + else + echo "Warning: .env file not found, creating new one with full configuration" + sudo -u ${VSCodeUser} bash -c 'cat > ${VSCodeHomeFolder}/LGAM/backend/.env <//" index.html' + + # Remove the CSP meta tag (multi-line) + sudo -u ${VSCodeUser} bash -c 'cd ${VSCodeHomeFolder}/LGAM/frontend/public && sed -i "/meta http-equiv=\"Content-Security-Policy\"/,+1d" index.html' + + # Add the new comment after the first comment + sudo -u ${VSCodeUser} bash -c 'cd ${VSCodeHomeFolder}/LGAM/frontend/public && sed -i "//a\ " index.html' + + # Update X-Frame-Options content from DENY to empty + sudo -u ${VSCodeUser} bash -c 'cd ${VSCodeHomeFolder}/LGAM/frontend/public && sed -i "s/content=\"DENY\"/content=\"\"/" index.html' + + echo "Updated index.html - removed CSP and updated X-Frame-Options" + else + echo "Warning: index.html file not found" + fi + - echo "Frontend CSP configuration updated successfully." + - name: SetupGit + action: aws:runShellScript + inputs: + timeoutSeconds: 600 + runCommand: + - "#!/bin/bash" + - "set -euo pipefail" + - echo "Setting up Git repository for modernizer project..." 
+ - !Sub | + # Initialize git repository in modernizer directory as participant user + sudo -u ${VSCodeUser} bash -c 'cd ${VSCodeHomeFolder}/LGAM && git init' + - !Sub | + # Create .gitignore file with comprehensive content + sudo -u ${VSCodeUser} bash -c 'cat > ${VSCodeHomeFolder}/LGAM/.gitignore </${MigrationS3Bucket}/g' ${VSCodeHomeFolder}/LGAM/tools/config.json + sudo -u ${VSCodeUser} sed -i 's||${GlueServiceRole.Arn}|g' ${VSCodeHomeFolder}/LGAM/tools/config.json + sudo -u ${VSCodeUser} sed -i 's//${VSCodeInstance.VpcId}/g' ${VSCodeHomeFolder}/LGAM/tools/config.json + sudo -u ${VSCodeUser} sed -i 's//${VSCodeInstance.SubnetId}/g' ${VSCodeHomeFolder}/LGAM/tools/config.json + sudo -u ${VSCodeUser} sed -i 's//${SecurityGroup.GroupId}/g' ${VSCodeHomeFolder}/LGAM/tools/config.json + sudo -u ${VSCodeUser} sed -i 's//${VSCodeInstance.PublicIp}/g' ${VSCodeHomeFolder}/LGAM/tools/config.json + sudo -u ${VSCodeUser} sed -i 's//${VSCodeInstance.PrivateIp}/g' ${VSCodeHomeFolder}/LGAM/tools/config.json + sudo -u ${VSCodeUser} sed -i 's//${DbMasterUsername}/g' ${VSCodeHomeFolder}/LGAM/tools/config.json + sudo -u ${VSCodeUser} sed -i 's//${DbMasterPassword}/g' ${VSCodeHomeFolder}/LGAM/tools/config.json + - echo "Configuration file updated with CloudFormation values successfully." + SSMDocLambdaRole: + Type: AWS::IAM::Role + Metadata: + cfn_nag: + rules_to_suppress: + - id: W11 + reason: The Amazon EC2 ssm:*CommandInvocation API actions do not support resource-level permissions, so you cannot control which individual resources users can view in the console. Therefore, the * wildcard is necessary in the Resource element. See https://docs.aws.amazon.com/service-authorization/latest/reference/list_awssystemsmanager.html + Properties: + AssumeRolePolicyDocument: + Version: 2012-10-17 + Statement: + - Effect: Allow + Principal: + Service: !Sub lambda.${AWS::URLSuffix} + Action: sts:AssumeRole + ManagedPolicyArns: + - !Sub arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole + Policies: + - PolicyName: SSMDocOnEC2 + PolicyDocument: + Version: 2012-10-17 + Statement: + - Effect: Allow + Action: + - ssm:SendCommand + Resource: + - !Sub arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:document/${VSCodeSSMDoc} + - !Sub arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:document/AmazonCloudWatch-ManageAgent + - !Sub arn:${AWS::Partition}:ec2:${AWS::Region}:${AWS::AccountId}:instance/${VSCodeInstance} + - Effect: Allow + Action: + - ssm:ListCommandInvocations + - ssm:GetCommandInvocation + Resource: "*" + + RunSSMDocLambda: + Type: AWS::Lambda::Function + Metadata: + cfn_nag: + rules_to_suppress: + - id: W58 + reason: Warning incorrectly reported. The role associated with the Lambda function has the AWSLambdaBasicExecutionRole managed policy attached, which includes permission to write CloudWatch Logs. 
See https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSLambdaBasicExecutionRole.html + - id: W89 + reason: CloudFormation custom function does not need the scaffolding of a VPC, to do so would add unnecessary complexity + - id: W92 + reason: CloudFormation custom function does not need reserved concurrent executions, to do so would add unnecessary complexity + Properties: + Description: Run SSM document on EC2 instance + Handler: index.lambda_handler + Runtime: python3.13 + MemorySize: 128 + Timeout: 600 + Environment: + Variables: + RetrySleep: 2900 + AbortTimeRemaining: 3200 + Architectures: + - arm64 + Role: !GetAtt SSMDocLambdaRole.Arn + Code: + ZipFile: | + import boto3 + import cfnresponse + import logging + import time + import os + + logger = logging.getLogger() + logger.setLevel(logging.INFO) + + def lambda_handler(event, context): + logger.debug(f'event: {event}') + logger.debug(f'context: {context}') + + if event['RequestType'] != 'Create': + cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData={}, reason='No action to take') + else: + sleep_ms = int(os.environ.get('RetrySleep')) + abort_time_remaining_ms = int(os.environ.get('AbortTimeRemaining')) + resource_properties = event['ResourceProperties'] + instance_id = resource_properties['InstanceId'] + document_name = resource_properties['DocumentName'] + cloudwatch_log_group_name = resource_properties['CloudWatchLogGroupName'] + + logger.info(f'Running SSM Document {document_name} on EC2 instance {instance_id}. Logging to {cloudwatch_log_group_name}') + + del resource_properties['ServiceToken'] + if 'ServiceTimeout' in resource_properties: + del resource_properties['ServiceTimeout'] + del resource_properties['InstanceId'] + del resource_properties['DocumentName'] + del resource_properties['CloudWatchLogGroupName'] + if 'PhysicalResourceId' in resource_properties: + del resource_properties['PhysicalResourceId'] + + logger.debug(f'resource_properties filtered: {resource_properties}') + + parameters = {} + for key, value in resource_properties.items(): + parameters[key] = [value] + + logger.debug(f'parameters: {parameters}') + + retry = True + attempt_no = 0 + time_remaining_ms = context.get_remaining_time_in_millis() + + ssm = boto3.client('ssm') + + while (retry == True): + attempt_no += 1 + logger.info(f'Attempt: {attempt_no}. Time Remaining: {time_remaining_ms/1000}s') + try: + response = ssm.send_command( + InstanceIds = [instance_id], + DocumentName = document_name, + CloudWatchOutputConfig = {'CloudWatchLogGroupName': cloudwatch_log_group_name, 'CloudWatchOutputEnabled': True}, + Parameters = parameters + ) + logger.debug(f'response: {response}') + command_id = response['Command']['CommandId'] + responseData = {'CommandId': command_id} + cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, reason='OK') + retry = False + + except ssm.exceptions.InvalidInstanceId as e: + time_remaining_ms = context.get_remaining_time_in_millis() + if (time_remaining_ms > abort_time_remaining_ms): + logger.info(f'Instance {instance_id} not ready. Sleeping: {sleep_ms/1000}s') + time.sleep(sleep_ms/1000) + retry = True + else: + logger.info(f'Instance {instance_id} not ready, timed out. Time remaining {time_remaining_ms/1000}s < Abort time remaining {abort_time_remaining_ms/1000}s') + logger.error(e, exc_info=True) + cfnresponse.send(event, context, cfnresponse.FAILED, responseData={}, reason='Timed out. 
Time remaining: ' + str(time_remaining_ms/1000) + 's < Abort time remaining: ' + str(abort_time_remaining_ms/1000) + 's') + retry = False + + except Exception as e: + logger.error(e, exc_info=True) + cfnresponse.send(event, context, cfnresponse.FAILED, responseData={}, reason=str(e)) + retry = False + + RunVSCodeSSMDoc: + Type: Custom::RunSSMDocLambda + Properties: + ServiceToken: !GetAtt RunSSMDocLambda.Arn + ServiceTimeout: 305 + InstanceId: !Ref VSCodeInstance + DocumentName: !Ref VSCodeSSMDoc + CloudWatchLogGroupName: !Sub /aws/ssm/${VSCodeSSMDoc} + VSCodePassword: !GetAtt SecretPlaintext.password + LinuxFlavor: al2023 + PythonMajorMinor: !Ref PythonMajorMinor + + CodeInstanceProfile: + Type: AWS::IAM::InstanceProfile + Properties: + Roles: + - !Ref CodeInstanceRole + + VSCodeInstance: + Type: AWS::EC2::Instance + Properties: + ImageId: "{{resolve:ssm:/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-arm64}}" + InstanceType: !GetAtt VSCodeFindTheInstanceTypeLambda.InstanceType + BlockDeviceMappings: + - DeviceName: /dev/xvda + Ebs: + VolumeSize: !Ref VSCodeInstanceVolumeSize + VolumeType: gp3 + DeleteOnTermination: true + Encrypted: true + Monitoring: true + SecurityGroupIds: + - !GetAtt SecurityGroup.GroupId + IamInstanceProfile: !Ref CodeInstanceProfile + SubnetId: !GetAtt VSCodeFindTheInstanceTypeLambda.SubnetId + UserData: + Fn::Base64: !Sub | + #cloud-config + hostname: ${VSCodeInstanceName} + runcmd: + - mkdir -p ${VSCodeHomeFolder} && chown -R ${VSCodeUser}:${VSCodeUser} ${VSCodeHomeFolder} + Tags: + - Key: Name + Value: !Ref VSCodeInstanceName + + VSCodeInstanceCachePolicy: + Type: AWS::CloudFront::CachePolicy + Properties: + CachePolicyConfig: + DefaultTTL: 86400 + MaxTTL: 31536000 + MinTTL: 1 + Name: !Sub + - ${VSCodeInstanceName}-${RandomGUID} + - RandomGUID: + !Select [ + 0, + !Split ["-", !Select [2, !Split ["/", !Ref AWS::StackId]]], + ] + ParametersInCacheKeyAndForwardedToOrigin: + CookiesConfig: + CookieBehavior: all + EnableAcceptEncodingGzip: False + HeadersConfig: + HeaderBehavior: whitelist + Headers: + - Accept-Charset + - Authorization + - Origin + - Accept + - Referer + - Host + - Accept-Language + - Accept-Encoding + - Accept-Datetime + QueryStringsConfig: + QueryStringBehavior: all + + CloudFrontDistribution: + Type: AWS::CloudFront::Distribution + Metadata: + cfn_nag: + rules_to_suppress: + - id: W10 + reason: CloudFront Distribution access logging would require setup of an S3 bucket and changes in IAM, which add unnecessary complexity to the template + - id: W70 + reason: Workshop Studio does not include a domain that can be used to provision a certificate, so it is not possible to setup TLS. 
See PFR EE-6016 + Properties: + DistributionConfig: + Enabled: True + HttpVersion: http2and3 + CacheBehaviors: + - AllowedMethods: + - GET + - HEAD + - OPTIONS + - PUT + - PATCH + - POST + - DELETE + CachePolicyId: 4135ea2d-6df8-44a3-9df3-4b5a84be39ad # see https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-managed-cache-policies.html#managed-cache-policy-caching-disabled + Compress: False + OriginRequestPolicyId: 216adef6-5c7f-47e4-b989-5492eafa07d3 # Managed-AllViewer - see https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-managed-origin-request-policies.html#:~:text=When%20using%20AWS,47e4%2Db989%2D5492eafa07d3 + TargetOriginId: !Sub CloudFront-${AWS::StackName} + ViewerProtocolPolicy: allow-all + PathPattern: "/proxy/*" + DefaultCacheBehavior: + AllowedMethods: + - GET + - HEAD + - OPTIONS + - PUT + - PATCH + - POST + - DELETE + CachePolicyId: !Ref VSCodeInstanceCachePolicy + OriginRequestPolicyId: 216adef6-5c7f-47e4-b989-5492eafa07d3 # Managed-AllViewer - see https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-managed-origin-request-policies.html#:~:text=When%20using%20AWS,47e4%2Db989%2D5492eafa07d3 + TargetOriginId: !Sub CloudFront-${AWS::StackName} + ViewerProtocolPolicy: allow-all + Origins: + - DomainName: !GetAtt VSCodeInstance.PublicDnsName + Id: !Sub CloudFront-${AWS::StackName} + CustomOriginConfig: + OriginProtocolPolicy: http-only + + SecurityGroup: + Type: AWS::EC2::SecurityGroup + Metadata: + cfn_nag: + rules_to_suppress: + - id: F1000 + reason: All outbound traffic should be allowed from this instance. The EC2 instance is provisioned in the default VPC, which already has this egress rule, and it is not possible to duplicate this egress rule in the default VPC + Properties: + GroupDescription: SG for VSCodeServer - only allow CloudFront ingress + SecurityGroupIngress: + - Description: Allow HTTP from com.amazonaws.global.cloudfront.origin-facing + IpProtocol: tcp + FromPort: 80 + ToPort: 80 + SourcePrefixListId: + !FindInMap [AWSRegionsPrefixListID, !Ref "AWS::Region", PrefixList] + + VSCodeHealthCheckLambdaRole: + Type: AWS::IAM::Role + Properties: + AssumeRolePolicyDocument: + Version: 2012-10-17 + Statement: + - Effect: Allow + Principal: + Service: !Sub lambda.${AWS::URLSuffix} + Action: sts:AssumeRole + ManagedPolicyArns: + - !Sub arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole + + VSCodeHealthCheckLambda: + Type: AWS::Lambda::Function + Metadata: + cfn_nag: + rules_to_suppress: + - id: W58 + reason: Warning incorrectly reported. The role associated with the Lambda function has the AWSLambdaBasicExecutionRole managed policy attached, which includes permission to write CloudWatch Logs. 
See https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSLambdaBasicExecutionRole.html + - id: W89 + reason: CloudFormation custom function does not need the scaffolding of a VPC, to do so would add unnecessary complexity + - id: W92 + reason: CloudFormation custom function does not need reserved concurrent executions, to do so would add unnecessary complexity + Properties: + Description: Run health check on VS code-server instance + Handler: index.lambda_handler + Runtime: python3.13 + MemorySize: 128 + Timeout: 600 + Environment: + Variables: + RetrySleep: 2900 + AbortTimeRemaining: 5000 + Architectures: + - arm64 + Role: !GetAtt VSCodeHealthCheckLambdaRole.Arn + Code: + ZipFile: | + import json + import cfnresponse + import logging + import time + import os + import http.client + from urllib.parse import urlparse + + logger = logging.getLogger() + logger.setLevel(logging.INFO) + + def healthURLOk(url): + # Using try block to catch connection errors and JSON conversion errors + try: + logger.debug(f'url: {url}') + parsed_url = urlparse(url) + if parsed_url.scheme == 'https': + logger.debug(f'Trying https: {parsed_url.netloc}. Parsed_url: {parsed_url}') + conn = http.client.HTTPSConnection(parsed_url.netloc) + else: + logger.debug(f'Trying http: {parsed_url.netloc}. Parsed_url: {parsed_url}') + conn = http.client.HTTPConnection(parsed_url.netloc) + conn.request("GET", parsed_url.path or "/") + response = conn.getresponse() + logger.debug(f'response: {response}') + logger.debug(f'response.status: {response.status}') + content = response.read() + logger.debug(f'content: {content}') + # This will be true for any return code below 4xx (so 3xx and 2xx) + if 200 <= response.status < 400: + response_dict = json.loads(content.decode('utf-8')) + logger.debug(f'response_dict: {response_dict}') + # Checking for expected keys and if the key has the expected value + if 'status' in response_dict and (response_dict['status'].lower() == 'alive' or response_dict['status'].lower() == 'expired'): + # Response code 200 and correct JSON returned + logger.info(f'Health check OK. Status: {response_dict['status'].lower()}') + return True + else: + # Response code 200 but the 'status' key is either not present or does not have the value 'alive' or 'expired' + logger.info(f'Health check failed. Status: {response_dict['status'].lower()}') + return False + else: + # Response was not ok (error 4xx or 5xx) + logger.info(f'Healthcheck failed. Return code: {response.status}') + return False + + except http.client.HTTPException as e: + # URL malformed or endpoint not ready yet, this should only happen if we can not DNS resolve the URL + logger.error(e, exc_info=True) + logger.error(f'Healthcheck failed: HTTP Exception. 
URL invalid and/or endpoint not ready yet') + return False + + except json.decoder.JSONDecodeError as e: + # The response we got was not a properly formatted JSON + logger.error(e, exc_info=True) + logger.info(f'Healthcheck failed: Did not get JSON object from URL as expected') + return False + + except Exception as e: + logger.error(e, exc_info=True) + logger.info(f'Healthcheck failed: General error') + return False + + finally: + if 'conn' in locals(): + conn.close() + + def is_valid_json(json_string): + try: + json.loads(json_string) + return True + except ValueError: + return False + + def lambda_handler(event, context): + logger.debug(f'event: {event}') + logger.debug(f'context: {context}') + try: + if event['RequestType'] != 'Create': + cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData={}, reason='No action to take') + else: + sleep_ms = int(os.environ.get('RetrySleep')) + abort_time_remaining_ms = int(os.environ.get('AbortTimeRemaining')) + resource_properties = event['ResourceProperties'] + url = resource_properties['Url'] + + logger.info(f'Testing url: {url}') + + time_remaining_ms = context.get_remaining_time_in_millis() + attempt_no = 0 + health_check = False + while (attempt_no == 0 or (time_remaining_ms > abort_time_remaining_ms and not health_check)): + attempt_no += 1 + logger.info(f'Attempt: {attempt_no}. Time Remaining: {time_remaining_ms/1000}s') + health_check = healthURLOk(url) + if not health_check: + logger.debug(f'Healthcheck failed. Sleeping: {sleep_ms/1000}s') + time.sleep(sleep_ms/1000) + time_remaining_ms = context.get_remaining_time_in_millis() + if health_check: + logger.info(f'Health check successful. Attempts: {attempt_no}. Time Remaining: {time_remaining_ms/1000}s') + cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData={}, reason='VS code-server healthcheck successful') + else: + logger.info(f'Health check failed. Timed out. Attempts: {attempt_no}. Time remaining {time_remaining_ms/1000}s < Abort time remaining {abort_time_remaining_ms/1000}s') + cfnresponse.send(event, context, cfnresponse.FAILED, responseData={}, reason='VS code-server healthcheck failed. Timed out after ' + str(attempt_no) + ' attempts') + logger.info(f'Response sent') + + except Exception as e: + logger.error(e, exc_info=True) + logger.info(f'Health check failed. General exception') + cfnresponse.send(event, context, cfnresponse.FAILED, responseData={}, reason=str(e)) + + Healthcheck: + Type: Custom::VSCodeHealthCheckLambda + Properties: + ServiceToken: !GetAtt VSCodeHealthCheckLambda.Arn + ServiceTimeout: 610 + Url: !Sub https://${CloudFrontDistribution.DomainName}/healthz + + CheckSSMDocLambda: + Type: AWS::Lambda::Function + Metadata: + cfn_nag: + rules_to_suppress: + - id: W58 + reason: Warning incorrectly reported. The role associated with the Lambda function has the AWSLambdaBasicExecutionRole managed policy attached, which includes permission to write CloudWatch Logs. 
See https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSLambdaBasicExecutionRole.html + - id: W89 + reason: CloudFormation custom function does not need the scaffolding of a VPC, to do so would add unnecessary complexity + - id: W92 + reason: CloudFormation custom function does not need reserved concurrent executions, to do so would add unnecessary complexity + Properties: + Description: Check SSM document on EC2 instance + Handler: index.lambda_handler + Runtime: python3.13 + MemorySize: 128 + Timeout: 600 + Environment: + Variables: + RetrySleep: 2900 + AbortTimeRemaining: 5000 + Architectures: + - arm64 + Role: !GetAtt SSMDocLambdaRole.Arn + Code: + ZipFile: | + import boto3 + import cfnresponse + import logging + import time + import os + + logger = logging.getLogger() + logger.setLevel(logging.INFO) + + def lambda_handler(event, context): + logger.debug(f'event: {event}') + logger.debug(f'context: {context}') + + if event['RequestType'] != 'Create': + cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData={}, reason='No action to take') + else: + sleep_ms = int(os.environ.get('RetrySleep')) + abort_time_remaining_ms = int(os.environ.get('AbortTimeRemaining')) + resource_properties = event['ResourceProperties'] + instance_id = resource_properties['InstanceId'] + document_name = resource_properties['DocumentName'] + + logger.info(f'Checking SSM Document {document_name} on EC2 instance {instance_id}') + + retry = True + attempt_no = 0 + time_remaining_ms = context.get_remaining_time_in_millis() + + ssm = boto3.client('ssm') + + while (retry == True): + attempt_no += 1 + logger.info(f'Attempt: {attempt_no}. Time Remaining: {time_remaining_ms/1000}s') + try: + # check to see if document has completed running on instance + response = ssm.list_command_invocations( + InstanceId=instance_id, + Details=True + ) + logger.debug(f'Response: {response}') + for invocation in response['CommandInvocations']: + if invocation['DocumentName'] == document_name: + invocation_status = invocation['Status'] + if invocation_status == 'Success': + logger.info(f'SSM Document {document_name} on EC2 instance {instance_id} complete. Status: {invocation_status}') + cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData={}, reason='OK') + retry = False + elif invocation_status == 'Failed' or invocation_status == 'Cancelled' or invocation_status == 'TimedOut': + logger.info(f'SSM Document {document_name} on EC2 instance {instance_id} failed. Status: {invocation_status}') + reason = '' + # Get information on step that failed, otherwise it's cancelled or timeout + for step in invocation['CommandPlugins']: + step_name = step['Name'] + step_status = step['Status'] + step_output = step['Output'] + logger.debug(f'Step {step_name} {step_status}: {step_output}') + if step_status != 'Success': + try: + response_step = ssm.get_command_invocation( + CommandId=invocation['CommandId'], + InstanceId=instance_id, + PluginName=step_name + ) + logger.debug(f'Step details: {response_step}') + step_output = response_step['StandardErrorContent'] + except Exception as e: + logger.error(e, exc_info=True) + logger.info(f'Step {step_name} {step_status}: {step_output}') + if reason == '': + reason = f'Step {step_name} {step_status}: {step_output}' + else: + reason += f'\nStep {step_name} {step_status}: {step_output}' + if reason == '': + reason = f'SSM Document {document_name} on EC2 instance {instance_id} failed. 
Status: {invocation_status}' + logger.info(f'{reason}') + cfnresponse.send(event, context, cfnresponse.FAILED, responseData={}, reason=reason) + retry = False + else: + logger.info(f'SSM Document {document_name} on EC2 instance {instance_id} not yet complete. Status: {invocation_status}') + retry = True + if retry == True: + if (time_remaining_ms > abort_time_remaining_ms): + logger.info(f'Sleeping: {sleep_ms/1000}s') + time.sleep(sleep_ms/1000) + time_remaining_ms = context.get_remaining_time_in_millis() + else: + logger.info(f'Time remaining {time_remaining_ms/1000}s < Abort time remaining {abort_time_remaining_ms/1000}s') + logger.info(f'Aborting check as time remaining {time_remaining_ms/1000}s < Abort time remaining {abort_time_remaining_ms/1000}s') + cfnresponse.send(event, context, cfnresponse.FAILED, responseData={}, reason='Timed out. Time remaining: ' + str(time_remaining_ms/1000) + 's < Abort time remaining: ' + str(abort_time_remaining_ms/1000) + 's') + retry = False + except Exception as e: + logger.error(e, exc_info=True) + cfnresponse.send(event, context, cfnresponse.FAILED, responseData={}, reason=str(e)) + retry = False + + CheckVSCodeSSMDoc: + Type: Custom::CheckSSMDocLambda + DependsOn: Healthcheck + Properties: + ServiceToken: !GetAtt CheckSSMDocLambda.Arn + ServiceTimeout: 610 + InstanceId: !Ref VSCodeInstance + DocumentName: !Ref VSCodeSSMDoc ################## OUTPUTS ##################### Outputs: - Cloud9IdeUrl: - Description: URL to launch the Cloud9 IDE - Value: !Sub https://${AWS::Region}.console.aws.amazon.com/cloud9/ide/${Cloud9Instance}?region=${AWS::Region} - Export: - Name: Cloud9IdeUrl - Cloud9LogBucketArn: - Description: S3 Bucket Arn - Value: !GetAtt Cloud9LogBucket.Arn - Cloud9LogBucketName: - Description: S3 Bucket Name - Value: !Ref Cloud9LogBucket - Export: - Name: Cloud9LogBucket MigrationS3BucketName: Description: S3 Bucket Name Value: !Ref MigrationS3Bucket Export: Name: MigrationS3Bucket - Cloud9RoleArn: + CodeRoleArn: Description: Role Arn - Value: !GetAtt Cloud9Role.Arn + Value: !GetAtt CodeInstanceRole.Arn Export: - Name: Cloud9RoleArn + Name: CodeInstanceRole + VSCodeServerURL: + Description: VSCode-Server URL + Value: !Sub https://${CloudFrontDistribution.DomainName}/?folder=${VSCodeHomeFolder}&tkn=${SecretPlaintext.password} + VSCodeServerPassword: + Description: VSCode-Server Password + Value: !GetAtt SecretPlaintext.password + VSCodeServerURLModernizer: + Description: VSCode-Server with Modernizer workspace + Value: !Sub https://${CloudFrontDistribution.DomainName}/?folder=${VSCodeHomeFolder}/LGAM&tkn=${SecretPlaintext.password} + GlueServiceRoleArn: + Description: Glue Service Role ARN for MySQL to DynamoDB Migration + Value: !GetAtt GlueServiceRole.Arn + GlueDatabaseName: + Description: AWS Glue Data Catalog Database Name + Value: !Ref GlueDatabase + MySQLGlueConnectionName: + Description: AWS Glue Connection Name for MySQL Database + Value: !Ref MySQLGlueConnection + SampleGlueETLJobName: + Description: Sample AWS Glue ETL Job Name for MySQL to DynamoDB Migration + Value: !Ref SampleGlueETLJob + MySQLDatabaseCredentials: + Description: MySQL Database Credentials for Glue Connection + Value: !Sub "Username: ${DbMasterUsername}, Password: ${DbMasterPassword}" + MySQLInstancePrivateIP: + Description: Private IP Address of MySQL instance (use this for JDBC connections from Glue) + Value: !GetAtt DbInstance.PrivateIp \ No newline at end of file diff --git a/design-patterns/cloudformation/bkp-C9.yaml 
b/design-patterns/cloudformation/bkp-C9.yaml new file mode 100644 index 00000000..d34d4802 --- /dev/null +++ b/design-patterns/cloudformation/bkp-C9.yaml @@ -0,0 +1,894 @@ +#Source: https://tiny.amazon.com/1dbfklsd7 +Description: Provides a Cloud9 instance, resizes the instance volume size, and installs required components. + +Parameters: + EnvironmentName: + Description: An environment name that is tagged to the resources. + Type: String + Default: DynamoDBID + InstanceName: + Description: Cloud9 instance name. + Type: String + Default: DynamoDBC9 + InstanceType: + Description: The memory and CPU of the EC2 instance that will be created for Cloud9 to run on. + Type: String + Default: t3.medium + AllowedValues: + - t2.micro + - t3.micro + - t3.small + - t3.medium + - t2.medium + - m5.large + ConstraintDescription: Must be a valid Cloud9 instance type + InstanceVolumeSize: + Description: The size in GB of the Cloud9 instance volume + Type: Number + Default: 16 + InstanceOwner: + Type: String + Description: Assumed role username of Cloud9 owner, in the format 'Role/username'. Leave blank to leave the instance assigned to the role running the CloudFormation template. + AutomaticStopTimeMinutes: + Description: How long Cloud9 can be inactive (no user input) before auto-hibernating. This helps prevent unnecessary charges. + Type: Number + Default: 0 + WorkshopZIP: + Type: String + Description: Location of LADV code ZIP + Default: https://amazon-dynamodb-labs.com/assets/workshop.zip + DBLatestAmiId: + Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>' + Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2' + DbMasterUsername: + Description: The database master user name + Type: String + Default: dbuser + DbMasterPassword: + Description: The database master password + Type: String + Default: m7de4uwt2eG# + +Metadata: + AWS::CloudFormation::Interface: + ParameterGroups: + - Label: + default: General configuration + Parameters: + - EnvironmentName + - Label: + default: Cloud9 configuration + Parameters: + - InstanceName + - InstanceType + - InstanceVolumeSize + - InstanceOwner + - AutomaticStopTimeMinutes + ParameterLabels: + EnvironmentName: + default: Environment name + InstanceName: + default: Name + InstanceType: + default: Instance type + InstanceVolumeSize: + default: Attached volume size + InstanceOwner: + default: Role and username + AutomaticStopTimeMinutes: + default: Timeout + +Conditions: + AssignCloud9Owner: !Not [!Equals [!Ref InstanceOwner, ""]] +Mappings: + DesignPatterns: + options: + UserDataURL: "https://amazon-dynamodb-labs.com/assets/UserDataC9.sh" + version: "1" + # AWS Managed Prefix Lists for EC2 InstanceConnect + AWSRegions2PrefixListID: + ap-south-1: + PrefixList: pl-0fa83cebf909345ca + eu-north-1: + PrefixList: pl-0bd77a95ba8e317a6 + eu-west-3: + PrefixList: pl-0f2a97ab210dbbae1 + eu-west-2: + PrefixList: pl-067eefa539e593d55 + eu-west-1: + PrefixList: pl-0839cc4c195a4e751 + ap-northeast-3: + PrefixList: pl-086543b458dc7add9 + ap-northeast-2: + PrefixList: pl-00ec8fd779e5b4175 + ap-northeast-1: + PrefixList: pl-08d491d20eebc3b95 + ca-central-1: + PrefixList: pl-0beea00ad1821f2ef + sa-east-1: + PrefixList: pl-029debe66aa9d13b3 + ap-southeast-1: + PrefixList: pl-073f7512b7b9a2450 + ap-southeast-2: + PrefixList: pl-0e1bc5673b8a57acc + eu-central-1: + PrefixList: pl-03384955215625250 + us-east-1: + PrefixList: pl-0e4bcff02b13bef1e + us-east-2: + PrefixList: pl-03915406641cb1f53 + us-west-1: + PrefixList: pl-0e99958a47b22d6ab + us-west-2: + PrefixList:
pl-047d464325e7bf465 + +Resources: + #LADV Role + DDBReplicationRole: + Type: AWS::IAM::Role + Properties: + AssumeRolePolicyDocument: + Version: '2012-10-17' + Statement: + - Effect: Allow + Principal: + Service: + - lambda.amazonaws.com + Action: + - sts:AssumeRole + Path: / + Policies: + - PolicyName: root + PolicyDocument: + Version: '2012-10-17' + Statement: + - Effect: Allow + Action: + - dynamodb:DescribeStream + - dynamodb:GetRecords + - dynamodb:GetShardIterator + - dynamodb:ListStreams + Resource: + - '*' + - Effect: Allow + Action: + - dynamodb:DeleteItem + - dynamodb:PutItem + Resource: + - '*' + - Effect: Allow + Action: + - logs:CreateLogGroup + - logs:CreateLogStream + - logs:PutLogEvents + Resource: + - '*' + ################## PERMISSIONS AND ROLES ################# + Cloud9Role: + Type: AWS::IAM::Role + Properties: + Tags: + - Key: Environment + Value: !Sub ${EnvironmentName} + AssumeRolePolicyDocument: + Version: 2012-10-17 + Statement: + - Effect: Allow + Principal: + Service: + - ec2.amazonaws.com + - ssm.amazonaws.com + - opensearchservice.amazonaws.com + - osis-pipelines.amazonaws.com + Action: + - sts:AssumeRole + ManagedPolicyArns: + - !Sub arn:${AWS::Partition}:iam::aws:policy/AdministratorAccess + Path: '/' + Policies: + - PolicyName: !Sub Cloud9InstanceDenyPolicy-${AWS::Region} + PolicyDocument: + Version: 2012-10-17 + Statement: + - Effect: Deny + Action: + - cloud9:UpdateEnvironment + Resource: '*' + + Cloud9LambdaExecutionRole: + Type: AWS::IAM::Role + Metadata: + cfn_nag: + rules_to_suppress: + - id: W11 + reason: Describe Action doesn't support any resource condition + Properties: + AssumeRolePolicyDocument: + Version: 2012-10-17 + Statement: + - Effect: Allow + Principal: + Service: + - lambda.amazonaws.com + Action: + - sts:AssumeRole + Path: '/' + ManagedPolicyArns: + - !Sub arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole + Policies: + - PolicyName: !Sub Cloud9LambdaPolicy-${AWS::Region} + PolicyDocument: + Version: 2012-10-17 + Statement: + - Effect: Allow + Action: + - cloudformation:DescribeStacks + - cloudformation:DescribeStackEvents + - cloudformation:DescribeStackResource + - cloudformation:DescribeStackResources + Resource: + - !Sub arn:${AWS::Partition}:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/* + - Effect: Allow + Action: + - ec2:AssociateIamInstanceProfile + - ec2:ModifyInstanceAttribute + - ec2:ReplaceIamInstanceProfileAssociation + - ec2:RebootInstances + Resource: + - !Sub arn:${AWS::Partition}:ec2:${AWS::Region}:${AWS::AccountId}:instance/* + - Effect: Allow + Action: + - ec2:DescribeInstances + - ec2:DescribeVolumesModifications + - ec2:DescribeVolumes + - ec2:DescribeIamInstanceProfileAssociations + - ec2:ModifyVolume + - ssm:DescribeInstanceInformation + - ssm:SendCommand + - ssm:GetCommandInvocation + - ec2:DescribeSubnets + - ec2:DescribeInstanceTypeOfferings + Resource: '*' + - Effect: Allow + Action: + - iam:ListInstanceProfiles + Resource: + - !Sub arn:${AWS::Partition}:iam::${AWS::AccountId}:instance-profile/* + - Effect: Allow + Action: + - s3:ListBucket + - s3:DeleteObject + Resource: + - !Sub arn:${AWS::Partition}:s3:::${Cloud9LogBucket} + - !Sub arn:${AWS::Partition}:s3:::${Cloud9LogBucket}/* + - Effect: Allow + Action: + - iam:PassRole + Resource: + Fn::GetAtt: + - Cloud9Role + - Arn + ################ LAMBDA INSTANCE TYPE FINDER ################ + Cloud9FindTheInstanceTypeLambda: + Type: Custom::Cloud9FindTheInstanceTypeLambda + DependsOn: + - Cloud9LambdaExecutionRole + 
Properties: + Tags: + - Key: Environment + Value: !Sub ${EnvironmentName} + ServiceToken: + Fn::GetAtt: + - Cloud9FindTheInstanceTypeLambdaFunction + - Arn + Region: + Ref: AWS::Region + StackName: + Ref: AWS::StackName + InstanceType: + Ref: InstanceType + LogBucket: + Ref: Cloud9LogBucket + Cloud9FindTheInstanceTypeLambdaFunction: + Type: AWS::Lambda::Function + Metadata: + cfn_nag: + rules_to_suppress: + - id: W58 + reason: Cloud9LambdaExecutionRole has the AWSLambdaBasicExecutionRole managed policy attached, allowing writing to CloudWatch logs + - id: W89 + reason: Bootstrap function does not need the scaffolding of a VPC or provisioned concurrency + - id: W92 + reason: Bootstrap function does not need provisioned concurrency + Properties: + Tags: + - Key: Environment + Value: !Sub ${EnvironmentName} + Handler: index.lambda_handler + Role: + Fn::GetAtt: + - Cloud9LambdaExecutionRole + - Arn + Runtime: python3.9 + MemorySize: 1024 + Timeout: 400 + Code: + ZipFile: | + import json + import boto3 + import random + import cfnresponse + import logging + import traceback + + logger = logging.getLogger(__name__) + + ec2 = boto3.client('ec2') + def lambda_handler(event, context): + print(event.values()) + print('context: {}'.format(context)) + responseData = {} + + status = cfnresponse.SUCCESS + if event['RequestType'] == 'Delete': + responseData = {'Success': 'Custom Resource removed'} + cfnresponse.send(event, context, status, responseData, 'CustomResourcePhysicalID') + elif event['RequestType'] == 'Update': + responseData = {'Success': 'No-op'} + cfnresponse.send(event, context, status, responseData, 'CustomResourcePhysicalID') + elif event['RequestType'] == 'Create': + try: + resp = ec2.describe_subnets( + Filters = [ + { + 'Name':'default-for-az', + 'Values': ['true'] + }]) + inst_types = list() + inst_types.append(event['ResourceProperties']['InstanceType']) + subnet_ids = dict() + for subnet in resp['Subnets']: + subnet_ids[subnet['AvailabilityZone']] = subnet['SubnetId'] + offerings = get_offerings(inst_types) + subnet_id = None + #hunt time + results = dict() + for instance in inst_types: + for az in offerings[instance]: + if az in subnet_ids: + subnet_id = subnet_ids[az] + if instance not in results: + results[instance] = subnet_ids[az] + instance_type, subnet = random.choice(list(results.items())) + responseData = {'InstanceType':instance_type, 'SubnetId': subnet} + cfnresponse.send(event, context, status, responseData, 'CustomResourcePhysicalID') + except Exception as err: + print(err) + status = cfnresponse.FAILED + print(traceback.format_exc()) + responseData = {'Error': traceback.format_exc(err)} + finally: + cfnresponse.send(event, context, status, responseData, 'CustomResourcePhysicalID') + + + def get_offerings(inst_types): + product_types = ('Linux/UNIX (Amazon VPC)', 'Windows (Amazon VPC)') + resp = ec2.describe_instance_type_offerings( + LocationType='availability-zone', + Filters = [ + { + 'Name': 'instance-type', + 'Values': inst_types + } + ]) + offerings = dict() + for inst in resp['InstanceTypeOfferings']: + if inst['InstanceType'] not in offerings: + offerings[inst['InstanceType']] = list() + offerings[inst['InstanceType']].append(inst['Location']) + + # TODO implement + return offerings + + + ################## LAMBDA BOOTSTRAP FUNCTION ################ + Cloud9BootstrapInstanceLambda: + Type: Custom::Cloud9BootstrapInstanceLambda + DependsOn: + - Cloud9LambdaExecutionRole + Properties: + Tags: + - Key: Environment + Value: !Sub ${EnvironmentName} + ServiceToken: + 
Fn::GetAtt: + - Cloud9BootstrapInstanceLambdaFunction + - Arn + Region: + Ref: AWS::Region + StackName: + Ref: AWS::StackName + Cloud9Name: !GetAtt Cloud9Instance.Name + EnvironmentId: + Ref: Cloud9Instance + SsmDocument: + Ref: Cloud9BootStrapSSMDocument + LabIdeInstanceProfileName: + Ref: Cloud9InstanceProfile + LabIdeInstanceProfileArn: + Fn::GetAtt: + - Cloud9InstanceProfile + - Arn + LogBucket: + Ref: Cloud9LogBucket + Cloud9BootstrapInstanceLambdaFunction: + Type: AWS::Lambda::Function + Metadata: + cfn_nag: + rules_to_suppress: + - id: W58 + reason: Cloud9LambdaExecutionRole has the AWSLambdaBasicExecutionRole managed policy attached, allowing writing to CloudWatch logs + - id: W89 + reason: Bootstrap function does not need the scaffolding of a VPC or provisioned concurrency + - id: W92 + reason: Bootstrap function does not need provisioned concurrency + Properties: + Tags: + - Key: Environment + Value: !Sub ${EnvironmentName} + Handler: index.lambda_handler + Role: + Fn::GetAtt: + - Cloud9LambdaExecutionRole + - Arn + Runtime: python3.9 + MemorySize: 1024 + Environment: + Variables: + DiskSize: + Ref: InstanceVolumeSize + LogS3Bucket: + Fn::GetAtt: + - Cloud9LogBucket + - Arn + Timeout: 400 + Code: + ZipFile: | + from __future__ import print_function + import boto3 + import json + import os + import time + import traceback + import cfnresponse + import logging + logger = logging.getLogger(__name__) + + def lambda_handler(event, context): + print(event.values()) + print('context: {}'.format(context)) + responseData = {} + + status = cfnresponse.SUCCESS + + if event['RequestType'] == 'Delete': + logger.info("Emptying the S3 bucket to allow for successful bucket delete.") + s3 = boto3.resource('s3') + bucket_name = os.getenv('LogS3Bucket', None) + bucket_name = bucket_name.split(':::')[1] + try: + bucket = s3.Bucket(bucket_name) + bucket.objects.all().delete() + logger.info("Successfully deleted all objects in bucket '{}'".format(bucket_name)) + except Exception as err: + logger.error(err) + pass + responseData = {'Success': 'Custom Resource removed'} + cfnresponse.send(event, context, status, responseData, 'CustomResourcePhysicalID') + else: + try: + # Open AWS clients + ec2 = boto3.client('ec2') + ssm = boto3.client('ssm') + + # Get the InstanceId of the Cloud9 IDE + instance = ec2.describe_instances(Filters=[{'Name': 'tag:Name','Values': ['aws-cloud9-'+event['ResourceProperties']['Cloud9Name']+'-'+event['ResourceProperties']['EnvironmentId']]}])['Reservations'][0]['Instances'][0] + print('instance: {}'.format(instance)) + instance_id = instance['InstanceId'] + + # Create the IamInstanceProfile request object + iam_instance_profile = { + 'Arn': event['ResourceProperties']['LabIdeInstanceProfileArn'], + 'Name': event['ResourceProperties']['LabIdeInstanceProfileName'] + } + print('Found IAM instance profile: {}'.format(iam_instance_profile)) + + time.sleep(10) + + print('Waiting for the instance to be ready...') + + # Wait for Instance to become ready before adding Role + instance_state = instance['State']['Name'] + print('instance_state: {}'.format(instance_state)) + while instance_state != 'running': + time.sleep(5) + instance_state = ec2.describe_instances(InstanceIds=[instance_id])['Reservations'][0]['Instances'][0]['State']['Name'] + print('instance_state: {}'.format(instance_state)) + + print('Instance is ready') + + associations = ec2.describe_iam_instance_profile_associations( + Filters=[ + { + 'Name': 'instance-id', + 'Values': [instance_id], + }, + ], + ) + + if len(associations['IamInstanceProfileAssociations']) > 0:
print('Replacing existing IAM profile...') + for association in associations['IamInstanceProfileAssociations']: + if association['State'] == 'associated': + print("{} is active with state {}".format(association['AssociationId'], association['State'])) + ec2.replace_iam_instance_profile_association(AssociationId=association['AssociationId'], IamInstanceProfile=iam_instance_profile) + else: + print('Associating IAM profile...') + ec2.associate_iam_instance_profile(IamInstanceProfile=iam_instance_profile, InstanceId=instance_id) + + block_volume_id = instance['BlockDeviceMappings'][0]['Ebs']['VolumeId'] + + block_device = ec2.describe_volumes(VolumeIds=[block_volume_id])['Volumes'][0] + + DiskSize = int(os.environ['DiskSize']) + if block_device['Size'] < DiskSize: + ec2.modify_volume(VolumeId=block_volume_id,Size=DiskSize) + print('Modifying block volume: {}'.format(block_volume_id)) + time.sleep(10) + + for i in range(1, 30): + response = ec2.describe_volumes_modifications( + VolumeIds=[ + block_volume_id + ] + ) + modify_state = response['VolumesModifications'][0]['ModificationState'] + if modify_state != 'modifying': + print('Volume has been resized') + break + time.sleep(10) + else: + print('Volume is already sized') + + # Reboot is required to avoid weird race condition with IAM role and SSM agent + # It also causes the file system to expand in the OS + print('Rebooting instance') + + ec2.reboot_instances( + InstanceIds=[ + instance_id, + ], + ) + + time.sleep(60) + + print('Waiting for instance to come online in SSM...') + + for i in range(1, 60): + response = ssm.describe_instance_information(Filters=[{'Key': 'InstanceIds', 'Values': [instance_id]}]) + if len(response["InstanceInformationList"]) == 0: + print('No instances in SSM') + elif len(response["InstanceInformationList"]) > 0 and \ + response["InstanceInformationList"][0]["PingStatus"] == "Online" and \ + response["InstanceInformationList"][0]["InstanceId"] == instance_id: + print('Instance is online in SSM') + break + time.sleep(10) + + ssm_document = event['ResourceProperties']['SsmDocument'] + + print('Sending SSM command...') + + response = ssm.send_command( + InstanceIds=[instance_id], + DocumentName=ssm_document) + + command_id = response['Command']['CommandId'] + + waiter = ssm.get_waiter('command_executed') + + waiter.wait( + CommandId=command_id, + InstanceId=instance_id, + WaiterConfig={ + 'Delay': 10, + 'MaxAttempts': 30 + } + ) + + responseData = {'Success': 'Started bootstrapping for instance: '+instance_id} + cfnresponse.send(event, context, status, responseData, 'CustomResourcePhysicalID') + + except Exception as e: + status = cfnresponse.FAILED + print(traceback.format_exc()) + responseData = {'Error': traceback.format_exc(e)} + finally: + cfnresponse.send(event, context, status, responseData, 'CustomResourcePhysicalID') + LambdaLogGroup: + Type: AWS::Logs::LogGroup + DeletionPolicy: Delete + UpdateReplacePolicy: Delete + Properties: + LogGroupName: !Sub /aws/lambda/${Cloud9BootstrapInstanceLambdaFunction} + RetentionInDays: 7 + + ################## SSM BOOTSTRAP HANDLER ############### + Cloud9LogBucket: + Type: AWS::S3::Bucket + Metadata: + cfn_nag: + rules_to_suppress: + - id: W35 + reason: Access logs aren't needed for this bucket + DeletionPolicy: Delete + Properties: + AccessControl: Private + BucketEncryption: + ServerSideEncryptionConfiguration: + - ServerSideEncryptionByDefault: + SSEAlgorithm: AES256 + PublicAccessBlockConfiguration: + BlockPublicAcls: true + BlockPublicPolicy: true + IgnorePublicAcls: 
true + RestrictPublicBuckets: true + Cloud9LogBucketPolicy: + Type: AWS::S3::BucketPolicy + Properties: + Bucket: !Ref Cloud9LogBucket + PolicyDocument: + Version: 2012-10-17 + Statement: + - Action: + - s3:GetObject + - s3:PutObject + - s3:PutObjectAcl + Effect: Allow + Resource: + - !Sub arn:${AWS::Partition}:s3:::${Cloud9LogBucket} + - !Sub arn:${AWS::Partition}:s3:::${Cloud9LogBucket}/* + Principal: + AWS: + Fn::GetAtt: + - Cloud9LambdaExecutionRole + - Arn + + Cloud9BootStrapSSMDocument: + Type: AWS::SSM::Document + Properties: + Tags: + - Key: Environment + Value: !Sub ${EnvironmentName} + Content: !Sub + - |+ + { + "schemaVersion": "1.2", + "description": "RunDaShellScript", + "parameters": {}, + "runtimeConfig": { + "aws:runShellScript": { + "properties": [ + { + "id": "0.aws:runShellScript", + "runCommand": [ + "#!/bin/bash", + "echo \"`date -u +\"%Y-%m-%dT%H:%M:%SZ\"` Started DynamoDB Workshop User Data\"", + "set -x", + + "function sleep_delay", + "{", + " if (( $SLEEP_TIME < $SLEEP_TIME_MAX )); then", + " echo Sleeping $SLEEP_TIME", + " sleep $SLEEP_TIME", + " SLEEP_TIME=$(($SLEEP_TIME * 2))", + " else", + " echo Sleeping $SLEEP_TIME_MAX", + " sleep $SLEEP_TIME_MAX", + " fi", + "}", + "# Executing bootstrap script", + "SLEEP_TIME=10", + "SLEEP_TIME_MAX=3600", + "while true; do", + " curl \"${SUB_USERDATA_URL}\" > /tmp/dynamodbworkshop.sh", + " RESULT=$?", + " if [[ \"$RESULT\" -ne 0 ]]; then", + " sleep_delay", + " else", + " /bin/bash /tmp/dynamodbworkshop.sh ${SUB_VERSION} ${AWS::AccountId} ${AWS::Region} \"${WorkshopZIP}\" \"${SUB_REPL_ROLE}\" \"${SUB_DB_USER}\" \"${SUB_DB_PASSWORD}\" &&", + " exit 0", + " fi", + "done" + ] + } + ] + } + } + } + - { + SUB_USERDATA_URL: !FindInMap [DesignPatterns, options, UserDataURL], + SUB_VERSION: !FindInMap [DesignPatterns, options, version], + SUB_REPL_ROLE: !GetAtt ['DDBReplicationRole', 'Arn'], + SUB_DB_USER: !Ref 'DbMasterUsername', + SUB_DB_PASSWORD: !Ref 'DbMasterPassword', + } + Cloud9BootstrapAssociation: + Type: AWS::SSM::Association + Properties: + Name: !Ref Cloud9BootStrapSSMDocument + OutputLocation: + S3Location: + OutputS3BucketName: !Ref Cloud9LogBucket + OutputS3KeyPrefix: bootstrap + Targets: + - Key: tag:SSMBootstrap + Values: + - Active + + ################## INSTANCE ##################### + Cloud9InstanceProfile: + Type: AWS::IAM::InstanceProfile + Properties: + Path: '/' + Roles: + - Ref: Cloud9Role + + Cloud9Instance: + DependsOn: Cloud9BootstrapAssociation + Type: AWS::Cloud9::EnvironmentEC2 + Properties: + Description: !Sub AWS Cloud9 instance for ${EnvironmentName} + AutomaticStopTimeMinutes: !Ref AutomaticStopTimeMinutes + InstanceType: !GetAtt Cloud9FindTheInstanceTypeLambda.InstanceType + ImageId: ubuntu-22.04-x86_64 + SubnetId: !GetAtt Cloud9FindTheInstanceTypeLambda.SubnetId + Name: !Ref InstanceName + OwnerArn: + Fn::If: + - AssignCloud9Owner + - !Sub arn:${AWS::Partition}:iam::${AWS::AccountId}:assumed-role/${InstanceOwner} + - Ref: AWS::NoValue + Tags: + - Key: SSMBootstrap + Value: Active + - Key: SSMInstallFiles + Value: Active + - Key: Environment + Value: !Ref EnvironmentName + ############ RELATIONAL MIGRATION STAGING BUCKET ######### + MigrationS3Bucket: + Type: AWS::S3::Bucket + ###### RELATIONAL MIGRATION MYSQL EC2 PUBLIC INSTANCE ###### + DbSecurityGroup: + Type: AWS::EC2::SecurityGroup + Properties: + GroupDescription: MySQL security group + SecurityGroupIngress: + - CidrIp: 172.31.0.0/16 + IpProtocol: tcp + FromPort: 3306 + ToPort: 3306 + - Description: "Allow Instance Connect" + FromPort: 22 
+ ToPort: 22 + IpProtocol: tcp + SourcePrefixListId: !FindInMap [AWSRegions2PrefixListID, !Ref 'AWS::Region', PrefixList] + Tags: + - Key: Name + Value: MySQL-SecurityGroup + DBInstanceProfile: + Type: AWS::IAM::InstanceProfile + Properties: + InstanceProfileName: DBInstanceProfile + Path: / + Roles: + - !Ref DBInstanceRole + DBInstanceRole: + Type: AWS::IAM::Role + Properties: + RoleName: DBInstanceRole + AssumeRolePolicyDocument: + Version: 2012-10-17 + Statement: + - + Effect: Allow + Principal: + Service: + - ec2.amazonaws.com + Action: + - sts:AssumeRole + Path: / + ManagedPolicyArns: + - arn:aws:iam::aws:policy/AmazonS3FullAccess + - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore + DbInstance: + Type: AWS::EC2::Instance + Properties: + ImageId: !Ref DBLatestAmiId + InstanceType: !GetAtt Cloud9FindTheInstanceTypeLambda.InstanceType + SecurityGroupIds: + - !GetAtt DbSecurityGroup.GroupId + SubnetId: !GetAtt Cloud9FindTheInstanceTypeLambda.SubnetId + IamInstanceProfile: !Ref DBInstanceProfile + BlockDeviceMappings: + - DeviceName: /dev/xvda + Ebs: + VolumeType: gp2 + VolumeSize: 50 + DeleteOnTermination: True + Encrypted: True + UserData: + Fn::Base64: !Sub | + #!/bin/bash -ex + sudo su + rpm --import https://repo.mysql.com/RPM-GPG-KEY-mysql-2023 + rpm -Uvh https://repo.mysql.com/mysql80-community-release-el7-3.noarch.rpm + yum install -y mysql-community-server + systemctl enable mysqld + systemctl start mysqld + export DbMasterPassword=${DbMasterPassword} + export DbMasterUsername=${DbMasterUsername} + mysql -u root "-p$(grep -oP '(?<=root@localhost\: )\S+' /var/log/mysqld.log)" -e "ALTER USER 'root'@'localhost' IDENTIFIED BY '${DbMasterPassword}'" --connect-expired-password + mysql -u root "-p${DbMasterPassword}" -e "CREATE USER '${DbMasterUsername}' IDENTIFIED BY '${DbMasterPassword}'" + mysql -u root "-p${DbMasterPassword}" -e "GRANT ALL PRIVILEGES ON *.* TO '${DbMasterUsername}'" + mysql -u root "-p${DbMasterPassword}" -e "FLUSH PRIVILEGES" + mysql -u root "-p${DbMasterPassword}" -e "CREATE DATABASE app_db;" + ## Setup MySQL Tables + cd /var/lib/mysql-files/ + curl -O https://www.amazondynamodblabs.com/static/rdbms-migration/rdbms-migration.zip + unzip -q rdbms-migration.zip + chmod 775 *.* + mysql -u root "-p${DbMasterPassword}" -e "CREATE DATABASE imdb;" + mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.title_akas (titleId VARCHAR(200), ordering VARCHAR(200),title VARCHAR(1000), region VARCHAR(1000), language VARCHAR(1000), types VARCHAR(1000),attributes VARCHAR(1000),isOriginalTitle VARCHAR(5),primary key (titleId, ordering));" + mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.title_basics (tconst VARCHAR(200), titleType VARCHAR(1000),primaryTitle VARCHAR(1000), originalTitle VARCHAR(1000), isAdult VARCHAR(1000), startYear VARCHAR(1000),endYear VARCHAR(1000),runtimeMinutes VARCHAR(1000),genres VARCHAR(1000),primary key (tconst));" + mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.title_crew (tconst VARCHAR(200), directors VARCHAR(1000),writers VARCHAR(1000),primary key (tconst));" + mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.title_principals (tconst VARCHAR(200), ordering VARCHAR(200),nconst VARCHAR(200), category VARCHAR(1000), job VARCHAR(1000), characters VARCHAR(1000),primary key (tconst,ordering,nconst));" + mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.title_ratings (tconst VARCHAR(200), averageRating float,numVotes integer,primary key (tconst));" + mysql -u root "-p${DbMasterPassword}" -e "CREATE 
TABLE imdb.name_basics (nconst VARCHAR(200), primaryName VARCHAR(1000),birthYear VARCHAR(1000), deathYear VARCHAR(1000), primaryProfession VARCHAR(1000), knownForTitles VARCHAR(1000),primary key (nconst));" + mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/title_ratings.tsv' IGNORE INTO TABLE imdb.title_ratings FIELDS TERMINATED BY '\t';" + mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/title_basics.tsv' IGNORE INTO TABLE imdb.title_basics FIELDS TERMINATED BY '\t';" + mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/title_crew.tsv' IGNORE INTO TABLE imdb.title_crew FIELDS TERMINATED BY '\t';" + mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/title_principals.tsv' IGNORE INTO TABLE imdb.title_principals FIELDS TERMINATED BY '\t';" + mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/name_basics.tsv' IGNORE INTO TABLE imdb.name_basics FIELDS TERMINATED BY '\t';" + mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/title_akas.tsv' IGNORE INTO TABLE imdb.title_akas FIELDS TERMINATED BY '\t';" + Tags: + - Key: Name + Value: MySQL-Instance + + +################## OUTPUTS ##################### +Outputs: + Cloud9IdeUrl: + Description: URL to launch the Cloud9 IDE + Value: !Sub https://${AWS::Region}.console.aws.amazon.com/cloud9/ide/${Cloud9Instance}?region=${AWS::Region} + Export: + Name: Cloud9IdeUrl + Cloud9LogBucketArn: + Description: S3 Bucket Arn + Value: !GetAtt Cloud9LogBucket.Arn + Cloud9LogBucketName: + Description: S3 Bucket Name + Value: !Ref Cloud9LogBucket + Export: + Name: Cloud9LogBucket + MigrationS3BucketName: + Description: S3 Bucket Name + Value: !Ref MigrationS3Bucket + Export: + Name: MigrationS3Bucket + Cloud9RoleArn: + Description: Role Arn + Value: !GetAtt Cloud9Role.Arn + Export: + Name: Cloud9RoleArn diff --git a/static/images/awsconsole1.png b/static/images/awsconsole1.png deleted file mode 100644 index 3581deeb..00000000 Binary files a/static/images/awsconsole1.png and /dev/null differ diff --git a/static/images/awsconsole2.png b/static/images/awsconsole2.png deleted file mode 100644 index d8cda045..00000000 Binary files a/static/images/awsconsole2.png and /dev/null differ diff --git a/static/images/common/common-vs-code-01.png b/static/images/common/common-vs-code-01.png new file mode 100644 index 00000000..2eb5edbb Binary files /dev/null and b/static/images/common/common-vs-code-01.png differ diff --git a/static/images/common/common-vs-code-02.png b/static/images/common/common-vs-code-02.png new file mode 100644 index 00000000..b7db9dee Binary files /dev/null and b/static/images/common/common-vs-code-02.png differ diff --git a/static/images/common/common-vs-code-03.png b/static/images/common/common-vs-code-03.png new file mode 100644 index 00000000..6fa637ac Binary files /dev/null and b/static/images/common/common-vs-code-03.png differ diff --git a/static/images/common/on-your-own-cf-01.png b/static/images/common/on-your-own-cf-01.png new file mode 100644 index 00000000..fe0eca17 Binary files /dev/null and b/static/images/common/on-your-own-cf-01.png differ diff --git a/static/images/common/on-your-own-cf-02.png b/static/images/common/on-your-own-cf-02.png new file mode 100644 index 00000000..ed8b7100 Binary files /dev/null and b/static/images/common/on-your-own-cf-02.png differ diff --git a/static/images/common/on-your-own-cf-03.png b/static/images/common/on-your-own-cf-03.png 
new file mode 100644 index 00000000..399fadc2 Binary files /dev/null and b/static/images/common/on-your-own-cf-03.png differ diff --git a/static/images/common/workshop-studio-01.png b/static/images/common/workshop-studio-01.png new file mode 100644 index 00000000..2d5e412d Binary files /dev/null and b/static/images/common/workshop-studio-01.png differ diff --git a/static/images/common/workshop-studio-02.png b/static/images/common/workshop-studio-02.png new file mode 100644 index 00000000..52b84449 Binary files /dev/null and b/static/images/common/workshop-studio-02.png differ diff --git a/static/images/hands-on-labs/load-sample-data.png b/static/images/hands-on-labs/load-sample-data.png new file mode 100644 index 00000000..3a9e0d1d Binary files /dev/null and b/static/images/hands-on-labs/load-sample-data.png differ diff --git a/static/images/ladv-small-file.png b/static/images/ladv-small-file.png new file mode 100644 index 00000000..b2ac5898 Binary files /dev/null and b/static/images/ladv-small-file.png differ
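RunSSMDocLambda, VSCodeHealthCheckLambda, and CheckSSMDocLambda in the template above all implement the same poll-until-deadline loop around cfnresponse: retry a probe with a fixed sleep, and give up while there is still enough time left to signal FAILED, because a custom resource that never calls back stalls the stack until CloudFormation's own timeout. A minimal sketch of that shared skeleton, with a hypothetical `probe()` standing in for the real SSM command or `/healthz` check:

```python
import os
import time

import cfnresponse  # module CloudFormation injects into inline (ZipFile) Lambda functions


def poll_until_done(event, context, probe):
    """Retry probe() until it succeeds or only AbortTimeRemaining ms are left.

    probe is a hypothetical zero-argument callable returning True on success;
    the real handlers check an SSM command invocation or the /healthz endpoint here.
    """
    sleep_ms = int(os.environ.get('RetrySleep', '2900'))
    abort_time_remaining_ms = int(os.environ.get('AbortTimeRemaining', '5000'))

    attempt_no = 0
    while True:
        attempt_no += 1
        if probe():
            # Success: unblock the stack with a SUCCESS signal.
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData={}, reason='OK')
            return
        # Keep enough headroom to still send FAILED before the Lambda is killed;
        # a custom resource that never responds hangs the stack.
        if context.get_remaining_time_in_millis() <= abort_time_remaining_ms:
            cfnresponse.send(event, context, cfnresponse.FAILED, responseData={},
                             reason='Timed out after {} attempts'.format(attempt_no))
            return
        time.sleep(sleep_ms / 1000)
```

The `RetrySleep` and `AbortTimeRemaining` defaults mirror the environment variables the template sets on each function, and the real handlers short-circuit to SUCCESS for Update and Delete events before ever entering this loop.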