Commit afae41c

Frontmatter for WS migration (#97)

* frontmatter headers changed
* Fixing bools
* More frontmatter
* Renaming and fixing start-here/setup content for frontmatter migration
* Fixing ordering of setup steps to be consistent

1 parent 1b9325a commit afae41c
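The change is mechanical across all 140 files: `+++` fences become `---`, TOML `key = value` pairs become YAML `key: value`, and quoted booleans such as `hidden = "true"` become bare `hidden: true` (the "Fixing bools" item above). A hypothetical Python sketch of that transformation, not the tooling actually used for this commit:

```python
# Hypothetical sketch of the conversion this commit applies: swap TOML
# frontmatter (+++ fences, key = value) for YAML (--- fences, key: value)
# and unquote booleans. Comments and blank lines pass through unchanged.
import re
import sys


def convert_frontmatter(text: str) -> str:
    match = re.match(r"\+\+\+\n(.*?)\n\+\+\+", text, re.DOTALL)
    if not match:
        return text  # no TOML frontmatter found
    body = []
    for line in match.group(1).splitlines():
        # key = value  ->  key: value  (tolerates missing spaces, e.g. chapter="true")
        line = re.sub(r"^(\w+)\s*=\s*", r"\1: ", line)
        # quoted booleans become bare YAML booleans
        line = re.sub(r':\s*"(true|false)"$', r": \1", line)
        body.append(line)
    yaml = "---\n" + "\n".join(body) + "\n---"
    return text[: match.start()] + yaml + text[match.end():]


if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as f:
            original = f.read()
        with open(path, "w", encoding="utf-8") as f:
            f.write(convert_frontmatter(original))
```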

File tree: 140 files changed, +920 / −914 lines changed


content/_index.en.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -1,10 +1,10 @@
-+++
+---
 #TODO swap this for variable
 #ref https://learn.netlify.com/en/
-title = "Amazon DynamoDB Labs"
-chapter = true
-weight = 1
-+++
+title: "Amazon DynamoDB Labs"
+chapter: true
+weight: 1
+---
 
 ![Open the DynamoDB Logo](/images/Amazon-DynamoDB.png)
 
```

content/all-content.en.md

Lines changed: 6 additions & 6 deletions
```diff
@@ -1,6 +1,6 @@
-+++
-hidden = "true"
-chapter="true"
-type = "all-content"
-description = "Placeholder for an experimental single page holding all doc pages, generated during the build process"
-+++
+---
+hidden: true
+chapter: true
+type: "all-content"
+description: "Placeholder for an experimental single page holding all doc pages, generated during the build process"
+---
```

content/authors.en.md

Lines changed: 6 additions & 6 deletions
```diff
@@ -1,9 +1,9 @@
-+++
-title = "Contributors to Amazon DynamoDB Labs"
-hidden = "true"
-chapter = "true"
-description = "Our editors and hall of fame."
-+++
+---
+title: "Contributors to Amazon DynamoDB Labs"
+hidden: true
+chapter: true
+description: "Our editors and hall of fame."
+---
 
 
 
```
content/design-patterns/_index.en.md

Lines changed: 7 additions & 8 deletions
```diff
@@ -1,11 +1,10 @@
-+++
-title = "Advanced Design Patterns for Amazon DynamoDB"
-chapter = true
-description = "300 level: Hands-on exercise using Python and DynamoDB best practices."
-pre = "<b>LADV: </b>"
-
-weight = 1
-+++
+---
+title: "Advanced Design Patterns for Amazon DynamoDB"
+chapter: true
+description: "300 level: Hands-on exercise using Python and DynamoDB best practices."
+pre: "<b>LADV: </b>"
+weight: 1
+---
 In this workshop, you review [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) design patterns and best practices to build highly scalable applications that are optimized for performance and cost. This workshop implements these design patterns by using Python scripts. At the end of this workshop, you will have the knowledge to build and monitor DynamoDB applications that can grow to any size and scale.
 
 Here's what this workshop includes:
```

content/design-patterns/ex1capacity/Step1.en.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -1,8 +1,8 @@
-+++
-title = "Step 1 - Create the DynamoDB table"
-date = 2019-12-02T10:26:23-08:00
-weight = 2
-+++
+---
+title: "Step 1 - Create the DynamoDB table"
+date: 2019-12-02T10:26:23-08:00
+weight: 2
+---
 
 
 Run the following AWS CLI command to create the first DynamoDB table called `logfile`:
```
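The command itself sits below the context shown in this hunk, so it isn't reproduced here. As a hedged illustration only, a boto3 equivalent of creating `logfile` at the 5 RCU / 5 WCU baseline that Step 5 later raises; the key schema below is an assumption, not the workshop's actual definition:

```python
# Sketch of a provisioned-capacity table named logfile. The PK attribute is a
# placeholder key schema; only the table name and 5/5 capacity come from the
# workshop text.
import boto3

dynamodb = boto3.client("dynamodb")
dynamodb.create_table(
    TableName="logfile",
    AttributeDefinitions=[{"AttributeName": "PK", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "PK", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
# Block until the table is ACTIVE before loading data
dynamodb.get_waiter("table_exists").wait(TableName="logfile")
```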

content/design-patterns/ex1capacity/Step2.en.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -1,8 +1,8 @@
-+++
-title = "Step 2 - Load sample data into the table"
-date = 2019-12-02T10:26:28-08:00
-weight = 3
-+++
+---
+title: "Step 2 - Load sample data into the table"
+date: 2019-12-02T10:26:28-08:00
+weight: 3
+---
 
 
 Now that you have created the table, you can load some sample data into the table by running the following Python script.
```
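The loader script itself is also outside the shown context. A minimal sketch of the same idea, assuming a CSV input whose header row maps directly to item attributes; the file name and that mapping are placeholders, not taken from the workshop:

```python
# Stream a CSV into the logfile table. batch_writer buffers items and flushes
# them as 25-item BatchWriteItem requests. Assumes each CSV row carries the
# table's key attribute among its columns.
import csv

import boto3

table = boto3.resource("dynamodb").Table("logfile")

with open("logfile_sample.csv", newline="") as f, table.batch_writer() as batch:
    for row in csv.DictReader(f):
        batch.put_item(Item=row)  # each row dict becomes one DynamoDB item
```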

content/design-patterns/ex1capacity/Step3.en.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -1,8 +1,8 @@
-+++
-title = "Step 3 - Load a larger file to compare the execution times"
-date = 2019-12-02T10:26:29-08:00
-weight = 4
-+++
+---
+title: "Step 3 - Load a larger file to compare the execution times"
+date: 2019-12-02T10:26:29-08:00
+weight: 4
+---
 
 
 Run the script again, but this time use a larger input data file.
```

content/design-patterns/ex1capacity/Step4.en.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -1,8 +1,8 @@
-+++
-title = "Step 4 - View the CloudWatch metrics on your table"
-date = 2019-12-02T10:26:29-08:00
-weight = 6
-+++
+---
+title: "Step 4 - View the CloudWatch metrics on your table"
+date: 2019-12-02T10:26:29-08:00
+weight: 6
+---
 
 To view the Amazon CloudWatch metrics for your table:
 
```
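The console walkthrough is truncated out of this hunk. For reference, the same table metrics can also be pulled programmatically; a sketch using boto3, with the metric, time window, and period chosen purely for illustration:

```python
# Pull one hour of consumed write capacity for the logfile table, summed per
# minute, from the AWS/DynamoDB CloudWatch namespace.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/DynamoDB",
    MetricName="ConsumedWriteCapacityUnits",
    Dimensions=[{"Name": "TableName", "Value": "logfile"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=60,
    Statistics=["Sum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])
```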

content/design-patterns/ex1capacity/Step5.en.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -1,8 +1,8 @@
-+++
-title = "Step 5 - Increase the capacity of the table"
-date = 2019-12-02T10:26:29-08:00
-weight = 6
-+++
+---
+title: "Step 5 - Increase the capacity of the table"
+date: 2019-12-02T10:26:29-08:00
+weight: 6
+---
 
 
 Run the following AWS CLI command to increase the write capacity units and read capacity units from 5 to 100.
```
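Again the exact CLI command falls outside the shown context; as a sketch, a boto3 equivalent of the 5-to-100 capacity update the text describes:

```python
# Raise the logfile table from the 5/5 baseline to 100 RCU / 100 WCU.
import boto3

boto3.client("dynamodb").update_table(
    TableName="logfile",
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 100},
)
```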

content/design-patterns/ex1capacity/Step6.en.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -1,8 +1,8 @@
-+++
-title = "Step 6 - After increasing the table’s capacity, load more data"
-date = 2019-12-02T10:26:29-08:00
-weight = 7
-+++
+---
+title: "Step 6 - After increasing the table’s capacity, load more data"
+date: 2019-12-02T10:26:29-08:00
+weight: 7
+---
 
 After you increased the table’s capacity, run the following Python script again to populate the table using the `logfile_medium2.csv` input data file with the same number of rows as when you ran this command previously. Notice that the execution of the command happens more quickly this time.
 
```
