
Commit 74b45bb

first draft corpus analysis
1 parent 4c5eda2 commit 74b45bb

1 file changed: +136 -1 lines changed

chapters/2.TextAnalysis/corpus_analysis.qmd

Lines changed: 136 additions & 1 deletion
@@ -12,10 +12,145 @@ editor_options:

In this chapter, we will explore some basic techniques for analyzing text data.

## Importing the Data

```{r}
# Load necessary libraries
library(tidyverse)
library(tidytext)
```

```{r}
# Load the text data
comments <- readr::read_csv("data/clean/comments_preprocessed.csv") # Adjust the path to your data location
```

Explore the first few rows of the dataset to understand its structure.

```{r}
head(comments)
```

We can see that the dataset is imported as a tibble with three columns: `..1`, `id`, and `comments`. We are going to focus on the `comments` column for our text analysis, but the `id` column can be useful for grouping or filtering the data by season.
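
For example, assuming the `id` values encode the season as a prefix such as `s1` or `s2` (the same assumption the season comparison later in this chapter relies on), a quick sketch of counting comments per season could look like this:

```{r}
# Count comments per season, assuming ids start with a season prefix such as "s1" or "s2"
comments %>%
  mutate(season = stringr::str_sub(id, 1, 2)) %>%
  count(season)
```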

## Word Frequency Analysis

Word frequency analysis is one of the most fundamental techniques in text analysis. It helps us understand which words appear most often in our texts and can reveal important patterns about content, style, and themes.

### Word Counts

We can start by calculating the frequency of each word in our corpus. This involves tokenizing the text into individual words, counting the occurrences of each word, and then sorting them by frequency.

**Tokenization** is the process of breaking down text into smaller units, such as words or phrases. In this case, we will use the `unnest_tokens()` function from the `tidytext` package to tokenize our comments into words.

::: {.callout-note title="Why not use strsplit()?" collapse="true"}
While the `strsplit()` function can be used for basic tokenization, it lacks the conveniences built into `unnest_tokens()`, such as stripping punctuation, converting text to lowercase, and returning a tidy one-token-per-row data frame that works directly with `dplyr` verbs (stop words can then be removed with a simple `anti_join()` against `tidytext::stop_words`). Using `unnest_tokens()` therefore gives a more reliable and convenient tokenization workflow, especially for larger datasets.
:::
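
For comparison, here is a minimal base-R sketch of tokenizing a single made-up comment with `strsplit()`; lowercasing and punctuation handling have to be done by hand, and the result is a plain character vector rather than a tidy data frame:

```{r}
# A base-R tokenization sketch, for comparison only (not used in the rest of the chapter)
example_text <- "The finale of Severance was great!"
tolower(unlist(strsplit(example_text, "\\W+")))
#> "the" "finale" "of" "severance" "was" "great"
```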

```{r}
# Tokenizing the comments into words
tokens <- comments %>%
  unnest_tokens(word, comments)

head(tokens)
```

Note that the resulting `tokens` tibble contains a column named `word`, which holds the individual words extracted from each comment.

With this tokenized data, we can now count words. For instance, we can simply count the occurrences of each word:

```{r}
# Counting word frequencies
word_counts <- tokens %>%
  count(word, sort = TRUE)

head(word_counts)
```

This will give us a list of words along with their corresponding frequencies, sorted in descending order. We can also visualize the most common words using a bar plot or a word cloud.

```{r}
# Visualizing the top 20 most common words
top_words <- word_counts %>%
  top_n(20, n) # keep the 20 most frequent words

ggplot(top_words, aes(x = reorder(word, n), y = n)) +
  geom_bar(stat = "identity") +
  coord_flip() + # flip the axes so the word labels remain readable
  labs(title = "Top 20 Most Common Words", x = "Words", y = "Frequency")
```

We can also create a word cloud to visualize word frequencies in a more engaging way.

```{r}
# Creating a word cloud
library(wordcloud2)
wordcloud2(data = word_counts, size = 1)
```

As expected, even in a preprocessed corpus, some words dominate simply because they are used so often. In this case, "severance", "season", and "finale" pop up as the most frequent words. To get a more meaningful analysis, we can filter out these common words.

```{r}
# Filtering out common words for a more meaningful word cloud
# Note: unnest_tokens() lowercases tokens by default, so the filter terms must be lowercase too
common_words <- c("severance", "season", "appletv", "apple", "tv", "show", "finale", "episode") # you can expand this list as needed

filtered_word_counts <- word_counts %>%
  filter(!word %in% common_words)

# Creating a filtered word cloud
wordcloud2(data = filtered_word_counts, size = 1)
```

Now we have a more evenly distributed word cloud that highlights other significant words in the corpus.

### Words by Season

A simple but effective way to analyze text data is to compare word frequencies across different categories or groups. In this case, we can compare the word frequencies between different seasons of the show.

```{r}
# Counting word frequencies by season (keeping the full counts so we can reuse them below)
season_1_tokens <- tokens %>%
  filter(grepl("^s1", id)) %>%
  count(word, sort = TRUE) %>%
  filter(!word %in% common_words)

season_2_tokens <- tokens %>%
  filter(grepl("^s2", id)) %>%
  count(word, sort = TRUE) %>%
  filter(!word %in% common_words)

# Displaying word clouds of the 20 most frequent words for each season
wordcloud2(data = top_n(season_1_tokens, 20, n), size = 1, color = "random-light", backgroundColor = "black")
wordcloud2(data = top_n(season_2_tokens, 20, n), size = 1, color = "random-light", backgroundColor = "black")
```

We can even select the 50 most frequent words per season and extract those that are unique to each season.

```{r}
# Finding unique words per season
top_50_s1 <- season_1_tokens %>%
  top_n(50, n) %>%
  pull(word)
top_50_s2 <- season_2_tokens %>%
  top_n(50, n) %>%
  pull(word)

unique_s1 <- setdiff(top_50_s1, top_50_s2)
unique_s2 <- setdiff(top_50_s2, top_50_s1)

unique_s1_tokens <- season_1_tokens %>%
  filter(word %in% unique_s1)
unique_s2_tokens <- season_2_tokens %>%
  filter(word %in% unique_s2)

# Displaying unique word clouds for each season
wordcloud2(data = unique_s1_tokens, size = 1, color = "random-light", backgroundColor = "black")
wordcloud2(data = unique_s2_tokens, size = 1, color = "random-light", backgroundColor = "black")
```

This analysis lets us see which words are more prominent in each season, offering insight into the themes and topics that stood out at different points in the show.
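
If you prefer a static view over word clouds, one possible sketch (reusing the `unique_s1_tokens` and `unique_s2_tokens` objects created above) is a faceted bar chart of the most frequent season-specific words:

```{r}
# Comparing the most frequent season-specific words in a single faceted plot
bind_rows(
  unique_s1_tokens %>% mutate(season = "Season 1"),
  unique_s2_tokens %>% mutate(season = "Season 2")
) %>%
  group_by(season) %>%
  top_n(10, n) %>% # keep the 10 most frequent unique words per season
  ungroup() %>%
  ggplot(aes(x = n, y = reorder(word, n))) +
  geom_col() +
  facet_wrap(~ season, scales = "free_y") +
  labs(title = "Most Frequent Unique Words per Season", x = "Frequency", y = NULL)
```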

With these basic text analysis techniques, we can start to uncover patterns and insights in our text data. Although simple, these methods help us explore the content and structure of the text, set the stage for more advanced analyses, and can even reveal how well the pre-processing steps applied to the data have worked.
