chapters/2.TextAnalysis/corpus_analysis.qmd

In this chapter, we will explore some basic techniques for analyzing text data.

## Importing the Data

```{r}
# Load necessary libraries
library(tidyverse)
library(tidytext)
```

```{r}
# Load the text data
comments <- readr::read_csv("data/clean/comments_preprocessed.csv") # Adjust the path to your data location
```

Explore the first few rows of the dataset to understand its structure.

```{r}
head(comments)
```

We can see that the dataset is imported as a tibble with three columns: `...1`, `id`, and `comments`. We are going to focus on the `comments` column for our text analysis, but the `id` column can be useful for grouping or filtering the data by season.

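For example, a quick count by `id` gives a sense of how the comments are distributed; this is only a minimal sketch, assuming `id` is the grouping variable just described:

```{r}
# Count how many comments fall under each id (assumed to encode the season)
comments %>%
  count(id)
```
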
## Word Frequency Analysis

Word frequency analysis is one of the most fundamental techniques in text analysis. It helps us understand which words appear most often in our texts and can reveal important patterns about content, style, and themes.

### Word Counts

We can start by calculating the frequency of each word in our corpus. This involves tokenizing the text into individual words, counting the occurrences of each word, and then sorting them by frequency.

**Tokenization** is the process of breaking down text into smaller units, such as words or phrases. In this case, we will use the `unnest_tokens()` function from the `tidytext` package to tokenize our comments into words.

::: {.callout-note title="Why not use strsplit()?" collapse="true"}

While the `strsplit()` function can be used for basic tokenization, it lacks the conveniences provided by `unnest_tokens()`, such as stripping punctuation, converting text to lowercase, and returning a tidy one-word-per-row tibble. (Neither function removes stop words; that is a separate step, typically done with `anti_join(stop_words)`.) Using `unnest_tokens()` therefore gives a cleaner and more consistent tokenization, especially for larger datasets.

:::

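To make the difference concrete, here is a small illustrative comparison on a single made-up comment (the example text is not from the dataset):

```{r}
# A made-up example comment, just to compare the two approaches
example <- tibble(text = "The finale of Severance was AMAZING!")

# strsplit() keeps capitalization and punctuation attached to the tokens
strsplit(example$text, " ")

# unnest_tokens() lowercases the text and strips punctuation for us
example %>%
  unnest_tokens(word, text)
```
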
```{r}
# Tokenizing the comments into words
tokens <- comments %>%
  unnest_tokens(word, comments)

head(tokens)
```

Note that the resulting `tokens` tibble contains a column named `word`, which holds the individual words extracted from each comment.

With this tokenized data, we can now count words. For instance, we can simply count the occurrences of each word:

```{r}
# Counting word frequencies
word_counts <- tokens %>%
  count(word, sort = TRUE)

head(word_counts)
```

This gives us a list of words along with their corresponding frequencies, sorted in descending order. We can also visualize the most common words using a bar plot or a word cloud.

```{r}
# Visualizing the top 20 most common words
top_words <- word_counts %>%
  slice_max(n, n = 20) # slice_max() is the current replacement for top_n()

ggplot(top_words, aes(x = reorder(word, n), y = n)) +
  geom_col() +
  coord_flip() + # horizontal bars keep the word labels readable
  labs(title = "Top 20 Most Common Words", x = "Words", y = "Frequency")
```

We can also create a word cloud to visualize word frequencies in a more engaging way.

```{r}
# Creating a word cloud
library(wordcloud2)
wordcloud2(data = word_counts, size = 1)
```

As expected, even in a preprocessed corpus, some words become dominant due to their frequent usage. In this case, "severance", "season", and "finale" pop up as the most frequent words. To get a more meaningful analysis, we can filter out these common words.

```{r}
# Filtering out common words for a more meaningful word cloud
common_words <- c("severance", "season", "appletv", "apple", "tv", "show", "finale", "episode") # unnest_tokens() lowercases tokens, so keep this list lowercase; expand it as needed

filtered_word_counts <- word_counts %>%
  filter(!word %in% common_words)

# Creating a filtered word cloud
wordcloud2(data = filtered_word_counts, size = 1)
```

Now we have a more evenly distributed word cloud that highlights other significant words in the corpus.

### Words by Season

A simple but effective way to analyze text data is to compare word frequencies across different categories or groups. In this case, we can compare the word frequencies between different seasons of the show. This analysis allows us to see which words are more prominent in each season, providing insights into the themes and topics that were more relevant during those times.

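A minimal sketch of one way to do this is shown below; it assumes the `id` column identifies the season each comment belongs to, so adjust the grouping variable to match your data. The `reorder_within()` and `scale_x_reordered()` helpers from `tidytext` keep the words ordered correctly within each facet.

```{r}
# Count words within each season (assuming `id` encodes the season)
# and keep the 10 most frequent words per season
words_by_season <- tokens %>%
  count(id, word, sort = TRUE) %>%
  group_by(id) %>%
  slice_max(n, n = 10) %>%
  ungroup()

# Faceted bar chart: one panel per season
ggplot(words_by_season, aes(x = reorder_within(word, n, id), y = n)) +
  geom_col() +
  coord_flip() +
  scale_x_reordered() +
  facet_wrap(~ id, scales = "free") +
  labs(title = "Most Frequent Words by Season", x = "Words", y = "Frequency")
```
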
With these basic text analysis techniques, we can start to uncover patterns and insights from our text data. Although simple, these methods are helpful for exploring the content and structure of the text, setting the stage for more advanced analyses, or even providing a check on the quality of the pre-processing steps applied to the data.