chapters/3.SentimentAnalysis/emotion.qmd (148 additions, 3 deletions)
@@ -11,7 +11,7 @@ The `syuzhet` package implements the [National Research Council Canada (NRC) Emo
This framework uses eight categories of emotions based on Robert Plutchik's theory of the emotional wheel, a foundational model that illustrates the relationships between human emotions from a psychological perspective. Plutchik’s wheel identifies eight primary emotions: anger, disgust, sadness, surprise, fear, trust, joy, and anticipation. As illustrated in Figure ? below, these emotions are organized into four pairs of opposites on the wheel. Emotions positioned diagonally across from each other represent opposites, while adjacent emotions share similarities, reflecting a positive correlation.
{fig-align="center" width="376"} <!-- figure: Plutchik's wheel of emotions -->
The NRC Emotion Lexicon was developed as part of research into affective computing and sentiment analysis using a combination of manual annotation and crowdsourcing. Human annotators evaluated thousands of words, indicating which emotions were commonly associated with each word. This method ensured that the lexicon captured human-perceived emotional associations, rather than relying solely on statistical co-occurrences in text.
@@ -22,9 +22,154 @@ You may explore NRC's lexicon Tableau dashboard to explore words associated with
Now that we have a better understanding of this package, let's get back to business and perform emotion detection on our data.
#### Emotion Detection with Syuzhet's NRC Lexicon
##### Break Text into Sentences
```r
sentences <- get_sentences(comments$comments)
```
The `get_sentences()` function splits your text into individual sentences.
This allows us to analyze emotions at a finer level: rather than working with entire comments, we examine each sentence separately. For example, the comment "I love the show. The ending made me sad." becomes two sentences.
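To see the splitting in action, you could run `get_sentences()` on that example comment directly (a quick check, not part of the lesson's pipeline):

```r
# Quick check of sentence splitting on the example comment above
get_sentences("I love the show. The ending made me sad.")
# should return a character vector of two sentences:
# "I love the show." and "The ending made me sad."
```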
##### Compute Emotion Scores per Sentence
```r
emotion_score <- get_nrc_sentiment(sentences)
```
The `get_nrc_sentiment()` function assigns emotion and sentiment scores (based on the NRC lexicon) to each sentence. Each sentence gets a numeric count for each of the eight emotions: 0 when no associated words are present, and higher values as more emotion-associated words appear. The output also includes positive and negative sentiment scores.
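As a quick illustration (not part of the lesson's pipeline), scoring a single sentence shows the structure of the output; the exact counts depend on the NRC lexicon:

```r
# One row per input sentence; columns are word counts per emotion,
# plus overall negative and positive counts
get_nrc_sentiment("The ending made me sad")
```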
##### Review Summary of Emotion Scores
Let's now compute basic statistics (min, max, mean, etc.) for each emotion column and get an overview of how frequent or strong each emotion is in our example dataset.
```r
summary(emotion_score)
```
##### Rejoin Sentences
After sentence-level analysis, we want to link each emotion score back to its **original comment or ID**.
```r
comments$comments <- sentences
emotion_data <- bind_cols(comments, emotion_score)
```
`bind_cols()` merges the original `comments` data frame with the new `emotion_score` table.
##### Summarize Emotion Counts Across All Sentences
Now, let's count **how many times each emotion appears** overall.
```r
emotion_summary <- emotion_data %>%
  select(anger:trust) %>%                    # get only the emotion columns
  summarise(across(everything(), sum)) %>%   # sum counts

# ... (several lines are collapsed in this diff) ...

  scale_fill_manual(values = brewer.pal(10, "Paired")) +   # Color palette
  theme_minimal(base_size = 12) +                          # Clean theme
  labs(title = "Overall Emotion Distribution",
       x = "Emotion", y = "Total Count") +                 # Titles and axis labels
  coord_flip()                                             # Flip axes for readability
```
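The diff collapses the lines between the summary step and the ggplot layers above, so the reshaping and the start of the plot call are not shown. A minimal sketch of how those pieces could fit together, assuming hypothetical `emotion` and `count` column names in the long format:

```r
# Hypothetical reconstruction of the collapsed steps -- the lesson's actual code may differ
library(dplyr)
library(tidyr)
library(ggplot2)
library(RColorBrewer)

emotion_summary <- emotion_data %>%
  select(anger:trust) %>%                      # keep only the emotion columns
  summarise(across(everything(), sum)) %>%     # total count per emotion
  pivot_longer(everything(),
               names_to = "emotion",           # assumed column names
               values_to = "count")

ggplot(emotion_summary, aes(x = reorder(emotion, count), y = count, fill = emotion)) +
  geom_col() +
  scale_fill_manual(values = brewer.pal(10, "Paired")) +   # color palette
  theme_minimal(base_size = 12) +                          # clean theme
  labs(title = "Overall Emotion Distribution",
       x = "Emotion", y = "Total Count") +
  coord_flip()                                             # flip axes for readability
```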
##### Add a “Season” Variable (Grouping) and Summarize
Let's now add a new column called `season` by looking at the ID pattern — for example, `s1_` means season 1 and `s2_` means season 2. This makes it easy to compare the emotional tone across seasons.
```r
emotion_seasons <- emotion_data %>%
  mutate(season = ifelse(grepl("^s1_", id), "s1",
                  ifelse(grepl("^s2_", id), "s2", NA)))
```
Time to aggregate the total count of each emotion within each season.
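The aggregation code itself is not visible in this diff; a minimal sketch of one way to do it, using a hypothetical `emotion_by_season` object:

```r
# Hypothetical sketch -- assumes dplyr is loaded and emotion_seasons exists as built above
emotion_by_season <- emotion_seasons %>%
  group_by(season) %>%
  summarise(across(anger:trust, sum))   # total count per emotion within each season
```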
You might be wondering: if the **`syuzhet`** package also computes polarity, why did we choose **`sentimentr`** in our pipeline? The reason is that syuzhet does not inherently account for valence shifters. In the original syuzhet implementation, words are scored in isolation—so “good” = +1, “bad” = −1—regardless of nearby negations or intensifiers. For example, “not good” would still be counted as +1. Because **`sentimentr`** adjusts sentiment scores for negators and amplifiers, polarity results are more nuanced, robust, and reliable.
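A quick way to convince yourself of the difference (illustrative only; exact values depend on each package's lexicon):

```r
# syuzhet scores words in isolation, so the negation is ignored
syuzhet::get_sentiment("not good")     # expect a positive value

# sentimentr applies valence shifters, so "not" flips the polarity of "good"
sentimentr::sentiment("not good")      # expect a negative sentiment score
```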
chapters/3.SentimentAnalysis/introduction.qmd (18 additions, 5 deletions)
@@ -4,7 +4,7 @@ title: "Introduction to Sentiment Analysis"
Now that we have completed all the key preprocessing steps and our example dataset is in much better shape, we can finally proceed with sentiment analysis.
{fig-align="center" width="500"}
## What is Sentiment Analysis?
@@ -23,12 +23,25 @@ Our analysis pipeline will follow a two-step approach. First, we will compute ba
Let’s start by installing and loading the necessary packages, then bringing in the cleaned dataset so we can begin our sentiment analysis. We will discuss the role of each package in the next episodes.
```r
# Install packages (remove comments for packages you might have skipped)
install.packages(c("sentimentr", "syuzhet"))

# Load Packages
# ... (the remaining lines of this block are collapsed in the diff) ...
```
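The collapsed lines presumably finish the install step and load the libraries; a minimal sketch of what that could look like (the exact set of packages loaded is an assumption based on the functions used later in the chapter):

```r
# Assumed load step -- the actual collapsed code may differ
library(sentimentr)
library(syuzhet)
library(dplyr)     # mutate(), case_when(), pipes
library(ggplot2)   # plotting later in the chapter
```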
Here we're using the **`sentiment_by()`** function, which looks at each comment and calculates a **sentiment score** representing how positive or negative the language is.
So after running this, we get a new object called `sentiment_scores` with the average sentiment for every comment.
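The call that creates `sentiment_scores` sits in the collapsed portion of this diff; a minimal sketch of what it might look like, assuming the text lives in a `comments` column:

```r
# Hypothetical reconstruction -- the actual call is not shown in this diff
sentiment_scores <- sentiment_by(comments$comments)
head(sentiment_scores)   # includes the ave_sentiment column used below
```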
##### Adding those scores back to our dataset
```r
polarity <- comments %>%
  mutate(score = sentiment_scores$ave_sentiment,
         sentiment_label = case_when(
           score > 0.1  ~ "positive",
           score < -0.1 ~ "negative",
           TRUE         ~ "neutral"
         ))
```
Now we're using the **`dplyr`** package to make our dataset more informative. We take our `comments` dataset, and with **`mutate()`**, we add two new columns: `score` and `sentiment_label`. The rule inside **`case_when()`** decides which label to assign. The small buffer around zero (±0.1) helps us avoid overreacting to tiny fluctuations.
Let's now take a look at the `sentiment_scores` data frame:
<addoutput>
It's expected that the standard deviation is missing, because each row/case is treated as a single sentence when computing the score.

To get a sense of the overall mood of our dataset, let's run:
```r
table(polarity$sentiment_label)
```
#### Plotting Scores
Next, let's plot some results and histograms to check the distribution per season:
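The plotting code is collapsed in this diff. A minimal sketch of per-season histograms, assuming the ids carry the `s1_`/`s2_` prefixes described in the emotion chapter and that dplyr and ggplot2 are loaded:

```r
# Hypothetical sketch -- the lesson's actual plots may differ
polarity %>%
  mutate(season = ifelse(grepl("^s1_", id), "s1", "s2")) %>%   # assumed id prefix
  ggplot(aes(x = score)) +
  geom_histogram(bins = 30) +
  facet_wrap(~ season) +
  labs(title = "Sentiment Score Distribution by Season",
       x = "Average sentiment score", y = "Number of comments")
```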
We could have spent more time refining these plots, but this is sufficient for our initial exploration. In pairs, review the plots and discuss what they reveal about viewers’ perceptions of the *Severance* show.
Well, that's only part of the story. Now we move on to emotion detection to discover what else we can learn from the data.