Commit 8168bc1 (1 parent: a980f6c)

committed: added images, final ep, revised code and improved text

25 files changed: +102 -26 lines

_quarto.yml

Lines changed: 2 additions & 0 deletions
@@ -65,6 +65,8 @@ website:
          text: Polarity Classification
        - href: chapters/3.SentimentAnalysis/emotion.qmd
          text: Emotion Detection
        - href: chapters/3.SentimentAnalysis/considerations.qmd
          text: Final Considerations
      - about.qmd

  page-footer:

chapters/3.SentimentAnalysis/considerations.qmd

Lines changed: 30 additions & 0 deletions

@@ -0,0 +1,30 @@
---
title: "Final Considerations"
editor: visual
---

Sentiment analysis, while a powerful method for extracting insights from data, is far from perfect or straightforward. After all, it seeks to interpret natural language, which is constantly evolving. Speaking of evolution, did you know that the [Cambridge dictionary added 6,000 words](https://www.npr.org/2025/08/19/nx-s1-5506163/cambridge-dictionary-adds-more-than-6-000-words-including-skibidi-and-delulu) this year alone, including "broligarchy" and "delulu", many of which are widely used by Gen Alpha? This constant expansion of language highlights just how dynamic the texts we analyze can be.

Language is also inherently rich, ambiguous, and culturally nuanced. Lexicon-based approaches, for instance, rely on predefined word lists and often struggle to capture the subtleties of human expression.

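As a small illustration of that brittleness, a lexicon-based scorer simply ignores words it has never seen. A minimal sketch using the `syuzhet` package from this workshop (the example sentence is ours, and the exact score depends on the lexicon version):

``` r
library(syuzhet)

# "delulu" is recent slang and is unlikely to appear in the lexicon,
# so its sentiment contribution is simply dropped, even though a
# human reader would pick up on it
get_sentiment("the finale was delulu")
```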
In practice, sentiment analysis runs into issues like *code-switching*, where people mix languages in a single post, and compound sentences with mixed sentiments, such as “The movie had great acting, but the ending was lame,” which are difficult to score accurately. *Context dependence* further complicates interpretation: words can flip polarity depending on the domain. “Cheap,” for example, is positive when describing flights (or prices in general) but negative when describing fabric or build quality.

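To see the mixed-sentiment problem concretely, here is a minimal sketch using the `sentimentr` package from our pipeline (treat the score as directional only; it varies by package version):

``` r
library(sentimentr)

# sentimentr returns a single aggregate score per sentence, so the
# praise for the acting and the complaint about the ending partially
# cancel each other out in one number
sentiment("The movie had great acting, but the ending was lame.")
```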
Temporal dynamics also play a role, as *slang and cultural references* evolve rapidly; “bad,” for example, means “good” in some communities. Ambiguity adds another layer of difficulty: polysemous words like “sick” can mean either “ill” or “awesome.”

Another complication is *sarcasm* and irony, which can completely invert the intended sentiment, as in: “Oh great, another awesome Monday morning traffic jam!”

*Implicit sentiment* may be present even when emotional words are absent, as in “The waiter ignored us for 30 minutes before taking our order.” These factors collectively make sentiment analysis a useful but inherently imperfect tool for understanding human language and emotion.

However, it is important to emphasize that, as described before, we have only explored sentiment analysis through a lexicon-based approach. As illustrated in the figure below, there are other methods, including machine learning, deep learning, and hybrid combinations of the two, that can be used to extract emotions from text (including user-generated content), each with its own limitations and challenges.

![Overview of sentiment analysis methods, applications, and challenges (Mao, Liu, & Zhang, 2024).](images/wheel_NLP.jpg){fig-align="center" width="500"}

For example, Amazon relies on deep learning algorithms to determine the sentiment of customer reviews by identifying positive, negative, or neutral tones in the text. The models are trained on a vast dataset of Amazon’s product descriptions and reviews and are regularly updated with new information. This robust approach enables Amazon to efficiently analyze and interpret customer feedback on a large scale.

While there are more advanced approaches to sentiment analysis, including AI-assisted methods, these are discussion topics for future workshops!

------------------------------------------------------------------------

## References

Mao, Y., Liu, Q., & Zhang, Y. (2024). Sentiment analysis methods, applications, and challenges: A systematic literature review. *Journal of King Saud University - Computer and Information Sciences, 36*(4), 102048. <https://doi.org/10.1016/j.jksuci.2024.102048>

chapters/3.SentimentAnalysis/emotion.qmd

Lines changed: 35 additions & 15 deletions
@@ -22,27 +22,26 @@ You may explore NRC's lexicon Tableau dashboard to explore words associated with
```{=html}
<iframe width="780" height="500" src="https://public.tableau.com/views/NRC-Emotion-Lexicon-viz1/NRCEmotionLexicon-viz1?:embed=y&:loadOrderID=0&:display_count=no&:showVizHome=no" title="NRC Lexicon Interactive Visualization"></iframe>
```

Now that we have a better understanding of this package, let's get back to business and perform emotion detection on our data.

#### Emotion Detection with Syuzhet's NRC Lexicon

##### Detecting Emotions per Comment/Sentence

``` r
sentences <- get_sentences(comments$comments)
```

The `get_sentences()` function splits the text into individual sentences, so we can analyze emotions at a finer level than whole comments: a comment like “I love the show. The ending made me sad.” becomes two sentences.

##### Compute Emotion Scores per Sentence

``` r
emotion_score <- get_nrc_sentiment(sentences)
```

The `get_nrc_sentiment()` function assigns emotion and sentiment scores (based on the NRC lexicon) to each sentence. Each sentence receives a numeric score for each of the eight emotions (a count of how many words associated with that emotion appear; 0 means the emotion is absent). The output also includes positive and negative sentiment scores:

![](images/emotions_scores-dataframe.png)


##### Review Summary of Emotion Scores

@@ -52,9 +51,17 @@ Let's now compute basic statistics (min, max, mean, etc.) for each emotion colum
``` r
summary(emotion_score)
```

This step should generate the following output:

![](images/emotion-score.png)

Based on the results, the overall emotion in these comments leans heavily toward **sadness**, which has the highest average score (1.236). **Sadness** and **trust** appear to be the most common emotions, since they are the only ones with a median of 1.000, meaning at least half the comments contain words associated with them.

On the flip side, **disgust** was the rarest emotion, with the lowest average (0.145). It is also worth noting that while sadness and trust are the most *common*, a few outlier comments reached extreme maximum scores for **trust (47.000), anger (44.000), and fear (37.000)**.

##### Regroup with Comments and IDs

After computing the emotion scores, we want to link them back to their **original comments and IDs**.

``` r
comments$comments <- sentences
@@ -79,6 +86,8 @@ emotion_summary <- emotion_data %>%
  arrange(desc(count)) # sort emotions
```

![](images/emotion-counts.png){width="194"}

##### Plot the Overall Emotion Distribution

``` r
@@ -92,6 +101,8 @@ ggplot(emotion_summary, aes(x = emotion, y = count, fill = emotion)) +
  coord_flip() # Flip axes for readability
```

![](images/barchart-emotions.png)

##### Add a “Season” Variable (Grouping) and Summarize
Let's now add a new column called `season` by looking at the ID pattern — for example, `s1_` means season 1 and `s2_` means season 2. This makes it easy to compare the emotional tone across seasons.
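The code that builds this column is not shown in full in the hunk below, so here is a minimal sketch of the idea; it assumes the IDs live in a column named `id` (a hypothetical name) with prefixes like `s1_` and `s2_`:

``` r
library(dplyr)
library(stringr)

# Tag each row with its season by pulling the "s<number>" prefix off the ID
# (e.g., "s1_0042" -> "s1")
emotion_seasons <- emotion_data %>%
  mutate(season = str_extract(id, "^s\\d+"))
```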
@@ -105,12 +116,17 @@ emotion_seasons <- emotion_data %>%
Time to aggregate the total count of each emotion within each season.

``` r
# Aggregate emotion counts per season
emotion_by_season <- emotion_seasons %>%
  group_by(season) %>%
  summarise(
    across(anger:positive, ~ sum(.x, na.rm = TRUE))
  )
```

##### Plotting the Data

Comparing emotions by season:

``` r
emotion_long <- emotion_by_season %>%
emotion_long <- emotion_by_season %>%
@@ -126,7 +142,7 @@ ggplot(emotion_long, aes(x = reorder(emotion, -count), y = count, fill = season)
  coord_flip()
```

![](images/plot-emotion-by-season-01.png)

Now, let's see which emotions tend to occur together, revealing patterns of emotional co-occurrence in the text.
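The matrix behind this analysis can be computed along these lines. This is a sketch: it assumes `emotion_data` still holds the eight NRC emotion columns from earlier, and it builds the `emotion1`/`emotion2`/`correlation` columns used by the plotting code:

``` r
library(dplyr)
library(tidyr)
library(tibble)

# Pairwise correlations between the eight emotion columns
co_occurrence <- emotion_data %>%
  select(anger:trust) %>%
  cor()

# Reshape to long format for the ggplot heat map
co_occurrence_long <- co_occurrence %>%
  as.data.frame() %>%
  rownames_to_column("emotion1") %>%
  pivot_longer(-emotion1, names_to = "emotion2", values_to = "correlation")
```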

@@ -162,14 +178,18 @@ ggplot(co_occurrence_long, aes(x = emotion1, y = emotion2, fill = correlation))
  )
```

After running the script, we should get the following heat map:

![](images/emotion-correlation-heatmap.png)

Based on these results, the emotional picture is quite interconnected. The **negative emotions (sadness, fear, anger, and disgust) are more tightly linked**: when people express one of them, they usually express the others too. In other words, they often show up together in the same comments.

While we've only scratched the surface of this dataset, the steps we've completed, from calculating basic sentiment scores to visualizing the co-occurrence of emotions, demonstrate the **power of sentiment and emotion detection**. You now have the foundational skills to convert unstructured text into actionable data, allowing you to understand both the **polarity (positive/negative)** and the **specific emotional landscape** of any textual dataset.

##### Saving Our Work

After performing all the calculations and visualizations, it’s important to save the results so they can be reused or shared.

``` r
write_csv(emotion_data, "output/sentiment_emotion_results.csv")
```

#### Final Thoughts

You might be wondering: if the **`syuzhet`** package also computes polarity, why did we choose **`sentimentr`** in our pipeline? The reason is that syuzhet does not inherently account for valence shifters. In the original syuzhet implementation, words are scored in isolation (“good” = +1, “bad” = −1) regardless of nearby negations or intensifiers, so “not good” would still be counted as positive. Because **`sentimentr`** adjusts sentiment scores for negators and amplifiers, its polarity results are more nuanced, robust, and reliable.
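A quick sketch of the difference (exact numbers depend on the lexicon and package versions, so treat the scores as directional only):

``` r
library(syuzhet)
library(sentimentr)

# syuzhet scores words in isolation: the negator "not" is ignored
# and "good" still contributes a positive score
get_sentiment("the service was not good")

# sentimentr treats "not" as a valence shifter and flips the
# contribution of "good", yielding a negative sentence score
sentiment("the service was not good")
```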
