# Masks: a Twitter Sentiment Analysis Throughout COVID-19
Now more than six months into the COVID-19 pandemic, wearing a mask is not only commonplace but required to enter many businesses and public spaces. In the United States and Europe, masks have become completely normalized in just a few short months, to the point where not wearing one is often frowned upon. However, you may recall that in the early days of the outbreak, many top health officials publicly stated that masks were ineffective at reducing the spread of the virus, with some even adamantly recommending against their use.
A now infamous tweet was made by the United States Surgeon General on February 29, 2020:
“Seriously people- STOP BUYING MASKS! They are NOT effective in preventing general public from catching #Coronavirus…”
— @Surgeon_General, Twitter 2020.02.29
Less than three months later, the Surgeon General had completely changed his mind on the subject:
“…wearing a mask is completely safe. As an anesthesiologist, I wear a mask all day long to protect my patients…”
— @Surgeon_General, Twitter 2020.05.23
Given this major shift in narrative from leading health officials, Joshua Szymanowski and I were interested to see if the same trend applied to the general public. We set out to gather and analyze a large dataset of tweets containing the word ‘Mask’ or ‘Masks’ over the course of the COVID-19 pandemic and apply some Machine Learning and Natural Language Processing (NLP) techniques to find out.
## Collecting the Data
Using twint (a Python package for scraping Twitter), Josh and I ran a loop 24/7 for three weeks, scraping all tweets containing the word ‘mask’ or ‘masks’ posted between January 1st and June 30th, 2020: over 150 million tweets in total.
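The scraping setup looked roughly like the following configuration sketch. The file name and exact options here are illustrative (in practice we looped day by day so an interrupted run could resume), and twint’s API has changed over time, so treat this as a sketch rather than a drop-in script:

```python
import twint

# Illustrative configuration for one scraping pass.
c = twint.Config()
c.Search = "mask OR masks"    # match tweets containing either word
c.Since = "2020-01-01"        # start of the date range
c.Until = "2020-06-30"        # end of the date range
c.Lang = "en"
c.Store_csv = True            # write results to CSV as they arrive
c.Output = "tweets_mask.csv"  # illustrative output file name
c.Hide_output = True          # don't echo every tweet to stdout

twint.run.Search(c)
```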
Due to the incredible size of the dataset, we needed to pare it down in order to run computations locally on our machines. To ensure topic relevance, we applied filters to keep only tweets that also contained at least one of the following words: ‘Covid, Dead, Death, Doctor, Infect, Novel, Nurse, Outbreak, Rona, Sars, Viral, Virus, Wuhan’. From there we randomly selected 5,000 tweets per day, leaving us with about 900,000 tweets to analyze.
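The keyword filter and per-day downsample can be sketched with pandas. The column names (`date`, `tweet`) and the tiny sample frame are assumptions for illustration; substring matching means ‘infect’ also catches ‘infected’, ‘infection’, and so on:

```python
import pandas as pd

# Keywords used to keep only COVID-relevant tweets (substring match).
KEYWORDS = ["covid", "dead", "death", "doctor", "infect", "novel",
            "nurse", "outbreak", "rona", "sars", "viral", "virus", "wuhan"]

def filter_and_sample(df, per_day=5000, seed=42):
    """Keep tweets containing at least one keyword, then randomly
    sample up to `per_day` tweets for each calendar day."""
    pattern = "|".join(KEYWORDS)
    relevant = df[df["tweet"].str.lower().str.contains(pattern, regex=True)]
    return (relevant.groupby("date", group_keys=False)
                    .apply(lambda g: g.sample(min(per_day, len(g)),
                                              random_state=seed)))

# Tiny illustrative frame:
df = pd.DataFrame({
    "date": ["2020-03-01"] * 3,
    "tweet": ["Masks stop the virus", "I love my mask", "covid masks work"],
})
print(filter_and_sample(df, per_day=2))
```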
## Preparing Tweets for Analysis
In order to run Machine Learning and NLP models on the tweets, we needed to clean up the text. We made all text lowercase, removed links and usernames, converted emojis into their associated words or phrases, and removed punctuation and stop words (common English connecting words such as ‘the’, as well as words redundant given the subject matter, such as ‘virus’).
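These cleanup steps can be sketched with the standard library alone. The stop-word list below is a tiny illustrative subset of the fuller list we used, and emoji conversion (handled by a separate package) is omitted:

```python
import re
import string

# Illustrative subset of the stop words we removed; the real list also
# included subject-redundant words such as 'virus' and 'covid'.
STOP_WORDS = {"the", "a", "an", "and", "is", "are", "to", "of", "virus"}

def clean_tweet(text):
    """Lowercase, strip links/usernames/punctuation, drop stop words."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # remove links
    text = re.sub(r"@\w+", " ", text)           # remove @usernames
    text = text.translate(str.maketrans("", "", string.punctuation))
    words = [w for w in text.split() if w not in STOP_WORDS]
    return " ".join(words)

print(clean_tweet("@CDC The virus is spreading! Wear a mask https://t.co/xyz"))
# -> 'spreading wear mask'
```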
Next, we used nltk (an NLP Python library) to tokenize and lemmatize each tweet, essentially splitting each tweet into individual words and reducing each word to its root form. Now our tweets were finally ready for some analysis. The following graph shows the top 25 words by frequency across all tweets after cleaning and preparation:
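A frequency chart like this one can be produced with a simple counter over the cleaned, tokenized tweets (the three token lists below are illustrative):

```python
from collections import Counter

# Assume each tweet has already been cleaned and split into tokens.
tweets = [
    ["wear", "mask", "please"],
    ["mask", "save", "life"],
    ["wear", "mask", "protect", "other"],
]

# Count every token across every tweet, then take the top 25.
counts = Counter(token for tweet in tweets for token in tweet)
for word, n in counts.most_common(25):
    print(f"{word}: {n}")
```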
## Sentiment Towards Masks Over Time
Using the Python package vaderSentiment (VADER), we computed a compound sentiment score for each tweet on a scale from -1 (negative sentiment) to +1 (positive sentiment). Tweets with a sentiment score between -0.5 and +0.5 are considered to be neutral.
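The binning into positive, neutral, and negative can be sketched as a small labeling function. The commented-out lines show VADER’s actual entry point (`SentimentIntensityAnalyzer`); the `label_sentiment` helper itself is ours, mirroring the cut-offs described above:

```python
def label_sentiment(compound):
    """Map a VADER compound score in [-1, 1] to a sentiment bucket,
    using the +/-0.5 cut-offs described above."""
    if compound >= 0.5:
        return "positive"
    if compound <= -0.5:
        return "negative"
    return "neutral"

# With vaderSentiment installed, the compound score comes from:
#   from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
#   compound = SentimentIntensityAnalyzer().polarity_scores(text)["compound"]

print(label_sentiment(0.72))   # -> positive
print(label_sentiment(-0.1))   # -> neutral
```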
The graph above shows the overall distribution of positive, negative, and neutral sentiment in the tweets over time. It remained mostly constant, though there was a larger share of negative sentiment at the very start of the pandemic, mirroring the early stance of leading health officials.
The following word bubbles contain the most common words found in each tweet sentiment category:
## LDA Topic Modeling
In order to gain further insight into the subject matter of each sentiment category, we employed Latent Dirichlet Allocation (LDA) topic modeling with gensim, another NLP Python library. Essentially, LDA infers a given number N of ‘topics’ from the words across all the tweets combined, then assigns each tweet a score for each topic, with the scores for a tweet summing to 1 (100%). Due to computational resource constraints, we had to limit topic modeling to words that appeared at least 250 times across the entire dataset.
After some experimentation, we decided that ten was a good number of interesting, non-repetitive topics. Our interpretation of each LDA topic is as follows:
0. Healthcare workers, hospitals
1. Social distancing
2. Protesting, lockdowns
3. Government, health organizations
4. Spreading the virus
5. Emojis, profanity
6. COVID-19 statistics
7. Preventing infection
8. News, general information
9. Riots, Black Lives Matter
The following graph shows the distribution of sentiment across each LDA topic:
In general, we were surprised to see how consistent sentiment towards masks remained over the course of the pandemic among the general public (who use Twitter), which was not the case with major health officials. Upon further reflection, our results may have highlighted the so-called ‘wisdom of the crowd’ vs. the opinion of a few experts.
One thing we would be interested in trying in the future is applying NLP opinion mining to the word ‘Mask’ or ‘Masks’ itself, rather than measuring the overall sentiment of the tweet. Please feel free to reach out if you’d like access to the dataset, as I believe we have only scratched the surface!