Sentiment analysis is the automated process of classifying text data according to its polarity, such as positive, negative, or neutral. Companies leverage sentiment analysis of tweets to get a sense of how customers are talking about their products and services, get insights to drive business decisions, and detect product issues and potential PR crises early on.
In this guide, we’ll cover everything you need to know to get started with sentiment analysis on Twitter. We’ll share a step-by-step process to do sentiment analysis, for both coders and non-coders. If you are a coder, you’ll learn how to use the Inference API, a plug & play machine learning API for doing sentiment analysis of tweets at scale in just a few lines of code. If you don’t know how to code, don’t worry! We’ll also cover how to do sentiment analysis with Zapier, a no-code tool that will help you gather tweets, analyze them with the Inference API, and finally send the results to Google Sheets ⚡️
Read along or jump to the section that sparks 🌟 your interest:
- What is sentiment analysis?
- How to do Twitter sentiment analysis with code?
- How to do Twitter sentiment analysis without coding?
Buckle up and enjoy the ride! 🤗
What is Sentiment Analysis?
Sentiment analysis uses machine learning to automatically identify how people are talking about a given topic. The most common use of sentiment analysis is detecting the polarity of text data, that is, automatically identifying whether a tweet, product review or support ticket is talking positively, negatively, or neutrally about something.
For example, let’s take a look at some tweets mentioning @Salesforce and see how they would be tagged by a sentiment analysis model:
- “The more I use @salesforce the more I dislike it. It’s slow and full of bugs. There are elements of the UI that look like they haven’t been updated since 2006. Current frustration: app exchange pages won’t stop refreshing every 10 seconds” –> This first tweet would be tagged as “Negative”.
- “That’s what I love about @salesforce. That it’s about relationships and about caring about people and it’s not only about business and money. Thanks for caring about #TrailblazerCommunity” –> In contrast, this tweet would be classified as “Positive”.
- “Coming Home: #Dreamforce Returns to San Francisco for 20th Anniversary. Learn more: http://bit.ly/3AgwO0H via @Salesforce” –> Lastly, this tweet would be tagged as “Neutral”, since it doesn’t contain an opinion or polarity.
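If you want a quick taste of how this tagging could be done in code, here is a small sketch. The `tag_tweets` helper and its `classify` argument are names invented for this illustration; the commented lines show how a real Hugging Face `pipeline` (using the model featured later in this post) could be plugged in:

```python
from typing import Callable, Dict, Iterable, List

def tag_tweets(tweets: Iterable[str],
               classify: Callable[[str], Dict]) -> List[Dict]:
    """Attach a sentiment label to each tweet.

    `classify` is any callable that returns {"label": ..., "score": ...}
    for a single text, e.g. a Hugging Face pipeline wrapped to return
    its top prediction.
    """
    return [{"tweet": t, "sentiment": classify(t)["label"]} for t in tweets]

# With the transformers library installed, a real classifier could be used:
#   from transformers import pipeline
#   clf = pipeline("sentiment-analysis",
#                  model="cardiffnlp/twitter-roberta-base-sentiment-latest")
#   tagged = tag_tweets(my_tweets, lambda t: clf(t)[0])
```

Keeping the classifier as an argument makes the helper easy to test with a stub before wiring in a real model.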
Until recently, analyzing tweets mentioning a brand, product or service was a very manual, hard and tedious process; it required someone to go over relevant tweets one by one, reading and labeling them according to their sentiment. As you can imagine, not only does this not scale, being expensive and very time-consuming, it is also prone to human error.
Luckily, recent advancements in AI allow companies to use machine learning models for sentiment analysis of tweets that are about as good as humans. By using machine learning, companies can analyze tweets in real-time 24/7, do it at scale, analyzing thousands of tweets in seconds, and, more importantly, get the insights they are looking for when they need them.
Why do sentiment analysis on Twitter? Companies use it for a wide variety of use cases, but the two most common are analyzing user feedback and monitoring mentions to detect potential issues early on.
Analyze Feedback on Twitter
Listening to customers is key for detecting insights into how you can improve your product or service. Although there are multiple sources of feedback, such as surveys or public reviews, Twitter offers raw, unfiltered feedback on what your audience thinks about your offering.
By analyzing how people talk about your brand on Twitter, you can understand whether they like a new feature you just launched. You can also get a sense of whether your pricing is clear to your audience. And you can see which aspects of your offering are most liked and disliked, to inform business decisions (e.g. customers love the simplicity of the user interface but hate how slow customer support is).
Monitor Twitter Mentions to Detect Issues
Twitter has become the default place to share a bad customer experience and express frustration whenever something goes wrong while using a product or service. This is why companies monitor how users mention their brand on Twitter to detect issues early on.
By implementing a sentiment analysis model that analyzes incoming mentions in real-time, you can automatically be alerted about sudden spikes of negative mentions. Most of the time, these are caused by an ongoing situation that needs to be addressed asap (e.g. an app not working because of server outages, or a really bad experience with a customer support representative).
Now that we have covered what sentiment analysis is and why it’s useful, let’s get our hands dirty and actually do sentiment analysis of tweets! 💥
How to do Twitter sentiment analysis with code?
These days, getting started with sentiment analysis on Twitter is quite easy and straightforward 🙌
With a few lines of code, you can automatically get tweets, run sentiment analysis, and visualize the results. And you can learn how to do all these things in just a few minutes!
In this section, we’ll show you how with a cool little project: sentiment analysis of tweets mentioning Notion!
First, you’ll use Tweepy, an open source Python library, to get tweets mentioning @NotionHQ via the Twitter API. Then you’ll use the Inference API for sentiment analysis. Once you get the sentiment analysis results, you’ll create some charts to visualize them and spot some interesting insights.
You can use this Google Colab notebook to follow this tutorial.
Let’s get started! 💪
- Install Dependencies
As a first step, you’ll need to install the required dependencies. You’ll use Tweepy for gathering tweets, Matplotlib for building some charts, and WordCloud for building a visualization with the most common keywords:
!pip install -q transformers tweepy matplotlib wordcloud
- Setting up Twitter credentials
Then, you need to set up the Twitter API credentials so you can authenticate with Twitter and then gather tweets automatically using their API:
import tweepy
consumer_key = "XXXXXX"
consumer_secret = "XXXXXX"
auth = tweepy.AppAuthHandler(consumer_key, consumer_secret)
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
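Rather than hardcoding credentials in the notebook, you may prefer to read them from environment variables. A small optional sketch (the variable names here are just a convention chosen for this post, not something Tweepy requires):

```python
import os

def load_twitter_credentials():
    """Read the consumer key/secret from the environment instead of the source."""
    return (os.environ.get("TWITTER_CONSUMER_KEY", ""),
            os.environ.get("TWITTER_CONSUMER_SECRET", ""))

# consumer_key, consumer_secret = load_twitter_credentials()
```

This keeps secrets out of shared notebooks and version control.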
- Search for tweets using Tweepy
Now you’re ready to start collecting data from Twitter! 🎉 You’ll use Tweepy Cursor to automatically collect 1,000 tweets mentioning Notion:
import time

def limit_handled(cursor):
    while True:
        try:
            yield cursor.next()
        except tweepy.RateLimitError:
            print('Reached rate limit. Sleeping for >15 minutes')
            time.sleep(15 * 61)
        except StopIteration:
            break

query = '@NotionHQ'
query = query + ' -filter:retweets'
count = 1000

search = limit_handled(tweepy.Cursor(api.search,
                                     q=query,
                                     tweet_mode='extended',
                                     lang='en',
                                     result_type="recent").items(count))

tweets = []
for result in search:
    tweet_content = result.full_text
    tweets.append(tweet_content)
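Before analyzing or visualizing the tweets, it can help to strip URLs and @mentions so they don’t dominate the keyword counts later on. This small helper is an optional extra, not part of the original pipeline:

```python
import re

def clean_tweet(text: str) -> str:
    """Remove URLs and @mentions, then collapse extra whitespace."""
    text = re.sub(r"https?://\S+", "", text)   # drop links (incl. t.co)
    text = re.sub(r"@\w+", "", text)           # drop @mentions
    return re.sub(r"\s+", " ", text).strip()
```

You could apply it with `cleaned = [clean_tweet(t) for t in tweets]` if you want a cleaned copy alongside the raw texts.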
- Analyzing tweets with sentiment analysis
Now that you have data, you’re ready to analyze the tweets with sentiment analysis! 💥
You will be using the Inference API, an easy-to-use API for integrating machine learning models via simple API calls. With the Inference API, you can use state-of-the-art models for sentiment analysis without the hassle of building infrastructure for machine learning or dealing with model scalability. You can serve the latest (and greatest!) open source models for sentiment analysis while staying out of MLOps. 🤩
To use the Inference API, you first need to define your model id and your Hugging Face API Token:
- The model ID tells the Inference API which model to use for making predictions. Hugging Face has more than 400 models for sentiment analysis in multiple languages, including various models specifically fine-tuned for sentiment analysis of tweets. For this particular tutorial, you’ll use twitter-roberta-base-sentiment-latest, a sentiment analysis model trained on ≈124 million tweets and fine-tuned for sentiment analysis.
- You’ll also need to specify your Hugging Face token; you can get one for free by signing up here and then copying your token on this page.
model = "cardiffnlp/twitter-roberta-base-sentiment-latest"
hf_token = "XXXXXX"
Next, you’ll create the API call using the model id and hf_token:

import requests

API_URL = "https://api-inference.huggingface.co/models/" + model
headers = {"Authorization": "Bearer %s" % (hf_token)}

def analysis(data):
    payload = dict(inputs=data, options=dict(wait_for_model=True))
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

Now, you’re ready to do sentiment analysis on each tweet. 🔥🔥🔥

tweets_analysis = []
for tweet in tweets:
    try:
        sentiment_result = analysis(tweet)[0]
        top_sentiment = max(sentiment_result, key=lambda x: x['score'])
        tweets_analysis.append({'tweet': tweet, 'sentiment': top_sentiment['label']})
    except Exception as e:
        print(e)
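The hosted API can occasionally return transient errors (e.g. rate limits, or a model still loading despite `wait_for_model`). If you run into these, a retry wrapper with exponential backoff is a reasonable defensive addition; the function names below are my own, not part of the Inference API:

```python
import time
import requests

def backoff_delays(retries, base=2.0):
    """Exponential backoff schedule: base, base*2, base*4, ..."""
    return [base * (2 ** i) for i in range(retries)]

def analysis_with_retries(data, api_url, headers, retries=3):
    """Call the Inference API, retrying transient non-200 responses."""
    payload = dict(inputs=data, options=dict(wait_for_model=True))
    response = None
    for delay in backoff_delays(retries):
        response = requests.post(api_url, headers=headers, json=payload)
        if response.status_code == 200:
            return response.json()
        time.sleep(delay)  # wait before retrying
    response.raise_for_status()
```

You could swap this in for `analysis(tweet)` in the loop above, passing the `API_URL` and `headers` defined earlier.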
- Explore the results of sentiment analysis
Wondering if people on Twitter are talking positively or negatively about Notion? Or what users discuss when talking positively or negatively about Notion? We’ll use some data visualization to explore the results of the sentiment analysis and find out!
First, let’s look at examples of tweets that were labeled for each sentiment to get a sense of the different polarities in these tweets:
import pandas as pd
pd.set_option('display.max_colwidth', None)
pd.set_option('display.width', 3000)
df = pd.DataFrame(tweets_analysis)
display(df[df["sentiment"] == 'Positive'].head(1))
display(df[df["sentiment"] == 'Neutral'].head(1))
display(df[df["sentiment"] == 'Negative'].head(1))
Results:
@thenotionbar @hypefury @NotionHQ That’s genuinely smart. So basically you’ve set up your posting queue to be a recurrent recycling of top content that runs 100% automatic? Sentiment: Positive
@itskeeplearning @NotionHQ How have you linked gallery cards? Sentiment: Neutral
@NotionHQ Running into an issue here recently where content isn’t showing on web but still in the app. This happens for all of our pages. https://t.co/3J3AnGzDau. Sentiment: Negative
Next, you can count the number of tweets that were tagged as positive, negative and neutral:
sentiment_counts = df.groupby(['sentiment']).size()
print(sentiment_counts)
Remarkably, most of the tweets about Notion are positive:
sentiment
Negative 82
Neutral 420
Positive 498
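To express these counts in relative terms (as the pie chart below does), you can normalize them. A tiny helper, using the counts above:

```python
def sentiment_shares(counts):
    """Turn raw label counts into percentage shares, rounded to one decimal."""
    total = sum(counts.values())
    return {label: round(100 * n / total, 1) for label, n in counts.items()}

shares = sentiment_shares({"Negative": 82, "Neutral": 420, "Positive": 498})
# → {'Negative': 8.2, 'Neutral': 42.0, 'Positive': 49.8}
```

These are the same percentages the pie chart renders.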
Then, let’s create a pie chart to visualize each sentiment in relative terms:
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6,6), dpi=100)
ax = plt.subplot(111)
sentiment_counts.plot.pie(ax=ax, autopct='%1.1f%%', startangle=270, fontsize=12, label="")
It’s cool to see that 50% of all tweets are positive and only 8.2% are negative:
As a final step, let’s create some word clouds to see which words are the most used for each sentiment:
from wordcloud import WordCloud
from wordcloud import STOPWORDS
positive_tweets = df['tweet'][df["sentiment"] == 'Positive']
stop_words = ["https", "co", "RT"] + list(STOPWORDS)
positive_wordcloud = WordCloud(max_font_size=50, max_words=50, background_color="white", stopwords = stop_words).generate(str(positive_tweets))
plt.figure()
plt.title("Positive Tweets - Wordcloud")
plt.imshow(positive_wordcloud, interpolation="bilinear")
plt.axis("off")
plt.show()
negative_tweets = df['tweet'][df["sentiment"] == 'Negative']
stop_words = ["https", "co", "RT"] + list(STOPWORDS)
negative_wordcloud = WordCloud(max_font_size=50, max_words=50, background_color="white", stopwords = stop_words).generate(str(negative_tweets))
plt.figure()
plt.title("Negative Tweets - Wordcloud")
plt.imshow(negative_wordcloud, interpolation="bilinear")
plt.axis("off")
plt.show()
Curiously, some of the words that stand out in the positive tweets include “notes”, “cron”, and “paid”:
In contrast, “figma”, “enterprise” and “account” are some of the most used words in the negative tweets:
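If you prefer something more quantitative than a word cloud, you can get a similar signal from a plain frequency count. This alternative is not part of the original code, just a sketch using `collections.Counter` (stopwords are assumed to be lowercase):

```python
from collections import Counter

def top_keywords(tweets, stopwords, n=10):
    """Most frequent alphabetic words across tweets, ignoring stopwords."""
    words = (w.lower() for t in tweets for w in t.split())
    counts = Counter(w for w in words if w.isalpha() and w not in stopwords)
    return counts.most_common(n)

# e.g. top_keywords(positive_tweets, {w.lower() for w in stop_words}, n=20)
```

Unlike a word cloud, this gives you exact counts you can compare across sentiments.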
That was fun, right?
With just a few lines of code, you were able to automatically gather tweets mentioning Notion using Tweepy, analyze them with a sentiment analysis model using the Inference API, and finally create some visualizations to explore the results. 💥
Interested in doing more? As a next step, you could use a second text classifier to classify each tweet by theme or topic. This way, each tweet would be labeled with both a sentiment and a topic, and you could get more granular insights (e.g. are users praising how easy Notion is to use, but complaining about pricing or customer support?).
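To illustrate what those granular insights could look like once each tweet carries both labels, here is a toy example with made-up topic labels (the topic classifier itself is left to you), using a pandas crosstab:

```python
import pandas as pd

# Made-up labels purely for illustration: each row is one tweet that has
# already been tagged with a sentiment and a topic.
labeled = pd.DataFrame({
    "sentiment": ["Positive", "Negative", "Negative", "Positive"],
    "topic": ["ease of use", "pricing", "pricing", "ease of use"],
})

# Rows = topics, columns = sentiments, cells = tweet counts.
breakdown = pd.crosstab(labeled["topic"], labeled["sentiment"])
```

A table like this makes it immediately visible which themes drive the negative mentions and which drive the positive ones.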
How to do Twitter sentiment analysis without coding?
To get started with sentiment analysis, you don’t need to be a developer or know how to code. 🤯
There are some amazing no-code solutions that will help you easily do sentiment analysis in just a few minutes.
In this section, you’ll use Zapier, a no-code tool that enables users to connect 5,000+ apps with an easy-to-use interface. You’ll create a Zap that is triggered whenever someone mentions Notion on Twitter. Then the Zap will use the Inference API to analyze the tweet with a sentiment analysis model, and finally it will save the results to Google Sheets:
- Step 1 (trigger): Getting the tweets.
- Step 2: Analyze tweets with sentiment analysis.
- Step 3: Save the results to Google Sheets.
No worries, it won’t take much time; in under 10 minutes, you’ll create and activate the zap and start seeing the sentiment analysis results pop up in Google Sheets.
Let’s get started! 🚀
Step 1: Getting the Tweets
To get started, you’ll need to create a Zap and configure its first step, also called the “Trigger” step. In your case, you will need to set it up so that it triggers the Zap whenever someone mentions Notion on Twitter. To set it up, follow these steps:
- First, select “Twitter” and choose “Search mention” as the event in “Choose app & event”.
- Then connect your Twitter account to Zapier.
- Set up the trigger by specifying “NotionHQ” as the search term for this trigger.
- Finally, test the trigger to make sure it gathers tweets and runs correctly.
Step 2: Analyze Tweets with Sentiment Analysis
Now that your Zap can gather tweets mentioning Notion, let’s add a second step to do the sentiment analysis. 🤗
You will be using the Inference API, an easy-to-use API for integrating machine learning models. To use the Inference API, you will need to define your “model id” and your “Hugging Face API Token”:
- The model ID tells the Inference API which model to use for making predictions. For this guide, you’ll use twitter-roberta-base-sentiment-latest, a sentiment analysis model trained on ≈124 million tweets and fine-tuned for sentiment analysis. You can explore the more than 400 models for sentiment analysis available on the Hugging Face Hub in case you want to use a different model (e.g. for doing sentiment analysis in a different language).
- You’ll also need to specify your Hugging Face token; you can get one for free by signing up here and then copying your token on this page.
Once you have your model ID and your Hugging Face token, go back to your Zap and follow these instructions to set up the second step of the zap:
- First, select “Code by Zapier” and “Run Python” in “Choose app and event”.
- In “Set up action”, you will need to first add the tweet “full text” as “input_data”. Then you will need to add these 28 lines of Python code in the “Code” section. This code will allow the Zap to call the Inference API and make predictions with sentiment analysis. Before adding this code to your zap, please make sure you do the following:
  - Change line 5 and add your Hugging Face token; that is, instead of hf_token = "ADD_YOUR_HUGGING_FACE_TOKEN_HERE", change it to something like hf_token = "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"
  - If you want to use a different sentiment analysis model, change line 4 and specify the id of the new model there. For example, instead of the default model, you could do sentiment analysis on tweets in Spanish by changing model = "cardiffnlp/twitter-roberta-base-sentiment-latest" to model = "finiteautomata/beto-sentiment-analysis".
- Finally, test this step to make sure it makes predictions and runs correctly.
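For reference, the “Code by Zapier” step could look roughly like the sketch below. This is not the exact 28-line snippet the post links to, just a hedged reconstruction; in particular, the `input_data["tweet"]` key is an assumption — use whichever field name you mapped in the step setup:

```python
import requests

model = "cardiffnlp/twitter-roberta-base-sentiment-latest"
hf_token = "ADD_YOUR_HUGGING_FACE_TOKEN_HERE"
API_URL = "https://api-inference.huggingface.co/models/" + model
headers = {"Authorization": "Bearer " + hf_token}

def top_prediction(predictions):
    """Pick the highest-scoring label from the API's nested prediction list."""
    best = max(predictions[0], key=lambda p: p["score"])
    return {"sentiment_label": best["label"], "sentiment_score": best["score"]}

def run(input_data):
    payload = {"inputs": input_data["tweet"],
               "options": {"wait_for_model": True}}
    response = requests.post(API_URL, headers=headers, json=payload)
    return top_prediction(response.json())

# Zapier exposes the mapped fields as `input_data` and expects the step
# to set an `output` dict:
# output = run(input_data)
```

The label and score returned here are what you will map to the spreadsheet columns in the next step.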
Step 3: Save the Results on Google Sheets
As the last step of your Zap, you’ll save the results of the sentiment analysis to a spreadsheet on Google Sheets and visualize them. 📊
First, create a new spreadsheet on Google Sheets, and define the following columns:
- Tweet: this column will contain the text of the tweet.
- Sentiment: will have the label of the sentiment analysis results (e.g. positive, negative and neutral).
- Score: will store the value that reflects how confident the model is in its prediction.
- Date: will contain the date of the tweet (which can be handy for creating graphs and charts over time).
Then, follow these instructions to configure this last step:
- Select Google Sheets as the app, and “Create Spreadsheet Row” as the event in “Choose app & event”.
- Then connect your Google Sheets account to Zapier.
- Next, you’ll need to set up the action. First, specify the Google Drive value (e.g. My Drive), then select the spreadsheet, and finally the worksheet where you want Zapier to automatically write new rows. Once you’re done with this, you’ll need to map each column in the spreadsheet to the values you want your zap to use when it automatically writes a new row in your file. If you created the columns suggested above, this will look like the following (column → value):
  - Tweet → Full Text (value from step 1 of the zap)
  - Sentiment → Sentiment Label (value from step 2)
  - Score → Sentiment Score (value from step 2)
  - Date → Created At (value from step 1)
- Finally, test this last step to make sure it can add a new row to your spreadsheet. After confirming it works, you can delete this row from your spreadsheet.
Step 4: Activate your Zap
At this point, you have completed all the steps of your zap! 🔥
Now, you just need to turn it on so it can start gathering tweets, analyzing them with sentiment analysis, and storing the results in Google Sheets. ⚡️
To turn it on, just click the “Publish” button at the bottom of your screen:
After a few minutes, you will see your spreadsheet start populating with tweets and the results of sentiment analysis. You can also create a graph that will be updated in real-time as tweets come in:
Super cool, right? 🚀
Wrap up
Twitter is the public town hall where people share their thoughts about all kinds of topics. From people talking about politics, sports or tech, to users sharing feedback about a shiny new app, to passengers complaining to an airline about a canceled flight, the amount of data on Twitter is massive. Sentiment analysis allows making sense of all that data in real-time to uncover insights that can drive business decisions.
Luckily, tools like the Inference API make it super easy to get started with sentiment analysis on Twitter. Whether or not you know how to code, and whether or not you have experience with machine learning, in a few minutes you can set up a process that gathers tweets in real-time, analyzes them with a state-of-the-art model for sentiment analysis, and explores the results with some cool visualizations. 🔥🔥🔥
If you have questions, you can ask them in the Hugging Face forum so the Hugging Face community can help you out and others can benefit from seeing the discussion. You can also join our Discord server to talk with us and the entire Hugging Face community.
