This notebook will cover:
The field of Natural Language Processing is strongly intertwined with statistical learning. For this reason, the lessons we covered on statistical learning form the foundation for the modelling approach we will take when analyzing text-as-data. As an example, the distinction between supervised and unsupervised tasks we saw earlier in this course provides a useful analytical framework for separating distinct tasks when working with text.
Supervised learning consists of tasks in which, for every observation $i$, we observe both inputs/predictors/features and outcomes we wish to predict. To solve these tasks, we use statistical learning on some sort of labeled data (outcomes), with a focus on mapping inputs to outputs.
In unsupervised tasks, we only observe the input data; there is no outcome/label we wish to predict or explain. The goal of unsupervised learning is to recover hidden structure in the data, for example, clusters, groups, or topics based on the co-occurrence of words.
When working with text, supervised tasks often involve some sort of classification into known categories: sentiment analysis, stance detection, spam detection, or classifying social media posts containing toxic language or misinformation are all examples of supervised learning tasks. Unsupervised tasks, on the other hand, are often used when the goal is to discover patterns in the text without making assumptions about the content of the corpus. The most common unsupervised task is to find topics, that is, sets of words that occur together in a large volume of text. We know there is some hidden structure in the text, but we do not have labels, so this becomes an unsupervised learning task.
In this notebook, we will:
This notebook serves the purpose of giving you the intuition behind these models in a very applied manner so that you can use them in your final projects. That is to say, I will give you code, but I will not explain the implementation of the models in detail.
# Open data
import pandas as pd
import numpy as np
tweets_data = pd.read_csv("tweets_congress.csv")
tweets_data.head()
tweets_data.shape
## Pre-processing steps
# stopwords
from nltk.corpus import stopwords
# tokenizer
from nltk.tokenize import word_tokenize
# lemmatizer
from nltk.stem import WordNetLemmatizer
# stemming
from nltk.stem.porter import PorterStemmer
To estimate topic models, we will use the gensim library. gensim is a Python library for topic modelling, document indexing, and similarity retrieval with large corpora. It is also the main library for retrieving pre-trained word embeddings, or for training your own embeddings using the famous word2vec algorithm.
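As an aside, if you want to experiment with pre-trained embeddings, gensim's downloader module gives you access to several of them. Here is a minimal sketch; the model name is my choice of a small example, and the vectors are downloaded the first time you call load.
# optional: load a small set of pre-trained GloVe vectors through gensim
import gensim.downloader as api
# "glove-wiki-gigaword-50" is a small model that downloads quickly
glove = api.load("glove-wiki-gigaword-50")
# each word is a dense vector; similar words end up close to each other
print(glove.most_similar("congress", topn=5))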
This is a step-by-step guide to estimating LDA using gensim:
Preprocess the Text: Follow most of the steps we saw before, including tokenization, removing stopwords, normalization, etc..
Create a dictionary: gensim requires you to create a dictionary of all stemmed/preprocessed words in the corpus (collection of documents); the Dictionary class from gensim will create this data structure for us.
Filter out words from the dictionary that appear in either a very low proportion of documents (lower bound) or a very high proportion of documents (upper bound).
Create a bag-of-words representation of the documents: maps words from the dictionary representation to each document.
Estimate the topic model: use LDA model within gensim
# get a reproducible random sample of 10,000 tweets
td = tweets_data.sample(n=10000, random_state=42).copy()
Write a function with all our previous steps
# Write a preprocessing function
def preprocess_text(text):
    # expand the stop words list with some Twitter-specific tokens
    stop_words = stopwords.words('english')
    stop_words = stop_words + ["https", "rt", "amp"]
    # tokenization
    tokens_ = word_tokenize(text)
    # normalize: lowercase and keep only alphabetic tokens
    tokens_ = [word.lower() for word in tokens_ if word.isalpha()]
    # instantiate the stemmer
    porter = PorterStemmer()
    # stem and remove stopwords
    tokens_ = [porter.stem(word) for word in tokens_ if word not in stop_words]
    # return the preprocessed tokens as a list
    return tokens_
# apply
td["tokens"] = td["text"].apply(preprocess_text)
# import dictionary
from gensim.corpora import Dictionary
# convert to a list
tokens = td["tokens"].tolist()
# let's look at what this input is:
# it should be a list of token lists, one per document
tokens[1]
#td["text"][999540]
# Create a dictionary representation of the documents
dictionary = Dictionary(tokens)
# see
dictionary.token2id
This is an additional pre-processing step: more meaningful topics come when we remove rare and overly common words.
# Filter out words that occur in less than 20 documents, or more than 50% of the documents.
dictionary.filter_extremes(no_below=20, no_above=0.5)
# Create a bag-of-words representation of the documents
# notice here you are just passing every doc to the .doc2bow method
corpus = [dictionary.doc2bow(doc) for doc in tokens]
corpus[0]
# see case by case
# tuple with (id for every token, frequency)
dictionary.doc2bow(tokens[0])
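To make these (token id, count) pairs easier to read, you can map the ids back to the tokens with the dictionary we just built; a quick check, nothing more.
# translate the bag-of-words tuples of the first document back into tokens
[(dictionary[token_id], count) for token_id, count in corpus[0]]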
from gensim.models.ldamodel import LdaModel
# Train the LDA model
lda_model = LdaModel(
    corpus=corpus,
    id2word=dictionary,
    # num_topics is the main choice you need to make
    num_topics=10,
    eval_every=False
)
# Print the keywords for the 10 topics
lda_model.print_topics()
# Extract the topic distribution for each document
td['topic'] = [sorted(lda_model[doc]) for doc in corpus]
td.head()
# expand the dataframe: one row per (document, topic) pair
df_exploded = td["topic"].explode().reset_index()
# split the (topic, probability) tuples into separate columns
df_exploded[["topic", "probability"]] = pd.DataFrame(df_exploded['topic'].tolist(), index=df_exploded.index)
# data frame with the distribution for each topic vs document
df_exploded
# merge
df_exploded = pd.merge(df_exploded, td.reset_index(), on="index")
# topic prevalence
tp_prev = df_exploded.groupby("topic_x")["probability"].mean().reset_index()
tp_prev.sort_values("probability", ascending=False)
# Get the most important words for each topic
topic_words = list()
for i in range(lda_model.num_topics):
    # get the top words for the topic
    words = lda_model.show_topic(i, topn=10)
    topic_words.append(", ".join([word for word, prob in words]))
topic_words
tp_prev["words"] = topic_words
tp_prev
This is a very nice representation of the topics. You can merge it back with the core dataset and look at different distributions across candidates, parties, time of day, or any other grouping variable you have.
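For example, if your tweet-level data includes a grouping column, a sketch of topic prevalence by group could look like the cell below. The `party` column is hypothetical here; swap in whatever grouping variable your dataset actually has.
# sketch: average topic probability by group and topic
# assumes df_exploded (merged with the tweet metadata above) contains a "party" column,
# which is hypothetical here -- replace it with a grouping variable from your data
tp_by_group = (df_exploded
               .groupby(["party", "topic_x"])["probability"]
               .mean()
               .reset_index()
               .sort_values(["party", "probability"], ascending=[True, False]))
tp_by_group.head()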
# preparing the dataframe
# convert topics to category
tp_prev['topic_x'] = tp_prev.topic_x.astype('category')
# get a list for ordering
topics = tp_prev["probability"].sort_values().index.tolist()
# create a new re-ordered variable
tp_prev = tp_prev.assign(topics_ord=
tp_prev['topic_x'].cat.reorder_categories(topics))
# plot
from plotnine import *
(ggplot(tp_prev, aes(x='topics_ord', y='probability', label='words'))
+ geom_col( fill='lightblue')
+ coord_flip()
+ geom_text(aes(y='probability + 0.01'), nudge_y=-.05) # Adjusting text position
+ theme(axis_text_y=element_text(angle=90)) # Rotating y-axis labels for better visibility
+ theme(figure_size=(12, 6))
)
How many topics should I use? As argued here, deciding how many topics to use requires both a qualitative assessment of the topics (humans in the loop) and some coherence measures across the topics.
There are many coherence measures that can be used to assess topic models and the quality of the topics. Broadly speaking, these measures compare words that appear in the same topic and measure, on average, how similar they are relative to words from different topics.
What matters here is that you understand the procedure. In general, we fit many models with different numbers of topics, look at the point where gains in these measures become marginal, and then make a qualitative assessment of what our topics look like.
Let's see an example below using two measures (u_mass and c_v). In both cases, higher values indicate more coherent topics.
from gensim.models import CoherenceModel
coherence_values = []
model_list = []
for num_topics in range(5, 30, 4):
    print(num_topics)
    # estimate the model
    model = LdaModel(corpus=corpus, id2word=dictionary, num_topics=num_topics)
    # save the model
    model_list.append(model)
    # coherence: u_mass (based on document co-occurrence)
    coherencemodel_umass = CoherenceModel(model=model, corpus=corpus, dictionary=dictionary, coherence='u_mass')
    # coherence: c_v (based on a sliding window over the texts)
    coherencemodel_cv = CoherenceModel(model=model, texts=tokens, dictionary=dictionary, coherence='c_v')
    coherence_values.append((num_topics, coherencemodel_umass.get_coherence(), coherencemodel_cv.get_coherence()))
# grab the results
res = pd.DataFrame(coherence_values, columns=["topics", "u_mass", "c_v"])
# tidy
res = res.melt(id_vars="topics")
# plotnine
(ggplot(res, aes(y="value", x="topics"))
+ geom_line()
+ facet_wrap("variable", scales="free")
+ theme_minimal())
To practice supervised learning with text data, we will perform a classic sentiment analysis classification task. Sentiment analysis is a natural language processing technique that, given a textual input (tweets, movie reviews, comments on a website chat box, etc.), identifies the polarity of the text.
There are different flavors of sentiment analysis, but one of the most widely used techniques labels data as positive, negative, or neutral. Other options include classifying text according to levels of toxicity, which I did in the paper I asked you to read, or more fine-grained measures of sentiment.
Sentiment analysis is just one of many classification tasks that can be done with text. For any task in which you need to identify whether the input belongs to a certain category, you can use a similar set of tools to the ones we will see for sentiment analysis. For example, these are some classification tasks I have used in my work before:
For all these tasks, you need:
Here, we will work with data that was already labelled for us: we will analyze the sentiment of reviews in the IMDB dataset.
For the rest of this notebook, we will use the IMDB dataset provided by Hugging Face. The IMDB dataset contains 25,000 movie reviews labeled by sentiment for training a model and 25,000 movie reviews for testing it.
We will talk more about the Hugging Face project later in this notebook. For now, just install their main transformers library (and the datasets library), and import the IMDB review dataset.
#pip install -q transformers datasets
from datasets import load_dataset
imdb = load_dataset("imdb")
small_train_dataset = imdb["train"].shuffle(seed=42).select([i for i in list(range(3000))])
small_test_dataset = imdb["test"].shuffle(seed=42).select([i for i in list(range(300))])
# convert to a dataframe
pd_train = pd.DataFrame(small_train_dataset)
pd_test = pd.DataFrame(small_test_dataset)
# see the data
pd_train.head()
Our first approach for sentiment classification will use dictionary methods.
Common procedure: use a pre-determined set of words (a dictionary) that identifies the categories you want to classify documents into. With this dictionary, you do a simple search through the documents, count how many times the dictionary words appear, and use some aggregation function to classify the text, for example, counting positive and negative words and taking the difference as a polarity score (see the toy sketch below).
Dictionaries are the most basic strategy for classifying documents. Their simplicity requires some unrealistic assumptions (for example, ignoring the contextual information in the documents). However, the use of dictionaries has one major advantage: it provides a bridge between qualitative and quantitative knowledge, since you need human experts to build good dictionaries.
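To make the procedure concrete, here is a toy sketch of a dictionary classifier. The tiny word lists are purely illustrative; real dictionaries are far larger and built by experts.
# toy dictionary classifier: count positive and negative words and compare
# the word lists below are purely illustrative, not a real dictionary
positive_words = {"good", "great", "love", "excellent", "wonderful"}
negative_words = {"bad", "terrible", "hate", "awful", "boring"}

def toy_dictionary_score(text):
    tokens = text.lower().split()
    n_pos = sum(token in positive_words for token in tokens)
    n_neg = sum(token in negative_words for token in tokens)
    # aggregation rule: label positive (1) if positive words outnumber negative ones
    return 1 if n_pos > n_neg else 0

toy_dictionary_score("what a great and wonderful movie")  # -> 1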
There are many dictionaries available for sentiment classification. We will use a popular open-source option available in NLTK: the VADER dictionary. VADER stands for Valence Aware Dictionary for Sentiment Reasoning. It is a model for text sentiment analysis that is sensitive to both the polarity (positive/negative) and the intensity (strength) of emotion, and it was developed particularly to handle social media content.
Key components of the VADER dictionary:
Sentiment Lexicon: This is a list of known words and their associated sentiment scores.
Sentiment Intensity Scores: Each word in the lexicon is assigned a score that ranges from -4 (extremely negative) to +4 (extremely positive).
Handling of Contextual and Qualitative Modifiers: VADER is sensitive to both intensifiers (e.g., "very") and negations (e.g., "not").
You can read the original paper that introduced VADER here.
#### Import dictionary
import nltk
# nltk.download('vader_lexicon')
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# instantiate the model
sid = SentimentIntensityAnalyzer()
# simple example
review1 = "Oh, I loved the Data Science I course. The best course I have ever done!"
# classify
sid.polarity_scores(review1)
# simple example
review2 = "DS I was ok. Professor Ventura jokes were not great"
# classify
sid.polarity_scores(review2)
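To see the intensifier and negation handling in action, we can score a few near-identical sentences, reusing the `sid` analyzer instantiated above.
# the compound score shifts with intensifiers ("very") and flips with negations ("not")
for sentence in ["The course was good.",
                 "The course was very good!",
                 "The course was not good."]:
    print(sentence, "->", sid.polarity_scores(sentence)["compound"])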
Let's now apply the dictionary at scale in our IMDB review dataset
# apply the dictionary to your data frame
pd_test["vader_scores"]=pd_test["text"].apply(sid.polarity_scores)
# let's see
pd_test.head()
# grab the final sentiment: the compound score ranges from -1 to 1, so we label positive if compound > 0
pd_test["sentiment_vader"]=pd_test["vader_scores"].apply(lambda x: np.where(x["compound"] > 0, 1, 0))
Now that we have performed the classification task, we can compare the true labels with our predictions. We will use a simple accuracy measure: the share of labels correctly classified.
pd_test['vader_scores']
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(pd_test['label'], pd_test['sentiment_vader'])
# see
print(accuracy)
If your dictionary is not working well, the next step is to train your own machine learning classifier. To build a simple machine learning classifier you will need to combine two different processes:
Build your input: here you will do all the steps we saw before to convert text to numbers. The goal here is to use a document feature matrix as the input for the ML model.
Use sklearn to train your model and assess its accuracy.
As before, let's not dig deep into the differences between models. We will learn how to train a penalized logistic regression. You can use the same sklearn code to try different models.
# apply the same pre-process function we used for topic models
# notice we are working with the training data
pd_train["tokens"] = pd_train["text"].apply(preprocess_text)
# let's see
pd_train.head()
# Repeat for the test data
pd_test["tokens"] = pd_test["text"].apply(preprocess_text)
# join the tokens back into a single string per document (the vectorizer expects strings)
pd_train["tokens"] = pd_train["tokens"].apply(' '.join)
pd_test["tokens"] = pd_test["tokens"].apply(' '.join)
# let's build our document-feature matrix using TF-IDF
from sklearn.feature_extraction.text import TfidfVectorizer
# instantiate a vectorizer
vectorizer = TfidfVectorizer(
    lowercase=True,
    stop_words='english',
    min_df=5, max_df=.90
)
# transform
train_tfidf = vectorizer.fit_transform(pd_train["tokens"]) # transform train
test_tfidf = vectorizer.transform(pd_test["tokens"]) # transform test
# check
print(train_tfidf.shape)
print(test_tfidf.shape)
# separate the target
y_train = pd_train["label"]
y_test = pd_test["label"]
# import the models
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
# train the model
model = LogisticRegression(penalty='l1', solver='liblinear', random_state=42)
model.fit(train_tfidf,y_train)
# assess the model
y_pred = model.predict(test_tfidf)
# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
print("Confusion Matrix:\n", confusion_matrix(y_test, y_pred))
Pretty cool accuracy!
Modify the code above to try a different model. You can either use different parameters for the logistic regression or just try a different model. Did you do better than the model I showed?
# Your code here
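If you want a starting point, here is one possible alternative (my choice, certainly not the only one): a linear support vector machine trained on the same TF-IDF features built above.
# one possible alternative: a linear SVM on the same TF-IDF features
from sklearn.svm import LinearSVC

svm_model = LinearSVC(C=1.0, random_state=42)
svm_model.fit(train_tfidf, y_train)

svm_pred = svm_model.predict(test_tfidf)
print("SVM accuracy:", accuracy_score(y_test, svm_pred))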
In the past few years, the field of natural language processing has undergone a major revolution. As we saw at the beginning, the early generation of NLP models was based on the idea of converting text to numbers through document-feature matrices relying on the bag-of-words assumption.
In the past ten years, we have seen the emergence of a new paradigm that uses deep learning and neural network models to improve the representation of text as numbers. These new models move away from the bag-of-words idea towards a more refined representation of text that captures the contextual meaning of words and sentences. This is achieved by training models with billions of parameters on text-sequence prediction tasks, using dense representations of words as inputs. These are the famous word embeddings.
The most recent innovation in this revolution has been Transformer models. These models use multiple embeddings (matrices) to represent words, in which each matrix captures a different contextual representation of the words. This dynamic representation allows for higher predictive power on downstream tasks, where these matrices form the foundation of the entire machine learning architecture. For example, Transformers are at the core of large language models like OpenAI's GPT and Meta's Llama.
Transformers use a sophisticated architecture that requires a huge amount of data and computational power to train. However, several of these models are open-sourced and made available on the web through a platform called Hugging Face. These are what we call pre-trained large language models. At this point, there are thousands of pre-trained models based on the Transformer framework available on Hugging Face.
Once you find a model that fits your task, you have two options:
Use the model as-is: access the model through the transformers library and use it directly in your predictive tasks.
Fine-tuning: this is the most traditional way. You take the model, give it some of your data, re-train the model slightly so that it learns patterns from your data, and then use it on your predictive task. By fine-tuning a Transformer-based model for our own application, we can improve contextual understanding and therefore task-specific performance.
We will see an example of the first approach for sentiment analysis. If you were to build a full classification pipeline, you would probably need to fine-tune the model. To learn more about fine-tuning, I suggest you read:
this post on Hugging Face: https://huggingface.co/blog/sentiment-analysis-python
and this forthcoming paper on political science applications: https://joantimoneda.netlify.app/files/Timoneda%20Vallejo%20V%20JOP.pdf
To use a model available on Hugging Face, you only need a few lines of code.
# import the pipeline function
from transformers import pipeline
Use the pipeline class to access the model. The pipeline function will give you the default model for this task, which in this case is a BERT-based model; see here: https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english?text=I+like+you.+I+love+you
# instantiate your model
sentiment_pipeline = pipeline("sentiment-analysis")
# see simple cases
review1 = "Oh, I loved the Data Science I course. The best course I have ever done!"
review2 = "DS I was ok... bit less than ok... actually, only thing worthy was the cute baby we met at the end"
print(review1, review2)
#prediction
sentiment_pipeline([review1, review2])
We can easily use this model to make predictions on our entire dataset
# predict on the entire test set
# notice we truncate the inputs: this model can handle at most 512 tokens
pd_test["bert_scores"]=pd_test["text"].apply(sentiment_pipeline, truncation=True, max_length=512)
# let's clean it up
pd_test["bert_class"]=pd_test["bert_scores"].apply(lambda x: np.where(x[0]["label"]=="POSITIVE", 1, 0))
pd_test.head()
## accuracy
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(pd_test['label'], pd_test['bert_class'])
# see
print(accuracy)
Without any fine-tuning, we are already doing much, much better than dictionaries!
Since I do not want to go through the in-depth process of fine-tuning a model, let's see if there are models on Hugging Face that were actually trained on a similar task: predicting reviews.
Actually, there are many. See here: https://huggingface.co/models?sort=trending&search=sentiment+reviews
from transformers import AutoModelForSequenceClassification, AutoTokenizer
# accessing the model
model = AutoModelForSequenceClassification.from_pretrained("MICADEE/autonlp-imdb-sentiment-analysis2-7121569")
# accessing the tokenizer
tokenizer = AutoTokenizer.from_pretrained("MICADEE/autonlp-imdb-sentiment-analysis2-7121569")
# build a pipeline with this model and tokenizer
sentiment_pipeline = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
# run the pipeline on the test dataframe
pd_test["imdb_scores"]=pd_test["text"].apply(sentiment_pipeline, truncation=True, max_length=512)
# clean
pd_test["imdb_class"]=pd_test["imdb_scores"].apply(lambda x: np.where(x[0]["label"]=="positive", 1, 0))
## accuracy
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(pd_test['label'], pd_test['imdb_class'])
# see
print(accuracy)
If you want to see more about fine-tuning your own transformer models, I suggest you read this chapter: https://huggingface.co/learn/llm-course/chapter3/1
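For reference, here is a minimal, untested sketch of what that fine-tuning workflow looks like with the Trainer API. It assumes the small IMDB splits created above and the distilbert-base-uncased checkpoint; both choices are mine, not requirements, and even on a small sample this takes a while without a GPU.
# minimal fine-tuning sketch (assumes small_train_dataset / small_test_dataset from above)
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# tokenize the Hugging Face datasets created earlier in the notebook
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

train_tok = small_train_dataset.map(tokenize, batched=True)
test_tok = small_test_dataset.map(tokenize, batched=True)

# a very light training setup: one epoch, small batches
args = TrainingArguments(output_dir="imdb_finetune",
                         num_train_epochs=1,
                         per_device_train_batch_size=16)

trainer = Trainer(model=model, args=args,
                  train_dataset=train_tok, eval_dataset=test_tok)
trainer.train()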
The last thing I will show you in class is the possibility of using ChatGPT as a classification tool. As you know, ChatGPT is a large language model (as we just saw) developed by OpenAI, based on the GPT architecture. The model was trained on a word-prediction task and has amazed the world with its capacity to engage in conversational interactions.
Some recent papers have shown that ChatGPT exhibits strong performance on downstream classification tasks, like sentiment analysis, even though the model has not been trained or even fine-tuned for these tasks. Read here: https://osf.io/preprints/psyarxiv/sekf5/
The paper provides R code showing how to interact with the ChatGPT API. The example I show you below pretty much converts their code to Python. You can see a nice video walking through their R code here: https://www.youtube.com/watch?v=Mm3uoK4Fogc
The whole process requires access to the OpenAI API, which allows us to query the GPT models continuously. Notice this is not free: you pay for every query. That being said, it is quite cheap.
# load api key
# load library to get environmental files
import os
from dotenv import load_dotenv
# load keys from environmental var
load_dotenv() # .env file in cwd
gpt_key = os.environ.get("gpt")
import requests
# define headers
headers = {
"Authorization": f"Bearer {gpt_key}",
"Content-Type": "application/json",
}
# define gpt model
question = "Please, tell me more about the Data Science and Public Policy Program at Georgetown's McCourt School"
data = {
"model": "gpt-3.5-turbo-0301",
"temperature": 0,
"messages": [{"role": "user", "content": question}]
}
# send a post request
response = requests.post("https://api.openai.com/v1/chat/completions",
json=data,
headers=headers)
# convert to json
response_json = response.json()
## see the output
response_json
Let's now write a function to query the API at scale.
# Function to interact with the ChatGPT API
def hey_chatGPT(question_text, api_key):
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    data = {
        "model": "gpt-3.5-turbo-0301",
        "temperature": 0,
        "messages": [{"role": "user", "content": question_text}]
    }
    # send the request (with a timeout so a stuck call does not hang the loop)
    response = requests.post("https://api.openai.com/v1/chat/completions",
                             json=data,
                             headers=headers, timeout=5)
    response_json = response.json()
    # return only the text of the model's answer
    return response_json['choices'][0]['message']['content'].strip()
import time
output = []
# Run a loop over your dataset of reviews and prompt ChatGPT
for i in range(len(pd_test)):
    try:
        print(i)
        question = ("Is the sentiment of this text positive, neutral, or negative? "
                    "Answer only with a number: 1 if positive, 0 if neutral or negative. "
                    "Here is the text: ")
        text = pd_test.loc[i, "text"]
        full_question = question + str(text)
        output.append(hey_chatGPT(full_question, gpt_key))
    except Exception:
        # if the request fails, store a missing value and keep going
        output.append(np.nan)
# add as a column (coerce any non-numeric answers to missing values)
pd_test["gpt_scores"] = pd.to_numeric(output, errors="coerce")
pd_test2 = pd_test.dropna().copy()
# check accuracy
accuracy = accuracy_score(pd_test2['label'], pd_test2['gpt_scores'])
# see
print(accuracy)
Pretty good results! Notice, we have no fine-tuning here; we are just grabbing predictions from the model!
!jupyter nbconvert _week_12_nlp_II.ipynb --to html --template classic