PPOL 5203 Data Science I: Foundations

Working with Text as Data

Tiago Ventura


Learning Goals

This notebook will cover:

  • Supervised: Sentiment Analysis
    • Dictionary
    • ML for Text-Classification
    • Working with Pre-trained Models - Transformers
    • Outsourcing to Generative Text-Based Models

Supervised Learning with Text

To practice supervised learning with text data, we will perform a classic sentiment analysis classification task. Sentiment analysis is a natural language processing technique that, given a textual input (tweets, movie reviews, comments on a website chatbox, etc.), identifies the polarity of the text.

There are different flavors of sentiment analysis, but one of the most widely used techniques labels data as positive, negative, or neutral. Other options include classifying text according to its level of toxicity, which I did in the paper I asked you to read, or using more fine-grained measures of sentiment.

Sentiment analysis is just one of many types of classification tasks that can be done with text. For any type of task in which you need to identify if the input pertains to a certain category, you can use a similar set of tools as we will see for sentiment analysis. For example, these are some classification tasks I have used in my work before:

  • Classify the levels of toxicity in social media live-streaming comments.
  • Analyze the sentiment of tweets.
  • Classify whether a user is a Republican or Democrat given their Twitter bio.
  • Identify if a particular social media post contains misinformation.

For all these tasks, you need:

  • some type of labelled data (which you and your research team will produce),
  • a machine learning model, either built by you or pre-trained, to make the predictions,
  • an evaluation of the model's performance.

Here, we will work with data that was already labelled for us. We will analyze the sentiment of the IMDB dataset of movie reviews.

IMDB Dataset

For the rest of this notebook, we will use the IMDB dataset provided by Hugging Face. The IMDB dataset contains 25,000 movie reviews labeled by sentiment for training a model and 25,000 movie reviews for testing it.

We will talk more about the Hugging Face project later in this notebook. For now, just install their main transformers library (plus the companion datasets library) and import the IMDB Review Dataset.

Accessing the Dataset

In [3]:
import pandas as pd
import numpy as np
In [4]:
#pip install -q transformers
#pip install -q datasets  # needed for load_dataset below
from datasets import load_dataset
imdb = load_dataset("imdb")

Get a smaller sample

In [5]:
small_train_dataset = imdb["train"].shuffle(seed=42).select(range(3000))
small_test_dataset = imdb["test"].shuffle(seed=42).select(range(300))
In [6]:
# convert to a dataframe
pd_train = pd.DataFrame(small_train_dataset)
pd_test = pd.DataFrame(small_test_dataset)

# see the data
pd_train.head()
Out[6]:
text label
0 There is no relation at all between Fortier an... 1
1 This movie is a great. The plot is very true t... 1
2 George P. Cosmatos' "Rambo: First Blood Part I... 0
3 In the process of trying to establish the audi... 1
4 Yeh, I know -- you're quivering with excitemen... 0

Dictionary Methods

Our first approach for sentiment classification will use dictionary methods.

Common Procedure: consists of using a pre-determined set of words (a dictionary) that identifies the categories you want to classify documents into. With this dictionary, you do a simple search through the documents, count how many times these words appear, and use some type of aggregation function to classify the text. For example:

  • Positive or negative, for sentiment
  • Sad, happy, angry, anxious... for emotions
  • Sexism, homophobia, xenophobia, racism... for hate speech

Dictionaries are the most basic strategy to classify documents. Their simplicity requires some unrealistic assumptions (for example, ignoring the contextual information of the documents). However, the use of dictionaries has one major advantage: it builds a bridge between qualitative and quantitative knowledge, since you need human experts to build good dictionaries.
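To make the procedure concrete, here is a toy sketch of the count-and-aggregate idea (the word lists are invented for illustration; real dictionaries are much larger and expert-curated):

# toy dictionary classifier: count matched words, aggregate by sign
positive_words = {"great", "love", "excellent", "best"}
negative_words = {"bad", "boring", "awful", "worst"}

def toy_sentiment(text):
    tokens = text.lower().split()
    score = sum(t in positive_words for t in tokens) - \
            sum(t in negative_words for t in tokens)
    return 1 if score > 0 else 0   # 1 = positive, 0 = negative/neutral

toy_sentiment("the best movie ever and I love it")   # -> 1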

VADER

There are many dictionaries available for sentiment classification. We will use a popular open-source option available in NLTK: the VADER dictionary. VADER stands for Valence Aware Dictionary for Sentiment Reasoning. It is a model used for text sentiment analysis that is sensitive to both the polarity (positive/negative) and the intensity (strength) of emotion, and it was developed particularly to handle social media content.

Key Components of the VADER Dictionary:

  • Sentiment Lexicon: This is a list of known words and their associated sentiment scores.

  • Sentiment Intensity Scores: Each word in the lexicon is assigned a score that ranges from -4 (extremely negative) to +4 (extremely positive).

  • Handling of Contextual and Qualitative Modifiers: VADER is sensitive to both intensifiers (e.g., "very") and negations (e.g., "not").

You can read the original paper that introduced VADER here
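Before scaling up, a quick sketch of the modifier handling in action (run nltk.download('vader_lexicon') first if needed; exact scores may vary slightly by NLTK version):

# intensifiers raise, and negations flip, the compound score
from nltk.sentiment.vader import SentimentIntensityAnalyzer
sid_demo = SentimentIntensityAnalyzer()

for s in ["The movie was good.",
          "The movie was very good.",   # intensifier -> higher compound
          "The movie was not good."]:   # negation -> negative compound
    print(s, sid_demo.polarity_scores(s)["compound"])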

In [7]:
#### Import dictionary
import nltk
# nltk.download('vader_lexicon')
from nltk.sentiment.vader import SentimentIntensityAnalyzer
In [8]:
# instantiate the model
sid = SentimentIntensityAnalyzer()
In [9]:
# simple example
review1 = "Oh, I loved the Data Science I course. The best course I have ever done!"

# classify
sid.polarity_scores(review1)
Out[9]:
{'neg': 0.0, 'neu': 0.544, 'pos': 0.456, 'compound': 0.8553}
In [10]:
# simple example
review2 = "DS I was ok. Professor Ventura jokes were not great"

# classify
sid.polarity_scores(review2)
Out[10]:
{'neg': 0.244, 'neu': 0.445, 'pos': 0.311, 'compound': -0.0243}

Let's now apply the dictionary at scale to our IMDB review dataset.

In [11]:
# apply the dictionary to your data frame
pd_test["vader_scores"]=pd_test["text"].apply(sid.polarity_scores)
In [12]:
# let's see
pd_test.head()
Out[12]:
text label vader_scores
0 <br /><br />When I unsuspectedly rented A Thou... 1 {'neg': 0.069, 'neu': 0.788, 'pos': 0.143, 'co...
1 This is the latest entry in the long series of... 1 {'neg': 0.066, 'neu': 0.862, 'pos': 0.073, 'co...
2 This movie was so frustrating. Everything seem... 0 {'neg': 0.24, 'neu': 0.583, 'pos': 0.177, 'com...
3 I was truly and wonderfully surprised at "O' B... 1 {'neg': 0.075, 'neu': 0.752, 'pos': 0.173, 'co...
4 This movie spends most of its time preaching t... 0 {'neg': 0.066, 'neu': 0.707, 'pos': 0.227, 'co...
In [13]:
# grab final sentiment
pd_test["sentiment_vader"]=pd_test["vader_scores"].apply(lambda x: np.where(x["compound"] > 0, 1, 0))

Now that we have performed the classification task, we can compare the true labels with our predictions. We will use a simple accuracy measure: the share of labels correctly classified.
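Accuracy here is just the fraction of rows where the prediction matches the label; the sklearn call below is equivalent to this one-liner sketch:

# manual accuracy: fraction of predictions equal to the true labels
(pd_test["label"] == pd_test["sentiment_vader"]).mean()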

In [14]:
pd_test['vader_scores']
Out[14]:
0      {'neg': 0.069, 'neu': 0.788, 'pos': 0.143, 'co...
1      {'neg': 0.066, 'neu': 0.862, 'pos': 0.073, 'co...
2      {'neg': 0.24, 'neu': 0.583, 'pos': 0.177, 'com...
3      {'neg': 0.075, 'neu': 0.752, 'pos': 0.173, 'co...
4      {'neg': 0.066, 'neu': 0.707, 'pos': 0.227, 'co...
                             ...                        
295    {'neg': 0.059, 'neu': 0.812, 'pos': 0.129, 'co...
296    {'neg': 0.056, 'neu': 0.87, 'pos': 0.074, 'com...
297    {'neg': 0.056, 'neu': 0.825, 'pos': 0.119, 'co...
298    {'neg': 0.092, 'neu': 0.768, 'pos': 0.14, 'com...
299    {'neg': 0.083, 'neu': 0.785, 'pos': 0.132, 'co...
Name: vader_scores, Length: 300, dtype: object
In [15]:
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(pd_test['label'], pd_test['sentiment_vader'])

# see
print(accuracy)
0.6966666666666667

Training a Machine Learning Classifier

The next step, if your dictionary is not working well, is to train your own machine learning classifier. To build a simple machine learning classifier, you will need to combine two different processes:

  • Build your input: here you will do all the steps we saw before to convert text to numbers. The goal here is to use a document feature matrix as the input for the ML model.

  • Use sklearn to train your model and assess its accuracy.

As before, let's not dig deep into the differences between each model. We will learn how to train a logistic regression with a penalty term. You can use the same sklearn code to try different models.

Building your input

In [16]:
## Pre processing steps
# stopwords
from nltk.corpus import stopwords

# tokenizer
from nltk.tokenize import word_tokenize

# lemmatizer
from nltk.stem import WordNetLemmatizer

# stemming
from nltk.stem.porter import PorterStemmer
In [17]:
# Write a preprocessing function
def preprocess_text(text):
    
    # increase stop words
    stop_words = stopwords.words('english')
    stop_words = stop_words + ["https", "rt", "amp"]
    
    # tokenization 
    tokens_ = word_tokenize(text)
    
    # Generate a list of tokens after preprocessing
 
    # normalize
    tokens_ = [word.lower() for word in tokens_ if word.isalpha()]

    # stemming and stopword removal

    # instantiate the stemmer
    porter = PorterStemmer()

    tokens_ = [porter.stem(word) for word in tokens_ if word not in stop_words]

    # return the preprocessed tokens as a list (we join them into strings later)
    return tokens_
In [18]:
# apply the same pre-process function we used for topic models
# notice we are working with the training data
pd_train["tokens"] = pd_train["text"].apply(preprocess_text)

# let's see
pd_train.head()
Out[18]:
text label tokens
0 There is no relation at all between Fortier an... 1 [relat, fortier, profil, fact, polic, seri, vi...
1 This movie is a great. The plot is very true t... 1 [movi, great, plot, true, book, classic, writt...
2 George P. Cosmatos' "Rambo: First Blood Part I... 0 [georg, cosmato, rambo, first, blood, part, ii...
3 In the process of trying to establish the audi... 1 [process, tri, establish, audienc, empathi, ja...
4 Yeh, I know -- you're quivering with excitemen... 0 [yeh, know, quiver, excit, well, secret, live,...
In [19]:
# Repeat for the test set
pd_test["tokens"] = pd_test["text"].apply(preprocess_text)
In [20]:
# join all
pd_train["tokens"] = pd_train["tokens"].apply(' '.join)
pd_test["tokens"] = pd_test["tokens"].apply(' '.join)
In [21]:
# let's build our document-feature matrix using TF-IDF
from sklearn.feature_extraction.text import TfidfVectorizer

# instantiate a vectorizer
# min_df=5 drops terms in fewer than 5 documents; max_df=.90 drops terms in over 90% of them
vectorizer = TfidfVectorizer(
    lowercase=True,
    stop_words='english',
    min_df=5, max_df=.90
)

# transform
train_tfidf = vectorizer.fit_transform(pd_train["tokens"]) # transform train
test_tfidf = vectorizer.transform(pd_test["tokens"]) # transform test

# check
print(train_tfidf.shape)
print(test_tfidf.shape)
(3000, 5608)
(300, 5608)
In [22]:
# separate the target
y_train = pd_train["label"]
y_test = pd_test["label"]

Train your model

In [23]:
# import the models
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

# train the model
model = LogisticRegression(penalty='l1', solver='liblinear', random_state=42)
model.fit(train_tfidf,y_train)
Out[23]:
LogisticRegression(penalty='l1', random_state=42, solver='liblinear')
In [24]:
# assess the model
y_pred = model.predict(test_tfidf)

# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
print("Confusion Matrix:\n", confusion_matrix(y_test, y_pred))
Accuracy: 0.8366666666666667
Confusion Matrix:
 [[121  29]
 [ 20 130]]

Pretty cool accuracy!
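Since we already imported classification_report above, a per-class breakdown is one line away:

# precision, recall, and F1 for each class of the logistic regression
print(classification_report(y_test, y_pred))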

Practice

Modify the code above to try a different model. You can either use different parameters for the logistic regression or just try a different model. Did you do better than my baseline model?

In [25]:
# Your code here

Pre-Trained Large Language Models: Hugging Face

In the past few years, the field of natural language processing has undergone a major revolution. As we saw earlier, the early generation of NLP models was based on the idea of converting text to numbers through a document-feature matrix, relying on the bag-of-words assumption.

In the past ten years, we have seen the emergence of a new paradigm that uses deep learning and neural network models to improve the representation of text as numbers. These new models move away from the idea of a bag of words towards a more refined representation of text that captures the contextual meaning of words and sentences. This is achieved by training models with billions of parameters on text-sequencing tasks, using as inputs a dense representation of words. These are the famous word embeddings.

The most recent innovation in this revolution has been Transformer models. These models use multiple embeddings (matrices) to represent words, where each matrix can capture a different contextual representation of a word. This dynamic representation allows for higher predictive power on downstream tasks, with these matrices forming the foundation of the entire machine learning architecture. For example, Transformers are at the core of language models like OpenAI's GPTs and Meta's LLaMA.
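To build intuition for "contextual", here is a small sketch (assuming torch is installed; distilbert-base-uncased is just a convenient small model) showing that the same word receives a different vector in each sentence:

# sketch: the token "bank" gets a different contextual vector in each sentence
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
bert = AutoModel.from_pretrained("distilbert-base-uncased")

sentences = ["I deposited money at the bank.",
             "We picnicked on the bank of the river."]

vecs = []
with torch.no_grad():
    for s in sentences:
        enc = tok(s, return_tensors="pt")
        hidden = bert(**enc).last_hidden_state[0]   # (seq_len, 768)
        pos = enc.input_ids[0].tolist().index(tok.convert_tokens_to_ids("bank"))
        vecs.append(hidden[pos])

# cosine similarity well below 1: same word, different contextual meaning
print(torch.nn.functional.cosine_similarity(vecs[0], vecs[1], dim=0).item())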

Transformers use a sophisticated architecture that requires a huge amount of data and computational power to train. However, several of these models are open-source and made available on the web through a platform called Hugging Face. These are what we call pre-trained large language models. At this point, there are thousands of pre-trained models based on the Transformers framework available on Hugging Face.

Once you find a model that fits your task, you have two options:

  • Use the model architecture as-is: access the model through the transformers library and use it in your predictive tasks.

  • Fine-tuning: this is the most traditional way. You take the model, give it some of your data, re-train it slightly so that it learns patterns from your data, and use it on your predictive task. By fine-tuning a Transformers-based model for our own application, we can improve contextual understanding and therefore task-specific performance.

We will see an example of the first approach for sentiment analysis. If you were to build a full pipeline for classification, you would probably need to fine-tune the model. To learn more about fine-tuning, I suggest you read:
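In the meantime, here is a minimal sketch of what fine-tuning looks like with the transformers Trainer API, using the small IMDB splits we created earlier (hyperparameters are illustrative, and we do not run this in class):

# minimal fine-tuning sketch (illustrative settings, not run here)
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

# tokenize the datasets; the Trainer pads batches using the tokenizer
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_tok = small_train_dataset.map(tokenize, batched=True)
test_tok = small_test_dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="imdb-finetune",
                         num_train_epochs=2,
                         per_device_train_batch_size=16)

trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
                  train_dataset=train_tok, eval_dataset=test_tok)
trainer.train()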

Transformers Library

To use a model available on Hugging Face, you only need a few lines of code.

In [26]:
# import the pipeline function
from transformers import pipeline

Use the pipeline class to access the model. The pipeline function gives you the default model for the task, which in this case is a BERT-based model; see here: https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english?text=I+like+you.+I+love+you

In [27]:
# instantiate your model
sentiment_pipeline = pipeline("sentiment-analysis")
No model was supplied, defaulted to distilbert-base-uncased-finetuned-sst-2-english and revision af0f99b (https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english).
Using a pipeline without specifying a model name and revision in production is not recommended.
In [28]:
# see the model
sentiment_pipeline
Out[28]:
<transformers.pipelines.text_classification.TextClassificationPipeline at 0x2bf4916d0>
In [29]:
# see simple cases
print(review1, review2)
Oh, I loved the Data Science I course. The best course I have ever done! DS I was ok. Professor Ventura jokes were not great
In [30]:
#prediction
sentiment_pipeline([review1, review2])
Out[30]:
[{'label': 'POSITIVE', 'score': 0.9998651742935181},
 {'label': 'NEGATIVE', 'score': 0.6337788105010986}]

We can easily use this model to make predictions on our entire dataset.

In [31]:
# predict on the entire test set
# notice we truncate the inputs: this Transformer can handle at most 512 tokens
pd_test["bert_scores"]=pd_test["text"].apply(sentiment_pipeline, truncation=True, max_length=512)
In [32]:
# let's clean it up
pd_test["bert_class"]=pd_test["bert_scores"].apply(lambda x: np.where(x[0]["label"]=="POSITIVE", 1, 0))
In [33]:
pd_test.head()
Out[33]:
text label vader_scores sentiment_vader tokens bert_scores bert_class
0 <br /><br />When I unsuspectedly rented A Thou... 1 {'neg': 0.069, 'neu': 0.788, 'pos': 0.143, 'co... 1 br br unsuspectedli rent thousand acr thought ... [{'label': 'POSITIVE', 'score': 0.998875796794... 1
1 This is the latest entry in the long series of... 1 {'neg': 0.066, 'neu': 0.862, 'pos': 0.073, 'co... 1 latest entri long seri film french agent frenc... [{'label': 'POSITIVE', 'score': 0.996983110904... 1
2 This movie was so frustrating. Everything seem... 0 {'neg': 0.24, 'neu': 0.583, 'pos': 0.177, 'com... 0 movi frustrat everyth seem energet total prepa... [{'label': 'NEGATIVE', 'score': 0.997244238853... 0
3 I was truly and wonderfully surprised at "O' B... 1 {'neg': 0.075, 'neu': 0.752, 'pos': 0.173, 'co... 1 truli wonder surpris brother art thou video st... [{'label': 'NEGATIVE', 'score': 0.649209976196... 0
4 This movie spends most of its time preaching t... 0 {'neg': 0.066, 'neu': 0.707, 'pos': 0.227, 'co... 1 movi spend time preach script make movi appar ... [{'label': 'NEGATIVE', 'score': 0.998503446578... 0
In [34]:
## accuracy
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(pd_test['label'], pd_test['bert_class'])
# see
print(accuracy)
0.87

Without any fine-tuning, we are already doing much, much better than dictionaries!

Use contextual knowledge: Model Trained on IMDB Reviews

Since I do not want to go through the in-depth process of fine-tuning a model, let's see if there are models on Hugging Face that were actually trained on a similar task: predicting the sentiment of reviews.

Actually, there are many. See here: https://huggingface.co/models?sort=trending&search=sentiment+reviews

In [35]:
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# accessing the model
model = AutoModelForSequenceClassification.from_pretrained("MICADEE/autonlp-imdb-sentiment-analysis2-7121569")

# Accessing the tokenizer
tokenizer = AutoTokenizer.from_pretrained("MICADEE/autonlp-imdb-sentiment-analysis2-7121569")
In [36]:
# use in my model
sentiment_pipeline = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)

# Run in the dataframe
pd_test["imdb_scores"]=pd_test["text"].apply(sentiment_pipeline, truncation=True, max_length=512)
In [37]:
# clean
pd_test["imdb_class"]=pd_test["imdb_scores"].apply(lambda x: np.where(x[0]["label"]=="positive", 1, 0))
In [38]:
## accuracy
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(pd_test['label'], pd_test['imdb_class'])
# see
print(accuracy)
0.97

Outsourcing to Generative Text-Based Models

The last thing I will show you in class is the possibility of using ChatGPT as a classification tool. As you know, ChatGPT is a large language model (as we just saw) developed by OpenAI, based on the GPT architecture. The model was trained on a word-prediction task, and it has taken the world by storm with its capacity to engage in conversational interactions.

Some recent papers have shown that ChatGPT exhibits strong performance on downstream classification tasks, like sentiment analysis, even though the model has not been trained or even fine-tuned for these tasks. Read here: https://osf.io/preprints/psyarxiv/sekf5/

The paper includes R code showing how to interact with the ChatGPT API. The example I show below pretty much converts their code to Python. You can see a nice video walking through their R code here: https://www.youtube.com/watch?v=Mm3uoK4Fogc

The whole process requires access to the OpenAI API, which allows us to query the GPT models continuously. Notice, this is not free: you pay for every query. That being said, it is quite cheap.

In [39]:
# load api key
# load library to get environmental files
import os
from dotenv import load_dotenv


# load keys from environment variables
load_dotenv() # .env file in cwd
gpt_key = os.environ.get("gpt") 
In [40]:
import requests 

# define headers
headers = {
        "Authorization": f"Bearer {gpt_key}",
        "Content-Type": "application/json",
    }

# define gpt model
question = "Please, tell me more about the Data Science and Public Policy Program at Georgetown's McCourt School"

data = {
        "model": "gpt-4",
        "temperature": 0,
        "messages": [{"role": "user", "content": question}]
    }



# send a post request
response = requests.post("https://api.openai.com/v1/chat/completions", 
                             json=data, 
                             headers=headers)
# convert to json
response_json = response.json()
In [41]:
## see the output
response_json
Out[41]:
{'id': 'chatcmpl-AXpZexgEXoUCbLi0PTVcF8odqvjtO',
 'object': 'chat.completion',
 'created': 1732626438,
 'model': 'gpt-4-0613',
 'choices': [{'index': 0,
   'message': {'role': 'assistant',
    'content': "The Data Science and Public Policy (DSPP) program at Georgetown University's McCourt School of Public Policy is a unique program that combines traditional public policy studies with cutting-edge technical skills in data science. The program is designed to equip students with the necessary skills to develop, assess, and execute complex public policies using data-driven decision making.\n\nThe curriculum of the DSPP program includes courses in statistics, economics, computer science, and public policy. Students learn how to use data to analyze policy issues, make policy recommendations, and evaluate policy outcomes. They also learn how to communicate their findings effectively to policymakers and the public.\n\nThe program is designed for students who have a strong interest in public policy and a desire to use data science to make a positive impact on society. Graduates of the program are prepared for careers in government, non-profit organizations, and the private sector where they can use their skills to inform policy decisions and improve public services.\n\nThe DSPP program is a two-year, full-time program. Students are required to complete a capstone project in their second year, where they work with a real-world client to solve a policy problem using data science.\n\nThe program also offers opportunities for students to gain practical experience through internships and research projects. Students have the opportunity to work with faculty members on research projects, and the program has partnerships with various organizations that offer internships to students.\n\nIn addition to the technical skills, the program also emphasizes the ethical considerations of using data in policy making, ensuring that students are prepared to use data responsibly and effectively in their careers.",
    'refusal': None},
   'logprobs': None,
   'finish_reason': 'stop'}],
 'usage': {'prompt_tokens': 26,
  'completion_tokens': 313,
  'total_tokens': 339,
  'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0},
  'completion_tokens_details': {'reasoning_tokens': 0,
   'audio_tokens': 0,
   'accepted_prediction_tokens': 0,
   'rejected_prediction_tokens': 0}},
 'system_fingerprint': None}
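As an aside, the official openai Python client wraps the same endpoint; a sketch assuming openai>=1.0 is installed (pip install openai):

# equivalent call through the official client
from openai import OpenAI

client = OpenAI(api_key=gpt_key)
resp = client.chat.completions.create(
    model="gpt-4",
    temperature=0,
    messages=[{"role": "user", "content": question}],
)
print(resp.choices[0].message.content)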

Let's now write a function to query the API at scale.

In [42]:
# Function to interact with the ChatGPT API
def hey_chatGPT(question_text, api_key):
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

    data = {
        "model": "gpt-4",
        "temperature": 0,
        "messages": [{"role": "user", "content": question_text}]
    }

    # send the request (5-second timeout; increase if you use long prompts)
    response = requests.post("https://api.openai.com/v1/chat/completions", 
                             json=data, 
                             headers=headers, timeout=5)
    
    response_json = response.json()
    return response_json['choices'][0]['message']['content'].strip()
In [43]:
import time

output = []
# Run a loop over our dataset of reviews and prompt ChatGPT
for i in range(len(pd_test)):
    try:
        print(i)  # track progress
        question = "Is the sentiment of this text positive, neutral, or negative? \
        Answer only with a number: 1 if positive, 0 if neutral or negative. \
        Here is the text: "
        text = pd_test.loc[i, "text"]
        full_question = question + str(text)
        output.append(hey_chatGPT(full_question, gpt_key))
    except Exception:
        # store a missing value on failure so indexes stay aligned with pd_test
        output.append(np.nan)
0
1
2
...
299
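At this scale you will occasionally hit rate limits or timeouts. Here is a hedged sketch of a retry wrapper around the function above (retry_chatGPT is a hypothetical helper, not part of the original code):

import time

def retry_chatGPT(question_text, api_key, max_tries=3):
    # hypothetical wrapper: retry hey_chatGPT with exponential backoff
    for attempt in range(max_tries):
        try:
            return hey_chatGPT(question_text, api_key)
        except Exception:
            time.sleep(2 ** attempt)   # wait 1s, 2s, 4s before retrying
    return np.nan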
In [44]:
output[0]
Out[44]:
'1'
In [45]:
# add as a column
pd_test["gpt_scores"] = pd.to_numeric(output)
In [46]:
pd_test2 = pd_test.dropna().copy()
In [47]:
# check accuracy
accuracy = accuracy_score(pd_test2['label'], pd_test2['gpt_scores'])
# see
print(accuracy)
0.967479674796748

Pretty good results! Notice, we have no fine-tuning here. We are just grabbing results from the model!

In [48]:
!jupyter nbconvert _week_12_nlp_II_onlysp.ipynb --to html --template classic
[NbConvertApp] Converting notebook _week_12_nlp_II_onlysp.ipynb to html
[NbConvertApp] Writing 355837 bytes to _week_12_nlp_II_onlysp.html