Digital technologies have made vast amounts of text available to researchers, and this same technological moment has provided us with the capacity to analyze that text faster than humanly possible. The first step in that analysis is to transform texts designed for human consumption into a form a computer can analyze. Using Python and the Natural Language Toolkit (commonly called NLTK), this workshop introduces strategies to turn qualitative texts into quantitative objects. Through that process, we will present a variety of strategies for simple analysis of text-based data.
In this workshop, you will learn how to:
- Prepare texts for computational analysis, including strategies for transforming texts into numbers
- Use NLTK methods such as `concordance` and `similar`
- Clean and standardize your data with powerful tools such as stemmers and lemmatizers
- Compare the frequency distributions of words in a text to quantify its narrative arc
- Understand stop words and how to remove them when needed
- Use part-of-speech tagging to gather insights about a text
- Transform any document you have (or have access to) in .txt format into a text that can be analyzed computationally
- Tokenize your data and put it into an NLTK-compatible format (a sketch of this workflow follows the list)
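As a preview of these steps, here is a minimal sketch of what that workflow can look like with NLTK. It assumes NLTK and its data packages (`punkt`, `stopwords`, and `averaged_perceptron_tagger`) are already installed; the file name `my_text.txt` and the search word `whale` are placeholders for any plain-text file and word of your own.

```python
# A minimal preview of the workflow covered in this workshop.
# Assumes NLTK is installed along with the 'punkt', 'stopwords', and
# 'averaged_perceptron_tagger' data packages (see the installation note below).
# "my_text.txt" and the search word "whale" are placeholders.

import nltk
from nltk.corpus import stopwords

# Read a plain-text file and split it into word tokens
with open("my_text.txt", encoding="utf-8") as f:
    raw = f.read()
tokens = nltk.word_tokenize(raw)

# Wrap the tokens in an NLTK Text object to use methods like concordance()
text = nltk.Text(tokens)
text.concordance("whale")  # every occurrence of "whale" with surrounding context
text.similar("whale")      # words that appear in similar contexts

# Clean the tokens: lowercase them, keep only alphabetic words, drop stop words
stops = set(stopwords.words("english"))
cleaned = [t.lower() for t in tokens if t.isalpha() and t.lower() not in stops]

# Count word frequencies and tag parts of speech
freq = nltk.FreqDist(cleaned)
print(freq.most_common(10))       # the ten most frequent content words
print(nltk.pos_tag(tokens[:20]))  # part-of-speech tags for the first 20 tokens
```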
This workshop is estimated to take you 10 hours to complete.
- Text as Data
- Cleaning and Normalizing
- Using the NLTK Corpus
- Searching for Words
- Positioning Words
- Types vs. Tokens
- Length and Unique Words
- Lexical Density
- Data Cleaning: Removing Stop Words
- Data Cleaning: Lemmatizing Words
- Data Cleaning: Stemming Words
- Data Cleaning: Results
- Make Your Own Corpus
- Make Your Own Corpus (continued)
- Part-of-Speech Tagging
If you do not have experience with or basic knowledge of the material covered in the following workshops, you may want to complete them before starting Text Analysis with Python and NLTK:
- Introduction to Python (required)
- Introduction to the Command Line (recommended)
- Short introduction to Jupyter Notebooks (recommended)
- Installing Python (and Anaconda) (required): This workshop uses Python, so you will need a working Python installation. If you choose to install a different distribution of Python, make sure it is version 3, as other versions will not work with this workshop.
- Installing NLTK (required): You will need to install NLTK for the purposes of this workshop; a sketch of the additional data downloads it relies on follows this list.
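Once NLTK itself is installed (for example with pip, conda, or through Anaconda), the examples sketched above also rely on a few downloadable data packages. The exact packages assumed here are the `punkt` tokenizer models, the `stopwords` lists, and the `averaged_perceptron_tagger`; a minimal sketch of fetching them from within Python:

```python
# A minimal sketch for fetching the NLTK data packages assumed by the
# example above: tokenizer models, stop word lists, and the POS tagger.
import nltk

nltk.download("punkt")                       # tokenizer models
nltk.download("stopwords")                   # stop word lists
nltk.download("averaged_perceptron_tagger")  # part-of-speech tagger
```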
Before you start the Text Analysis with Python and NLTK workshop, we want to remind you of some ethical considerations to take into account when you read through the lessons of this workshop:
- In working with massive amounts of text, it is easy to lose the original context. We must be aware of that and be careful when analyzing the text.
- It is important to constantly question our assumptions and the metrics we are using. Numbers and graphs do not tell the story; our analysis does. We must be careful not to draw hasty, simplistic conclusions about things that are complex. If we find that author A uses more unique words than author B, does that mean A is a better writer than B?
Before you start the Text Analysis with Python and NLTK workshop, you may want to read a couple of our pre-reading suggestions:
You may also want to check out a couple of projects that use the skills discussed in this workshop:
- Short list of academic Text & Data mining projects
- Building a Simple Chatbot from Scratch in Python
- Classifying personality type by social media posts
This workshop is the result of a collaborative effort by a team of people, most of whom are or have been involved with the Graduate Center's Digital Initiatives. If you want to see statistics for contributions to this workshop, you can do so here. This is a list of all the contributors:
- Current author: Rafael Davis Portela
- Past contributor: Michelle McSweeney
- Past contributor: Rachel Rakov
- Past contributor: Kalle Westerling
- Past contributor: Patrick Smyth
- Past contributor: Hannah Aizenman
- Past contributor: Kelsey Chatlosh
- Past reviewer: Filipa Calado
- Current editor: Lisa Rhody
- Current editor: Kalle Westerling
Digital Research Institute (DRI) Curriculum by Graduate Center Digital Initiatives is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Based on a work at https://github.com/DHRI-Curriculum. When sharing this material or derivative works, preserve this paragraph, changing only the title of the derivative work, or provide comparable attribution.