This is the code repository for the 2nd edition of Manning Publications' Machine Learning with TensorFlow written by Chris Mattmann.
The code in this repository is mostly Jupyter notebooks that correspond to the numbered listings in each chapter of the book. The code has been tested with TensorFlow 1.15.2, but there is also a complete port of the book's code to TensorFlow 2.x.
We welcome contributions to the TF2 port and to all of the notebooks in TF1.15.x too!
The repository contains two fully functional Docker images. The first, tagged `latest`, runs with TF1.15.x and tracks the book examples. You can get going by simply running from a command prompt:
```bash
$ docker pull chrismattmann/mltf2:latest
$ ./run_environment.sh
```
This will pull the TF1.15.x image and start Jupyter running on localhost. Watch for the startup message, then click through the printed URL (including the token after the `?`) to start your Jupyter session.
To run the TF2.x version of the code and notebooks, you can similarly run the `tf2` tag:
```bash
$ docker pull chrismattmann/mltf2:tf2
$ ./run_TFv2_environment.sh
```
Follow the URL from the startup message.
Enjoy!
Though the book has TensorFlow in the name, it is also just as much about generalized machine learning and its theory, and about the suite of frameworks that come in handy when dealing with machine learning. The requirements for running the notebooks are listed below; you should pip install them using your favorite Python. The examples from the book have been shown to work in Python 2.7 and Python 3.7. I didn't have time to test all of them, but we are happy to receive PRs for things we've missed.
Additionally, the Docker image has been tested; on the latest Docker for Mac it adds only about 1.5% overhead in CPU mode, is totally usable, and is a one-shot, easy installer for all of the dependencies. Browse the file to see what you'll need to install and how to run the code locally if desired; a quick import check (see the sketch after the dependency list below) can confirm a local install.
- TensorFlow
- Jupyter
- Pandas - for data frames and easy tabular data manipulation
- NumPy, SciPy
- Matplotlib
- NLTK - for anything text or NLP (such as Sentiment Analysis from Chapter 6)
- TQDM - for progress bars
- SKLearn - for various helper functions
- Bregman Toolkit (for audio examples in Chapter 7)
- Tika
- Ystockquote
- Requests
- OpenCV
- Horovod - use 0.18.2 (or 0.18.1) for use with the Maverick2 VGG Face model.
- VGG16 - grab `vgg16.py` and `vgg16_weights.npz`, `imagenet_classes.py`, and `laska.png` - only works with Python 2.7; place them in the `libs/vgg16` directory.
- PyDub - for Chapter 17, the LSTM chapter.
- Basic Units - for use in Chapter 17. Place in the `libs/basic_units/` folder.
- RNN-Tutorial - used in Chapter 17 to help implement the Deep Speech model and train it.
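If you install the dependencies yourself rather than via Docker, a quick import check like the sketch below confirms the core packages are in place. The package list mirrors the bullets above and is illustrative, not from the book; trim it to the chapters you plan to run.

```python
# Sanity-check that the core dependencies are importable.
import importlib

for pkg in ["tensorflow", "pandas", "numpy", "scipy", "matplotlib",
            "nltk", "tqdm", "sklearn", "cv2", "tika", "requests"]:
    try:
        mod = importlib.import_module(pkg)
        print(pkg, getattr(mod, "__version__", "ok"))
    except ImportError:
        print(pkg, "MISSING -- install it per the requirements files below")
```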
You will generate lots of data when running the notebooks, in particular when building models. But to train and build those models you will also need input data. I have created an easy-to-use Dropbox folder from which you can pull the input data for training the models from the book. Access the Dropbox folder here.
Note that the Docker build described below automatically pulls down all the data for you and incorporates it into the Docker environment so that you don't have to download a thing.
The pointers below let you know what data you need for which chapters, and where to put it. Unless otherwise specified, the data should be placed in the `data` folder. Note that as you run the notebooks, they will generate TF models and write them, along with checkpoint files, to the `models/` folder.
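As a minimal illustration of that convention (a sketch, not a listing from the book; the variable and checkpoint names here are made up), a TF1.15 notebook cell that writes a checkpoint looks like this:

```python
import os
import tensorflow as tf  # 1.15.x, matching the book's notebooks

# Hypothetical tiny model; the real notebooks build theirs from data/ inputs.
weights = tf.Variable(tf.zeros([10]), name="weights")
saver = tf.train.Saver()

os.makedirs("models", exist_ok=True)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, "models/example.ckpt")  # checkpoint files land in models/
```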
- `data/311.csv`
- `data/word2vec-nlp-tutorial/labeledTrainData.tsv`
- `data/word2vec-nlp-tutorial/testData.tsv`
- `data/aclImdb/test/neg/`
- `data/aclImdb/test/pos/`
- `data/audio_dataset/`
- `data/TalkingMachinesPodcast.wav`
- `data/User Identification From Walking Activity/`
- `data/mobypos.txt`
- `data/cifar-10-batches-py`
- `data/MNIST_data/` (if you try the MNIST extra example)
- `data/vgg_face_dataset` - the VGG Face metadata, including celeb names
- `data/vgg-face` - the actual VGG Face data
- `data/vgg_face_full_urls.csv` - metadata information about VGG Face URLs
- `data/vgg_face_full.csv` - metadata information about all VGG Face data
- `data/vgg-models/checkpoints-1e3x4-2e4-09202019` - to run the VGG Face Estimator additional example
- `models/vgg_face_weights.h5` - to run the VGG Face verification additional example
- `data/international-airline-passengers.csv`
- `data/LibriSpeech`
- `libs/basic_units/`
- `libs/RNN-Tutorial/`
- `data/seq2seq`
- `libs/vgg16/laska.png`
- `data/cloth_folding_rgb_vids`
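Once a dataset is in place, a quick read confirms it landed in the right spot. For example (an illustrative check, assuming you have pulled `data/311.csv` from the Dropbox folder):

```python
import pandas as pd

# Peek at one of the inputs listed above to verify the data/ layout.
df = pd.read_csv("data/311.csv")
print(df.shape)
print(df.head())
```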
```bash
# Build only: creates a Docker image compatible with GPU and CPU.
./build_environment.sh        # TensorFlow 1
./build_TFv2_environment.sh   # TensorFlow 2

# Runs in GPU or CPU mode; looks for NVIDIA drivers first and falls back to regular CPU.
./run_environment.sh          # TensorFlow 1
./run_TFv2_environment.sh     # TensorFlow 2
```
You need to install nvidia-docker to use your GPU in Docker. Follow these instructions (also available on the linked page):
```bash
# Add the package repositories
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```
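After restarting Docker, one way to confirm TensorFlow can actually reach the GPU from inside the container is a quick check like this (a TF1.15 sketch; the call returns False when only the CPU is visible):

```python
import tensorflow as tf  # 1.15.x

# Prints True only if the GPU build of TensorFlow can see a CUDA device.
print(tf.test.is_gpu_available())
```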
If you want to build with your existing Python, that's fine: you will need Python 2.7 for some of the chapters noted above (like Chapter 7, which uses the `BregmanToolkit`), and Python 3.7 for everything else. The requirements file is different for each, so watch which one you pip install below.
```bash
# Python 3.7 - GPU and CPU
$ pip3.7 install -r requirements.txt

# Python 3.7 - TensorFlow 2, GPU and CPU
$ pip3.7 install -r requirements-tf2.txt

# Python 2.7 - CPU
$ pip2.7 install -r requirements-py2.txt

# Python 2.7 - GPU
$ pip2.7 install -r requirements-gpu-py2.txt
```

Then start Jupyter:

```bash
$ jupyter notebook
```
Questions or comments? Send them to Chris A. Mattmann. Also, please consider heading over to the livebook forum, where you can discuss the book with other readers and the author, too.
- Chris A. Mattmann
- Rob Royce (`tensorflow2` branch)
- Philip Southam (Dockerfile build in `docker` branch)