Design of Efficient Deep Learning models for Determining Road Surface Condition from Roadside Camera Images and Weather Data
The official proceedings are available on the TAC-ITS'19 conference website. This paper was prepared for presentation at the Artificial Intelligence and Machine Learning for Smart Mobility session.
Authors: Juan Carrillo 1, Mark Crowley 1, Guangyuan Pan 2, Liping Fu 2.
- 1 Department of Electrical and Computer Engineering, University of Waterloo
- 2 Department of Civil and Environmental Engineering, University of Waterloo
Road maintenance during the winter season is a safety-critical and resource-demanding operation. One of its key activities is determining road surface condition (RSC) in order to prioritize roads and allocate cleaning efforts such as plowing or salting. Two conventional approaches for determining RSC are visual examination of roadside camera images by trained personnel and on-site inspection by road patrols. However, with more than 500 cameras collecting images across Ontario, visual examination becomes a resource-intensive activity that is difficult to scale, especially during snowstorms. This paper presents the results of a study focused on improving the efficiency of road maintenance operations. We use multiple Deep Learning models to automatically determine RSC from roadside camera images and weather variables, extending previous research that applied similar methods to this problem. The dataset was collected during the 2017-2018 winter season from 40 stations connected to the Ontario Road Weather Information System (RWIS); it includes 14,000 labeled images and 70,000 weather measurements. We train and evaluate seven state-of-the-art models from the Computer Vision literature, including the recent DenseNet, NASNet, and MobileNet. Moreover, through systematic ablation experiments we adapt previously published Deep Learning models, reducing their parameter count to about 1.3% of the original, and by integrating observations from weather variables the models are better able to ascertain RSC under poor visibility conditions.
Deep Learning, Machine Learning, Spatial Statistics, Road Monitoring, Computer Vision.
We design a Deep Convolutional Neural Network model to automatically classify Road Surface Condition (RSC) and improve its efficiency by two orders of magnitude compared to previous models. This paper builds on our previous work on data integration for road monitoring, described in a separate paper.
- Winter road maintenance: Current approach
- Winter road maintenance: Suggested approach
- 14,000 images from roadside cameras installed in 40 Road Weather Information System (RWIS) stations across Ontario. The images are labeled into three categories (Figure 1) of Road Surface Condition (RSC) according to guidelines used by the Ministry of Transportation of Ontario (MTO). To access this dataset please contact Prof. Liping Fu at the University of Waterloo iTSS Lab.
Figure 1. Example images from a roadside camera near Otter Lake in Ontario. Left: Bare pavement. Center: Partial snow cover. Right: Full snow cover.
- Coordinates of the 40 RWIS stations used for this project, as seen in Figure 2. This table also includes the number of images retrieved from each station and the number of images in each RSC category.
Figure 2. Location of the 40 sample RWIS stations in the province of Ontario.
- Weather data for the 40 RWIS stations listed above, from November 2017 to March 2018. For each station this dataset contains observations of five weather variables collected every 10 minutes. The folder contains 1,000 .csv files (40 stations x 5 months x 5 variables), for a total of more than 3,600,000 weather records. From these observations we extract 70,000 records, joined to the 14,000 images by the date and time of capture. Table 1 describes the weather variables and their units.
Table 1. Weather variables and their units.
Variable | Units |
---|---|
Air Temp | (°C) |
Dew Point | (°C) |
Pressure | (kPa) |
Relative Humidity | (%) |
Wind Speed | (km/h) |
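The join between image capture times and the 10-minute weather records described above can be sketched with pandas; the column names and sample values below are hypothetical stand-ins, not the actual RWIS schema:

```python
import pandas as pd

# Hypothetical weather records at 10-minute intervals for one station
weather = pd.DataFrame({
    "timestamp": pd.date_range("2018-01-15 08:00", periods=6, freq="10min"),
    "air_temp_c": [-5.2, -5.0, -4.8, -4.7, -4.5, -4.4],
})

# Hypothetical image capture times for the same station
images = pd.DataFrame({
    "timestamp": pd.to_datetime(["2018-01-15 08:07", "2018-01-15 08:34"]),
    "image_file": ["cam_0807.jpg", "cam_0834.jpg"],
})

# Attach the nearest weather observation to each image
joined = pd.merge_asof(
    images.sort_values("timestamp"),
    weather.sort_values("timestamp"),
    on="timestamp",
    direction="nearest",
)
print(joined[["image_file", "air_temp_c"]])
```

The same asof-join would be repeated per station and per variable to build the 70,000 joined records.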
The Python source code used in this project is distributed across 39 Jupyter notebooks, stored after execution to preserve the outputs and results of each experiment. To facilitate access we grouped these notebooks into five folders according to the stage of the project they were used for.
Notebook | Description |
---|---|
image-resize.ipynb | Crop and resize images |
split-train-test.ipynb | Split 90% of images for training and 10% for testing |
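The 90/10 split performed in split-train-test.ipynb can be sketched as a seeded shuffle-and-slice over the image file names (the file names below are illustrative, not the dataset's actual names):

```python
import random

def split_train_test(filenames, test_fraction=0.10, seed=42):
    """Shuffle file names reproducibly and split off a held-out test set."""
    rng = random.Random(seed)
    shuffled = filenames[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)

files = [f"img_{i:05d}.jpg" for i in range(100)]
train, test = split_train_test(files)
print(len(train), len(test))
```

Fixing the seed keeps the split identical across the 39 notebooks, so every model is evaluated on the same 10% of images.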
Notebook | Description |
---|---|
baseline.ipynb | Baseline model |
densenet-169_finetune_only_fc.ipynb | DenseNet model, finetuning only fully connected layers |
densenet-169_finetune_last_5perc.ipynb | DenseNet model, finetuning only last 5% of layers |
densenet-169_finetune_last_15perc.ipynb | DenseNet model, finetuning only last 15% of layers |
inception-resnet-v2_finetune_only_fc.ipynb | Inception-ResNet model, finetuning only fully connected layers |
inception-resnet-v2_finetune_last_5perc.ipynb | Inception-ResNet model, finetuning only last 5% of layers |
inception-resnet-v2_finetune_last_15perc.ipynb | Inception-ResNet model, finetuning only last 15% of layers |
inception-v3_finetune_only_fc.ipynb | Inception model, finetuning only fully connected layers |
inception-v3_finetune_last_5perc.ipynb | Inception model, finetuning only last 5% of layers |
inception-v3_finetune_last_15perc.ipynb | Inception model, finetuning only last 15% of layers |
mobilenet-v2_finetune_only_fc.ipynb | MobileNet model, finetuning only fully connected layers |
mobilenet-v2_finetune_last_5perc.ipynb | MobileNet model, finetuning only last 5% of layers |
mobilenet-v2_finetune_last_15perc.ipynb | MobileNet model, finetuning only last 15% of layers |
nasnetmobile_finetune_only_fc.ipynb | NasNet-mobile model, finetuning only fully connected layers |
nasnetmobile_finetune_last_5perc.ipynb | NasNet-mobile model, finetuning only last 5% of layers |
nasnetmobile_finetune_last_15perc.ipynb | NasNet-mobile model, finetuning only last 15% of layers |
xception_finetune_only_fc.ipynb | Xception model, finetuning only fully connected layers |
xception_finetune_last_5perc.ipynb | Xception model, finetuning only last 5% of layers |
xception_finetune_last_15perc.ipynb | Xception model, finetuning only last 15% of layers |
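The three fine-tuning regimes in the table above (only the fully connected layers, the last 5% of layers, the last 15% of layers) amount to choosing which trailing fraction of a pretrained network is left trainable. A minimal, framework-agnostic sketch of that selection logic, with illustrative layer names:

```python
def layers_to_finetune(layer_names, fraction):
    """Map each layer name to a trainable flag: only the last
    `fraction` of layers (at least one) are unfrozen."""
    n_trainable = max(1, round(len(layer_names) * fraction))
    cutoff = len(layer_names) - n_trainable
    return {name: i >= cutoff for i, name in enumerate(layer_names)}

# Toy network with 20 layers; unfreeze the last 5% (a single layer)
names = [f"conv_{i}" for i in range(20)]
flags = layers_to_finetune(names, 0.05)
print(sum(flags.values()), "trainable layer(s)")
```

In a Keras-style workflow the same idea is applied by setting each layer's `trainable` attribute before compiling the model.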
Notebook | Description |
---|---|
baseline.ipynb | Original baseline model, equivalent to ICF 2.0 |
baseline_icf_1.9.ipynb | Modified baseline, ICF 1.9 |
baseline_icf_1.8.ipynb | Modified baseline, ICF 1.8 |
baseline_icf_1.7.ipynb | Modified baseline, ICF 1.7 |
baseline_icf_1.6.ipynb | Modified baseline, ICF 1.6 |
baseline_icf_1.5.ipynb | Modified baseline, ICF 1.5 |
baseline_icf_1.4.ipynb | Modified baseline, ICF 1.4 |
baseline_icf_1.3.ipynb | Modified baseline, ICF 1.3 |
baseline_icf_1.2.ipynb | Modified baseline, ICF 1.2 |
baseline_icf_1.1.ipynb | Modified baseline, ICF 1.1 |
baseline_icf_1.0.ipynb | Modified baseline, ICF 1.0 |
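The notebooks above sweep the ICF factor from the 2.0 baseline down to 1.0. As a rough illustration of how scaling layer widths shrinks a model, the sketch below estimates convolutional parameter counts under the assumption that ICF acts as a multiplier on channel widths relative to the ICF 2.0 baseline; both the interpretation of ICF and the baseline widths are hypothetical, not taken from the notebooks:

```python
def conv_params(in_ch, out_ch, k=3):
    """Parameter count of a k x k convolution (weights + biases)."""
    return k * k * in_ch * out_ch + out_ch

def scaled_params(base_widths, icf):
    """Total conv parameters when every layer width is scaled
    relative to the ICF 2.0 baseline (hypothetical interpretation)."""
    widths = [max(1, round(w * icf / 2.0)) for w in base_widths]
    total, prev = 0, 3  # 3 input channels (RGB)
    for w in widths:
        total += conv_params(prev, w)
        prev = w
    return total

base = [32, 64, 128]  # hypothetical baseline channel widths at ICF 2.0
for icf in (2.0, 1.7, 1.0):
    print(icf, scaled_params(base, icf))
```

Because parameter count grows with the product of adjacent widths, halving every width cuts the convolutional parameters by roughly four, which is how repeated width reductions compound into the two-orders-of-magnitude savings reported above.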
Notebook | Description |
---|---|
baseline_icf_1.7.ipynb | Modified baseline, ICF 1.7, 72 neurons in FC layers |
baseline_icf_1.7_nfc_63.ipynb | Modified baseline, ICF 1.7, 63 neurons in FC layers |
baseline_icf_1.7_nfc_54.ipynb | Modified baseline, ICF 1.7, 54 neurons in FC layers |
baseline_icf_1.7_nfc_45.ipynb | Modified baseline, ICF 1.7, 45 neurons in FC layers |
baseline_icf_1.7_nfc_36.ipynb | Modified baseline, ICF 1.7, 36 neurons in FC layers |
baseline_icf_1.7_nfc_27.ipynb | Modified baseline, ICF 1.7, 27 neurons in FC layers |
baseline_icf_1.7_nfc_18.ipynb | Modified baseline, ICF 1.7, 18 neurons in FC layers |
baseline_icf_1.7_nfc_9.ipynb | Modified baseline, ICF 1.7, 9 neurons in FC layers |
Notebook | Description |
---|---|
baseline_icf_1.7_nfc_36_weather.ipynb | Modified baseline, ICF 1.7, 36 neurons in FC layers. Plus Machine Learning models using weather data. |
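One simple way to combine CNN image features with the five weather variables from Table 1 is feature concatenation followed by a small classifier over the three RSC categories. The sketch below illustrates the idea with random stand-in data; the 36-dimensional feature vector mirrors the 36-neuron FC layer above, but the fusion details are an assumption, not the notebook's exact method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: a batch of 4 examples with a 36-dim image feature vector
# and the five weather variables (air temp, dew point, pressure,
# relative humidity, wind speed)
image_features = rng.normal(size=(4, 36))
weather = rng.normal(size=(4, 5))

# Fusion: concatenate, then apply a linear 3-class classifier
fused = np.concatenate([image_features, weather], axis=1)  # shape (4, 41)
W = rng.normal(size=(41, 3))  # 3 RSC categories
b = np.zeros(3)
logits = fused @ W + b
pred = logits.argmax(axis=1)
print(fused.shape, pred)
```

Feeding the weather measurements alongside the image features is what lets the model fall back on non-visual evidence when camera visibility is poor.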
Juan Carrillo gives special thanks to Mark Crowley at the Machine Learning Lab, and Guangyuan Pan and Liping Fu at the iTSS Lab for their mentoring and contributions during this research project. Thanks as well to Matthew Muresan and Taimur Usman for the interesting technical discussions.