From 62edf5449d146d3c721f0bbb453e4999891373f2 Mon Sep 17 00:00:00 2001
From: Ethan Tang
Date: Wed, 5 Jul 2017 14:49:20 -0700
Subject: [PATCH] Tensorflow work for DIGITS by Ethan (#7)

* Fix visualization when palette is None (#1177)
  The palette may be `None` when working with grayscale labels. Fix #1147
* Bugfix for customizing previous models (#1202)
* [Packaging] Disable tests (#1227)
* [Tests] Skip if extension not installed (#1263)
* [Docs] Fix spelling errors in comments
* [Docs] Add note about torch pkg and cusparse (#1303)
* [Caffe] Fix batch accumulation bug (#1307)
* Use official NVIDIA model store by default (#1308)
* Mark v5.0.0
* [Packaging] Pull latest docker image before build
* bAbI data plug-in
  Add utils; add inference form to bAbI dataset; allow inference without
  answer; allow unknown words in bAbI data plug-in; fix bAbI plugin; lint
  errors
* Tensorflow integration updates
  Use TFRecords for TF inference; TF: don't rescale inputs; fix some TF
  classification tests; remove unnecessary print; fix TF imports when
  uninstalled; fix mean image scale; fix generic model tests; fix Torch
  single-image inference; fix inference; TMP; TF lint; revert changes in the
  digits-lint script; lint: ignore tensorflow standard examples; more lint
  fixes
* Add .pgm to list of supported image file formats
* Restrict usage of cmap to labels DB in generic dataset exploration
  Fix #1322
* Update Object Detection example doc (#1323)
* [TravisCI] Cache local OpenBLAS build
  This fixes a Torch bug we've been having on Travis for a while now. We had
  only been building OpenBLAS from source when there was no cached Torch build
  present on the build machine. That meant you could get a cached build of
  Torch that was built against one version of OpenBLAS while the system
  actually installed an older version. This led to memory corruption and
  segmentation faults.
* [Tests] Skip if extension not installed (part 2) (#1337)
* [TravisCI] Install all plugins by default
  Also test no plugins
* [Tests] Skip if extension not installed (#1337)
* Add gradient hook
* Add memn2n model
* [Docs] Update model store documentation (#1346)
  TODO: add a screenshot of the official model store once approved
* Add steps to specify the Python layer file (#1347)
* [Docs] Install minimal boost libs for caffe
* Update memn2n with gradient hooks
* Remove the selenium walkthrough
* GAN example
* Make batch size variable
* Training/inference paths
* Small update to TF 0.12
* Snapshot names, float inference, restore all vars
* Update copyright year for 2017
* Add a few missing copyright notices
* Fix Siamese example (#1405)
  Broadcast -1 into all elements that equal 0 in the original label.
* [Packaging] Make nginx site easier to customize
* Do not restore global_step or optimizer variables
* Add TB link
* Update GAN network
* Dynamically select inference form
* TF inference: convert images to float
* Update GAN z-gen network
* Small update to model view layout
* Add GAN plug-ins
* Fix documentation typo
  train.txt and test.txt were swapped and shown in the wrong folders for the
  MNIST and CIFAR-10 datasets.
* Update GAN plug-in to create CelebA dataset
* Document a cuDNN workaround for text example (#1422)
* Add ability to show input in ImageOutput extension
* Add all data to raw data view extension
* Add model for CelebA dataset
* Update GAN data plug-in
* Update all losses in one session
* Remove conversion to .png in GAN data plug-in
* Correct shebang for prepare_pascal_voc_data.sh (#1450)
* [Docs] Document workaround for torch+hdf5 error
* Fix typo in ModelStore.md
* Fix typo in medical-imaging/README.md
* TF Slim LeNet example
  Divide input by 255
* Update GAN data plug-in
* Fix TF model snapshot
* Reduce scheduler delays to speed up inference
* Update GAN plugins
* Fix TF tests
* Add API to LmdbReader (used by gan_features.py)
* Save animated GIF
* Add GAN walk-through
* Update GAN walkthrough with embeddings video
* Fix GAN view for list encoding
* Fix bash lint with shellcheck
* Fix bugs when visiting nested image folder
* Add animation task to GAN plugins
* Fix shellcheck-related bug in PPA upload script
* Add view task to see image attributes
* Copy labels.txt inside the dataset
  Move import to the top
* Fix Distribution Graph
  Move backwards-compatibility to setstate
* Fix typo in Sunnybrook plug-in
* Add comments to GAN models
* Update README
* Fix GAN features script
* Fix a bug introduced when fixing shellcheck lint
* GAN app
* Fix another shellcheck-related bug
* Fix table formatting in README.md
* Fix DIGITS inference
* Adjust GAN window size automatically
* Add attributes to GAN app
* Move gandisplay.py
* Remove wxpython 3.0 selection
* Fix call to model
* Clamp distance values from segmentation boundaries before they are converted
  to uint8. The unclamped values were causing banding in the image because of
  wrapping at V % 256.
* lint
* [Docs] 5.0 debs and Ubuntu 16.04 support
* Adding disclaimer
* Display the filename of the image that caused the exception while loading
* Ported DIGITS to TensorFlow 1.1.0
  Got master branch working
* Fix softmax visualization by scaling to image range
* Added the official store image and updated the documentation (#1650)
* [TravisCI] Add `git fetch --unshallow` for DIST
  Useful for TravisCI builds in forks.
* Updated .gitignore
* First cherry-pick for installation scripts
* Tf install experimental (#2)
  WIP lint fix; linted most of what I can lint prior to asking for context;
  updated the model store URLs in the README; added debug output in the build
  scripts to understand the point of failure; added travis_wait to the
  OpenBLAS install; removed TensorFlow from the build process to see if it
  affects OpenBLAS; removed suppression of log contents; added set -x; fixed
  control; re-enabled TensorFlow to see if Travis builds; updated the numpy
  version to ensure a stable build on Travis (see open issue 8653 on the numpy
  GitHub); forced numpy to v1.8.1; got the Travis script to work for the
  TensorFlow installation; removed the OpenBLAS stuff that somehow made it in
  here; embarrassing merge residue; force-installed a specific numpy version
  because 1.13 was being installed; asdf; tried changing the TensorFlow
  install; reordered the installation to see if it builds now that TF uses
  numpy 1.13; cleaned up the installation to work with the numpy 1.13 upgrade
* Tf example (#3)
  Initial work on the autoencoder TF example; moved the example files to their
  proper location; attempting to get the autoencoder to work; autoencoder
  work; validated the TensorFlow autoencoder example; updated .gitignore;
  disabled comments in the segmentation-model.lua script to prevent crashing;
  committing the changes made to binary segmentation TF; adding work to do
  something else; I am seriously way too tired to write this commit message,
  it's just random bits of stuff; got binary segmentation and Siamese working;
  started to work on the regression network; milestone; got regression for TF
  working; got fine-tuning to work in TF; changed the code to the format
  wanted by Tim and Greg; got fine-tuning working
* Some small fixes
* Changes WRT PR
  Tried renaming the weights; tested renaming variables
* Fixed API problem for multiple GPUs
* Changes to example documentation
* Removed installing tests
* Updated most of the linting
* Removed unused block of code as per suggestion by Greg
* Removing spaces...
* Script changes for tensorflow (#1)
* Basic Tensorflow Support
  Added some initial TF tools. Implemented UI. Fixes for TensorFlow 0.10.
  Removed tf-slim as it is not part of the 0.10 master. Added the LMDB reader
  with a tf.cond that needs replacement. Implemented train and val separation
  with templating. Fixed issue with dequeueing both runners by pulling both
  graphs. Implemented training and validation rhythm. Added support for both
  .png and .jpg, and added 16-bit support. Implemented mean subtraction and
  optimized it by loading the mean as a shared constant. Wrapped the data
  loader in a factory to easily support more data types. Implemented cropping.
  Implemented floating-point support. Implemented separate LMDB database.
  Implemented regression support. Added some brief nosetests; need to invoke
  accuracy only on classification though. Implemented variable restoration
  (needs thorough testing). Implemented inferencing (not entirely polished).
  Moved some code into functions and started on modularization. Implemented
  DIGITS custom helper functions and custom printing ops. Implemented
  autoencoder; total rewrite of summaries. Implemented output to console from
  scalar summaries (only simple scalar values are parsed to the console).
  Implemented binary segmentation and necessary fixes. Implemented all
  possible optimizers and started work on the learning rate shaper. Fixes for
  learning rates; implemented optimizers; tested variable summary output to
  the UI. Implemented and tested all learning rates and optimizers. Introduced
  a new model definition and improvements in loss handling and graph layout.
  Major refactoring of the main code: implemented the new model description,
  implemented and tested inferencing and weight/snapshot loading, plus
  all-round minor updates and fixes. Fixes in the summary accumulator and
  implemented an RNN model. Fixes for mean subtraction in TF and the TF UI;
  implemented data order selection in the image-view extension. Implemented
  support for mean files in .png, .jpg, and .binaryproto format, the latter
  being the default that DIGITS provides. Added support for runtime statistics
  and some all-round fixes. Added static TensorBoard-style network
  visualization for TensorFlow, output of traces (no vis yet), and a loader
  while waiting for network vis. Implemented AlexNet standard network. Pulled
  in updates for the Travis build and added the TensorFlow install. Added two
  more files for Mr Travis. Implemented TensorFlow configuration and added the
  TF config to the docs. Fixes for Ubuntu deployment of TF; moved TF tools;
  more fixes for TF on Ubuntu. Some fixes and updates for TF in Travis; fix in
  network viz test. Implemented default single-GPU support and some nosetests.
  Fixes for inference. Added Siamese network and example .png, bugfixes, minor
  features, and some utility TF functions. Better error-UI format for network
  viz. Added an alternative, simpler Siamese network that doesn't need a
  separate DB. Preliminary version of HDF5 implemented. Implemented
  fine-tuning by renaming variables. Implemented visualisation of variables
  and the activations of the ops they belong to. Fix in inference
  visualisation naming; fixes in visualisation shapes and naming. Implemented
  softmax upon classification. Implemented all nosetests for TF classification
  and many all-round bugfixes. Implemented generic nosetests (some need work).
  Fix for Travis to find the Python exe. Implemented a better file format
  deducer and a bare minimal TFRecord reader. Added top_n accuracy shortcut.
  Implemented on-line data augmentation for TF (5 types) and a data
  augmentation test; need to do something with image whitening during
  validation and inference. Minor fixes and improvements from the linter.
  Implemented minimal multi-GPU support and fixes to get it running.
  Preliminary version of a TFRecord writer for classification and changes to
  optimize data loading for TFRecords. Fix generic data loading. Minor
  breaking changes but updates in name scoping. Implemented new model
  structure, improvements to multi-GPU handling, updates to namespaces, and
  accounting for regularization. Many all-round updates. Implemented proper
  visualisation for GPU devices. Converted AlexNet and VGG16 to the new
  format. Fix in TFRecord shape. Finalized support for TensorFlow timeline
  traces. Fixed AlexNet for TF. Fixed merge errors. Minified
  tf-graph-basic.build.js.
* Tf documentation (#4)
  Worked on TensorFlow docs; milestone; changed some typos; added
  documentation on how to specify which weights to train; changed docs for
  freezing variables; added more to the documentation; capitalized some
  letters; fixes to docs WRT the PR; changes WRT PR comments; documented the
  cuDNN versioning problem with TF; added images for the TensorFlow
  documentation; updated the TensorFlow download to 1.2; updated the pip
  command
* Greg gan work (#3)
  GAN support for DIGITS
* Tensorflow Work
* Fixed linting; removed debug lines in scripts
* Cleaning up residue from the Travis script
* Fixed a broken link
* Cleaned up more residue
* Somehow OpenBLAS made it through the merge
* Lint fix
* Updated documentation for using TensorBoard
* Added warnings for using TensorBoard on browsers other than Chrome
* Changed to using bootbox.alert()
* Initial commit for GoogLeNet implementation
* Milestone on inception module
* Finished GoogLeNet inference
* Finished GoogLeNet and refactored a bit of the other networks
* Committing to test this at the office
* Fixed GoogLeNet to get it working
* Somehow a bad version went through
* Switching to documentation
* Removed softmax before loss in GoogLeNet
* Initial prototype fix
* Updated .gitignore
* Updated GoogLeNet with the best working model and added a note about the
  auxiliary branches
* Lint; removed debug prints
* Tf gans review (#9)
  A typo made it through; edited the GAN examples to be compatible with
  TF 1.2; added tests for optimizers other than SGD; pointed the CelebA
  dataset link to its main page; removed a tf-events file
* merged rebase changes from
development repo * removing ADAM tests for caffe and torch due to incompatability * readded adam tests but commented out torch due to tuning issues * set version to 6.0 fixed linting and version number reverting back to 5.1-dev for version --- .gitignore | 13 + .travis.yml | 11 +- LICENSE | 2 +- README.md | 5 +- digits-devserver | 2 +- digits-lint | 6 +- digits-test | 2 +- digits-walkthrough | 240 ----------------- digits/__init__.py | 2 +- digits/__main__.py | 2 +- digits/config/__init__.py | 2 +- digits/config/caffe.py | 2 +- digits/config/gpu_list.py | 2 +- digits/config/jobs_dir.py | 2 +- digits/config/log_file.py | 2 +- digits/config/server_name.py | 2 +- digits/config/store_option.py | 4 +- digits/config/tensorflow.py | 2 +- digits/config/torch.py | 2 +- digits/dataset/__init__.py | 2 +- digits/dataset/forms.py | 2 +- digits/dataset/generic/__init__.py | 2 +- digits/dataset/generic/forms.py | 2 +- digits/dataset/generic/job.py | 4 +- digits/dataset/generic/test_views.py | 8 +- digits/dataset/generic/views.py | 8 +- digits/dataset/images/__init__.py | 2 +- .../dataset/images/classification/__init__.py | 2 +- digits/dataset/images/classification/forms.py | 2 +- digits/dataset/images/classification/job.py | 4 +- .../classification/test_imageset_creator.py | 2 +- .../images/classification/test_views.py | 2 +- digits/dataset/images/classification/views.py | 9 +- digits/dataset/images/forms.py | 2 +- digits/dataset/images/generic/__init__.py | 2 +- digits/dataset/images/generic/forms.py | 2 +- digits/dataset/images/generic/job.py | 4 +- .../images/generic/test_lmdb_creator.py | 2 +- digits/dataset/images/generic/test_views.py | 2 +- digits/dataset/images/generic/views.py | 2 +- digits/dataset/images/job.py | 4 +- digits/dataset/images/views.py | 2 +- digits/dataset/job.py | 4 +- digits/dataset/tasks/__init__.py | 2 +- digits/dataset/tasks/analyze_db.py | 4 +- digits/dataset/tasks/create_db.py | 86 ++++-- digits/dataset/tasks/create_generic_db.py | 4 +- digits/dataset/tasks/parse_folder.py | 4 +- digits/dataset/views.py | 2 +- digits/device_query.py | 2 +- digits/download_data/__main__.py | 2 +- digits/download_data/cifar10.py | 2 +- digits/download_data/cifar100.py | 2 +- digits/download_data/downloader.py | 2 +- digits/download_data/mnist.py | 2 +- digits/extensions/__init__.py | 2 +- digits/extensions/data/__init__.py | 2 +- .../data/imageProcessing/__init__.py | 2 +- .../extensions/data/imageProcessing/data.py | 4 +- .../extensions/data/imageProcessing/forms.py | 2 +- .../data/imageProcessing/template.html | 2 +- .../data/imageSegmentation/__init__.py | 2 +- .../extensions/data/imageSegmentation/data.py | 4 +- .../data/imageSegmentation/forms.py | 2 +- .../data/imageSegmentation/template.html | 2 +- digits/extensions/data/interface.py | 2 +- .../extensions/data/objectDetection/README.md | 2 +- .../data/objectDetection/__init__.py | 2 +- .../extensions/data/objectDetection/data.py | 4 +- .../extensions/data/objectDetection/forms.py | 2 +- .../data/objectDetection/template.html | 2 +- .../extensions/data/objectDetection/utils.py | 4 +- digits/extensions/view/__init__.py | 2 +- .../extensions/view/boundingBox/__init__.py | 2 +- .../view/boundingBox/app_begin_template.html | 2 +- .../view/boundingBox/app_end_template.html | 2 +- .../view/boundingBox/config_template.html | 2 +- digits/extensions/view/boundingBox/forms.py | 2 +- .../view/boundingBox/header_template.html | 2 +- digits/extensions/view/boundingBox/view.py | 2 +- .../view/boundingBox/view_template.html | 2 +- 
.../extensions/view/imageOutput/__init__.py | 2 +- .../view/imageOutput/config_template.html | 2 +- digits/extensions/view/imageOutput/forms.py | 2 +- digits/extensions/view/imageOutput/view.py | 2 +- .../view/imageOutput/view_template.html | 2 +- .../view/imageSegmentation/__init__.py | 2 +- .../imageSegmentation/app_begin_template.html | 2 +- .../imageSegmentation/app_end_template.html | 2 +- .../imageSegmentation/config_template.html | 2 +- .../view/imageSegmentation/forms.py | 2 +- .../imageSegmentation/header_template.html | 2 +- .../view/imageSegmentation/static/css/app.css | 2 +- .../view/imageSegmentation/static/js/app.js | 2 +- .../extensions/view/imageSegmentation/view.py | 44 ++-- .../view/imageSegmentation/view_template.html | 2 +- digits/extensions/view/interface.py | 2 +- digits/extensions/view/rawData/__init__.py | 2 +- .../view/rawData/config_template.html | 2 +- digits/extensions/view/rawData/forms.py | 2 +- .../view/rawData/header_template.html | 7 + digits/extensions/view/rawData/view.py | 2 +- .../view/rawData/view_template.html | 2 +- digits/frameworks/__init__.py | 2 +- digits/frameworks/caffe_framework.py | 2 +- digits/frameworks/errors.py | 2 +- digits/frameworks/framework.py | 2 +- digits/frameworks/torch_framework.py | 2 +- digits/inference/__init__.py | 2 +- digits/inference/errors.py | 2 +- digits/inference/images/__init__.py | 2 +- digits/inference/images/job.py | 2 +- digits/inference/job.py | 2 +- digits/inference/tasks/__init__.py | 2 +- digits/inference/tasks/inference.py | 2 +- digits/job.py | 4 +- digits/log.py | 2 +- digits/model/__init__.py | 2 +- digits/model/forms.py | 10 +- digits/model/images/__init__.py | 2 +- .../model/images/classification/__init__.py | 2 +- digits/model/images/classification/forms.py | 2 +- digits/model/images/classification/job.py | 4 +- .../model/images/classification/test_views.py | 18 +- digits/model/images/classification/views.py | 4 +- digits/model/images/forms.py | 2 +- digits/model/images/generic/__init__.py | 2 +- digits/model/images/generic/forms.py | 2 +- digits/model/images/generic/job.py | 4 +- digits/model/images/generic/test_views.py | 2 +- digits/model/images/generic/views.py | 2 +- digits/model/images/job.py | 4 +- digits/model/images/views.py | 2 +- digits/model/job.py | 4 +- digits/model/tasks/__init__.py | 2 +- digits/model/tasks/caffe_train.py | 20 +- .../model/tasks/test_caffe_sanity_checks.py | 2 +- digits/model/tasks/test_caffe_train.py | 2 +- digits/model/tasks/torch_train.py | 4 +- digits/model/tasks/train.py | 8 +- digits/model/views.py | 12 +- digits/pretrained_model/__init__.py | 2 +- digits/pretrained_model/job.py | 2 +- digits/pretrained_model/tasks/__init__.py | 2 +- digits/pretrained_model/tasks/caffe_upload.py | 2 +- digits/pretrained_model/tasks/torch_upload.py | 2 +- .../tasks/upload_pretrained.py | 2 +- digits/pretrained_model/test_views.py | 2 +- digits/pretrained_model/views.py | 2 +- digits/scheduler.py | 2 +- .../standard-networks/tensorflow/alexnet.py | 7 +- .../tensorflow/alexnet_slim.py | 7 +- .../tensorflow/binary_segmentation.py | 23 -- .../standard-networks/tensorflow/googlenet.py | 201 +++++++++++++++ digits/standard-networks/tensorflow/lenet.py | 7 +- .../tensorflow/lenet_slim.py | 7 +- .../standard-networks/tensorflow/rnn_mnist.py | 53 ---- .../standard-networks/tensorflow/siamese.py | 38 --- .../tensorflow/siamese_simple.py | 38 --- digits/standard-networks/tensorflow/vgg16.py | 2 +- .../torch/ImageNet-Training/googlenet.lua | 2 +- digits/static/css/style.css | 6 +- 
digits/static/js/PretrainedModel.js | 2 +- digits/static/js/digits.js | 2 +- digits/static/js/file_field.js | 2 +- digits/static/js/home_app.js | 2 +- digits/static/js/model-graphs.js | 2 +- digits/static/js/store.js | 2 +- digits/static/js/time_filters.js | 2 +- digits/status.py | 2 +- digits/store/views.py | 2 +- digits/task.py | 4 +- digits/templates/datasets/generic/new.html | 2 +- digits/templates/datasets/generic/show.html | 2 +- .../templates/datasets/generic/summary.html | 2 +- .../datasets/images/classification/new.html | 2 +- .../datasets/images/classification/show.html | 7 +- .../images/classification/summary.html | 2 +- digits/templates/datasets/images/explore.html | 2 +- .../datasets/images/generic/new.html | 2 +- .../datasets/images/generic/show.html | 2 +- .../datasets/images/generic/summary.html | 2 +- digits/templates/error.html | 2 +- digits/templates/helper.html | 2 +- digits/templates/home.html | 2 +- digits/templates/job.html | 2 +- digits/templates/layout.html | 2 +- digits/templates/login.html | 2 +- .../templates/models/data_augmentation.html | 2 +- digits/templates/models/gpu_utilization.html | 2 +- .../images/classification/classify_many.html | 2 +- .../images/classification/classify_one.html | 2 +- .../custom_network_explanation.html | 9 +- .../models/images/classification/new.html | 28 +- .../partials/new/network_tab_pretrained.html | 2 +- .../partials/new/network_tab_previous.html | 2 +- .../partials/new/network_tab_standard.html | 2 +- .../models/images/classification/show.html | 2 +- .../models/images/classification/top_n.html | 2 +- .../generic/custom_network_explanation.html | 9 +- .../models/images/generic/infer_db.html | 2 +- .../images/generic/infer_extension.html | 2 +- .../models/images/generic/infer_many.html | 2 +- .../models/images/generic/infer_one.html | 2 +- .../models/images/generic/large_graph.html | 39 +++ .../templates/models/images/generic/new.html | 28 +- .../partials/new/network_tab_pretrained.html | 2 +- .../partials/new/network_tab_previous.html | 2 +- .../partials/new/network_tab_standard.html | 2 +- .../templates/models/images/generic/show.html | 2 +- digits/templates/models/large_graph.html | 2 +- .../models/python_layer_explanation.html | 2 +- .../partials/home/datasets_pane.html | 2 +- .../templates/partials/home/model_pane.html | 2 +- .../partials/home/pretrained_model_pane.html | 2 +- .../home/upload_pretrained_model.html | 2 +- digits/templates/socketio.html | 2 +- digits/templates/status_updates.html | 2 +- digits/templates/store.html | 2 +- digits/test_device_query.py | 2 +- digits/test_scheduler.py | 2 +- digits/test_status.py | 2 +- digits/test_utils.py | 2 +- digits/test_version.py | 2 +- digits/test_views.py | 2 +- digits/tools/analyze_db.py | 2 +- digits/tools/create_db.py | 4 +- digits/tools/create_generic_db.py | 2 +- digits/tools/inference.py | 2 +- digits/tools/parse_folder.py | 2 +- digits/tools/resize_image.py | 2 +- digits/tools/tensorflow/gan_grid.py | 26 +- digits/tools/tensorflow/gandisplay.py | 69 ++--- digits/tools/tensorflow/main.py | 4 +- digits/tools/tensorflow/model.py | 76 +++--- digits/tools/tensorflow/tf_data.py | 12 +- digits/tools/tensorflow/utils.py | 8 +- digits/tools/test_analyze_db.py | 2 +- digits/tools/test_create_db.py | 2 +- digits/tools/test_create_generic_db.py | 5 +- digits/tools/test_parse_folder.py | 2 +- digits/tools/test_resize_image.py | 2 +- digits/tools/torch/LRPolicy.lua | 4 +- digits/tools/torch/Optimizer.lua | 2 +- digits/tools/torch/data.lua | 6 +- digits/tools/torch/datum.proto | 
2 +- digits/tools/torch/logmessage.lua | 2 +- digits/tools/torch/main.lua | 2 +- digits/tools/torch/test.lua | 2 +- digits/tools/torch/utils.lua | 2 +- digits/tools/torch/wrapper.lua | 2 +- digits/utils/__init__.py | 2 +- digits/utils/auth.py | 2 +- digits/utils/constants.py | 2 +- digits/utils/errors.py | 2 +- digits/utils/filesystem.py | 2 +- digits/utils/forms.py | 2 +- digits/utils/image.py | 8 +- digits/utils/lmdbreader.py | 2 +- digits/utils/routing.py | 2 +- digits/utils/store.py | 2 +- digits/utils/test_filesystem.py | 2 +- digits/utils/test_image.py | 2 +- digits/utils/test_time_filters.py | 2 +- digits/utils/test_utils.py | 2 +- digits/utils/time_filters.py | 2 +- digits/version.py | 2 +- digits/views.py | 2 +- digits/webapp.py | 2 +- docs/BuildCaffe.md | 2 +- docs/BuildDigits.md | 4 + docs/BuildTensorflow.md | 51 +++- docs/BuildTorch.md | 2 +- docs/Configuration.md | 2 +- docs/DevelopmentSetup.md | 36 +++ docs/GettingStartedTensorflow.md | 244 ++++++++++++++++++ docs/GettingStartedTorch.md | 3 - docs/ModelStore.md | 122 ++++++--- docs/StandardDatasets.md | 8 +- docs/UbuntuInstall.md | 87 +++++-- docs/images/Select_TensorFlow.png | Bin 0 -> 36045 bytes docs/images/TensorBoard.png | Bin 0 -> 56199 bytes docs/images/job-dir.png | Bin 0 -> 29935 bytes docs/images/model-store-import.png | Bin 58989 -> 0 bytes docs/images/model-store-list.png | Bin 71360 -> 0 bytes docs/images/model-store/custom.jpg | Bin 0 -> 52678 bytes docs/images/model-store/home.jpg | Bin 0 -> 61000 bytes docs/images/model-store/official.png | Bin 0 -> 42573 bytes docs/images/visualize-btn.png | Bin 0 -> 46115 bytes docs/images/visualize_button.png | Bin 0 -> 15731 bytes examples/autoencoder/README.md | 47 +++- .../autoencoder/autoencoder-TF.py | 0 examples/binary-segmentation/README.md | 14 +- .../binary_segmentation-TF.py | 22 ++ examples/binary-segmentation/create_images.py | 2 +- .../segmentation-model.lua | 2 +- examples/classification/example.py | 2 +- examples/classification/use_archive.py | 2 +- examples/fine-tuning/README.md | 6 + examples/fine-tuning/create_dataset.sh | 9 +- examples/fine-tuning/lenet-fine-tune-tf.py | 84 ++++++ examples/fine-tuning/lenet-fine-tune.lua | 8 +- examples/gan/README.md | 4 +- examples/gan/gan_embeddings.py | 4 +- examples/gan/network-celebA-encoder.py | 46 ++-- examples/gan/network-celebA.py | 59 ++--- examples/gan/network-mnist-encoder.py | 36 ++- examples/gan/network-mnist.py | 44 ++-- examples/medical-imaging/README.md | 4 +- examples/object-detection/README.md | 12 +- .../object-detection/display-options-menu.jpg | Bin 0 -> 29067 bytes .../object-detection/prepare_kitti_data.py | 2 +- .../object-detection/select-visualization.jpg | Bin 38872 -> 135235 bytes examples/question-answering/memn2n.py | 14 +- examples/regression/README.md | 41 ++- examples/regression/regression_mnist-TF.py | 33 +++ examples/semantic-segmentation/net_surgery.py | 2 +- .../prepare_pascal_voc_data.sh | 36 +-- examples/siamese/README.md | 4 + examples/siamese/create_db.py | 2 +- examples/siamese/mnist_siamese.lua | 3 +- examples/siamese/siamese-TF.py | 40 +++ examples/text-classification/README.md | 17 ++ .../text-classification/create_dataset.py | 2 +- packaging/deb/build.sh | 47 ++-- packaging/deb/extras/digits.nginx-site | 3 +- packaging/deb/templates/control | 4 +- .../__init__.py | 2 +- .../digitsDataPluginImageGradients/data.py | 2 +- .../digitsDataPluginImageGradients/forms.py | 2 +- .../templates/inference_template.html | 2 +- .../templates/template.html | 2 +- 
plugins/data/imageGradients/setup.py | 2 +- .../digitsDataPluginSunnybrook/__init__.py | 2 +- .../digitsDataPluginSunnybrook/data.py | 2 +- .../digitsDataPluginSunnybrook/forms.py | 2 +- .../templates/dataset_template.html | 2 +- .../templates/inference_template.html | 4 +- .../templates/template.html | 2 +- plugins/data/sunnybrook/setup.py | 2 +- .../__init__.py | 2 +- .../data.py | 2 +- .../forms.py | 2 +- .../templates/dataset_template.html | 2 +- .../templates/inference_template.html | 2 +- .../templates/template.html | 2 +- plugins/data/textClassification/setup.py | 2 +- plugins/view/gan/digitsViewPluginGan/forms.py | 7 +- plugins/view/gan/digitsViewPluginGan/view.py | 18 +- .../__init__.py | 2 +- .../digitsViewPluginImageGradients/forms.py | 2 +- .../templates/config_template.html | 2 +- .../templates/view_template.html | 2 +- .../digitsViewPluginImageGradients/view.py | 2 +- plugins/view/imageGradients/setup.py | 2 +- .../__init__.py | 2 +- .../forms.py | 2 +- .../templates/config_template.html | 2 +- .../templates/view_template.html | 2 +- .../view.py | 2 +- plugins/view/textClassification/setup.py | 2 +- requirements.txt | 2 +- scripts/travis/bust-cache.sh | 7 +- scripts/travis/install-caffe.sh | 21 +- scripts/travis/install-tensorflow.sh | 3 +- scripts/travis/install-torch.sh | 30 +-- scripts/travis/ppa-upload.sh | 22 +- scripts/travis/pypi-upload.sh | 6 +- setup.py | 2 +- 369 files changed, 1918 insertions(+), 1212 deletions(-) delete mode 100755 digits-walkthrough create mode 100644 digits/extensions/view/rawData/header_template.html delete mode 100644 digits/standard-networks/tensorflow/binary_segmentation.py create mode 100644 digits/standard-networks/tensorflow/googlenet.py delete mode 100644 digits/standard-networks/tensorflow/rnn_mnist.py delete mode 100644 digits/standard-networks/tensorflow/siamese.py delete mode 100644 digits/standard-networks/tensorflow/siamese_simple.py create mode 100644 digits/templates/models/images/generic/large_graph.html create mode 100644 docs/DevelopmentSetup.md create mode 100644 docs/GettingStartedTensorflow.md create mode 100644 docs/images/Select_TensorFlow.png create mode 100644 docs/images/TensorBoard.png create mode 100644 docs/images/job-dir.png delete mode 100644 docs/images/model-store-import.png delete mode 100644 docs/images/model-store-list.png create mode 100644 docs/images/model-store/custom.jpg create mode 100644 docs/images/model-store/home.jpg create mode 100644 docs/images/model-store/official.png create mode 100644 docs/images/visualize-btn.png create mode 100644 docs/images/visualize_button.png rename digits/standard-networks/tensorflow/autoencoder.py => examples/autoencoder/autoencoder-TF.py (100%) create mode 100644 examples/binary-segmentation/binary_segmentation-TF.py create mode 100644 examples/fine-tuning/lenet-fine-tune-tf.py create mode 100644 examples/object-detection/display-options-menu.jpg create mode 100644 examples/regression/regression_mnist-TF.py create mode 100644 examples/siamese/siamese-TF.py diff --git a/.gitignore b/.gitignore index 3107ad093..3aa64db37 100644 --- a/.gitignore +++ b/.gitignore @@ -17,3 +17,16 @@ TAGS /build/ /dist/ *.egg-info/ + +#Intellij files +.idea/ + +#vscode +.vscode/ + +#.project +.project +/.project + +#.tb +.tb/ \ No newline at end of file diff --git a/.travis.yml b/.travis.yml index 2eec43de8..a5afee2ef 100644 --- a/.travis.yml +++ b/.travis.yml @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. 
All rights reserved. os: linux dist: trusty @@ -10,7 +10,6 @@ env: global: - CAFFE_ROOT=~/caffe - TORCH_ROOT=~/torch - # Fixes for Torch and OpenBLAS - OMP_NUM_THREADS=1 - OPENBLAS_MAIN_FREE=1 - secure: "WSqrE+PQm76DdoRLRGKTK6fRWfXZjIb0BWCZm3IgHgFO7OE6fcK2tBnpDNNw4XQjmo27FFWlEhxN32g18P84n5PvErHaH65IuS9Nv6FkLlPXZlVqGNxbPmEA4oTkD/6Y6kZyZWZtLh2+/1ijuzQAPnIy/4BEuL8pdO+PsoJ9hYM=" @@ -20,6 +19,7 @@ env: - DIGITS_TEST_FRAMEWORK=torch - DIGITS_TEST_FRAMEWORK=tensorflow - DIGITS_TEST_FRAMEWORK=none + - DIGITS_TEST_FRAMEWORK=none WITH_PLUGINS=false matrix: include: @@ -43,6 +43,7 @@ matrix: - dput - gnupg install: + - git fetch --unshallow - git remote add nvidia-digits-upstream https://github.com/NVIDIA/DIGITS.git # for forks - git fetch nvidia-digits-upstream --tags - pip install twine @@ -130,13 +131,11 @@ install: - echo "backend:agg" > ~/.config/matplotlib/matplotlibrc - ./scripts/travis/install-caffe.sh $CAFFE_ROOT - if [ "$DIGITS_TEST_FRAMEWORK" == "torch" ]; then travis_wait ./scripts/travis/install-torch.sh $TORCH_ROOT; else unset TORCH_ROOT; fi + - pip install -r ./requirements.txt --force-reinstall - if [ "$DIGITS_TEST_FRAMEWORK" == "tensorflow" ]; then travis_wait ./scripts/travis/install-tensorflow.sh; fi - - pip install -r ./requirements.txt - pip install -r ./requirements_test.txt - pip install -e . - - pip install -e ./plugins/data/imageGradients - - pip install -e ./plugins/view/imageGradients + - if [ "$WITH_PLUGINS" != "false" ]; then find ./plugins/*/* -maxdepth 0 -type d | xargs -n1 pip install -e; fi script: - ./digits-test -v - diff --git a/LICENSE b/LICENSE index aa450a092..61e21d27a 100644 --- a/LICENSE +++ b/LICENSE @@ -1,4 +1,4 @@ -Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions diff --git a/README.md b/README.md index a7dcbdb13..38420269e 100644 --- a/README.md +++ b/README.md @@ -4,11 +4,13 @@ DIGITS (the **D**eep Learning **G**PU **T**raining **S**ystem) is a webapp for training deep learning models. 
+The currently supported frameworks are: Caffe 1, Torch, and Tensorflow + # Installation | Installation method | Supported platform[s] | Available versions | Instructions | | --- | --- | --- | --- | -| Deb packages | Ubuntu 14.04 | [14.04 repo](http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1404/x86_64) | [docs/UbuntuInstall.md](docs/UbuntuInstall.md) | +| Deb packages | Ubuntu 14.04, 16.04 | [14.04 repo](http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1404/x86_64), [16.04 repo](http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64) | [docs/UbuntuInstall.md](docs/UbuntuInstall.md) | | Docker | Linux | [DockerHub tags](https://hub.docker.com/r/nvidia/digits/tags/) | [nvidia-docker wiki](https://github.com/NVIDIA/nvidia-docker/wiki/DIGITS) | | Source | Ubuntu 14.04, 16.04 | [GitHub tags](https://github.com/NVIDIA/DIGITS/releases) | [docs/BuildDigits.md](docs/BuildDigits.md) | @@ -18,6 +20,7 @@ Once you have installed DIGITS, visit [docs/GettingStarted.md](docs/GettingStart Then, take a look at some of the other documentation at [docs/](docs/) and [examples/](examples/): +* [Getting started with TensorFlow](docs/GettingStartedTensorflow.md) * [Getting started with Torch](docs/GettingStartedTorch.md) * [Fine-tune a pretrained model](examples/fine-tuning/README.md) * [Train an autoencoder network](examples/autoencoder/README.md) diff --git a/digits-devserver b/digits-devserver index 604f07af7..642fdd0ca 100755 --- a/digits-devserver +++ b/digits-devserver @@ -1,5 +1,5 @@ #!/bin/bash -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. set -e diff --git a/digits-lint b/digits-lint index e70be10f7..fc5f3e892 100755 --- a/digits-lint +++ b/digits-lint @@ -1,13 +1,13 @@ #!/bin/bash -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. set -e echo "=== Checking for Python lint ..." if which flake8 >/dev/null 2>&1; then - python2 `which flake8` . + python2 `which flake8` --exclude ./examples,./digits/standard-networks/tensorflow,./digits/jobs . else - python2 -m flake8 . + python2 -m flake8 --exclude ./examples,./digits/standard-networks/tensorflow,./digits/jobs . fi echo "=== Checking for JavaScript lint ..." diff --git a/digits-test b/digits-test index a4aae59c5..3dbc8005c 100755 --- a/digits-test +++ b/digits-test @@ -1,5 +1,5 @@ #!/bin/bash -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. set -e diff --git a/digits-walkthrough b/digits-walkthrough deleted file mode 100755 index 290baccfa..000000000 --- a/digits-walkthrough +++ /dev/null @@ -1,240 +0,0 @@ -#!/usr/bin/env python2 -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. 
- -import argparse -import json -import os -import requests -import socket -import sys -import time -from urlparse import urlparse - -from selenium import webdriver -from selenium.webdriver.common.action_chains import ActionChains -from selenium.webdriver.common.keys import Keys - -wait_time = 2 - - -def wait(s=wait_time): - time.sleep(s) - - -def get_page(driver, url): - driver.get(url) - wait() - - -def create_dataset(driver, name, folder): - dropdown_elements = driver.find_elements_by_class_name('dropdown-toggle') - dropdown_dataset = dropdown_elements[0] - dropdown_dataset.click() - wait() - - dropdown_menu = driver.find_elements_by_class_name('dropdown-menu') - dropdown_menu_dataset = dropdown_menu[0] - dataset_link = dropdown_menu_dataset.find_element_by_tag_name('a') - dataset_link.click() - wait() - - folder_train_tooltip = driver.find_element_by_name('folder_train_explanation') - folder_train_tooltip.click() - wait() - - folder_train = driver.find_element_by_name('folder_train') - folder_train.send_keys(folder) - wait() - - resize_channels_tooltip = driver.find_element_by_name('resize_channels_explanation') - resize_channels_tooltip.click() - wait() - - image_type = driver.find_element_by_name('resize_channels') - image_type.click() - wait() - for option in image_type.find_elements_by_tag_name('option'): - if option.text == 'Grayscale': - image_type.click() - option.click() - break - wait() - - resize_width_tooltip = driver.find_element_by_name('resize_dims_explanation') - resize_width_tooltip.click() - wait() - - resize_width = driver.find_element_by_name('resize_width') - resize_width.clear() - resize_width.send_keys('28') - wait() - - resize_height = driver.find_element_by_name('resize_height') - resize_height.clear() - resize_height.send_keys('28') - wait() - - dataset_name = driver.find_element_by_name('dataset_name') - dataset_name.click() - dataset_name.send_keys(name) - wait() - - create_button = driver.find_element_by_name('create-dataset') - create_button.click() - - job_url = driver.current_url.replace('datasets', 'jobs') - status_url = job_url + '/status' - done = False - while not done: - r = requests.get(status_url) - status = json.loads(r.content) - done = status['status'] == 'Done' - wait() - wait() - - -def create_model(driver, name, dataset_name, test_image): - dropdown_elements = driver.find_elements_by_class_name('dropdown-toggle') - dropdown_model = dropdown_elements[1] - dropdown_model.click() - wait() - - dropdown_menu = driver.find_elements_by_class_name('dropdown-menu') - dropdown_menu_model = dropdown_menu[1] - model_link = dropdown_menu_model.find_element_by_tag_name('a') - model_link.click() - # move to 0,0 so we don't accidentally select a hover element - body = driver.find_element_by_css_selector('body') - body.click() - wait() - - dataset_tooltip = driver.find_element_by_name('dataset_explanation') - dataset_tooltip.click() - wait() - - datasets = driver.find_element_by_name('dataset') - for option in datasets.find_elements_by_tag_name('option'): - if option.text == dataset_name: - option.click() - break - wait() - - standard_networks = driver.find_elements_by_name('standard_networks') - lenet = standard_networks[0] - lenet.click() - wait() - - model_name = driver.find_element_by_name('model_name') - model_name.click() - model_name.send_keys(name) - wait() - - create_button = driver.find_element_by_name('create-model') - create_button.click() - - job_url = driver.current_url.replace('models', 'jobs') - status_url = job_url + '/status' - done = False 
- while not done: - r = requests.get(status_url) - status = json.loads(r.content) - done = status['status'] == 'Done' - wait() - # driver.refresh() - - # test image - print 'Testing...' - image_path = driver.find_element_by_name('image_url') - image_path.send_keys(test_image) - wait() - - show_visualizations_tooltip = driver.find_element_by_name('show_visualizations_explanation') - show_visualizations_tooltip.click() - wait() - - show_visualizations = driver.find_element_by_name('show_visualizations') - show_visualizations.click() - wait() - - test_button = driver.find_element_by_name('classify-one-btn') - test_button.click() - - # Opens in a new window - switch to it - driver.close() - driver.switch_to_window(driver.window_handles[-1]) - - -def main(argv): - parser = argparse.ArgumentParser(description='Run a Selenium demo of DIGITS') - - # Positional arguments - - parser.add_argument('mnist_image_folder', - type=str, - help='Path to the MNIST dataset folder') - parser.add_argument('test_image', - type=str, - help='Image to test with') - - # Optional arguments - - parser.add_argument('-p', '--port', - type=int, - default=80, - help='Port the server is running on (default 80)') - - args = vars(parser.parse_args()) - - home_page = 'http://0.0.0.0:%d/' % args['port'] - dataset_path = args['mnist_image_folder'] - dataset_name = 'MNIST Dataset' - model_name = 'MNIST Model' - test_image = args['test_image'] - - r = requests.get(home_page) - assert r.status_code == requests.codes.ok, 'page "%s" does not exist - are you looking on the wrong port?' % home_page - - # Start selenium driver. - driver = webdriver.Firefox() - print 'Firefox webdriver started.' - mouse = webdriver.ActionChains(driver) - - try: - driver.maximize_window() - - get_page(driver, home_page) - - print 'Creating dataset...' - create_dataset(driver, dataset_name, dataset_path) - - get_page(driver, home_page) - - print 'Creating model...' - create_model(driver, model_name, dataset_name, test_image) - - print 'Done.' - - # display an alert message - get_page(driver, "javascript:alert('Completed Walkthrough!');void(0);") - wait() - driver.switch_to_alert().accept() - - # wait until the window is closed - while True: - try: - get_page(driver, "javascript:console.log('Waiting...');void(0);") - except Exception as e: - print e - break - wait() - - except KeyboardInterrupt: - pass - finally: - try: - driver.quit() - except socket.error: - pass - -if __name__ == '__main__': - main(sys.argv) diff --git a/digits/__init__.py b/digits/__init__.py index c5b2642fd..0eb5e59f9 100644 --- a/digits/__init__.py +++ b/digits/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from .version import __version__ diff --git a/digits/__main__.py b/digits/__main__.py index 95c453424..0bbfac5bf 100644 --- a/digits/__main__.py +++ b/digits/__main__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. import argparse import os.path diff --git a/digits/config/__init__.py b/digits/config/__init__.py index 3495ed6c2..060d7b36c 100644 --- a/digits/config/__init__.py +++ b/digits/config/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. 
from __future__ import absolute_import # Create this object before importing the following imports, since they edit the list diff --git a/digits/config/caffe.py b/digits/config/caffe.py index d389397ce..db8aedc94 100644 --- a/digits/config/caffe.py +++ b/digits/config/caffe.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import imp diff --git a/digits/config/gpu_list.py b/digits/config/gpu_list.py index 9cc9f37de..32c9b6e41 100644 --- a/digits/config/gpu_list.py +++ b/digits/config/gpu_list.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from . import option_list diff --git a/digits/config/jobs_dir.py b/digits/config/jobs_dir.py index cf654fb45..e28d5ea68 100644 --- a/digits/config/jobs_dir.py +++ b/digits/config/jobs_dir.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os diff --git a/digits/config/log_file.py b/digits/config/log_file.py index edbb35ef5..fe469052a 100644 --- a/digits/config/log_file.py +++ b/digits/config/log_file.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import logging diff --git a/digits/config/server_name.py b/digits/config/server_name.py index 44a9fa54d..169465883 100644 --- a/digits/config/server_name.py +++ b/digits/config/server_name.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os diff --git a/digits/config/store_option.py b/digits/config/store_option.py index 87df162c0..474152e5d 100644 --- a/digits/config/store_option.py +++ b/digits/config/store_option.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os @@ -29,7 +29,7 @@ def load_url_list(): if 'DIGITS_MODEL_STORE_URL' in os.environ: url_list = os.environ['DIGITS_MODEL_STORE_URL'] else: - url_list = "" + url_list = "http://developer.download.nvidia.com/compute/machine-learning/modelstore/5.0" return validate(url_list).split(',') diff --git a/digits/config/tensorflow.py b/digits/config/tensorflow.py index bbf6f46b7..10a4465fe 100644 --- a/digits/config/tensorflow.py +++ b/digits/config/tensorflow.py @@ -34,7 +34,7 @@ def test_tf_import(python_exe): if not tf_enabled: print('Tensorflow support disabled.') -# print('Failed importing Tensorflow with python executable "%s"\n%s' % (tf_python_exe, err)) +# print('Failed importing Tensorflow with python executable "%s"\n%s' % (tf_python_exe, err)) if tf_enabled: option_list['tensorflow'] = { diff --git a/digits/config/torch.py b/digits/config/torch.py index 36862ad3f..2d04fe5e0 100644 --- a/digits/config/torch.py +++ b/digits/config/torch.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. 
from __future__ import absolute_import import os diff --git a/digits/dataset/__init__.py b/digits/dataset/__init__.py index 3e3f880f0..218933bc8 100644 --- a/digits/dataset/__init__.py +++ b/digits/dataset/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from .images import ImageClassificationDatasetJob, GenericImageDatasetJob diff --git a/digits/dataset/forms.py b/digits/dataset/forms.py index aca2a4539..e8133ff75 100644 --- a/digits/dataset/forms.py +++ b/digits/dataset/forms.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from flask.ext.wtf import Form diff --git a/digits/dataset/generic/__init__.py b/digits/dataset/generic/__init__.py index 9dce4cff5..850b095b4 100644 --- a/digits/dataset/generic/__init__.py +++ b/digits/dataset/generic/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from .job import GenericDatasetJob diff --git a/digits/dataset/generic/forms.py b/digits/dataset/generic/forms.py index 73f4f5905..5ade5301f 100644 --- a/digits/dataset/generic/forms.py +++ b/digits/dataset/generic/forms.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import wtforms diff --git a/digits/dataset/generic/job.py b/digits/dataset/generic/job.py index 5dea7b0a5..e40b0d44d 100644 --- a/digits/dataset/generic/job.py +++ b/digits/dataset/generic/job.py @@ -1,11 +1,11 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from ..job import DatasetJob from digits.dataset import tasks from digits.utils import subclass, override, constants -# NOTE: Increment this everytime the pickled object changes +# NOTE: Increment this every time the pickled object changes PICKLE_VERSION = 1 diff --git a/digits/dataset/generic/test_views.py b/digits/dataset/generic/test_views.py index 89c6eb31d..b80148baf 100644 --- a/digits/dataset/generic/test_views.py +++ b/digits/dataset/generic/test_views.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import json @@ -199,6 +199,12 @@ def setUpClass(cls, **kwargs): class GenericViewsTest(BaseViewsTest): + @classmethod + def setUpClass(cls, **kwargs): + if extensions.data.get_extension(cls.EXTENSION_ID) is None: + raise unittest.SkipTest('Extension "%s" is not installed' % cls.EXTENSION_ID) + super(GenericViewsTest, cls).setUpClass() + def test_page_dataset_new(self): rv = self.app.get('/datasets/generic/new/%s' % self.EXTENSION_ID) print rv.data diff --git a/digits/dataset/generic/views.py b/digits/dataset/generic/views.py index 1669c17ae..9ef3ac87e 100644 --- a/digits/dataset/generic/views.py +++ b/digits/dataset/generic/views.py @@ -1,6 +1,7 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. 
from __future__ import absolute_import +import os # Find the best implementation available try: from cStringIO import StringIO @@ -147,8 +148,9 @@ def explore(): db = job.path(flask.request.args.get('db')) db_path = job.path(db) - if COLOR_PALETTE_ATTRIBUTE in job.extension_userdata \ - and job.extension_userdata[COLOR_PALETTE_ATTRIBUTE]: + if (os.path.basename(db_path) == 'labels' and + COLOR_PALETTE_ATTRIBUTE in job.extension_userdata and + job.extension_userdata[COLOR_PALETTE_ATTRIBUTE]): # assume single-channel 8-bit palette palette = job.extension_userdata[COLOR_PALETTE_ATTRIBUTE] palette = np.array(palette).reshape((len(palette) / 3, 3)) / 255. diff --git a/digits/dataset/images/__init__.py b/digits/dataset/images/__init__.py index 68614e717..10592e541 100644 --- a/digits/dataset/images/__init__.py +++ b/digits/dataset/images/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from .classification import * # noqa diff --git a/digits/dataset/images/classification/__init__.py b/digits/dataset/images/classification/__init__.py index f5dc36b05..71bb756d4 100644 --- a/digits/dataset/images/classification/__init__.py +++ b/digits/dataset/images/classification/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from .job import ImageClassificationDatasetJob diff --git a/digits/dataset/images/classification/forms.py b/digits/dataset/images/classification/forms.py index 0d84f50cd..6b31fcec0 100644 --- a/digits/dataset/images/classification/forms.py +++ b/digits/dataset/images/classification/forms.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os.path diff --git a/digits/dataset/images/classification/job.py b/digits/dataset/images/classification/job.py index 8e69b879c..eb61e7e41 100644 --- a/digits/dataset/images/classification/job.py +++ b/digits/dataset/images/classification/job.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os @@ -8,7 +8,7 @@ from digits.status import Status from digits.utils import subclass, override, constants -# NOTE: Increment this everytime the pickled object changes +# NOTE: Increment this every time the pickled object changes PICKLE_VERSION = 2 diff --git a/digits/dataset/images/classification/test_imageset_creator.py b/digits/dataset/images/classification/test_imageset_creator.py index e1e621693..1c5fcbd2b 100755 --- a/digits/dataset/images/classification/test_imageset_creator.py +++ b/digits/dataset/images/classification/test_imageset_creator.py @@ -1,5 +1,5 @@ #!/usr/bin/env python2 -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. 
""" Functions for creating temporary datasets Used in test_views diff --git a/digits/dataset/images/classification/test_views.py b/digits/dataset/images/classification/test_views.py index ee5cc07cb..b3d7a1205 100644 --- a/digits/dataset/images/classification/test_views.py +++ b/digits/dataset/images/classification/test_views.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import json diff --git a/digits/dataset/images/classification/views.py b/digits/dataset/images/classification/views.py index 76948807d..42b201029 100644 --- a/digits/dataset/images/classification/views.py +++ b/digits/dataset/images/classification/views.py @@ -1,7 +1,8 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os +import shutil # Find the best implementation available try: @@ -156,12 +157,14 @@ def from_files(job, form): """ # labels if form.textfile_use_local_files.data: - job.labels_file = form.textfile_local_labels_file.data.strip() + labels_file_from = form.textfile_local_labels_file.data.strip() + labels_file_to = os.path.join(job.dir(), utils.constants.LABELS_FILE) + shutil.copyfile(labels_file_from, labels_file_to) else: flask.request.files[form.textfile_labels_file.name].save( os.path.join(job.dir(), utils.constants.LABELS_FILE) ) - job.labels_file = utils.constants.LABELS_FILE + job.labels_file = utils.constants.LABELS_FILE shuffle = bool(form.textfile_shuffle.data) backend = form.backend.data diff --git a/digits/dataset/images/forms.py b/digits/dataset/images/forms.py index b7032bdaa..72eac5a40 100644 --- a/digits/dataset/images/forms.py +++ b/digits/dataset/images/forms.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import wtforms diff --git a/digits/dataset/images/generic/__init__.py b/digits/dataset/images/generic/__init__.py index 3804042dd..3145d6c9a 100644 --- a/digits/dataset/images/generic/__init__.py +++ b/digits/dataset/images/generic/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from .job import GenericImageDatasetJob diff --git a/digits/dataset/images/generic/forms.py b/digits/dataset/images/generic/forms.py index 8783566d1..509c79aa0 100644 --- a/digits/dataset/images/generic/forms.py +++ b/digits/dataset/images/generic/forms.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os.path diff --git a/digits/dataset/images/generic/job.py b/digits/dataset/images/generic/job.py index 14eec9e67..afb40bba6 100644 --- a/digits/dataset/images/generic/job.py +++ b/digits/dataset/images/generic/job.py @@ -1,11 +1,11 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. 
from __future__ import absolute_import from ..job import ImageDatasetJob from digits.dataset import tasks from digits.utils import subclass, override, constants -# NOTE: Increment this everytime the pickled object changes +# NOTE: Increment this every time the pickled object changes PICKLE_VERSION = 1 diff --git a/digits/dataset/images/generic/test_lmdb_creator.py b/digits/dataset/images/generic/test_lmdb_creator.py index 19a90a210..b7b0a2145 100755 --- a/digits/dataset/images/generic/test_lmdb_creator.py +++ b/digits/dataset/images/generic/test_lmdb_creator.py @@ -1,5 +1,5 @@ #!/usr/bin/env python2 -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. """ Functions for creating temporary LMDBs Used in test_views diff --git a/digits/dataset/images/generic/test_views.py b/digits/dataset/images/generic/test_views.py index 81fe0cf6f..85b4ec593 100644 --- a/digits/dataset/images/generic/test_views.py +++ b/digits/dataset/images/generic/test_views.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import json diff --git a/digits/dataset/images/generic/views.py b/digits/dataset/images/generic/views.py index 651c8cc0f..e7a0c944f 100644 --- a/digits/dataset/images/generic/views.py +++ b/digits/dataset/images/generic/views.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import flask diff --git a/digits/dataset/images/job.py b/digits/dataset/images/job.py index beb76ec96..6b351dce3 100644 --- a/digits/dataset/images/job.py +++ b/digits/dataset/images/job.py @@ -1,9 +1,9 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from ..job import DatasetJob -# NOTE: Increment this everytime the pickled object changes +# NOTE: Increment this every time the pickled object changes PICKLE_VERSION = 1 diff --git a/digits/dataset/images/views.py b/digits/dataset/images/views.py index 9f9a17731..fdef36c36 100644 --- a/digits/dataset/images/views.py +++ b/digits/dataset/images/views.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os.path diff --git a/digits/dataset/job.py b/digits/dataset/job.py index e75a39d0e..12aa79112 100644 --- a/digits/dataset/job.py +++ b/digits/dataset/job.py @@ -1,10 +1,10 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from digits.job import Job from digits.utils import subclass -# NOTE: Increment this everytime the pickled object changes +# NOTE: Increment this every time the pickled object changes PICKLE_VERSION = 1 diff --git a/digits/dataset/tasks/__init__.py b/digits/dataset/tasks/__init__.py index 52b984bc4..7b7831334 100644 --- a/digits/dataset/tasks/__init__.py +++ b/digits/dataset/tasks/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. 
from __future__ import absolute_import from .analyze_db import AnalyzeDbTask diff --git a/digits/dataset/tasks/analyze_db.py b/digits/dataset/tasks/analyze_db.py index 9c197b675..360af2a40 100644 --- a/digits/dataset/tasks/analyze_db.py +++ b/digits/dataset/tasks/analyze_db.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os.path @@ -9,7 +9,7 @@ from digits.task import Task from digits.utils import subclass, override -# NOTE: Increment this everytime the pickled object +# NOTE: Increment this every time the pickled object PICKLE_VERSION = 1 diff --git a/digits/dataset/tasks/create_db.py b/digits/dataset/tasks/create_db.py index 81777def0..48538bee5 100644 --- a/digits/dataset/tasks/create_db.py +++ b/digits/dataset/tasks/create_db.py @@ -1,7 +1,6 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import -import operator import os.path import re import sys @@ -11,7 +10,7 @@ from digits.task import Task from digits.utils import subclass, override -# NOTE: Increment this everytime the pickled version changes +# NOTE: Increment this every time the pickled version changes PICKLE_VERSION = 3 @@ -62,6 +61,7 @@ def __init__(self, input_file, db_name, backend, image_dims, **kwargs): self.image_channel_order = None self.entries_count = None + self.entries_error = None self.distribution = None self.create_db_log_file = "create_%s.log" % db_name @@ -100,6 +100,14 @@ def __setstate__(self, state): if not hasattr(self, 'compression') or self.compression is None: self.compression = 'none' + if not hasattr(self, 'entries_error'): + self.entries_error = 0 + for key in self.distribution.keys(): + self.distribution[key] = { + 'count': self.distribution[key], + 'error_count': 0 + } + @override def name(self): if self.db_name == utils.constants.TRAIN_DB or 'train' in self.db_name.lower(): @@ -170,8 +178,6 @@ def task_arguments(self, resources, env): @override def process_output(self, line): - from digits.webapp import socketio - self.create_db_log.write('%s\n' % line) self.create_db_log.flush() @@ -192,19 +198,22 @@ def process_output(self, line): if not hasattr(self, 'distribution') or self.distribution is None: self.distribution = {} - self.distribution[match.group(1)] = int(match.group(2)) - - data = self.distribution_data() - if data: - socketio.emit('task update', - { - 'task': self.html_id(), - 'update': 'distribution', - 'data': data, - }, - namespace='/jobs', - room=self.job_id, - ) + self.distribution[match.group(1)] = { + 'count': int(match.group(2)), + 'error_count': 0 + } + self.update_distribution_graph() + return True + + # add errors to the distribution + match = re.match(r'\[(.+) (\d+)\] LoadImageError: (.+)', message) + if match: + self.distribution[match.group(2)]['count'] -= 1 + self.distribution[match.group(2)]['error_count'] += 1 + if self.entries_error is None: + self.entries_error = 0 + self.entries_error += 1 + self.update_distribution_graph() return True # result @@ -302,20 +311,32 @@ def distribution_data(self): if len(self.distribution.keys()) != len(labels): return None - values = ['Count'] + label_count = 'Count' + label_error = 'LoadImageError' + + error_values = [label_error] + count_values = [label_count] titles = [] for key, value in sorted( self.distribution.items(), - key=operator.itemgetter(1), + 
key=lambda item: item[1]['count'], reverse=True): - values.append(value) + count_values.append(value['count']) + error_values.append(value['error_count']) titles.append(labels[int(key)]) + # distribution graph always displays the Count data + data = {'columns': [count_values], 'type': 'bar'} + + # only display error data if any error occurred + if sum(error_values[1:]) > 0: + data['columns'] = [count_values, error_values] + data['groups'] = [[label_count, label_error]] + data['colors'] = {label_count: '#1F77B4', label_error: '#B73540'} + data['order'] = 'false' + return { - 'data': { - 'columns': [values], - 'type': 'bar' - }, + 'data': data, 'axis': { 'x': { 'type': 'category', @@ -323,3 +344,18 @@ def distribution_data(self): } }, } + + def update_distribution_graph(self): + from digits.webapp import socketio + data = self.distribution_data() + + if data: + socketio.emit('task update', + { + 'task': self.html_id(), + 'update': 'distribution', + 'data': data, + }, + namespace='/jobs', + room=self.job_id, + ) diff --git a/digits/dataset/tasks/create_generic_db.py b/digits/dataset/tasks/create_generic_db.py index 7d890096c..1ec8a96f8 100644 --- a/digits/dataset/tasks/create_generic_db.py +++ b/digits/dataset/tasks/create_generic_db.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os @@ -9,7 +9,7 @@ from digits.task import Task from digits.utils import subclass, override -# NOTE: Increment this everytime the pickled version changes +# NOTE: Increment this every time the pickled version changes PICKLE_VERSION = 1 diff --git a/digits/dataset/tasks/parse_folder.py b/digits/dataset/tasks/parse_folder.py index 8181e89e5..1450801a6 100644 --- a/digits/dataset/tasks/parse_folder.py +++ b/digits/dataset/tasks/parse_folder.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os.path @@ -10,7 +10,7 @@ from digits.task import Task from digits.utils import subclass, override -# NOTE: Increment this everytime the pickled object +# NOTE: Increment this every time the pickled object PICKLE_VERSION = 1 diff --git a/digits/dataset/views.py b/digits/dataset/views.py index cb0a42244..7a0751e9c 100644 --- a/digits/dataset/views.py +++ b/digits/dataset/views.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import flask diff --git a/digits/device_query.py b/digits/device_query.py index f7d140085..9f13a09cd 100755 --- a/digits/device_query.py +++ b/digits/device_query.py @@ -1,5 +1,5 @@ #!/usr/bin/env python2 -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import argparse diff --git a/digits/download_data/__main__.py b/digits/download_data/__main__.py index 38a67aea3..83498e249 100644 --- a/digits/download_data/__main__.py +++ b/digits/download_data/__main__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. 
import argparse import sys diff --git a/digits/download_data/cifar10.py b/digits/download_data/cifar10.py index 30290206a..0d50b3787 100644 --- a/digits/download_data/cifar10.py +++ b/digits/download_data/cifar10.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. import cPickle import os diff --git a/digits/download_data/cifar100.py b/digits/download_data/cifar100.py index 8c17b9579..1ede1ce76 100644 --- a/digits/download_data/cifar100.py +++ b/digits/download_data/cifar100.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. import cPickle import os diff --git a/digits/download_data/downloader.py b/digits/download_data/downloader.py index e481e1b5b..ea3157906 100644 --- a/digits/download_data/downloader.py +++ b/digits/download_data/downloader.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. import os import shutil diff --git a/digits/download_data/mnist.py b/digits/download_data/mnist.py index fa83fc41a..e858b5ddb 100644 --- a/digits/download_data/mnist.py +++ b/digits/download_data/mnist.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. import gzip import os diff --git a/digits/extensions/__init__.py b/digits/extensions/__init__.py index 680b3dfe2..dc4cfe3a2 100644 --- a/digits/extensions/__init__.py +++ b/digits/extensions/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from .data import * # noqa diff --git a/digits/extensions/data/__init__.py b/digits/extensions/data/__init__.py index 305eefca1..56f3b8a37 100644 --- a/digits/extensions/data/__init__.py +++ b/digits/extensions/data/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import copy diff --git a/digits/extensions/data/imageProcessing/__init__.py b/digits/extensions/data/imageProcessing/__init__.py index 79071170e..9bf25978c 100644 --- a/digits/extensions/data/imageProcessing/__init__.py +++ b/digits/extensions/data/imageProcessing/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from .data import DataIngestion diff --git a/digits/extensions/data/imageProcessing/data.py b/digits/extensions/data/imageProcessing/data.py index cd3bc72a6..cb03a19be 100644 --- a/digits/extensions/data/imageProcessing/data.py +++ b/digits/extensions/data/imageProcessing/data.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. 
from __future__ import absolute_import import math @@ -148,7 +148,7 @@ def make_image_list(self, folder): for dirpath, dirnames, filenames in os.walk(folder, followlinks=True): for filename in filenames: if filename.lower().endswith(image.SUPPORTED_EXTENSIONS): - image_files.append('%s' % os.path.join(folder, filename)) + image_files.append('%s' % os.path.join(dirpath, filename)) if len(image_files) == 0: raise ValueError("Unable to find supported images in %s" % folder) return sorted(image_files) diff --git a/digits/extensions/data/imageProcessing/forms.py b/digits/extensions/data/imageProcessing/forms.py index 9a9aa574e..257ce7679 100644 --- a/digits/extensions/data/imageProcessing/forms.py +++ b/digits/extensions/data/imageProcessing/forms.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os diff --git a/digits/extensions/data/imageProcessing/template.html b/digits/extensions/data/imageProcessing/template.html index 86b4799c5..d6c99da54 100644 --- a/digits/extensions/data/imageProcessing/template.html +++ b/digits/extensions/data/imageProcessing/template.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. #} {% from "helper.html" import print_flashes %} {% from "helper.html" import print_errors %} diff --git a/digits/extensions/data/imageSegmentation/__init__.py b/digits/extensions/data/imageSegmentation/__init__.py index 79071170e..9bf25978c 100644 --- a/digits/extensions/data/imageSegmentation/__init__.py +++ b/digits/extensions/data/imageSegmentation/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from .data import DataIngestion diff --git a/digits/extensions/data/imageSegmentation/data.py b/digits/extensions/data/imageSegmentation/data.py index f013b0f63..fca14f4a1 100644 --- a/digits/extensions/data/imageSegmentation/data.py +++ b/digits/extensions/data/imageSegmentation/data.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import math @@ -213,7 +213,7 @@ def make_image_list(self, folder): for dirpath, dirnames, filenames in os.walk(folder, followlinks=True): for filename in filenames: if filename.lower().endswith(image.SUPPORTED_EXTENSIONS): - image_files.append('%s' % os.path.join(folder, filename)) + image_files.append('%s' % os.path.join(dirpath, filename)) if len(image_files) == 0: raise ValueError("Unable to find supported images in %s" % folder) return sorted(image_files) diff --git a/digits/extensions/data/imageSegmentation/forms.py b/digits/extensions/data/imageSegmentation/forms.py index 39f18fe1b..6cc50aeba 100644 --- a/digits/extensions/data/imageSegmentation/forms.py +++ b/digits/extensions/data/imageSegmentation/forms.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. 
from __future__ import absolute_import import os diff --git a/digits/extensions/data/imageSegmentation/template.html b/digits/extensions/data/imageSegmentation/template.html index 1cb27276f..ae93330ea 100644 --- a/digits/extensions/data/imageSegmentation/template.html +++ b/digits/extensions/data/imageSegmentation/template.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. #} {% from "helper.html" import print_flashes %} {% from "helper.html" import print_errors %} diff --git a/digits/extensions/data/interface.py b/digits/extensions/data/interface.py index 9c17b65e4..3de7bc12b 100644 --- a/digits/extensions/data/interface.py +++ b/digits/extensions/data/interface.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import diff --git a/digits/extensions/data/objectDetection/README.md b/digits/extensions/data/objectDetection/README.md index ca3bae154..f3c4974e5 100644 --- a/digits/extensions/data/objectDetection/README.md +++ b/digits/extensions/data/objectDetection/README.md @@ -150,7 +150,7 @@ All classes which don't exist in the provided mapping are implicitly mapped to 0 DetectNet is a single-class object detection network, and only cares about the "Car" class, which is expected to be ID 1. You can change the mapping in the DetectNet prototxt, but it's simplest to just make the class you care about map to 1. -Custom class mappings may be used by specifiying a comma-separated list of class names in the Object Detection dataset creation form. +Custom class mappings may be used by specifying a comma-separated list of class names in the Object Detection dataset creation form. All labels are converted to lower-case, so the matching is case-insensitive. For example, if you only want to detect pedestrians, enter `dontcare,pedestrian` in the "Custom classes" field to generate this mapping: diff --git a/digits/extensions/data/objectDetection/__init__.py b/digits/extensions/data/objectDetection/__init__.py index 79071170e..9bf25978c 100644 --- a/digits/extensions/data/objectDetection/__init__.py +++ b/digits/extensions/data/objectDetection/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from .data import DataIngestion diff --git a/digits/extensions/data/objectDetection/data.py b/digits/extensions/data/objectDetection/data.py index e558bbafb..31cdc1859 100644 --- a/digits/extensions/data/objectDetection/data.py +++ b/digits/extensions/data/objectDetection/data.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. 
from __future__ import absolute_import import csv @@ -223,7 +223,7 @@ def make_image_list(self, folder): for dirpath, dirnames, filenames in os.walk(folder, followlinks=True): for filename in filenames: if filename.lower().endswith(digits.utils.image.SUPPORTED_EXTENSIONS): - image_files.append('%s' % os.path.join(folder, filename)) + image_files.append('%s' % os.path.join(dirpath, filename)) if len(image_files) == 0: raise ValueError("Unable to find supported images in %s" % folder) # shuffle diff --git a/digits/extensions/data/objectDetection/forms.py b/digits/extensions/data/objectDetection/forms.py index c57225348..68333a489 100644 --- a/digits/extensions/data/objectDetection/forms.py +++ b/digits/extensions/data/objectDetection/forms.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from flask.ext.wtf import Form diff --git a/digits/extensions/data/objectDetection/template.html b/digits/extensions/data/objectDetection/template.html index 188444439..667aa9833 100644 --- a/digits/extensions/data/objectDetection/template.html +++ b/digits/extensions/data/objectDetection/template.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. #} {% from "helper.html" import print_flashes %} {% from "helper.html" import print_errors %} diff --git a/digits/extensions/data/objectDetection/utils.py b/digits/extensions/data/objectDetection/utils.py index 9695f42f4..9312c92e5 100644 --- a/digits/extensions/data/objectDetection/utils.py +++ b/digits/extensions/data/objectDetection/utils.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. import csv import os @@ -49,7 +49,7 @@ class GroundTruthObj: truncated refers to the object leaving image boundaries. -1 corresponds to a don't care region. 1 occluded Integer (-1,0,1,2) indicating occlusion state: - -1 = unkown, 0 = fully visible, + -1 = unknown, 0 = fully visible, 1 = partly occluded, 2 = largely occluded 1 alpha Observation angle of object, ranging [-pi..pi] 4 bbox 2D bounding box of object in the image (0-based index): diff --git a/digits/extensions/view/__init__.py b/digits/extensions/view/__init__.py index f57fe754b..28d4e9b87 100644 --- a/digits/extensions/view/__init__.py +++ b/digits/extensions/view/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import copy diff --git a/digits/extensions/view/boundingBox/__init__.py b/digits/extensions/view/boundingBox/__init__.py index af82aa2f8..2802b262c 100644 --- a/digits/extensions/view/boundingBox/__init__.py +++ b/digits/extensions/view/boundingBox/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. 
from __future__ import absolute_import from .view import Visualization diff --git a/digits/extensions/view/boundingBox/app_begin_template.html b/digits/extensions/view/boundingBox/app_begin_template.html index 3a4ce6f17..c59ef717e 100644 --- a/digits/extensions/view/boundingBox/app_begin_template.html +++ b/digits/extensions/view/boundingBox/app_begin_template.html @@ -1,4 +1,4 @@ - + diff --git a/digits/extensions/view/imageSegmentation/static/css/app.css b/digits/extensions/view/imageSegmentation/static/css/app.css index 0cfd8aed2..91573587b 100644 --- a/digits/extensions/view/imageSegmentation/static/css/app.css +++ b/digits/extensions/view/imageSegmentation/static/css/app.css @@ -1,4 +1,4 @@ -/* Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. */ +/* Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. */ div.vis-div { position: relative; diff --git a/digits/extensions/view/imageSegmentation/static/js/app.js b/digits/extensions/view/imageSegmentation/static/js/app.js index 3cb47637d..7560c59ca 100644 --- a/digits/extensions/view/imageSegmentation/static/js/app.js +++ b/digits/extensions/view/imageSegmentation/static/js/app.js @@ -1,4 +1,4 @@ -// Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +// Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. // Angularjs app, visualization_app var app = angular.module('visualization_app', ['ngStorage']); diff --git a/digits/extensions/view/imageSegmentation/view.py b/digits/extensions/view/imageSegmentation/view.py index 26aa78b80..eb425d57f 100644 --- a/digits/extensions/view/imageSegmentation/view.py +++ b/digits/extensions/view/imageSegmentation/view.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +"""Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved.""" from __future__ import absolute_import import json @@ -25,11 +25,16 @@ @subclass class Visualization(VisualizationInterface): - """ - A visualization extension to display the network output as an image - """ + """A visualization extension to display the network output as an image.""" def __init__(self, dataset, **kwargs): + """Constructor for Visualization class. + + :param dataset: + :type dataset: + :param kwargs: + :type kwargs: + """ # memorize view template for later use extension_dir = os.path.dirname(os.path.abspath(__file__)) self.view_template = open( @@ -64,11 +69,16 @@ def __init__(self, dataset, **kwargs): @staticmethod def get_config_form(): + """Utility function. + + returns: ConfigForm(). + """ return ConfigForm() @staticmethod def get_config_template(form): - """ + """Get the template and context. + parameters: - form: form returned by get_config_form(). This may be populated with values if the job was cloned @@ -84,8 +94,8 @@ def get_config_template(form): return (template, {'form': form}) def get_legend_for(self, found_classes, skip_classes=[]): - """ - Return the legend color image squares and text for each class + """Return the legend color image squares and text for each class. 
+ :param found_classes: list of class indices :param skip_classes: list of class indices to skip :return: list of dicts of text hex_color for each class @@ -111,9 +121,7 @@ def get_legend_for(self, found_classes, skip_classes=[]): @override def get_header_template(self): - """ - Implements get_header_template() method from view extension interface - """ + """Implement get_header_template method from view extension interface.""" extension_dir = os.path.dirname(os.path.abspath(__file__)) template = open( os.path.join(extension_dir, HEADER_TEMPLATE), "r").read() @@ -122,9 +130,7 @@ def get_header_template(self): @override def get_ng_templates(self): - """ - Implements get_ng_templates() method from view extension interface - """ + """Implement get_ng_templates method from view extension interface.""" extension_dir = os.path.dirname(os.path.abspath(__file__)) header = open(os.path.join(extension_dir, APP_BEGIN_TEMPLATE), "r").read() footer = open(os.path.join(extension_dir, APP_END_TEMPLATE), "r").read() @@ -132,19 +138,23 @@ def get_ng_templates(self): @staticmethod def get_id(): + """returns: id string that identifies the extension.""" return 'image-segmentation' @staticmethod def get_title(): + """returns: name string to display in html.""" return 'Image Segmentation' @staticmethod def get_dirname(): + """returns: extension dir name to locate static dir.""" return 'imageSegmentation' @override def get_view_template(self, data): - """ + """Get the view template. + returns: - (template, context) tuple - template is a Jinja template to use for rendering config options @@ -165,9 +175,7 @@ def get_view_template(self, data): @override def process_data(self, input_id, input_data, output_data): - """ - Process one inference and return data to visualize - """ + """Process one inference and return data to visualize.""" # assume the only output is a CHW image where C is the number # of classes, H and W are the height and width of the image class_data = output_data[output_data.keys()[0]].astype('float32') @@ -226,6 +234,8 @@ def normalize(array): max_distance = np.maximum(max_distance, distance + 128) line_data[:, :, 3] = line_mask * 255 + max_distance = np.maximum(max_distance, np.zeros(max_distance.shape, dtype=float)) + max_distance = np.minimum(max_distance, np.zeros(max_distance.shape, dtype=float) + 255) seg_data[:, :, 3] = max_distance # Input image with outlines diff --git a/digits/extensions/view/imageSegmentation/view_template.html b/digits/extensions/view/imageSegmentation/view_template.html index 977823ca8..b586af8b6 100644 --- a/digits/extensions/view/imageSegmentation/view_template.html +++ b/digits/extensions/view/imageSegmentation/view_template.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. #} {[ set_binary('{{ is_binary }}' == 'True');'' ]}
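The imageSegmentation view.py hunk above clamps the distance-derived alpha values to the [0, 255] range (the paired np.maximum/np.minimum calls) before they are written into seg_data[:, :, 3], so that the later conversion to 8-bit image data cannot wrap out-of-range values. A minimal standalone sketch of the same idea, assuming only NumPy and using np.clip in place of the two calls; it is an illustration, not the extension's actual code:

import numpy as np

# Hypothetical distance-based alpha values; anything outside [0, 255] wraps
# (or is platform-dependent) when cast to an 8-bit channel.
distance = np.array([-40.0, 10.0, 200.0, 300.0])

# Unclamped cast: values do not saturate, e.g. 300.0 typically ends up as 44.
wrapped = distance.astype(np.uint8)

# Clamped cast, equivalent to the np.maximum/np.minimum pair in the diff.
clamped = np.clip(distance, 0, 255).astype(np.uint8)

print(wrapped, clamped)  # clamped is [0 10 200 255]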
diff --git a/digits/extensions/view/interface.py b/digits/extensions/view/interface.py index 58661137e..1211b0228 100644 --- a/digits/extensions/view/interface.py +++ b/digits/extensions/view/interface.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import diff --git a/digits/extensions/view/rawData/__init__.py b/digits/extensions/view/rawData/__init__.py index af82aa2f8..2802b262c 100644 --- a/digits/extensions/view/rawData/__init__.py +++ b/digits/extensions/view/rawData/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from .view import Visualization diff --git a/digits/extensions/view/rawData/config_template.html b/digits/extensions/view/rawData/config_template.html index b84e40954..306c9842e 100644 --- a/digits/extensions/view/rawData/config_template.html +++ b/digits/extensions/view/rawData/config_template.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. #} {% from "helper.html" import print_flashes %} {% from "helper.html" import print_errors %} diff --git a/digits/extensions/view/rawData/forms.py b/digits/extensions/view/rawData/forms.py index d4489c217..b4513f5b6 100644 --- a/digits/extensions/view/rawData/forms.py +++ b/digits/extensions/view/rawData/forms.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from digits.utils import subclass diff --git a/digits/extensions/view/rawData/header_template.html b/digits/extensions/view/rawData/header_template.html new file mode 100644 index 000000000..fcd137d9d --- /dev/null +++ b/digits/extensions/view/rawData/header_template.html @@ -0,0 +1,7 @@ +{# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. #} + +{% from "helper.html" import print_flashes %} +{% from "helper.html" import print_errors %} +{% from "helper.html" import mark_errors %} + +{{data}} diff --git a/digits/extensions/view/rawData/view.py b/digits/extensions/view/rawData/view.py index 4a6c89627..526ec046a 100644 --- a/digits/extensions/view/rawData/view.py +++ b/digits/extensions/view/rawData/view.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os diff --git a/digits/extensions/view/rawData/view_template.html b/digits/extensions/view/rawData/view_template.html index cf010623e..077ce6fdf 100644 --- a/digits/extensions/view/rawData/view_template.html +++ b/digits/extensions/view/rawData/view_template.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. #} {% from "helper.html" import print_flashes %} {% from "helper.html" import print_errors %} diff --git a/digits/frameworks/__init__.py b/digits/frameworks/__init__.py index 41c09dfb5..a6eb057f4 100644 --- a/digits/frameworks/__init__.py +++ b/digits/frameworks/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. 
from __future__ import absolute_import from .caffe_framework import CaffeFramework diff --git a/digits/frameworks/caffe_framework.py b/digits/frameworks/caffe_framework.py index 0b241b367..2a5753e91 100644 --- a/digits/frameworks/caffe_framework.py +++ b/digits/frameworks/caffe_framework.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os diff --git a/digits/frameworks/errors.py b/digits/frameworks/errors.py index a04ed8901..221cf517b 100644 --- a/digits/frameworks/errors.py +++ b/digits/frameworks/errors.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from digits.utils import subclass diff --git a/digits/frameworks/framework.py b/digits/frameworks/framework.py index 420d0a420..ba6d469ee 100644 --- a/digits/frameworks/framework.py +++ b/digits/frameworks/framework.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. from digits.inference.tasks import InferenceTask diff --git a/digits/frameworks/torch_framework.py b/digits/frameworks/torch_framework.py index fc705e147..8ce2d4976 100644 --- a/digits/frameworks/torch_framework.py +++ b/digits/frameworks/torch_framework.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os diff --git a/digits/inference/__init__.py b/digits/inference/__init__.py index 5a59bbf41..345f40019 100644 --- a/digits/inference/__init__.py +++ b/digits/inference/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from .images import ImageInferenceJob diff --git a/digits/inference/errors.py b/digits/inference/errors.py index a89bb9b42..cdb5ad4eb 100644 --- a/digits/inference/errors.py +++ b/digits/inference/errors.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from digits.utils import subclass, override diff --git a/digits/inference/images/__init__.py b/digits/inference/images/__init__.py index e4fc4707f..8abe48ea6 100644 --- a/digits/inference/images/__init__.py +++ b/digits/inference/images/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from .job import ImageInferenceJob diff --git a/digits/inference/images/job.py b/digits/inference/images/job.py index 452c195df..1251c7226 100644 --- a/digits/inference/images/job.py +++ b/digits/inference/images/job.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from ..job import InferenceJob diff --git a/digits/inference/job.py b/digits/inference/job.py index 052788b96..08eb6fd62 100644 --- a/digits/inference/job.py +++ b/digits/inference/job.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. 
All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from . import tasks diff --git a/digits/inference/tasks/__init__.py b/digits/inference/tasks/__init__.py index 1ee0d7b8c..312dbf98c 100644 --- a/digits/inference/tasks/__init__.py +++ b/digits/inference/tasks/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from .inference import InferenceTask diff --git a/digits/inference/tasks/inference.py b/digits/inference/tasks/inference.py index 72ea56a6b..0b8d0e097 100644 --- a/digits/inference/tasks/inference.py +++ b/digits/inference/tasks/inference.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import base64 diff --git a/digits/job.py b/digits/job.py index 60d4d13de..fabe5b30b 100644 --- a/digits/job.py +++ b/digits/job.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os @@ -14,7 +14,7 @@ from digits.config import config_value from digits.utils import sizeof_fmt, filesystem as fs -# NOTE: Increment this everytime the pickled object changes +# NOTE: Increment this every time the pickled object changes PICKLE_VERSION = 2 diff --git a/digits/log.py b/digits/log.py index 8819d456e..c4ed8acf1 100644 --- a/digits/log.py +++ b/digits/log.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import logging diff --git a/digits/model/__init__.py b/digits/model/__init__.py index 5dc431193..4bfbc886d 100644 --- a/digits/model/__init__.py +++ b/digits/model/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from .images import ( diff --git a/digits/model/forms.py b/digits/model/forms.py index 86c9d3224..dcb3e1a11 100644 --- a/digits/model/forms.py +++ b/digits/model/forms.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. import os @@ -313,10 +313,10 @@ def validate_lr_multistep_values(form, field): def validate_custom_network_snapshot(form, field): pass - #if form.method.data == 'custom': - # for filename in field.data.strip().split(os.path.pathsep): - # if filename and not os.path.exists(filename): - # raise validators.ValidationError('File "%s" does not exist' % filename) +# if form.method.data == 'custom': +# for filename in field.data.strip().split(os.path.pathsep): +# if filename and not os.path.exists(filename): +# raise validators.ValidationError('File "%s" does not exist' % filename) # Select one of several GPUs select_gpu = wtforms.RadioField( diff --git a/digits/model/images/__init__.py b/digits/model/images/__init__.py index 6a515d43b..76ce1476c 100644 --- a/digits/model/images/__init__.py +++ b/digits/model/images/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. 
from __future__ import absolute_import from .classification import ImageClassificationModelJob diff --git a/digits/model/images/classification/__init__.py b/digits/model/images/classification/__init__.py index 46d16568e..8a144af73 100644 --- a/digits/model/images/classification/__init__.py +++ b/digits/model/images/classification/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from .job import ImageClassificationModelJob diff --git a/digits/model/images/classification/forms.py b/digits/model/images/classification/forms.py index 1cfa4d87f..a8b58563c 100644 --- a/digits/model/images/classification/forms.py +++ b/digits/model/images/classification/forms.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from ..forms import ImageModelForm diff --git a/digits/model/images/classification/job.py b/digits/model/images/classification/job.py index 4797e4c7f..e807e7351 100644 --- a/digits/model/images/classification/job.py +++ b/digits/model/images/classification/job.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os.path @@ -6,7 +6,7 @@ from ..job import ImageModelJob from digits.utils import subclass, override -# NOTE: Increment this everytime the pickled object changes +# NOTE: Increment this every time the pickled object changes PICKLE_VERSION = 1 diff --git a/digits/model/images/classification/test_views.py b/digits/model/images/classification/test_views.py index 1c33a5e83..b9e3e046a 100644 --- a/digits/model/images/classification/test_views.py +++ b/digits/model/images/classification/test_views.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. 
from __future__ import absolute_import import itertools @@ -174,6 +174,7 @@ class BaseViewsTestWithDataset(BaseViewsTest, AUG_HSV_H = None AUG_HSV_S = None AUG_HSV_V = None + OPTIMIZER = None @classmethod def setUpClass(cls): @@ -242,6 +243,8 @@ def create_model(cls, network=None, **kwargs): data['aug_hsv_s'] = cls.AUG_HSV_S if cls.AUG_HSV_V is not None: data['aug_hsv_v'] = cls.AUG_HSV_V + if cls.OPTIMIZER is not None: + data['solver_type'] = cls.OPTIMIZER data.update(kwargs) @@ -1158,6 +1161,10 @@ class TestCaffeLeNet(BaseTestCreated, test_utils.CaffeMixin): ).read() +class TestCaffeLeNetADAMOptimizer(TestCaffeLeNet): + OPTIMIZER = 'ADAM' + + class TestTorchCreatedCropInForm(BaseTestCreatedCropInForm, test_utils.TorchMixin): pass @@ -1196,6 +1203,11 @@ def test_inference_while_training(self): raise unittest.SkipTest('Torch CPU inference on CuDNN-trained model not supported') +# test disabled because it requires tuning to get a passing result +# class TestTorchLeNetADAMOptimizer(TestTorchLeNet): +# OPTIMIZER = 'ADAM' + + class TestTorchLeNetHdf5Shuffle(TestTorchLeNet): BACKEND = 'hdf5' SHUFFLE = True @@ -1366,6 +1378,10 @@ class TestTensorflowLeNet(BaseTestCreated, test_utils.TensorflowMixin): 'lenet.py')).read() +class TestTensorflowLeNetADAMOptimizer(TestTensorflowLeNet): + OPTIMIZER = 'ADAM' + + class TestTensorflowLeNetSlim(BaseTestCreated, test_utils.TensorflowMixin): IMAGE_WIDTH = 28 IMAGE_HEIGHT = 28 diff --git a/digits/model/images/classification/views.py b/digits/model/images/classification/views.py index f63f00753..0b7be6f0f 100644 --- a/digits/model/images/classification/views.py +++ b/digits/model/images/classification/views.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os @@ -675,7 +675,7 @@ def top_n(): scores = last_output_data if scores is None: - raise RuntimeError('An error occured while processing the images') + raise RuntimeError('An error occurred while processing the images') labels = model_job.train_task().get_labels() images = inputs['data'] diff --git a/digits/model/images/forms.py b/digits/model/images/forms.py index 99a4375bf..da3407aee 100644 --- a/digits/model/images/forms.py +++ b/digits/model/images/forms.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from wtforms import validators diff --git a/digits/model/images/generic/__init__.py b/digits/model/images/generic/__init__.py index 4bfb75830..2fe7d6c96 100644 --- a/digits/model/images/generic/__init__.py +++ b/digits/model/images/generic/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from .job import GenericImageModelJob diff --git a/digits/model/images/generic/forms.py b/digits/model/images/generic/forms.py index 6fccfc9a7..6a9c23a79 100644 --- a/digits/model/images/generic/forms.py +++ b/digits/model/images/generic/forms.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. 
from __future__ import absolute_import from ..forms import ImageModelForm diff --git a/digits/model/images/generic/job.py b/digits/model/images/generic/job.py index 01af6a02c..7bacce97a 100644 --- a/digits/model/images/generic/job.py +++ b/digits/model/images/generic/job.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os.path @@ -6,7 +6,7 @@ from ..job import ImageModelJob from digits.utils import subclass, override -# NOTE: Increment this everytime the pickled object changes +# NOTE: Increment this every time the pickled object changes PICKLE_VERSION = 1 diff --git a/digits/model/images/generic/test_views.py b/digits/model/images/generic/test_views.py index 08f3de324..f21b4a976 100644 --- a/digits/model/images/generic/test_views.py +++ b/digits/model/images/generic/test_views.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import itertools diff --git a/digits/model/images/generic/views.py b/digits/model/images/generic/views.py index 913dd0890..1d0d72bd3 100644 --- a/digits/model/images/generic/views.py +++ b/digits/model/images/generic/views.py @@ -1,4 +1,4 @@ -# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os diff --git a/digits/model/images/job.py b/digits/model/images/job.py index 77343b808..f618448d0 100644 --- a/digits/model/images/job.py +++ b/digits/model/images/job.py @@ -1,11 +1,11 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import datetime from ..job import ModelJob from digits.utils import subclass, override -# NOTE: Increment this everytime the pickled object changes +# NOTE: Increment this every time the pickled object changes PICKLE_VERSION = 1 diff --git a/digits/model/images/views.py b/digits/model/images/views.py index 70cee5762..66d0d3645 100644 --- a/digits/model/images/views.py +++ b/digits/model/images/views.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import flask diff --git a/digits/model/job.py b/digits/model/job.py index 95e78db06..af4ce533e 100644 --- a/digits/model/job.py +++ b/digits/model/job.py @@ -1,11 +1,11 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from . import tasks from digits.job import Job from digits.utils import override -# NOTE: Increment this everytime the pickled object changes +# NOTE: Increment this every time the pickled object changes PICKLE_VERSION = 1 diff --git a/digits/model/tasks/__init__.py b/digits/model/tasks/__init__.py index 505430044..1a4ac7a8e 100644 --- a/digits/model/tasks/__init__.py +++ b/digits/model/tasks/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. 
from __future__ import absolute_import from .caffe_train import CaffeTrainTask diff --git a/digits/model/tasks/caffe_train.py b/digits/model/tasks/caffe_train.py index c1001a694..6c7ca0191 100644 --- a/digits/model/tasks/caffe_train.py +++ b/digits/model/tasks/caffe_train.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from collections import OrderedDict @@ -27,7 +27,7 @@ import caffe import caffe_pb2 -# NOTE: Increment this everytime the pickled object changes +# NOTE: Increment this every time the pickled object changes PICKLE_VERSION = 5 # Constants @@ -1362,13 +1362,17 @@ def get_layer_visualizations(self, net, layers='all'): if top in net.blobs and top not in added_activations: data = net.blobs[top].data[0] normalize = True - # don't normalize softmax layers + # don't normalize softmax layers but scale by 255 to fill image range if layer.type == 'Softmax': - normalize = False - vis = utils.image.get_layer_vis_square(data, - normalize=normalize, - allow_heatmap=bool(top != 'data'), - channel_order='BGR') + vis = utils.image.get_layer_vis_square(data * 255, + normalize=False, + allow_heatmap=bool(top != 'data'), + channel_order='BGR') + else: + vis = utils.image.get_layer_vis_square(data, + normalize=normalize, + allow_heatmap=bool(top != 'data'), + channel_order='BGR') mean, std, hist = self.get_layer_statistics(data) visualizations.append( { diff --git a/digits/model/tasks/test_caffe_sanity_checks.py b/digits/model/tasks/test_caffe_sanity_checks.py index 09dc08ad6..8666f6fa4 100644 --- a/digits/model/tasks/test_caffe_sanity_checks.py +++ b/digits/model/tasks/test_caffe_sanity_checks.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from .caffe_train import CaffeTrainTask, CaffeTrainSanityCheckError diff --git a/digits/model/tasks/test_caffe_train.py b/digits/model/tasks/test_caffe_train.py index 77353c8eb..b0929ca69 100644 --- a/digits/model/tasks/test_caffe_train.py +++ b/digits/model/tasks/test_caffe_train.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from digits import test_utils diff --git a/digits/model/tasks/torch_train.py b/digits/model/tasks/torch_train.py index bf9de88d9..64cedfa25 100644 --- a/digits/model/tasks/torch_train.py +++ b/digits/model/tasks/torch_train.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import operator @@ -22,7 +22,7 @@ # Must import after importing digit.config import caffe_pb2 -# NOTE: Increment this everytime the pickled object changes +# NOTE: Increment this every time the pickled object changes PICKLE_VERSION = 1 # Constants diff --git a/digits/model/tasks/train.py b/digits/model/tasks/train.py index 9c5cf4c07..b8d9eea00 100644 --- a/digits/model/tasks/train.py +++ b/digits/model/tasks/train.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. 
from __future__ import absolute_import from collections import OrderedDict, namedtuple @@ -13,7 +13,7 @@ from digits.task import Task from digits.utils import subclass, override -# NOTE: Increment this everytime the picked object changes +# NOTE: Increment this every time the picked object changes PICKLE_VERSION = 2 # Used to store network outputs @@ -474,7 +474,9 @@ def get_labels(self): assert hasattr(self.dataset, 'labels_file'), 'labels_file not set' assert self.dataset.labels_file, 'labels_file not set' - assert os.path.exists(self.dataset.path(self.dataset.labels_file)), 'labels_file does not exist' + assert os.path.exists(self.dataset.path(self.dataset.labels_file)), 'labels_file does not exist: {}'.format( + self.dataset.path(self.dataset.labels_file) + ) labels = [] with open(self.dataset.path(self.dataset.labels_file)) as infile: diff --git a/digits/model/views.py b/digits/model/views.py index ce5b7bff4..28df10231 100644 --- a/digits/model/views.py +++ b/digits/model/views.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import io @@ -295,13 +295,13 @@ def download(job_id, extension): mode = 'gz' elif extension in ['tar.bz2']: mode = 'bz2' - with tarfile.open(fileobj=b, mode='w:%s' % mode) as tf: + with tarfile.open(fileobj=b, mode='w:%s' % mode) as tar: for path, name in job.download_files(epoch): - tf.add(path, arcname=name) - tf_info = tarfile.TarInfo("info.json") - tf_info.size = len(info_io.getvalue()) + tar.add(path, arcname=name) + tar_info = tarfile.TarInfo("info.json") + tar_info.size = len(info_io.getvalue()) info_io.seek(0) - tf.addfile(tf_info, info_io) + tar.addfile(tar_info, info_io) elif extension in ['zip']: with zipfile.ZipFile(b, 'w') as zf: for path, name in job.download_files(epoch): diff --git a/digits/pretrained_model/__init__.py b/digits/pretrained_model/__init__.py index e3435dbdf..1ae9a56a6 100644 --- a/digits/pretrained_model/__init__.py +++ b/digits/pretrained_model/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from .job import PretrainedModelJob diff --git a/digits/pretrained_model/job.py b/digits/pretrained_model/job.py index 18d567311..bd74e6205 100644 --- a/digits/pretrained_model/job.py +++ b/digits/pretrained_model/job.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os diff --git a/digits/pretrained_model/tasks/__init__.py b/digits/pretrained_model/tasks/__init__.py index be6c05c08..3f96568e9 100644 --- a/digits/pretrained_model/tasks/__init__.py +++ b/digits/pretrained_model/tasks/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import from .upload_pretrained import UploadPretrainedModelTask diff --git a/digits/pretrained_model/tasks/caffe_upload.py b/digits/pretrained_model/tasks/caffe_upload.py index 1537f71d8..575e41a58 100644 --- a/digits/pretrained_model/tasks/caffe_upload.py +++ b/digits/pretrained_model/tasks/caffe_upload.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. 
All rights reserved. from __future__ import absolute_import import os from digits.utils import subclass, override diff --git a/digits/pretrained_model/tasks/torch_upload.py b/digits/pretrained_model/tasks/torch_upload.py index b54f80579..5ed3029ee 100644 --- a/digits/pretrained_model/tasks/torch_upload.py +++ b/digits/pretrained_model/tasks/torch_upload.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os from digits.utils import subclass, override diff --git a/digits/pretrained_model/tasks/upload_pretrained.py b/digits/pretrained_model/tasks/upload_pretrained.py index a2d24314e..6c38ab12b 100644 --- a/digits/pretrained_model/tasks/upload_pretrained.py +++ b/digits/pretrained_model/tasks/upload_pretrained.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import os import shutil diff --git a/digits/pretrained_model/test_views.py b/digits/pretrained_model/test_views.py index f4f25ac17..a0f103b0f 100644 --- a/digits/pretrained_model/test_views.py +++ b/digits/pretrained_model/test_views.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. import json import os diff --git a/digits/pretrained_model/views.py b/digits/pretrained_model/views.py index 67686eae5..02783c1dd 100644 --- a/digits/pretrained_model/views.py +++ b/digits/pretrained_model/views.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. import flask import tempfile import tarfile diff --git a/digits/scheduler.py b/digits/scheduler.py index d9e06ddc9..7a75eabce 100644 --- a/digits/scheduler.py +++ b/digits/scheduler.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. 
from __future__ import absolute_import from collections import OrderedDict diff --git a/digits/standard-networks/tensorflow/alexnet.py b/digits/standard-networks/tensorflow/alexnet.py index 93dc48f50..361ead2d2 100644 --- a/digits/standard-networks/tensorflow/alexnet.py +++ b/digits/standard-networks/tensorflow/alexnet.py @@ -87,7 +87,8 @@ def conv_net(x, weights, biases): @model_property def loss(self): - loss = digits.classification_loss(self.inference, self.y) - accuracy = digits.classification_accuracy(self.inference, self.y) - self.summaries.append(tf.scalar_summary(accuracy.op.name, accuracy)) + model = self.inference + loss = digits.classification_loss(model, self.y) + accuracy = digits.classification_accuracy(model, self.y) + self.summaries.append(tf.summary.scalar(accuracy.op.name, accuracy)) return loss diff --git a/digits/standard-networks/tensorflow/alexnet_slim.py b/digits/standard-networks/tensorflow/alexnet_slim.py index d51ec5e36..5655b4a1b 100644 --- a/digits/standard-networks/tensorflow/alexnet_slim.py +++ b/digits/standard-networks/tensorflow/alexnet_slim.py @@ -24,7 +24,8 @@ def inference(self): @model_property def loss(self): - loss = digits.classification_loss(self.inference, self.y) - accuracy = digits.classification_accuracy(self.inference, self.y) - self.summaries.append(tf.scalar_summary(accuracy.op.name, accuracy)) + model = self.inference + loss = digits.classification_loss(model, self.y) + accuracy = digits.classification_accuracy(model, self.y) + self.summaries.append(tf.summary.scalar(accuracy.op.name, accuracy)) return loss diff --git a/digits/standard-networks/tensorflow/binary_segmentation.py b/digits/standard-networks/tensorflow/binary_segmentation.py deleted file mode 100644 index 8b9ff6298..000000000 --- a/digits/standard-networks/tensorflow/binary_segmentation.py +++ /dev/null @@ -1,23 +0,0 @@ -# Tensorflow Triangle binary segmentation model using TensorFlow-Slim - -def build_model(params): - _x = tf.reshape(params['x'], shape=[-1, params['input_shape'][0], params['input_shape'][1], params['input_shape'][2]]) - with slim.arg_scope([slim.conv2d, slim.conv2d_transpose, slim.fully_connected], - weights_initializer=tf.contrib.layers.xavier_initializer(), - weights_regularizer=slim.l2_regularizer(0.0005) ): - - model = slim.conv2d(_x, 32, [3, 3], padding='SAME', scope='conv1') # 1*H*W -> 32*H*W - model = slim.conv2d(model, 1024, [16, 16], padding='VALID', scope='conv2', stride=16) # 32*H*W -> 1024*H/16*W/16 - model = slim.conv2d_transpose(model, params['input_shape'][2], [16, 16], stride=16, padding='VALID', activation_fn=None, scope='deconv_1') - - def loss(y): - y = tf.reshape(y, shape=[-1, params['input_shape'][0], params['input_shape'][1], params['input_shape'][2]]) - # For a fancy tensorboard summary: put the input, label and model side by side (sbs) for a fancy image summary: - # sbs = tf.concat(2, [_x, y, model]) - # tf.image_summary(sbs.op.name, sbs, max_images=3, collections=[digits.GraphKeys.SUMMARIES_TRAIN]) - return digits.mse_loss(model, y) - - return { - 'model' : model, - 'loss' : loss - } diff --git a/digits/standard-networks/tensorflow/googlenet.py b/digits/standard-networks/tensorflow/googlenet.py new file mode 100644 index 000000000..9c8997c50 --- /dev/null +++ b/digits/standard-networks/tensorflow/googlenet.py @@ -0,0 +1,201 @@ +# The auxillary branches as spcified in the original googlenet V1 model do exist in this implementation of +# googlenet but it is not used. 
To use it, be sure to check self.is_training to ensure that it is only used +# during training. + +class UserModel(Tower): + + all_inception_settings = { + '3a': [[64], [96, 128], [16, 32], [32]], + '3b': [[128], [128, 192], [32, 96], [64]], + '4a': [[192], [96, 208], [16, 48], [64]], + '4b': [[160], [112, 224], [24, 64], [64]], + '4c': [[128], [128, 256], [24, 64], [64]], + '4d': [[112], [144, 288], [32, 64], [64]], + '4e': [[256], [160, 320], [32, 128], [128]], + '5a': [[256], [160, 320], [32, 128], [128]], + '5b': [[384], [192, 384], [48, 128], [128]] + } + + @model_property + def inference(self): + # rescale to proper form, really we expect 224 x 224 x 1 in HWC form + model = tf.reshape(self.x, shape=[-1, self.input_shape[0], self.input_shape[1], self.input_shape[2]]) + + conv_7x7_2s_weight, conv_7x7_2s_bias = self.create_conv_vars([7, 7, self.input_shape[2], 64], 'conv_7x7_2s') + model = self.conv_layer_with_relu(model, conv_7x7_2s_weight, conv_7x7_2s_bias, 2) + + model = self.max_pool(model, 3, 2) + + model = tf.nn.local_response_normalization(model) + + conv_1x1_vs_weight, conv_1x1_vs_bias = self.create_conv_vars([1, 1, 64, 64], 'conv_1x1_vs') + model = self.conv_layer_with_relu(model, conv_1x1_vs_weight, conv_1x1_vs_bias, 1, 'VALID') + + conv_3x3_1s_weight, conv_3x3_1s_bias = self.create_conv_vars([3, 3, 64, 192], 'conv_3x3_1s') + model = self.conv_layer_with_relu(model, conv_3x3_1s_weight, conv_3x3_1s_bias, 1) + + model = tf.nn.local_response_normalization(model) + + model = self.max_pool(model, 3, 2) + + inception_settings_3a = InceptionSettings(192, UserModel.all_inception_settings['3a']) + model = self.inception(model, inception_settings_3a, '3a') + + inception_settings_3b = InceptionSettings(256, UserModel.all_inception_settings['3b']) + model = self.inception(model, inception_settings_3b, '3b') + + model = self.max_pool(model, 3, 2) + + inception_settings_4a = InceptionSettings(480, UserModel.all_inception_settings['4a']) + model = self.inception(model, inception_settings_4a, '4a') + + # first auxiliary branch for making training faster + aux_branch_1 = self.auxiliary_classifier(model, 512, "aux_1") + + inception_settings_4b = InceptionSettings(512, UserModel.all_inception_settings['4b']) + model = self.inception(model, inception_settings_4b, '4b') + + inception_settings_4c = InceptionSettings(512, UserModel.all_inception_settings['4c']) + model = self.inception(model, inception_settings_4c, '4c') + + inception_settings_4d = InceptionSettings(512, UserModel.all_inception_settings['4d']) + model = self.inception(model, inception_settings_4d, '4d') + + # second auxiliary branch for making training faster + aux_branch_2 = self.auxiliary_classifier(model, 528, "aux_2") + + inception_settings_4e = InceptionSettings(528, UserModel.all_inception_settings['4e']) + model = self.inception(model, inception_settings_4e, '4e') + + model = self.max_pool(model, 3, 2) + + inception_settings_5a = InceptionSettings(832, UserModel.all_inception_settings['5a']) + model = self.inception(model, inception_settings_5a, '5a') + + inception_settings_5b = InceptionSettings(832, UserModel.all_inception_settings['5b']) + model = self.inception(model, inception_settings_5b, '5b') + + model = self.avg_pool(model, 7, 1, 'VALID') + + fc_weight, fc_bias = self.create_fc_vars([1024, self.nclasses], 'fc') + model = self.fully_connect(model, fc_weight, fc_bias) + + return model + + @model_property + def loss(self): + model = self.inference + loss = digits.classification_loss(model, self.y) + accuracy = 
digits.classification_accuracy(model, self.y) + self.summaries.append(tf.summary.scalar(accuracy.op.name, accuracy)) + return loss + + + def inception(self, model, inception_setting, layer_name): + weights, biases = self.create_inception_variables(inception_setting, layer_name) + conv_1x1 = self.conv_layer_with_relu(model, weights['conv_1x1_1'], biases['conv_1x1_1'], 1) + + conv_3x3 = self.conv_layer_with_relu(model, weights['conv_1x1_2'], biases['conv_1x1_2'], 1) + conv_3x3 = self.conv_layer_with_relu(conv_3x3, weights['conv_3x3'], biases['conv_3x3'], 1) + + conv_5x5 = self.conv_layer_with_relu(model, weights['conv_1x1_3'], biases['conv_1x1_3'], 1) + conv_5x5 = self.conv_layer_with_relu(conv_5x5, weights['conv_5x5'], biases['conv_5x5'], 1) + + conv_pool = self.max_pool(model, 3, 1) + conv_pool = self.conv_layer_with_relu(conv_pool, weights['conv_pool'], biases['conv_pool'], 1) + + final_model = tf.concat([conv_1x1, conv_3x3, conv_5x5, conv_pool], 3) + + return final_model + + def create_inception_variables(self, inception_setting, layer_name): + model_dim = inception_setting.model_dim + conv_1x1_1_weight, conv_1x1_1_bias = self.create_conv_vars([1, 1, model_dim, inception_setting.conv_1x1_1_layers], layer_name + '-conv_1x1_1') + conv_1x1_2_weight, conv_1x1_2_bias = self.create_conv_vars([1, 1, model_dim, inception_setting.conv_1x1_2_layers], layer_name + '-conv_1x1_2') + conv_1x1_3_weight, conv_1x1_3_bias = self.create_conv_vars([1, 1, model_dim, inception_setting.conv_1x1_3_layers], layer_name + '-conv_1x1_3') + conv_3x3_weight, conv_3x3_bias = self.create_conv_vars([3, 3, inception_setting.conv_1x1_2_layers, inception_setting.conv_3x3_layers], layer_name + '-conv_3x3') + conv_5x5_weight, conv_5x5_bias = self.create_conv_vars([5, 5, inception_setting.conv_1x1_3_layers, inception_setting.conv_5x5_layers], layer_name + '-conv_5x5') + conv_pool_weight, conv_pool_bias = self.create_conv_vars([1, 1, model_dim, inception_setting.conv_pool_layers], layer_name + '-conv_pool') + + weights = { + 'conv_1x1_1': conv_1x1_1_weight, + 'conv_1x1_2': conv_1x1_2_weight, + 'conv_1x1_3': conv_1x1_3_weight, + 'conv_3x3': conv_3x3_weight, + 'conv_5x5': conv_5x5_weight, + 'conv_pool': conv_pool_weight + } + + biases = { + 'conv_1x1_1': conv_1x1_1_bias, + 'conv_1x1_2': conv_1x1_2_bias, + 'conv_1x1_3': conv_1x1_3_bias, + 'conv_3x3': conv_3x3_bias, + 'conv_5x5': conv_5x5_bias, + 'conv_pool': conv_pool_bias + } + + return weights, biases + + def auxiliary_classifier(self, model, input_size, name): + aux_classifier = self.avg_pool(model, 5, 3, 'VALID') + + conv_weight, conv_bias = self.create_conv_vars([1, 1, input_size, input_size], name + '-conv_1x1') + aux_classifier = self.conv_layer_with_relu(aux_classifier, conv_weight, conv_bias, 1) + + fc_weight, fc_bias = self.create_fc_vars([4*4*input_size, self.nclasses], name + '-fc') + aux_classifier = self.fully_connect(aux_classifier, fc_weight, fc_bias) + + aux_classifier = tf.nn.dropout(aux_classifier, 0.7) + + return aux_classifier + + def conv_layer_with_relu(self, model, weights, biases, stride_size, padding='SAME'): + new_model = tf.nn.conv2d(model, weights, strides=[1, stride_size, stride_size, 1], padding=padding) + new_model = tf.nn.bias_add(new_model, biases) + new_model = tf.nn.relu(new_model) + return new_model + + def max_pool(self, model, kernal_size, stride_size, padding='SAME'): + new_model = tf.nn.max_pool(model, ksize=[1, kernal_size, kernal_size, 1], strides=[1, stride_size, stride_size, 1], padding=padding) + return new_model + + def 
avg_pool(self, model, kernal_size, stride_size, padding='SAME'): + new_model = tf.nn.avg_pool(model, ksize=[1, kernal_size, kernal_size, 1], strides=[1, stride_size, stride_size, 1], padding=padding) + return new_model + + def fully_connect(self, model, weights, biases): + fc_model = tf.reshape(model, [-1, weights.get_shape().as_list()[0]]) + fc_model = tf.matmul(fc_model, weights) + fc_model = tf.add(fc_model, biases) + fc_model = tf.nn.relu(fc_model) + return fc_model + + def create_conv_vars(self, size, name): + weight = self.create_weight(size, name + '_W') + bias = self.create_bias(size[3], name + '_b') + return weight, bias + + def create_fc_vars(self, size, name): + weight = self.create_weight(size, name + '_W') + bias = self.create_bias(size[1], name + '_b') + return weight, bias + + def create_weight(self, size, name): + weight = tf.get_variable(name, size, initializer=tf.contrib.layers.xavier_initializer()) + return weight + + def create_bias(self, size, name): + bias = tf.get_variable(name, [size], initializer=tf.constant_initializer(0.2)) + return bias + +class InceptionSettings(): + + def __init__(self, model_dim, inception_settings): + self.model_dim = model_dim + self.conv_1x1_1_layers = inception_settings[0][0] + self.conv_1x1_2_layers = inception_settings[1][0] + self.conv_1x1_3_layers = inception_settings[2][0] + self.conv_3x3_layers = inception_settings[1][1] + self.conv_5x5_layers = inception_settings[2][1] + self.conv_pool_layers = inception_settings[3][0] \ No newline at end of file diff --git a/digits/standard-networks/tensorflow/lenet.py b/digits/standard-networks/tensorflow/lenet.py index 677c905d6..f52a78205 100644 --- a/digits/standard-networks/tensorflow/lenet.py +++ b/digits/standard-networks/tensorflow/lenet.py @@ -67,7 +67,8 @@ def conv_net(x, weights, biases): @model_property def loss(self): - loss = digits.classification_loss(self.inference, self.y) - accuracy = digits.classification_accuracy(self.inference, self.y) - self.summaries.append(tf.scalar_summary(accuracy.op.name, accuracy)) + model = self.inference + loss = digits.classification_loss(model, self.y) + accuracy = digits.classification_accuracy(model, self.y) + self.summaries.append(tf.summary.scalar(accuracy.op.name, accuracy)) return loss \ No newline at end of file diff --git a/digits/standard-networks/tensorflow/lenet_slim.py b/digits/standard-networks/tensorflow/lenet_slim.py index 58f64f020..8d3a71b77 100644 --- a/digits/standard-networks/tensorflow/lenet_slim.py +++ b/digits/standard-networks/tensorflow/lenet_slim.py @@ -20,7 +20,8 @@ def inference(self): @model_property def loss(self): - loss = digits.classification_loss(self.inference, self.y) - accuracy = digits.classification_accuracy(self.inference, self.y) - self.summaries.append(tf.scalar_summary(accuracy.op.name, accuracy)) + model = self.inference + loss = digits.classification_loss(model, self.y) + accuracy = digits.classification_accuracy(model, self.y) + self.summaries.append(tf.summary.scalar(accuracy.op.name, accuracy)) return loss diff --git a/digits/standard-networks/tensorflow/rnn_mnist.py b/digits/standard-networks/tensorflow/rnn_mnist.py deleted file mode 100644 index 73aa0bdba..000000000 --- a/digits/standard-networks/tensorflow/rnn_mnist.py +++ /dev/null @@ -1,53 +0,0 @@ -from tensorflow.python.ops import rnn, rnn_cell - -def build_model(params): - n_hidden = 28 - n_classes = params['nclasses'] - n_steps = params['input_shape'][0] - n_input = params['input_shape'][1] - - x = tf.reshape(params['x'], shape=[-1, 
params['input_shape'][0], params['input_shape'][1], params['input_shape'][2]]) - - tf.image_summary(x.op.name, x, max_images=1, collections=[digits.GraphKeys.SUMMARIES_TRAIN]) - x = tf.squeeze(x) - - - - # Define weights - weights = { - 'w1': tf.get_variable('w1', [n_hidden, params['nclasses']]) - } - biases = { - 'b1': tf.get_variable('b1', [params['nclasses']]) - } - - # Prepare data shape to match `rnn` function requirements - # Current data input shape: (batch_size, n_steps, n_input) - # Required shape: 'n_steps' tensors list of shape (batch_size, n_input) - - # Permuting batch_size and n_steps - x = tf.transpose(x, [1, 0, 2]) - # Reshaping to (n_steps*batch_size, n_input) - x = tf.reshape(x, [-1, n_input]) - # Split to get a list of 'n_steps' tensors of shape (batch_size, n_input) - x = tf.split(0, n_steps, x) - - # Define a lstm cell with tensorflow - lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0) - - # Get lstm cell output - outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32) - - # Linear activation, using rnn inner loop last output - model = tf.matmul(outputs[-1], weights['w1']) + biases['b1'] - - def loss(y): - loss = digits.classification_loss(model, y) - accuracy = digits.classification_accuracy(model, y) - tf.scalar_summary(accuracy.op.name, accuracy, collections=[digits.GraphKeys.SUMMARIES_TRAIN]) - return loss - - return { - 'model' : model, - 'loss' : loss - } diff --git a/digits/standard-networks/tensorflow/siamese.py b/digits/standard-networks/tensorflow/siamese.py deleted file mode 100644 index 2b0fe588e..000000000 --- a/digits/standard-networks/tensorflow/siamese.py +++ /dev/null @@ -1,38 +0,0 @@ -def build_model(params): - _x = tf.reshape(params['x'], shape=[-1, params['input_shape'][0], params['input_shape'][1], params['input_shape'][2]]) - #tf.image_summary(_x.op.name, _x, max_images=10, collections=[digits.GraphKeys.SUMMARIES_TRAIN]) - - # Split out the color channels - _, model_g, model_b = tf.split(3, 3, _x, name='split_channels') - #tf.image_summary(model_g.op.name, model_g, max_images=10, collections=[digits.GraphKeys.SUMMARIES_TRAIN]) - #tf.image_summary(model_b.op.name, model_b, max_images=10, collections=[digits.GraphKeys.SUMMARIES_TRAIN]) - - with slim.arg_scope([slim.conv2d, slim.fully_connected], - weights_initializer=tf.contrib.layers.xavier_initializer(), - weights_regularizer=slim.l2_regularizer(0.0005) ): - with tf.variable_scope("siamese") as scope: - def make_tower(net): - net = slim.conv2d(net, 20, [5, 5], padding='VALID', scope='conv1') - net = slim.max_pool2d(net, [2, 2], padding='VALID', scope='pool1') - net = slim.conv2d(net, 50, [5, 5], padding='VALID', scope='conv2') - net = slim.max_pool2d(net, [2, 2], padding='VALID', scope='pool2') - net = slim.flatten(net) - net = slim.fully_connected(net, 500, scope='fc1') - net = slim.fully_connected(net, 2, activation_fn=None, scope='fc2') - return net - - model_g = make_tower(model_g) - model_g = tf.reshape(model_g, shape=[-1, 2]) - scope.reuse_variables() - model_b = make_tower(model_b) - model_b = tf.reshape(model_b, shape=[-1, 2]) - - def loss(y): - y = tf.reshape(y, shape=[-1]) - y = tf.to_float(y) - return digits.constrastive_loss(model_g, model_b, y) - - return { - 'model' : model_g, - 'loss' : loss, - } diff --git a/digits/standard-networks/tensorflow/siamese_simple.py b/digits/standard-networks/tensorflow/siamese_simple.py deleted file mode 100644 index bd0cd8d15..000000000 --- a/digits/standard-networks/tensorflow/siamese_simple.py +++ /dev/null @@ -1,38 +0,0 @@ -def 
build_model(params): - _x = tf.reshape(params['x'], shape=[-1, params['input_shape'][0], params['input_shape'][1], params['input_shape'][2]]) - #tf.image_summary(_x.op.name, _x, max_images=10, collections=[digits.GraphKeys.SUMMARIES_TRAIN]) - - # Split out the channel in two - lhs, rhs = tf.split(0, 2, _x, name='split_batch') - - with slim.arg_scope([slim.conv2d, slim.fully_connected], - weights_initializer=tf.contrib.layers.xavier_initializer(), - weights_regularizer=slim.l2_regularizer(0.0005) ): - with tf.variable_scope("siamese") as scope: - def make_tower(net): - net = slim.conv2d(net, 20, [5, 5], padding='VALID', scope='conv1') - net = slim.max_pool2d(net, [2, 2], padding='VALID', scope='pool1') - net = slim.conv2d(net, 50, [5, 5], padding='VALID', scope='conv2') - net = slim.max_pool2d(net, [2, 2], padding='VALID', scope='pool2') - net = slim.flatten(net) - net = slim.fully_connected(net, 500, scope='fc1') - net = slim.fully_connected(net, 2, activation_fn=None, scope='fc2') - return net - - lhs = make_tower(lhs) - lhs = tf.reshape(lhs, shape=[-1, 2]) - scope.reuse_variables() - rhs = make_tower(rhs) - rhs = tf.reshape(rhs, shape=[-1, 2]) - - def loss(y): - y = tf.reshape(y, shape=[-1]) - ylhs, yrhs = tf.split(0, 2, y, name='split_label') - y = tf.equal(ylhs, yrhs) - y = tf.to_float(y) - return digits.constrastive_loss(lhs, rhs, y) - - return { - 'model' : tf.concat(0, [lhs, rhs]), - 'loss' : loss, - } diff --git a/digits/standard-networks/tensorflow/vgg16.py b/digits/standard-networks/tensorflow/vgg16.py index 3ce89edcc..6efd55bde 100644 --- a/digits/standard-networks/tensorflow/vgg16.py +++ b/digits/standard-networks/tensorflow/vgg16.py @@ -28,5 +28,5 @@ def inference(self): def loss(self): loss = digits.classification_loss(self.inference, self.y) accuracy = digits.classification_accuracy(self.inference, self.y) - self.summaries.append(tf.scalar_summary(accuracy.op.name, accuracy)) + self.summaries.append(tf.summary.scalar(accuracy.op.name, accuracy)) return loss diff --git a/digits/standard-networks/torch/ImageNet-Training/googlenet.lua b/digits/standard-networks/torch/ImageNet-Training/googlenet.lua index d11aff601..1c4e23e7f 100644 --- a/digits/standard-networks/torch/ImageNet-Training/googlenet.lua +++ b/digits/standard-networks/torch/ImageNet-Training/googlenet.lua @@ -84,7 +84,7 @@ function createModel(nChannels, nClasses) main_branch:add(nn.Linear(1024,nClasses)) main_branch:add(nn.LogSoftMax()) - -- add auxillary classifier here (thanks to Christian Szegedy for the details) + -- add auxiliary classifier here (thanks to Christian Szegedy for the details) local aux_classifier = nn.Sequential() local l = backend.SpatialAveragePooling(5,5,3,3) if backend == cudnn then l = l:ceil() end diff --git a/digits/static/css/style.css b/digits/static/css/style.css index 056923777..094177ed7 100644 --- a/digits/static/css/style.css +++ b/digits/static/css/style.css @@ -1,4 +1,4 @@ -/* Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. */ +/* Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. */ body { padding-top: 50px; @@ -154,3 +154,7 @@ ul.inline li { margin-top:10px; margin-bottom:10px; } + +div.exploration { + margin-top: 10px; +} diff --git a/digits/static/js/PretrainedModel.js b/digits/static/js/PretrainedModel.js index 955d40604..4a2fc9043 100644 --- a/digits/static/js/PretrainedModel.js +++ b/digits/static/js/PretrainedModel.js @@ -1,4 +1,4 @@ -// Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. 
+// Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. var PretrainedModel = function(params) { var props = _.extend({ selector: '#pretrainedModelContent', diff --git a/digits/static/js/digits.js b/digits/static/js/digits.js index 84e5c16a5..e46a0de24 100644 --- a/digits/static/js/digits.js +++ b/digits/static/js/digits.js @@ -1,4 +1,4 @@ -// Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +// Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. function errorAlert(response) { var title, msg; diff --git a/digits/static/js/file_field.js b/digits/static/js/file_field.js index 27b977168..af8231019 100644 --- a/digits/static/js/file_field.js +++ b/digits/static/js/file_field.js @@ -1,4 +1,4 @@ -// Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. +// Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. $(document).on('change', '.btn-file :file', function() { var input = $(this), diff --git a/digits/static/js/home_app.js b/digits/static/js/home_app.js index cfbc3d83c..402db8556 100644 --- a/digits/static/js/home_app.js +++ b/digits/static/js/home_app.js @@ -1,4 +1,4 @@ -// Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +// Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. 'use strict'; diff --git a/digits/static/js/model-graphs.js b/digits/static/js/model-graphs.js index 72a0f0025..a2cf0c427 100644 --- a/digits/static/js/model-graphs.js +++ b/digits/static/js/model-graphs.js @@ -1,4 +1,4 @@ -// Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +// Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. function drawCombinedGraph(data) { $('.combined-graph').show(); // drawCombinedGraph.chart is a static variable that holds the graph state; diff --git a/digits/static/js/store.js b/digits/static/js/store.js index 2f864115a..402a89e72 100644 --- a/digits/static/js/store.js +++ b/digits/static/js/store.js @@ -1,4 +1,4 @@ -// Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +// Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. (function(angular) { 'use strict'; diff --git a/digits/static/js/time_filters.js b/digits/static/js/time_filters.js index 35234c49d..ed075aba3 100644 --- a/digits/static/js/time_filters.js +++ b/digits/static/js/time_filters.js @@ -1,4 +1,4 @@ -// Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +// Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. function print_time_diff(diff) { if (diff < 0) { diff --git a/digits/status.py b/digits/status.py index f2178ecd5..779b5088a 100644 --- a/digits/status.py +++ b/digits/status.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import time diff --git a/digits/store/views.py b/digits/store/views.py index b4e1a1b90..82377cd30 100644 --- a/digits/store/views.py +++ b/digits/store/views.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. from __future__ import absolute_import import json diff --git a/digits/task.py b/digits/task.py index 2977789cf..be36ac370 100644 --- a/digits/task.py +++ b/digits/task.py @@ -1,4 +1,4 @@ -# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. +# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. 
from __future__ import absolute_import import logging @@ -17,7 +17,7 @@ from .status import Status, StatusCls import digits.log -# NOTE: Increment this everytime the pickled version changes +# NOTE: Increment this every time the pickled version changes PICKLE_VERSION = 1 diff --git a/digits/templates/datasets/generic/new.html b/digits/templates/datasets/generic/new.html index 23bc431f3..9cd54b9cc 100644 --- a/digits/templates/datasets/generic/new.html +++ b/digits/templates/datasets/generic/new.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. #} {% from "helper.html" import print_flashes %} {% from "helper.html" import print_combined_errors %} diff --git a/digits/templates/datasets/generic/show.html b/digits/templates/datasets/generic/show.html index 9df109cde..705ea022e 100644 --- a/digits/templates/datasets/generic/show.html +++ b/digits/templates/datasets/generic/show.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. #} {% extends "job.html" %} {% from "helper.html" import serve_file %} diff --git a/digits/templates/datasets/generic/summary.html b/digits/templates/datasets/generic/summary.html index d53797950..5d8e7b3a7 100644 --- a/digits/templates/datasets/generic/summary.html +++ b/digits/templates/datasets/generic/summary.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. #}

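Two notes on the code changes above. First, the edits to the standard TensorFlow networks (the alexnet, lenet, and vgg16 variants above) are part of the TensorFlow 1.x port: the 0.x module-level summary functions were moved under tf.summary. A small sketch of the rename, assuming a TensorFlow 1.x installation; the constant is only a stand-in for a real accuracy op, and not every listed rename appears in this patch:

    import tensorflow as tf

    # Common TensorFlow 0.x -> 1.x summary renames:
    #   tf.scalar_summary(...)    -> tf.summary.scalar(...)
    #   tf.image_summary(...)     -> tf.summary.image(...)
    #   tf.histogram_summary(...) -> tf.summary.histogram(...)
    #   tf.merge_all_summaries()  -> tf.summary.merge_all()
    accuracy = tf.constant(0.75, name='accuracy')               # stand-in tensor
    summary_op = tf.summary.scalar(accuracy.op.name, accuracy)  # 1.x form used in the new networks

Second, the caffe_train.py hunk above stops min-max normalizing Softmax blobs and instead multiplies them by 255: softmax outputs already lie in [0, 1], so scaling fills the 8-bit display range while preserving absolute confidences, whereas normalization stretches even a near-uniform distribution to full contrast. A toy illustration with made-up numbers:

    import numpy as np

    probs = np.array([0.02, 0.08, 0.90])        # softmax output, already in [0, 1]
    scaled = (probs * 255).astype('uint8')      # [5, 20, 229]: keeps absolute confidences
    lo, hi = probs.min(), probs.max()
    stretched = ((probs - lo) / (hi - lo) * 255).astype('uint8')
    # [0, 17, 255]: min-max normalization maximizes contrast regardless of confidence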
diff --git a/digits/templates/datasets/images/classification/new.html b/digits/templates/datasets/images/classification/new.html index e6fa9eb10..5a8836970 100644 --- a/digits/templates/datasets/images/classification/new.html +++ b/digits/templates/datasets/images/classification/new.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. #} {% from "helper.html" import print_flashes %} {% from "helper.html" import print_errors %} diff --git a/digits/templates/datasets/images/classification/show.html b/digits/templates/datasets/images/classification/show.html index cd099aa59..10a06ec5e 100644 --- a/digits/templates/datasets/images/classification/show.html +++ b/digits/templates/datasets/images/classification/show.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. #} {% extends "job.html" %} {% from "helper.html" import serve_file %} @@ -87,7 +87,10 @@

{{task.name()}}

{% endif %} {% if task.entries_count %}
DB Entries
-
{{task.entries_count}}
+
+ {{task.entries_count}} + {% if task.entries_error %} ({{task.entries_error}} failed to load) {% endif %} +
{% endif %} diff --git a/digits/templates/datasets/images/classification/summary.html b/digits/templates/datasets/images/classification/summary.html index 9f6cab95a..fc18ccc4d 100644 --- a/digits/templates/datasets/images/classification/summary.html +++ b/digits/templates/datasets/images/classification/summary.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. #}

diff --git a/digits/templates/datasets/images/explore.html b/digits/templates/datasets/images/explore.html index 75217a04d..f17444668 100644 --- a/digits/templates/datasets/images/explore.html +++ b/digits/templates/datasets/images/explore.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. #} {% extends "layout.html" %} diff --git a/digits/templates/datasets/images/generic/new.html b/digits/templates/datasets/images/generic/new.html index 8182ab824..d0dc68ba1 100644 --- a/digits/templates/datasets/images/generic/new.html +++ b/digits/templates/datasets/images/generic/new.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. #} {% from "helper.html" import print_flashes %} {% from "helper.html" import print_errors %} diff --git a/digits/templates/datasets/images/generic/show.html b/digits/templates/datasets/images/generic/show.html index 3a4162fb0..8e45866fe 100644 --- a/digits/templates/datasets/images/generic/show.html +++ b/digits/templates/datasets/images/generic/show.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. #} {% extends "job.html" %} {% from "helper.html" import serve_file %} diff --git a/digits/templates/datasets/images/generic/summary.html b/digits/templates/datasets/images/generic/summary.html index e3754146c..dd1a427ab 100644 --- a/digits/templates/datasets/images/generic/summary.html +++ b/digits/templates/datasets/images/generic/summary.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. #}

diff --git a/digits/templates/error.html b/digits/templates/error.html index e91f9455f..bc34a190e 100644 --- a/digits/templates/error.html +++ b/digits/templates/error.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. #} diff --git a/digits/templates/helper.html b/digits/templates/helper.html index 5efd3a8ca..b8f69b859 100644 --- a/digits/templates/helper.html +++ b/digits/templates/helper.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. #} {% macro print_flashes() %} {% with messages = get_flashed_messages(with_categories=true) %} diff --git a/digits/templates/home.html b/digits/templates/home.html index 34d545a5b..1229f7b9d 100644 --- a/digits/templates/home.html +++ b/digits/templates/home.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. #} {% from "helper.html" import print_flashes %} diff --git a/digits/templates/job.html b/digits/templates/job.html index 26998e11c..305a79c00 100644 --- a/digits/templates/job.html +++ b/digits/templates/job.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. #} {% from "helper.html" import print_flashes, print_exception %} diff --git a/digits/templates/layout.html b/digits/templates/layout.html index 7380a1350..7157da232 100644 --- a/digits/templates/layout.html +++ b/digits/templates/layout.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. #} diff --git a/digits/templates/login.html b/digits/templates/login.html index ee81df044..68f527584 100644 --- a/digits/templates/login.html +++ b/digits/templates/login.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. #} {% from "helper.html" import print_flashes %} diff --git a/digits/templates/models/data_augmentation.html b/digits/templates/models/data_augmentation.html index 3c127cbad..bf8bbc429 100644 --- a/digits/templates/models/data_augmentation.html +++ b/digits/templates/models/data_augmentation.html @@ -1,4 +1,4 @@ -{# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. #} +{# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. #}
 {{form.aug_flip.label}}
diff --git a/digits/templates/models/gpu_utilization.html b/digits/templates/models/gpu_utilization.html
index 371a3c506..c42e4f26b 100644
--- a/digits/templates/models/gpu_utilization.html
+++ b/digits/templates/models/gpu_utilization.html
@@ -1,4 +1,4 @@
-{# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. #}
+{# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. #}
 Hardware
 {% for info in data_gpu %}
 {{info.name}} (#{{info.index}})
diff --git a/digits/templates/models/images/classification/classify_many.html b/digits/templates/models/images/classification/classify_many.html
index b91051f07..96d67aa2e 100644
--- a/digits/templates/models/images/classification/classify_many.html
+++ b/digits/templates/models/images/classification/classify_many.html
@@ -1,4 +1,4 @@
-{# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. #}
+{# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. #}
 {% extends "job.html" %}
 {% block nav %}
diff --git a/digits/templates/models/images/classification/classify_one.html b/digits/templates/models/images/classification/classify_one.html
index 396c68691..af533bd9b 100644
--- a/digits/templates/models/images/classification/classify_one.html
+++ b/digits/templates/models/images/classification/classify_one.html
@@ -1,4 +1,4 @@
-{# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. #}
+{# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. #}
 {% extends "job.html" %}
 {% block nav %}
diff --git a/digits/templates/models/images/classification/custom_network_explanation.html b/digits/templates/models/images/classification/custom_network_explanation.html
index c79606d10..90aaaf5bb 100644
--- a/digits/templates/models/images/classification/custom_network_explanation.html
+++ b/digits/templates/models/images/classification/custom_network_explanation.html
@@ -1,4 +1,4 @@
-{# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. #}
+{# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. #}
 
 Specifying a custom Caffe network
 
@@ -105,3 +105,10 @@

 Specifying a custom Torch network
 
 Use this field to enter a Torch network using Lua code. Refer to the documentation for more information.
+
+Specifying a custom Tensorflow network
+
+
+    Use this field to enter a Tensorflow network using python.
+    Refer to the documentation for more information.
+
\ No newline at end of file

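For readers who want to see what the new help text refers to, below is a minimal sketch of a custom TensorFlow network as it might be entered in that field, written in the style of the DIGITS TensorFlow examples. The UserModel/Tower/model_property names and the self.x, self.y, self.input_shape and self.nclasses attributes are assumptions taken from that convention (DIGITS supplies them at run time); treat this as an illustration, not the exact API added by this patch.

    import tensorflow as tf
    import tensorflow.contrib.slim as slim

    # When pasted into the DIGITS UI, Tower and model_property are assumed to be
    # provided by DIGITS; the stand-ins below only let the sketch parse on its own.
    try:
        Tower, model_property
    except NameError:
        Tower, model_property = object, property

    class UserModel(Tower):

        @model_property
        def inference(self):
            # Reshape the flat input batch into NHWC images.
            x = tf.reshape(self.x, shape=[-1,
                                          self.input_shape[0],
                                          self.input_shape[1],
                                          self.input_shape[2]])
            # Small LeNet-style stack built with TF-Slim.
            net = slim.conv2d(x, 20, [5, 5], scope='conv1')
            net = slim.max_pool2d(net, [2, 2], scope='pool1')
            net = slim.conv2d(net, 50, [5, 5], scope='conv2')
            net = slim.max_pool2d(net, [2, 2], scope='pool2')
            net = slim.flatten(net)
            net = slim.fully_connected(net, 500, scope='fc1')
            net = slim.fully_connected(net, self.nclasses, activation_fn=None, scope='fc2')
            return net

        @model_property
        def loss(self):
            # Mean cross-entropy, assuming self.y holds integer class labels.
            loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
                logits=self.inference, labels=self.y)
            return tf.reduce_mean(loss)

In this convention, inference returns the logits graph and loss returns a scalar tensor for the training loop to minimize; the class derives from Tower because the model is instantiated once per GPU tower.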
diff --git a/digits/templates/models/images/classification/new.html b/digits/templates/models/images/classification/new.html
index fb2f7eb57..2def43d80 100644
--- a/digits/templates/models/images/classification/new.html
+++ b/digits/templates/models/images/classification/new.html
@@ -1,4 +1,4 @@
-{# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. #}
+{# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. #}
 {% from "helper.html" import print_flashes %}
 {% from "helper.html" import print_errors %}
@@ -624,9 +624,35 @@

Data Augmentations

     return the_data;
 }
+//copied from https://stackoverflow.com/questions/4565112/javascript-how-to-find-out-if-the-user-browser-is-chrome/13348618#13348618
+function isChrome() {
+    var isChromium = window.chrome,
+        winNav = window.navigator,
+        vendorName = winNav.vendor,
+        isOpera = winNav.userAgent.indexOf("OPR") > -1,
+        isIEedge = winNav.userAgent.indexOf("Edge") > -1,
+        isIOSChrome = winNav.userAgent.match("CriOS");
+
+    if(isIOSChrome){
+        return true;
+    } else if(isChromium !== null && isChromium !== undefined && vendorName === "Google Inc." && isOpera == false && isIEedge == false) {
+        return true;
+    } else {
+        return false;
+    }
+}
+
 function visualizeNetwork() {
     var framework = $('#framework').val();
     var is_tf = framework.includes("ensorflow") // @TODO(tzaman) - dirty
+
+    if (is_tf) {
+        if (!isChrome()) {
+            bootbox.alert({title: "Visualization Error", message: "Tensorflow network visualization is only available for Google Chrome"});
+            return;
+        }
+    }
+
     var num_sel_gpus = 0
     var sel_gpus = $("#select_gpus").val()
     if (sel_gpus) {
diff --git a/digits/templates/models/images/classification/partials/new/network_tab_pretrained.html b/digits/templates/models/images/classification/partials/new/network_tab_pretrained.html
index 1e10916cf..6b68e238a 100644
--- a/digits/templates/models/images/classification/partials/new/network_tab_pretrained.html
+++ b/digits/templates/models/images/classification/partials/new/network_tab_pretrained.html
@@ -1,4 +1,4 @@
-{# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. #}
+{# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. #}
+{% endblock %}
+
+{% block nav %}
+
  • {{job.job_type()}}
  •
+{% endblock %}
+
+{% block content %}
+
+{% set task = job.train_task() %}
+
+
+
+    {% set combined_graph_data = job.train_task().combined_graph_data(cull=False) %}
+    {% if combined_graph_data %}
+
+    {% else %}
+    No data.
+    {% endif %}
+
+
+
+
+{% endblock %}
+
diff --git a/digits/templates/models/images/generic/new.html b/digits/templates/models/images/generic/new.html
index 84f9422ec..ea682ff6c 100644
--- a/digits/templates/models/images/generic/new.html
+++ b/digits/templates/models/images/generic/new.html
@@ -1,4 +1,4 @@
-{# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. #}
+{# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. #}
 {% from "helper.html" import print_flashes %}
 {% from "helper.html" import print_errors %}
@@ -593,9 +593,35 @@

    Data Augmentations

     return the_data;
 }
+//copied from https://stackoverflow.com/questions/4565112/javascript-how-to-find-out-if-the-user-browser-is-chrome/13348618#13348618
+function isChrome() {
+    var isChromium = window.chrome,
+        winNav = window.navigator,
+        vendorName = winNav.vendor,
+        isOpera = winNav.userAgent.indexOf("OPR") > -1,
+        isIEedge = winNav.userAgent.indexOf("Edge") > -1,
+        isIOSChrome = winNav.userAgent.match("CriOS");
+
+    if(isIOSChrome){
+        return true;
+    } else if(isChromium !== null && isChromium !== undefined && vendorName === "Google Inc." && isOpera == false && isIEedge == false) {
+        return true;
+    } else {
+        return false;
+    }
+}
+
 function visualizeNetwork() {
     var framework = $('#framework').val();
     var is_tf = framework.includes("ensorflow") // @TODO(tzaman) - dirty
+
+    if (is_tf) {
+        if (!isChrome()) {
+            bootbox.alert({title: "Visualization Error", message: "Tensorflow network visualization is only available for Google Chrome"});
+            return;
+        }
+    }
+
     var num_sel_gpus = 0
     var sel_gpus = $("#select_gpus").val()
     if (sel_gpus) {
diff --git a/digits/templates/models/images/generic/partials/new/network_tab_pretrained.html b/digits/templates/models/images/generic/partials/new/network_tab_pretrained.html
index 6423cc8a3..3c90617d7 100644
--- a/digits/templates/models/images/generic/partials/new/network_tab_pretrained.html
+++ b/digits/templates/models/images/generic/partials/new/network_tab_pretrained.html
@@ -1,4 +1,4 @@
-{# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. #}
+{# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. #}
 {% set batch_size = 10 %}
 {% for batch in form.pretrained_networks|batch(batch_size) %}
diff --git a/digits/templates/models/images/generic/partials/new/network_tab_previous.html b/digits/templates/models/images/generic/partials/new/network_tab_previous.html
index c408641e0..fa3d2fd82 100644
--- a/digits/templates/models/images/generic/partials/new/network_tab_previous.html
+++ b/digits/templates/models/images/generic/partials/new/network_tab_previous.html
@@ -1,4 +1,4 @@
-{# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. #}
+{# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. #}
 {% set batch_size = 10 %}
 {% for batch in form.previous_networks|batch(batch_size) %}
diff --git a/digits/templates/models/images/generic/partials/new/network_tab_standard.html b/digits/templates/models/images/generic/partials/new/network_tab_standard.html
index a7c004f67..a4434b56f 100644
--- a/digits/templates/models/images/generic/partials/new/network_tab_standard.html
+++ b/digits/templates/models/images/generic/partials/new/network_tab_standard.html
@@ -1,4 +1,4 @@
-{# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. #}
+{# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. #}
diff --git a/digits/templates/models/images/generic/show.html b/digits/templates/models/images/generic/show.html
index 6e98e1d81..e40d42e5a 100644
--- a/digits/templates/models/images/generic/show.html
+++ b/digits/templates/models/images/generic/show.html
@@ -1,4 +1,4 @@
-{# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. #}
+{# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. #}
 {% extends "job.html" %}
 {% from "helper.html" import serve_file %}
diff --git a/digits/templates/models/large_graph.html b/digits/templates/models/large_graph.html
index 61f9387c4..69d009f12 100644
--- a/digits/templates/models/large_graph.html
+++ b/digits/templates/models/large_graph.html
@@ -1,4 +1,4 @@
-{# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved. #}
+{# Copyright (c) 2014-2017, NVIDIA CORPORATION. All rights reserved. #}
 {% extends "layout.html" %}
diff --git a/digits/templates/models/python_layer_explanation.html b/digits/templates/models/python_layer_explanation.html
index aedacc78e..26a57ac61 100644
--- a/digits/templates/models/python_layer_explanation.html
+++ b/digits/templates/models/python_layer_explanation.html
@@ -1,4 +1,4 @@
-{# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved. #}
+{# Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved. #}

    Using Python layers

diff --git a/digits/templates/partials/home/datasets_pane.html b/digits/templates/partials/home/datasets_pane.html
index eede22f19..0381a85fd 100644
--- a/digits/templates/partials/home/datasets_pane.html
+++ b/digits/templates/partials/home/datasets_pane.html
@@ -1,4 +1,4 @@
-{# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. #}
+{# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. #}
diff --git a/digits/templates/partials/home/model_pane.html b/digits/templates/partials/home/model_pane.html
index 9dff13f1e..e097eb80e 100644
--- a/digits/templates/partials/home/model_pane.html
+++ b/digits/templates/partials/home/model_pane.html
@@ -1,4 +1,4 @@
-{# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. #}
+{# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. #}
diff --git a/digits/templates/partials/home/pretrained_model_pane.html b/digits/templates/partials/home/pretrained_model_pane.html
index ea44aea47..cd1913d12 100644
--- a/digits/templates/partials/home/pretrained_model_pane.html
+++ b/digits/templates/partials/home/pretrained_model_pane.html
@@ -1,4 +1,4 @@
-{# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. #}
+{# Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved. #}
    Network