Add readmes to src and aws_scripts
Signed-off-by: Cody W. Eilar <[email protected]>
AcidLeroy committed Jul 10, 2016
1 parent 33d9c60 commit 2811284
Showing 2 changed files with 70 additions and 0 deletions.
35 changes: 35 additions & 0 deletions src/README.md
@@ -0,0 +1,35 @@
# SRC directory

This directory contains the source code for the `extract_features`
pipeline. The C++ code is the workhorse of the entire project,
the MATLAB code is the inspiration for the C++ code, most of
the Python code is for running AWS services, and the R
code is used for classifying the output of the C++ code.
Below is a brief description of what is contained in
each of the directories:

- *aws_scripts* - All of the Python code for running
`extract_features` on the cloud.
- *datatypes* - A small collection of C++ objects used as
general-purpose data structures.
- *experiments* - Most of the code in here is for benchmarking
various aspects of the `extract_features` program.
There are speed comparisons between using `Mat` and `UMat`
datatypes, as well as comparisons between different implementations
of overlap-add (a small Python timing sketch in this spirit
follows this list).
- *ipython_notebooks* - Any python notebooks used for
generating plots related to `extract_features`.
- *matlab* - The original code used to validate that
classification of activities is possible using optical flow.
- *optical_flow* - This is where the majority of the C++ library code
for extracting features using optical flow lives. There are several
implementations inside the directory. All of the classes are
templated.
- *programs* - There are a few miscellaneous programs inside
this directory, but it also contains the main `extract_features` program.
- *r_code* - This is where the classifier R code resides.
- *readers* - Originally, the intent of this directory
was to contain several types of video readers, e.g. one for
standard video formats and another for data stored in an HDF5 file. The HDF5 reader was never implemented.
- *utilities* - A library of useful utilities for working with videos in this project, mostly built
around *OpenCV* functions.
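
As a flavor of the `Mat` vs. `UMat` comparisons mentioned under *experiments*, here is a minimal timing sketch using OpenCV's Python bindings. The actual benchmarks are written in C++; the frame size, kernel size, and repeat count below are illustrative assumptions, not values taken from the experiments.

```python
import time

import cv2
import numpy as np

# A synthetic 1080p frame; size and kernel are illustrative.
frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)

def time_blur(src, repeats=100):
    """Time repeated box blurs on either a Mat (ndarray) or a UMat."""
    start = time.perf_counter()
    for _ in range(repeats):
        cv2.blur(src, (15, 15))
    cv2.ocl.finish()  # flush any pending OpenCL work before stopping the clock
    return time.perf_counter() - start

mat_seconds = time_blur(frame)             # plain Mat (NumPy array)
umat_seconds = time_blur(cv2.UMat(frame))  # UMat, may dispatch to OpenCL

print("Mat: %.3fs  UMat: %.3fs" % (mat_seconds, umat_seconds))
```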
35 changes: 35 additions & 0 deletions src/aws_scripts/README.md
@@ -0,0 +1,35 @@
# AWS Scripts

The code in this directory is used to distribute the
`extract_features` program on the cloud. Before you can
successfully run this code, a few prerequisites
must be met:

## Prerequisites
- Must have an AWS account
- Must have access to the S3 bucket hosting the AOLME videos
(Please email [Cody Eilar](mailto:[email protected]) if you wish to have access)
- Must have an ECS cluster set up that pulls from the latest
optical flow Docker build located here: [AWS Container Registry](883856218903.dkr.ecr.us-west-2.amazonaws.com/optical-flow:latest).
- `docker pull 883856218903.dkr.ecr.us-west-2.amazonaws.com/optical-flow:latest`
- If you get a permission denied error, be sure you follow
the steps on the AWS site for performing a docker pull,
and then you may need to request access from [Cody Eilar](mailto:[email protected]).
- Once the cluster is properly configured, you will then
be able to run the scripts in this directory.

## Description of Scripts
- `add_videos_to_process.py` - This is the script used
to add video URIs to an SQS queue. The videos are then popped
off the queue by the running Docker images, and the output
of `extract_features` is put onto another queue, which this script then reads and stores into a variable (see the producer sketch after this list).
- `message_type.py` - This contains the layout of the messages
sent to the SQS queue for processing.
- `process_video_list.py` - This is ultimately the script
that does all the processing in the Docker image. It essentially
pulls video URIs off the SQS queue and processes them. When
processing is done, it puts the answer onto another SQS
queue to be processed by the master node (see the worker sketch after this list).
- `example_videos_to_process.txt` - An example input
file for the `add_videos_to_process.py` script, illustrating
how to send videos to the processing nodes.
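
A minimal sketch of the producer side described above, assuming boto3, hypothetical queue names, and a JSON message body with a single `video_uri` field (the real layout lives in `message_type.py`):

```python
import json

import boto3

sqs = boto3.resource("sqs", region_name="us-west-2")
# Queue names are hypothetical; substitute the ones your cluster uses.
work_queue = sqs.get_queue_by_name(QueueName="videos-to-process")
result_queue = sqs.get_queue_by_name(QueueName="extracted-features")

# Enqueue one message per video URI listed in the input file.
with open("example_videos_to_process.txt") as f:
    for uri in filter(None, (line.strip() for line in f)):
        work_queue.send_message(MessageBody=json.dumps({"video_uri": uri}))

# Drain the result queue; each message holds the extract_features
# output for one video.
results = []
while True:
    messages = result_queue.receive_messages(
        MaxNumberOfMessages=10, WaitTimeSeconds=20)
    if not messages:
        break
    for msg in messages:
        results.append(json.loads(msg.body))
        msg.delete()
```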
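And a corresponding sketch of the worker loop that `process_video_list.py` implements inside the Docker image, with a hypothetical `run_extract_features()` wrapper standing in for the real invocation of the C++ binary:

```python
import json
import subprocess

import boto3

sqs = boto3.resource("sqs", region_name="us-west-2")
work_queue = sqs.get_queue_by_name(QueueName="videos-to-process")     # hypothetical
result_queue = sqs.get_queue_by_name(QueueName="extracted-features")  # hypothetical

def run_extract_features(video_uri):
    """Hypothetical wrapper: fetch the video and run the C++ binary on it."""
    out = subprocess.check_output(["extract_features", video_uri])
    return out.decode()

while True:
    # Long-poll for one work item at a time.
    for msg in work_queue.receive_messages(MaxNumberOfMessages=1,
                                           WaitTimeSeconds=20):
        payload = json.loads(msg.body)
        features = run_extract_features(payload["video_uri"])
        result_queue.send_message(MessageBody=json.dumps(
            {"video_uri": payload["video_uri"], "features": features}))
        msg.delete()  # remove the work item only after the result is queued
```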
