<!doctype html>
<html>
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="chrome=1">
<title>OpenCV-object-detection-tutorial by JohnAllen</title>
<link rel="stylesheet" href="stylesheets/styles.css">
<link rel="stylesheet" href="stylesheets/github-light.css">
<script src="javascripts/scale.fix.js"></script>
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no">
<!--[if lt IE 9]>
<script src="//html5shiv.googlecode.com/svn/trunk/html5.js"></script>
<![endif]-->
</head>
<body>
<div class="wrapper">
<header>
<h1 class="header">OpenCV-object-detection-tutorial</h1>
<p class="header">How to Detect Objects Using OpenCV & a Negative Image Set</p>
<ul>
<li class="download"><a class="buttons" href="https://github.com/JohnAllen/opencv-object-detection-tutorial/zipball/master">Download ZIP</a></li>
<li class="download"><a class="buttons" href="https://github.com/JohnAllen/opencv-object-detection-tutorial/tarball/master">Download TAR</a></li>
<li><a class="buttons github" href="https://github.com/JohnAllen/opencv-object-detection-tutorial">View On GitHub</a></li>
</ul>
<p class="header">This project is maintained by <a class="header name" href="https://github.com/JohnAllen">JohnAllen</a></p>
</header>
<section>
<h1>
<a id="object-detection-using-opencv" class="anchor" href="#object-detection-using-opencv" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Object Detection Using OpenCV</h1>
<p>Recently I wanted to add object detection capabilities to a robot I am working on that will detect electrical outlets and plug itself in. The robot needs to perform with a high level of accuracy, at least 99% at each step of the way. One thing to remember about robot operations is that if a goal requires n steps and each step succeeds only 99% of the time, the overall success rate is 0.99^n, which can be significantly less than 99% (ten 99%-reliable steps, for example, all succeed only about 90% of the time, since 0.99^10 ≈ 0.904). So each step must be nearly 100% successful. Object detection is the first step in many robotic operations and one that subsequent steps depend on. </p>
<p>Because the performance of the object detection directly affects the performance of the robots using it, I chose to take the time to understand how OpenCV’s object detection works and how to optimize its performance. I also found the available documentation and tutorials incomplete or outdated, and a few Stack Overflow questions similar to mine remained unanswered. So it seemed that taking the time to write a detailed reference with my findings might benefit others.</p>
<p>Here's a great example of how well OpenCV's object detection can work when you get it right!</p>
<p><img src="images/outlet-detection-10-ines.PNG" alt=""></p>
<p>In this post, I use *nix programs; I apologize to Windows users in advance. </p>
<p>I want to point out that installing OpenCV on certain platforms can be complicated and slow. I suggest reading this post thoroughly, collecting your images, and then installing OpenCV on a remote server. Installation is much easier on a remote server running Ubuntu, and you can rent a server with far more CPU power than your laptop, which completes the training much faster.</p>
<p>As I began to learn about OpenCV’s object detection capabilities, I had numerous questions:</p>
<ul>
<li>What is going on behind the scenes? How does the Viola-Jones algorithm work?</li>
<li>How many positive and negative images do I need?</li>
<li>Should I provide multiple positive images, or will using OpenCV’s <code>opencv_createsamples</code> utility to generate distorted versions of a single positive image suffice?</li>
<li>Does it matter what the negative images contain so long as they don’t contain the object I want to detect?</li>
<li>For positive images, do the objects need to fill the entire image?</li>
<li>What is the easiest way to create positive images? How can I acquire a negative image set?</li>
<li>Do the positive and negative images need to be the same size?</li>
<li>Which options do I want to pass to each of the OpenCV programs?</li>
<li>How can I quickly test the performance of my classifier and cascade file?</li>
<li>How could I train my classifier on a remote host so I don’t have to use my machine to train for multiple days or weeks?</li>
<li>Does it matter if I use Haar features, or can I use local binary patterns (LBP) since the LBP approach is faster?</li>
</ul>
<h2>
<a id="viola-jones-algorithm---features-integral-images-and-rectangular-boxes" class="anchor" href="#viola-jones-algorithm---features-integral-images-and-rectangular-boxes" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Viola-Jones Algorithm - Features, Integral Images and Rectangular Boxes</h2>
<p>First off, let’s briefly delve into how the Viola-Jones algorithm works and try to understand what it’s doing. If you read the abstract of the original Viola-Jones paper, you will find some new but important terms: integral image, cascade, classifier, feature, etc. Let’s take a minute to learn about them.</p>
<h3>
<a id="how-does-this-software-use-rectangular-boxes-to-detect-objects--what-the-are-integral-images" class="anchor" href="#how-does-this-software-use-rectangular-boxes-to-detect-objects--what-the-are-integral-images" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>How does this software use rectangular boxes to detect objects? What the are integral images?</h3>
<p>Integral images and rectangular boxes are the building blocks that the Viola-Jones algorithm uses to detect features. An object’s features are seen by the computer as differences in pixel intensities between different parts of images. The algorithm doesn’t care what color our objects and images are, just the relative darkness between parts of the images. </p>
<p>The original paper uses the most obvious feature of human faces, the difference in darkness between the human eye and cheek regions. The training program looks at all combinations of adjacent rectangles as sub-images within each training image and compares the difference between adjacent rectangles.</p>
<p>A simplification that could help us understand how object features are detected is to reduce the image to how the computer sees it. Computers don’t see images, they see numbers. In this case, the algorithm determines the darkness of adjacent rectangles and compares those. Individual features are differences in the darkness of adjacent rectangles.</p>
<h3>
<a id="steps-required-to-create-an-object-detection-cascade-file" class="anchor" href="#steps-required-to-create-an-object-detection-cascade-file" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Steps Required to Create an Object Detection Cascade File</h3>
<p>Below is a brief overview of the steps required to generate a cascade file for object detection. Don’t worry about the details for now; we'll walk through each step below.</p>
<ol>
<li>Install OpenCV</li>
<li>Create a directory that will house your project and its images</li>
<li>Acquire or develop positive images</li>
<li>Create an annotation file with the paths to your objects in the positive images</li>
<li>Create a <code>.vec</code> file that contains images of your objects in binary format using the annotation file above</li>
<li>Develop and acquire negative images that do not contain the object you wish to detect</li>
<li>Train the cascade</li>
<li>Test your cascade.xml file</li>
</ol>
<h3>
<a id="installing-opencv-on-linuxubuntu" class="anchor" href="#installing-opencv-on-linuxubuntu" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Installing OpenCV on Linux/Ubuntu</h3>
<p>I mentioned that the training can take a long time; I've read it can actually take weeks. I <strong>strongly</strong> recommend you use a remote server to train your cascade. Here are two reasons why: one, it will speed up the training immensely (mine took only 18 minutes); and two, installing OpenCV on Ubuntu is much faster than compiling from source on a Mac, since there are no pre-compiled binaries available for OS X.</p>
<p>I used an 8-core Digital Ocean server to train mine. This server cost about $5 per day. You should only need one for a few hours, or perhaps a few days if you struggle to get the training right. I believe when you sign up for Digital Ocean that you get $10 in credit too, so you can probably do this for free.</p>
<p>Tip: It’s not hard to find $10 coupons for Digital Ocean if you look around a bit. Another benefit of using Digital Ocean is that your local machine doesn't have to be devoted to the training: a remote server will keep training even if you accidentally close your laptop.</p>
<h3>
<a id="how-to-rent-a-digital-ocean-server" class="anchor" href="#how-to-rent-a-digital-ocean-server" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>How to Rent a Digital Ocean server</h3>
<ol>
<li><a href="https://m.do.co/c/caa99089d223">Create a droplet on Digital Ocean</a></li>
<li>Choose Ubuntu 14.04</li>
<li>Choose a region (region and latency don’t matter here since we only need to download our final cascade.xml file)</li>
<li>Ignore the additional options</li>
<li>For the SSH key, if you know what this is and already have a key on your machine, you can add your public key to Digital Ocean, which is what I recommend. Otherwise, please follow <a href="https://www.digitalocean.com/community/tutorials/how-to-use-ssh-keys-with-digitalocean-droplets">this brief tutorial</a> to set up SSH keys.</li>
<li>Once you have the server up and running and you're logged in, see <a href="https://help.ubuntu.com/community/OpenCV">this tutorial</a> to install OpenCV.</li>
</ol>
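<p>On Ubuntu, the package-manager route is typically a one-liner. This is a sketch assuming the stock Ubuntu 14.04 packages; the tutorial linked above covers the details and building from source if you need a newer version:</p>
<p><code>sudo apt-get update && sudo apt-get install libopencv-dev python-opencv</code></p>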
<h3>
<a id="how-to-develop-a-positive-image-set" class="anchor" href="#how-to-develop-a-positive-image-set" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>How to Develop a Positive Image Set</h3>
<p>There are a few ways to develop positives.</p>
<ol>
<li>Take pictures of the actual objects you intend to detect. You will have to do this if you're detecting something unique that is not easily google-able.</li>
<li>Google your object and save those images.</li>
</ol>
<p>I used a combination of these two approaches.</p>
<h3>
<a id="taking-your-own-photos" class="anchor" href="#taking-your-own-photos" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Taking Your Own Photos</h3>
<p>Here are a few things to remember when taking pictures of your object(s). Probably the most important: multiple pictures of the same object count as multiple positives. You can slightly (but not too much) tilt and rotate your object, approximately 10-20º. If you have multiple instances of the object, like shoes, take pictures of all of them, positioned in the same way (e.g. toes facing left or right).</p>
<h3>
<a id="googling-for-images-of-your-object" class="anchor" href="#googling-for-images-of-your-object" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Googling for Images of Your Object</h3>
<p>I found different color outlets when googling; also different backgrounds and angles. When googling for your object, you can specify the size of the images Google returns, too. </p>
<p>To set the size once you have clicked "Images": </p>
<ul>
<li>Click <strong>“Search Tools”</strong></li>
<li>Then <strong>“Size”</strong></li>
<li>Click <strong>“Exactly”</strong></li>
<li>Enter a size. I used 256x256 pixels. I think this is a reasonable balance between maintaining resolution and using images small enough to minimize training time. I tried smaller images, 80x80 pixels, which resulted in tons of false positives.</li>
</ul>
<p>I recommend using at least 100-200 positives to start off. You may get a decent result with fewer; some have. I used ~380 for my final, nearly perfect cascade file, which produced zero false positives that did more than flicker on the screen.</p>
<h3>
<a id="creating-an-annotations-file-with-opencvs-annotation-tool" class="anchor" href="#creating-an-annotations-file-with-opencvs-annotation-tool" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Creating an Annotations File with OpenCV’s Annotation Tool</h3>
<p>Once you have your positive images, you <strong>should</strong> make an annotations file. I say "should" because I consider this an important step: I didn't generate a working <code>cascade.xml</code> file until I used this tool to create an <code>annotations</code> file. At first it seems like this tool will take a long time to use, but it doesn’t. I suggest starting out with this tool rather than trying to train your cascade without it. </p>
<h3>
<a id="heres-how-it-works" class="anchor" href="#heres-how-it-works" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Here’s how it works:</h3>
<p>Along with OpenCV's <code>traincascade</code> and <code>createsamples</code> applications, when you type <code>opencv_[tab]</code> in your terminal (once you have OpenCV installed), you will find another tool:
<code>opencv_annotation</code></p>
<p>The <code>opencv_annotation</code> tool helps you to quickly generate an annotation file with paths to your positive images and the location and size of the objects within those positive images. Note that the starting pixel is the <em>top-left</em> corner of the rectangle that contains your object. When done, the file will look something like this:</p>
<p><img src="images/annotation-file.png" alt="annotation file"></p>
<p>The “2” after the file path is the number of positives in each image (lots of mine were two because outlets come in pairs). Then we have the top-left corner starting pixel of each object, followed by each object's width and height within the image. </p>
<p>So in the first line of the annotations image above, “230 169” is the top-left pixel in <code>GOPR4620.JPG</code> where an outlet starts, and that outlet is 33x40 pixels. You get the point.</p>
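<p>For reference, each line follows the format <code>path count x y w h [x y w h ...]</code>, with one <code>x y w h</code> group per object in the image. The first line described above would therefore begin:</p>
<p><code>GOPR4620.JPG 2 230 169 33 40 ...</code></p>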
<p>The annotation tool records the image paths and the rectangles you outline for you, which saves a ton of time. </p>
<p>Here’s the command that I used to create the annotations file. </p>
<p><code>opencv_annotation -images . -annotations annotations.txt</code></p>
<p>I had one problem with this tool that will hopefully be fixed or at least not happen to you. The annotation tool would not write to the file when “n” was pressed after outlining an object; it would only write to the file once all of the images in the directory had been processed. </p>
<p>As a workaround, I moved my images into a series of directories and appended each directory’s annotations file to the main one using a command like the following, which takes the contents of one file and adds them to another:</p>
<p><code>cat ./sub-dir/annotations.txt >> ./main-annotations.txt</code></p>
<p>Be sure to use two angle brackets (“<code>>></code>”); a single “<code>></code>” redirection will overwrite your annotations file and you’ll have to start over!</p>
<p>After you create this annotations file, you can use the <code>opencv_createsamples</code> tool to turn your annotated positives into a <code>.vec</code> file, optionally generating more varied positive samples in the process.</p>
<h3>
<a id="ideal-positive-and-negative-images" class="anchor" href="#ideal-positive-and-negative-images" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Ideal Positive and Negative Images</h3>
<p>Ideally your positive and negative images will contain the actual objects you’re trying to detect in their natural environment.</p>
<h3>
<a id="how-can-i-develop-a-negative-image-set" class="anchor" href="#how-can-i-develop-a-negative-image-set" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>How Can I Develop a Negative Image Set?</h3>
<p>There are a few ways to generate negative images. One thing to remember is that you will get the best results when using negatives from the environment you intend to use your cascade file in. </p>
<p>In this post's repository is a directory with a few tarballs that contain a total of 3,100 negatives. Note that you will need to scroll through each one to ensure they don't contain your object. </p>
<p>Here's another way to develop images using downloaded videos and grabbing frames.</p>
<ol>
<li>Identify the environment your object detection will be working in: warehouse, home, office, outside? Then find a Youtube.com video that shows that environment. This should be really easy; Youtube has millions if not billions of videos.</li>
<li>Scan the video to make sure it doesn’t contain your desired object. This may seem like it will take a long time. It doesn’t: just start the video and skip forward every few seconds with the right-arrow key. You’ll be done in no time.</li>
<li>Find a site that will enable you to download the Youtube video. This should be easy; I will leave you to do that yourself.</li>
<li>Download that video to a project directory. I downloaded it to a negatives directory.</li>
<li>Grab frames from the video. This will enable you to create hundreds or thousands of negative images in a few minutes. I used <code>ffmpeg</code>; a sample command follows this list. You can decide what percentage of the video’s frames to keep depending on how many negatives you think you need.</li>
<li>Repeat this a few times until you have thousands of negatives. Remember, the more the better. I didn’t start getting solid detection results until I used ~3,500 negatives.</li>
</ol>
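<p>Here is a minimal frame-grabbing sketch with <code>ffmpeg</code>; the video filename and output pattern are placeholders, and <code>fps=1</code> keeps one frame per second (use something lower, like <code>fps=1/5</code>, for slow-moving videos):</p>
<p><code>ffmpeg -i environment-video.mp4 -vf fps=1 negative-%04d.jpg</code></p>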
<p><strong>Important Note</strong></p>
<p>If you use this frame-grabbing approach, make sure to only get one out of every few dozen frames, unless your video is really moving around the environment. Most videos show the same view for at least a few seconds, so ensure that your negatives generated using this approach are different.</p>
<h3>
<a id="getting-your-image-sets-to-the-remote-server" class="anchor" href="#getting-your-image-sets-to-the-remote-server" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Getting your Image Sets to the Remote Server</h3>
<p>Here are a couple of commands you can use to easily copy your positive and negative images to the remote server.</p>
<p>First, I suggest creating a tarball for each directory of images. This will speed up and simplify the transfer process.</p>
<p>While in your image directories do something like this:</p>
<p><code>tar -cvzf positives.tar.gz /path/to/positives-folder/*.jpg</code></p>
<p><code>tar -cvzf negatives.tar.gz /path/to/negatives-folder/*.jpg</code></p>
<p>Each of these creates a single archive containing the images (only those with the file extension you specify) from the directory given in the last argument. </p>
<p>Here’s the command to copy your tarball to your remote server. This <code>ssh</code>s into your remote server and copies the file to the path you specify:</p>
<p><code>scp positives.tar.gz root@[your-remote-ip]:/remote-project-dir/positive-image-dir</code></p>
<p>Don’t forget the “<code>:</code>” in the command above.</p>
<p>Once you've connected to your remote server, while in the appropriate directories, unzip your tarballs: </p>
<p><code>tar -xvf negatives.tar.gz</code></p>
<h3>
<a id="what-size-should-my-images-be" class="anchor" href="#what-size-should-my-images-be" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>What Size Should My Images Be?</h3>
<p>Some people use consistently sized images. I didn't. One important thing is that your images must be at least the size of the training window, which defaults to 24x24 pixels. </p>
<p>According to OpenCV developer Steven Puttemans, he never uses images with dimensions larger than 80px. I tried using 80px dimensions to speed things up, but I got tons of false positives; I believe too much image information was lost. I ended up using 256x256 pixel images. Smaller images may work, but 256 pixels square worked for me. </p>
<p>Note that if your images are small to begin with, increasing their size with <code>mogrify</code> will not magically make them more useful to the algorithm. I only resized images that started off larger than 256x256, to increase the training speed.</p>
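<p>For example, here is a one-line ImageMagick resize, assuming your oversized images are JPEGs in the current directory (note that <code>mogrify</code> overwrites files in place, so work on a copy):</p>
<p><code>mogrify -resize 256x256 *.jpg</code></p>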
<p>What definitely does matter is the width (<code>-w</code>) and height (<code>-h</code>) arguments you pass to <code>createsamples</code> and <code>traincascade</code>. You will not be able to detect objects smaller than the dimensions you pass. Both default to 24x24.</p>
<h3>
<a id="opencv_createsamples-parameters" class="anchor" href="#opencv_createsamples-parameters" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a><code>opencv_createsamples</code> Parameters</h3>
<ul>
<li>
<strong><code>-num</code></strong>
<ul>
<li>How many samples to generate. This is based on how many objects are in your annotations file.</li>
</ul>
</li>
<li>
<strong><code>-vec object.vec</code></strong>
<ul>
<li>This is the name of the file that will be created to contain your positive samples.</li>
</ul>
</li>
<li>
<strong><code>-info annotations.txt</code></strong>
<ul>
<li>The annotations file, which I know you created with the <code>opencv_annotation</code> tool above.</li>
</ul>
</li>
<li>
<strong><code>-bg bg.txt</code></strong>
<ul>
<li>This is the same file that holds the paths to your negative images; <code>createsamples</code> superimposes your positives onto your negatives.</li>
</ul>
</li>
</ul>
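<p>Putting these together, a <code>createsamples</code> invocation might look like the following. This is a sketch: the filenames match the examples above, and the sample count and window size are illustrative, not prescriptive.</p>
<p><code>opencv_createsamples -info annotations.txt -bg bg.txt -vec object.vec -num 380 -w 24 -h 24</code></p>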
<h3>
<a id="opencv_traincascade-parameters" class="anchor" href="#opencv_traincascade-parameters" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a><code>opencv_traincascade</code> Parameters</h3>
<ul>
<li>
<strong><code>-featureType</code></strong>
<ul>
<li>I would use LBP. It is faster than Haar and can still result in excellent object detection.</li>
</ul>
</li>
<li>
<strong><code>-w</code></strong> and <strong><code>-h</code></strong>
<ul>
<li>These specify the size of the window the algorithm will apply to the negatives. Do not make these dimensions larger than your object will appear in your working images, because objects smaller than this window cannot be detected.</li>
</ul>
</li>
<li>
<strong><code>-numPos</code></strong>
<ul>
<li>This one has some gotchas. You must pass a smaller number than the actual number of positives you have; use about 85% of the number of positives actually in the <code>.vec</code> file. This is because the training algorithm may discard positives that are too similar. If you use <code>opencv_createsamples</code> to generate distorted samples for your <code>.vec</code> file, you are more likely to run into this problem. See <a href="http://answers.opencv.org/question/4368/traincascade-error-bad-argument-can-not-get-new-positive-sample-the-most-possible-reason-is-insufficient-count-of-samples-in-given-vec-file/">this link</a> for more of an explanation of why to use 85%.</li>
</ul>
</li>
<li>The memory options (<code>-precalcValBufSize</code> and <code>-precalcIdxBufSize</code>) are supposed to help performance, but if you're using a remote server that's doing nothing else, I don't think they will speed things up much.</li>
<li>
<strong><code>-data</code></strong> The directory where OpenCV will store your cascade file and other related files.</li>
<li>
<strong><code>-bg bg.txt</code></strong> This file contains the paths to your negatives. This file is pretty easy to create, just: <code>ls *.jpg > bg.txt</code>, while in your negatives directory.</li>
<li>
<strong><code>-acceptanceRatioBreakValue</code></strong> You can use this to stop training automatically once the acceptance ratio reaches .00001 (10^-5).</li>
<li>
<strong><code>-vec</code></strong> This is the file output by <code>opencv_createsamples</code> that contains your positives.</li>
</ul>
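<p>A full <code>traincascade</code> invocation, continuing the example above, might look like this. It's a sketch: 320 is roughly 85% of the ~380 positives, 3500 matches the negative count mentioned earlier, and the window size must match what you passed to <code>createsamples</code>.</p>
<p><code>opencv_traincascade -data data -vec object.vec -bg bg.txt -numPos 320 -numNeg 3500 -featureType LBP -w 24 -h 24 -acceptanceRatioBreakValue 1e-5</code></p>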
<h3>
<a id="testing-the-performance-of-your-cascade-file" class="anchor" href="#testing-the-performance-of-your-cascade-file" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Testing the Performance of Your Cascade File</h3>
<p>To quickly test the performance of your cascade files, I have included a <a href="https://github.com/JohnAllen/opencv-object-detection-tutorial/test/webcam.py">Python file</a> that runs your object detection locally using your computer's webcam. I'd like to credit Shantnu for <a href="https://github.com/shantnu/Webcam-Face-Detect">originally posting</a> a very similar file (the included version adds a version-error fix). To test your cascade file, just run this command:</p>
<p><code>python webcam.py cascade.xml</code></p>
<p>This script runs OpenCV's detection on your computer's webcam feed, so it will only work if you have one of your objects handy. Sometimes an image of the object on your phone, or perhaps a printed image, will work too. </p>
<p>Shantnu <a href="https://realpython.com/blog/python/face-detection-in-python-using-a-webcam/">wrote a post</a> about this file and explains what's going on inside. I recommend you take a minute to understand it, especially the <code>faces = faceCascade.detectMultiScale</code> part. </p>
<p>This is the core OpenCV function that actually uses our cascade files to detect our objects, and its parameters matter. Pay close attention to <code>scaleFactor</code>, <code>minNeighbors</code> and <code>minSize</code>. <code>minSize</code> is self-explanatory, but the others aren't. <code>scaleFactor</code> controls how much the image is scaled down at each step of the search, so <code>scaleFactor = 1.1</code> shrinks the image by 10% per step; it zooms out, so to speak, to catch objects at different sizes. <code>minNeighbors</code> is also very important. This <a href="http://stackoverflow.com/questions/22249579/opencv-detectmultiscale-minneighbors-parameter">SO answer</a> will definitely do a better job explaining it than I will, so please check that out. The gist is that the higher <code>minNeighbors</code> is, the higher the threshold for reporting a detection; if <code>minNeighbors</code> is too low, you will get too many false positives. The image below shows exactly what I'm talking about.</p>
<p><img src="http://i.stack.imgur.com/qo2Xn.jpg" alt=""></p>
<p>See how the actual faces have more squares? Even with a working cascade file we still have some false positives. The <code>detectMultiScale</code> function slides a window over the image looking for matches. Stronger matches (our actual objects) produce neighboring windows that also match; those are the neighbors we're looking for. </p>
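<p>Here is a minimal sketch of the detection loop, similar in spirit to the included webcam.py; the cascade path and parameter values are illustrative, not the script's exact code:</p>
<pre><code>import cv2

# load the trained cascade and open the default webcam
cascade = cv2.CascadeClassifier("cascade.xml")
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # detection runs on a grayscale copy of the frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    objects = cascade.detectMultiScale(
        gray,
        scaleFactor=1.1,    # shrink the search image 10% per pyramid step
        minNeighbors=5,     # raise this to suppress false positives
        minSize=(30, 30))   # ignore detections smaller than 30x30 px
    # draw a rectangle around each detection
    for (x, y, w, h) in objects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) &amp; 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
</code></pre>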
<p>Included with OpenCV are a <a href="https://github.com/Itseez/opencv/tree/master/data/haarcascades">few working cascade.xml files</a> too. It's fun to run things just to see them work, so check those out.</p>
<h3>
<a id="how-small-can-my-objects-be-in-the-image-and-still-be-detected" class="anchor" href="#how-small-can-my-objects-be-in-the-image-and-still-be-detected" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>How small can my objects be in the image and still be detected?</h3>
<p>This depends on how small your samples are in your <code>.vec</code> file. I set mine at 20x20 pixels because I want my robot to detect outlets from a long way away. Your situation may be different.</p>
<h3>
<a id="general-tips" class="anchor" href="#general-tips" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>General Tips</h3>
<ul>
<li>Use an image format that doesn’t lose information to compression. This will avoid compression artifacts. “This is especially the case when resizing your training data.” <a href="http://answers.opencv.org/question/39160/opencv_traincascade-parameters-explanation-image-sizes-etc/">http://answers.opencv.org/question/39160/opencv_traincascade-parameters-explanation-image-sizes-etc/</a>
</li>
<li>ImageMagick is your friend. Easily installable with the Homebrew package manager, it makes some image operations super easy, such as resizing large positive or negative images taken on your smartphone (modern iPhones shoot 12MP, roughly 3000x4000 pixels), which slow down the training algorithm without adding detection capability. </li>
<li>There is not necessarily a correct ratio of positives to negatives. I have always seen people recommend at least as many negatives as positives, often several times more. My ratio of negatives to positives was 10:1. That's what worked for my object, but yours could be different.</li>
<li>Train until you reach an acceptance ratio of roughly <a href="http://answers.opencv.org/question/84852/traincascades-error-required-leaf-false-alarm-rate-achieved-branch-training-terminated/">~10^-5 or ~.00001</a>. Training beyond this risks overfitting.</li>
<li>
<strong>This could be really helpful</strong>: how do you get a <code>cascade.xml</code> file when <code>traincascade</code> wants to keep training past .00001? Simple: stop training with Ctrl-C, then re-run the <code>traincascade</code> command you were just using with <code>-numStages n-1</code> added, where n is the number of the stage at which the ratio reached .00001.</li>
<li>Play with the parameters in <code>detectMultiScale</code> a bit if you're getting too many false positives or otherwise poor detection results. Try reducing <code>minNeighbors</code> to 3 or below to see if your cascade is detecting anything at all.</li>
<li>For the <code>bg.txt</code> file: it is common for extraneous files to end up in it, so run <code>ls *.jpg > bg.txt</code> while in the negatives directory so that only your <code>jpg</code>s are listed. Make sure you don't have any stray <code>newlines</code> or <code>BOM</code>s if you're on Windows.</li>
<li>Another common mistake, potentially the fault of OpenCV, is mixing absolute and relative paths while running <code>traincascade</code>. I ended up running <code>traincascade</code> from inside my negatives folder, which solved most of those problems; just pass <code>-vec ../some.vec</code>, <code>-data ../data</code>, etc.</li>
</ul>
<h3>
<a id="too-many-false-alarms-or-false-positives" class="anchor" href="#too-many-false-alarms-or-false-positives" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Too many false alarms or false positives</h3>
<p>Add more information! Increase your positive and negative image sets. Your classifier does not have enough information to correctly determine that your object is not in your test images. When I had too many false positives and increased my positives and negatives, the number of false positives immediately declined and I started getting more stages. </p>
<h3>
<a id="error-messages" class="anchor" href="#error-messages" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Error Messages</h3>
<p>Here are the likely causes of various error messages.</p>
<p><code>Required leaf false alarm rate achieved. Branch training terminated.</code></p>
<p>The training algorithm can run out of information that will help it to add to its classifiers. If it has already gleaned as much as it can from the images, it simply stops. This is the output you will get when this happens. </p>
<p>This happens earlier when you use smaller image sets. If you only pass a few dozen or a few hundred images, it can only train a few stages. The more images you pass, the later you will run into this error and the better the cascade file will detect your objects. </p>
<p>But maybe your object is very static and it doesn’t take many positives to develop a good classifier. What you can do is add the argument <code>-numStages n-1</code> to <code>opencv_traincascade</code>, where n is the stage number that gave you the error message. This will cause a <code>cascade.xml</code> file to be written that may work, or could at least tell you whether your arguments and images are on track. </p>
<p><code>Train dataset for temp stage can not be filled. Branch training terminated.</code></p>
<p>This is most common when you have not provided enough positives, which is really the most time-consuming aspect of training. Add more positives!</p>
<h2>
<a id="overarching-takeaway" class="anchor" href="#overarching-takeaway" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Overarching Takeaway</h2>
<p>OpenCV is a mature, robust computer vision library. If you don't get solid results, you are either passing <code>traincascade</code> too few images or the wrong images. Keep working at it until you get good detection. It may take a few tries, like it did for me, but stick with it; it's magical when it works!</p>
</section>
<footer>
<p><small>Hosted on <a href="https://pages.github.com">GitHub Pages</a> using the Dinky theme</small></p>
</footer>
</div>
<!--[if !IE]><script>fixScale(document);</script><![endif]-->
</body>
</html>