
ImageCapture API support #159

Open · ltlBeBoy opened this issue Feb 7, 2017 · 8 comments

@ltlBeBoy commented Feb 7, 2017

I extended the code (Version 1.0) to support the (currently experimental) MediaStream Image Capture API. It creates an ImageCapture from the active video MediaStreamTrack and uses ImageCapture#grabFrame() to grab a frame from the stream.
With this API, PhotoSettings can be specified, which means that e.g. the flash light (torch) or the focus mode can be configured.
Note that this feature is currently only supported by Chrome for Android when the flag "chrome://flags/#enable-experimental-web-platform-features" is enabled.
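
For reference, a minimal standalone sketch of the grabFrame() flow (assuming a browser where the experimental API is available):

navigator.mediaDevices.getUserMedia({ video: { facingMode: "environment" } })
    .then(function (stream) {
        // ImageCapture is constructed from a video MediaStreamTrack
        var track = stream.getVideoTracks()[0];
        var imageCapture = new ImageCapture(track);
        // grabFrame() resolves with an ImageBitmap of the next video frame
        return imageCapture.grabFrame();
    })
    .then(function (imageBitmap) {
        console.log('Grabbed frame: ' + imageBitmap.width + 'x' + imageBitmap.height);
    })
    .catch(function (err) {
        console.error('ImageCapture failed: ', err);
    });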

Example configuration (note the inputStream.photoSettings section and the inputStream.imageCapture flag, which enables use of the API):

{
    numOfWorkers: 4,
    locate: true,
    inputStream: {
        name: "Live",
        type: "LiveStream",
        constraints: {
            width: {min: 640},
            height: {min: 480},
            facingMode: "environment",
            aspectRatio: {min: 1, max: 2}
        },
        photoSettings: {
            fillLightMode: "torch", /* or "flash" */
            focusMode: "continuous"
        },
        imageCapture: true
    },
    frequency: 10,
    decoder: {
        readers: [{
            format: "ean_reader",
            config: {}
        }]
    },
    locator: {
        patchSize: "medium",
        halfSample: true
    }
}
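
The configuration is passed to Quagga as usual; only the two new inputStream properties differ from a standard setup:

Quagga.init(config, function (err) {
    if (err) {
        console.error(err);
        return;
    }
    Quagga.start();
});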

Modifications:

--- a/src/input/camera_access.js
+++ b/src/input/camera_access.js
@@ -127,6 +127,14 @@
         streamRef = null;
     },
     enumerateVideoDevices,
+    getActiveStreamTrack: function () {
+        if (streamRef) {
+            const tracks = streamRef.getVideoTracks();
+            if (tracks && tracks.length) {
+                return tracks[0];
+            }
+        }
+    },
     getActiveStreamLabel: function() {
         if (streamRef) {
             const tracks = streamRef.getVideoTracks();

--- a/src/input/frame_grabber.js
+++ b/src/input/frame_grabber.js
@@ -69,12 +69,13 @@
      * The image-data is converted to gray-scale and then half-sampled if configured.
      */
     _that.grab = function() {
+        return new Promise(function (resolve, reject) {
+            inputStream.getFrame()
+                    .then(function (frame) {
         var doHalfSample = _streamConfig.halfSample,
-            frame = inputStream.getFrame(),
             drawable = frame,
             drawAngle = 0,
             ctxData;
-        if (drawable) {
             adjustCanvasSize(_canvas, _canvasSize);
             if (_streamConfig.type === 'ImageStream') {
                 drawable = frame.img;
@@ -106,10 +107,13 @@
             } else {
                 computeGray(ctxData, _data, _streamConfig);
             }
-            return true;
-        } else {
-            return false;
-        }
+
+                        resolve();
+                    }).catch(function (err) {
+                        console.error('Error occurred while getting frame: ' + err);
+                        reject(err);
+                    });
+        });
     };
 
     _that.getSize = function() {

--- a/src/input/input_stream.js
+++ b/src/input/input_stream.js
@@ -134,7 +134,9 @@
     };
 
     that.getFrame = function() {
-        return video;
+        return new Promise(function (resolve) {
+                resolve(video);
+            });
     };
 
     return that;
@@ -142,8 +144,27 @@
 
 InputStream.createLiveStream = function(video) {
     video.setAttribute("autoplay", true);
-    var that = InputStream.createVideoStream(video);
+    var that = InputStream.createVideoStream(video),
+            _imageCapture = null;
 
+    that.setImageCapture = function (imageCapture) {
+        _imageCapture = imageCapture;
+    };
+
+    that.getFrame = function () {
+        if (_imageCapture !== null) {
+            console.log('Getting frame from ImageCapture...');
+
+            return _imageCapture.grabFrame();
+        } else {
+            console.warn('ImageCapture not set, using video as fallback!');
+
+            return new Promise(function (resolve, reject) {
+                    resolve(video);
+                });
+        }
+    };
+
     that.ended = function() {
         return false;
     };
@@ -305,13 +326,9 @@
     };
 
     that.getFrame = function() {
-        var frame;
-
-        if (!loaded){
-            return null;
-        }
-        if (!paused) {
-            frame = imgArray[frameIdx];
+        return new Promise(function (resolve, reject) {
+                if (loaded && !paused) {
+                    var frame = imgArray[frameIdx];
             if (frameIdx < (size - 1)) {
                 frameIdx++;
             } else {
@@ -320,8 +337,11 @@
                     publishEvent("ended", []);
                 }, 0);
             }
+                    resolve(frame);
+                } else {
+                    reject();
         }
-        return frame;
+            });
     };
 
     return that;

--- a/src/quagga.js
+++ b/src/quagga.js
@@ -60,6 +60,20 @@
         _inputStream = InputStream.createLiveStream(video);
         CameraAccess.request(video, _config.inputStream.constraints)
         .then(() => {
+            if (_config.inputStream.imageCapture === true) {
+                var activeStreamTrack = CameraAccess.getActiveStreamTrack();
+                if (activeStreamTrack) {
+                    var imageCapture = new ImageCapture(activeStreamTrack);
+                    if (imageCapture) {
+                        var photoSettings = _config.inputStream.photoSettings;
+
+                        // set the image capture options (e.g. flash light, autofocus, ...)
+                        imageCapture.setOptions(photoSettings)
+                                .then(function () { _inputStream.setImageCapture(imageCapture); })
+                                .catch(function (err) { console.error('setOptions(' + JSON.stringify(photoSettings) + ') failed: ', err); });
+                    }  
+                }
+            }
             _inputStream.trigger("canrecord");
         }).catch((err) => {
             return cb(err);
@@ -281,7 +295,7 @@
         } else {
             _framegrabber.attachData(_inputImageWrapper.data);
         }
-        if (_framegrabber.grab()) {
+        _framegrabber.grab().then(function () {
             if (availableWorker) {
                 availableWorker.busy = true;
                 availableWorker.worker.postMessage({
@@ -291,7 +305,7 @@
             } else {
                 locateAndDecode();
             }
-        }
+        });
     } else {
         locateAndDecode();
     }
@serratus (Owner) commented Mar 8, 2017

Thanks a lot for sharing. I'm really excited; I totally missed out on this new API. I was looking for something that supports turning the torch on/off. I'll consider integrating it with the next version.

@drake7707 commented

Any progress with this? I've tried enabling the torch through the track's capabilities, but even though it reports torch as an available capability, it refuses to enable it on my Moto 3G through applyConstraints. I'm hoping that it might work with the above approach.
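
(For reference, that attempt looks roughly like this; torch is applied via an advanced constraint, and only on a track whose getCapabilities() actually reports it:)

var track = stream.getVideoTracks()[0];
var capabilities = track.getCapabilities();
if (capabilities.torch) {
    track.applyConstraints({ advanced: [{ torch: true }] })
        .catch(function (err) { console.error('Could not enable torch: ', err); });
}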

@seanpoulter commented

Light/torch support has been released with v0.12.0.

@ericblade (Collaborator) commented

Torch/focus/etc. can be specified now; does that leave this issue with any valid changes still to be made?

@TomasHubelbauer (Collaborator) commented

Since the full media stream is exposed, the user can apply any settings they need, so I think this can be closed.
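
For example (a sketch; getActiveTrack() is assumed here as the accessor for the active video track, adjust to however your version exposes it):

Quagga.init(config, function (err) {
    if (err) { return console.error(err); }
    // assumed accessor for the active video MediaStreamTrack
    var track = Quagga.CameraAccess.getActiveTrack();
    if (track && typeof track.getCapabilities === 'function' && track.getCapabilities().torch) {
        // torch/focus/etc. go through the standard constraints API
        track.applyConstraints({ advanced: [{ torch: true }] });
    }
    Quagga.start();
});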

@ericblade (Collaborator) commented

It might be an interesting proposal to wrap camera settings like that in something that's a little easier to get at (and that perhaps could be used whether the stream is open or not).
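
(A hypothetical helper along those lines, just to illustrate the shape; setCameraSetting is not part of Quagga, and getActiveTrack() is assumed as above:)

function setCameraSetting(setting) {
    var track = Quagga.CameraAccess.getActiveTrack();
    if (!track) {
        // stream not open (yet); reject so the caller can retry later
        return Promise.reject(new Error('No active stream'));
    }
    return track.applyConstraints({ advanced: [setting] });
}

// e.g. setCameraSetting({ torch: true });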

@TomasHubelbauer (Collaborator) commented

The question then arises whether Quagga is a good place for that. At that point we would be wrapping a browser API, and we would need to clone the typings for that browser API in our public API, and so on. Instead, I think we should keep it simple and either include an example of how to apply the settings in the README (so we can link to it) or ship it as a separate package. That applies to flows on top of Quagga generally; for applying these settings specifically, a separate package is overkill. I'm in favor of keeping the core and API surface small and manageable, and letting higher-order operations be handled by either 3rd-party packages or standalone snippets in the docs.

@ericblade (Collaborator) commented

Sure, that makes sense too. Maybe I'll consider writing a wrapper that does something like that, to play around with it.

In the meantime, I think we should compare @ltlBeBoy's patch with the current code tree and see if there's any good functionality in the patch that hasn't already been added to master, so we can either close this, or write a change to include it and then close it.
