Port "capture" interface into this library from main spectral-workbench project #56
I think this would do to get a Pi running:
This could be seriously improved by getting the preview window working, but it'll do OK for starters. I think the images could be drawn to the…
OK, I've done a very minimal initial implementation here: #57. It needs a good bit of work, but it is successfully getting video from the webcam and graphing it.
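A minimal sketch of the general approach, assuming standard browser APIs (illustrative only, not the actual code in #57): open the webcam with getUserMedia, draw frames onto a canvas, and read one row of pixels to graph intensity across the frame.

```js
// Illustrative sketch only - not the code from #57.
const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

navigator.mediaDevices.getUserMedia({ video: true })
  .then((stream) => {
    video.srcObject = stream;
    video.play();
    requestAnimationFrame(drawFrame);
  })
  .catch((err) => console.error('Could not open webcam:', err));

function drawFrame() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  // Sample one horizontal row from the middle of the frame; its per-pixel
  // brightness is what gets graphed as the spectrum.
  const row = ctx.getImageData(0, Math.floor(canvas.height / 2), canvas.width, 1).data;
  const intensities = [];
  for (let i = 0; i < row.length; i += 4) {
    intensities.push((row[i] + row[i + 1] + row[i + 2]) / 3); // mean of R, G, B
  }
  // ...hand `intensities` to the graphing code...
  requestAnimationFrame(drawFrame);
}
```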
It still needs debugging, as well as better separation of the basic UI from the integration with the parent server's Rails code. Ideally, the specific interface connections would be set up via a more standardized set of HTML elements with unique IDs, which would be bound to abstracted features of the capture code. Right now, it's all munged together!
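As a sketch of that separation (element IDs and action names below are hypothetical, not the current markup): the capture code exposes abstract actions, and a thin layer binds them to whatever HTML happens to be on the page.

```js
// Hypothetical binding layer - IDs and action names are examples only.
const captureActions = {
  start: () => { /* begin reading frames */ },
  save: () => { /* store or upload the current spectrum */ },
  calibrate: () => { /* enter calibration mode */ }
};

const bindings = {
  '#capture-start': captureActions.start,
  '#capture-save': captureActions.save,
  '#capture-calibrate': captureActions.calibrate
};

Object.entries(bindings).forEach(([selector, handler]) => {
  const el = document.querySelector(selector);
  if (el) el.addEventListener('click', handler);
});
```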
But an initial step could be to comment out the server-specific code and connect it up for use on the Raspberry Pi, and stand-alone on GitHub Pages.
We'll also want this code to produce data in the formats described in https://github.com/publiclab/spectral-workbench.js#usage.
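Roughly, that format is a JSON object with a list of per-pixel readings; the snippet below is an approximation, so treat the linked README as the authoritative description.

```js
// Approximate shape of the spectrum data format - see the linked #usage
// section for the authoritative description; values here are made up.
const spectrum = {
  data: {
    lines: [
      { wavelength: 400.1, average: 64.3, r: 69, g: 46, b: 78 },
      { wavelength: 400.7, average: 65.0, r: 71, g: 47, b: 77 }
      // ...one entry per pixel column across the sampled row...
    ]
  }
};
```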
@jywarren I would like to start on this work. Can I reserve this project for GSoC 2019? I need a…
Hi @Dhiraj240! Thanks for your interest! We don't really reserve projects, but you are very welcome to indicate this as the subject of your GSoC proposal, and that would be great!

I think there are some proposals for neural network or machine learning models, and I think there is some potential for this, but perhaps it makes sense for such ideas to be implemented in parallel as swap-in alternatives for existing features, such as the linear calibration we currently use. That way people could mix and match, and the substantial code that might go into such a model could be isolated from the main codebase in a modular way. But I like the idea, as I also like the idea of using machine learning to match spectra, or identify features, perhaps. I'd also recommend opening a separate issue for this and linking in related ideas from publiclab/spectral-workbench#399, publiclab/plots2#4660, and potentially some of the ideas from https://publiclab.org/notes/warren/01-02-2019/brainstorming-for-summer-of-code-2019 (where I recommend you leave a comment about your ideas if you haven't already!).

For the camera integration, yes, I've done a crude initial move of the code, but SpectralWorkbench.org is not running this capture code yet, and it'd require some cleanup and integration work to get it included over there. Worthwhile! And the most related to this issue topic.

For the Ruby server, I don't think we're likely to port the entire system, to tell the truth; the code is substantial enough that I would prefer to think about ways to move some more of the functionality into client-side JavaScript, while keeping the data storage in the back-end Ruby system. One thing that could be an important step toward shifting Ruby code into JS, however, would be to formalize, document, and write tests for the API that is required to provide data to the client-side code. Right now it's spread among a set of different calls that aren't very consistent, and most do not have tests. Opening a separate issue for this kind of project, like…

Thanks for your great energy!
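For reference, the "linear calibration we currently use" is essentially a two-point fit from pixel position to wavelength; a minimal sketch of the idea (not the library's exact implementation) looks like this:

```js
// Two-point linear calibration sketch - not the library's exact code.
// Two known (pixel, wavelength) reference points define a line; every other
// pixel column is mapped onto it.
function makeLinearCalibration(x1, w1, x2, w2) {
  const nmPerPixel = (w2 - w1) / (x2 - x1);
  return (px) => w1 + (px - x1) * nmPerPixel;
}

// Example: the blue (435.83 nm) and green (546.07 nm) mercury lines of a
// fluorescent lamp, found at (made-up) pixel columns 211 and 390:
const pxToWavelength = makeLinearCalibration(211, 435.83, 390, 546.07);
console.log(pxToWavelength(300)); // ≈ 490.6 nm
```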
Also interesting! #75
Hmm, things look clearer to me now, but I am confused about where I should start. Should I be working on connecting the Raspberry Pi as suggested above?
I think the best course would be to start refactoring the capture code to follow a structure similar to the code in, say, publiclab/image-sequencer - something that gets built with browserify and has tests - so we can get the under-the-surface stuff better organized and easier to read and maintain, then work on the integration parts. Does this make sense?
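As a sketch of the kind of structure meant here (file names and the exported function are hypothetical, loosely modeled on image-sequencer): plain CommonJS modules with no DOM or Rails assumptions, bundled for the browser with browserify and covered by simple Node tests.

```js
// src/capture.js - hypothetical module layout, bundled with:
//   browserify src/capture.js -o dist/capture.js
module.exports = {
  // Turn one row of RGBA pixel data into an array of average intensities.
  rowToIntensities: function (rgba) {
    const out = [];
    for (let i = 0; i < rgba.length; i += 4) {
      out.push((rgba[i] + rgba[i + 1] + rgba[i + 2]) / 3);
    }
    return out;
  }
};

// test/capture.test.js - runnable with any Node test setup.
const assert = require('assert');
const capture = require('../src/capture');
assert.deepStrictEqual(capture.rowToIntensities([255, 255, 255, 255]), [255]);
```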
Okay, things seem clearer now! I will start working on it. I had a long gap since I joined this organisation, and now I have something to prove. Thank you! 😄
Thank you too, and let's work on this in small pieces, please, so that we can collaborate! There are others who join our weekly check-ins who have done architectural work on breaking up and restructuring libraries, most recently in image-sequencer. Say hello in the check-in and see if others can offer some help!
See, for example, this one, and ask @Mridul97 for input!
publiclab/image-sequencer#668
Oh, good to hear it! Great! Good luck, and looking forward to it!
…On Sat, Feb 23, 2019 at 1:32 AM Dhiraj Sharma wrote:
@jywarren I have started refactoring the capture.js code file. I will submit my PR by tomorrow; should I send the PR without getting assigned? If not, then assign this issue and #75 to me.
After a hard time, I configured my Pi. 😄
No, I haven't!
OK, just wanted to confirm. Thanks 🙂
Hi @sidntrivedi012, just copying in helpful links from the Open Call: spectralworkbench.org and spectralworkbench.org/capture
I believe this is largely complete in #175!
The live Capture interface is a great thing to bring in here -- https://spectralworkbench.org/capture
https://github.com/publiclab/spectral-workbench/blob/main/app/views/capture/_capture.html.erb and other files in that directory are the HTML portion.
The JavaScript is in https://github.com/publiclab/spectral-workbench/blob/main/app/assets/javascripts/capture.js
As a first attempt, we can just copy that whole file over and have it appear locally in `/src/`. We can consolidate the HTML files into a single static file at https://github.com/publiclab/spectral-workbench.js/tree/main/examples/capture.html, make sure it's pointed at capture.js, and add any necessary dependencies to our `package.json` file, installing them via `npm`.
Once we get it all running, we can start doing some awesome things like getting it running on a Raspberry Pi! See this code for reference: https://github.com/publiclab/infragram/blob/f5f62ce483fbc88823de96f5c87d160228c0a764/pi/index.html#L189-L191
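A rough sketch of how the page script for examples/capture.html might wire things up once capture.js lives in /src/ and is bundled (the setupCapture name and the paths are placeholders, not the real API):

```js
// Page script for examples/capture.html - placeholder names throughout.
const { setupCapture } = require('../src/capture');

document.addEventListener('DOMContentLoaded', () => {
  setupCapture({
    video: document.querySelector('video'),
    canvas: document.querySelector('canvas')
  });
});
```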
We'd love help with this one! A lot of moving parts, but a fun one :-)