Why not use neural networks to create depth maps? #528
I think it would be a good way to collect alternative methods and implement them in new nodes with compatible inputs and outputs. For example, I starred https://github.com/AIBluefisher/EGSfM but I forgot whether it came from here or from AliceVision. This might also be an alternative method.
The problem is not finding alternatives: GeoDesc, for example, replaces SIFT.
If you are interested in implementing a machine learning approach, you are welcome to contribute to AliceVision. Here is a similar project, @Dok11
@natowi could we maybe add this knowledge to https://github.com/alicevision/meshroom/wiki/Good-first-contributions ?
Wow, it's amazing. Nice topic!
It could be an interesting challenge, if I can figure out how to compile the program and the related steps needed to contribute :)
I didn't find this path in the repo. Maybe you can provide a link?
@Dok11 also #520 (comment)
This is way over my skill level, but I am interested. I have a 360-degree photo booth with cameras and lighting facing inward. Light flare/reflection is extremely difficult to manage... I'm wondering if this type of approach would make life easier for me.
It will work and solve your problem, but I can't say when this neural network workflow will be available in everyday software like Meshroom.
@Dok11 I found a script for Metashape (Photoscan) https://github.com/agisoft-llc/metashape-scripts/blob/master/src/model_style_transfer.py using TensorFlow... but this is for model style transfer (texture), not 3D reconstruction.
I'm still interested in this topic, though outside Meshroom. Maybe I'll keep the motivation and turn these words into something real :)
I see it uses style transfer (one type of neural network). Maybe it provides a way to avoid glitches on textures? But that is just an assumption.
@natowi @Baasje85 I am currently developing a network which can potentially predict camera positions. But the second three numbers (the translation) are currently tied to absolute scene dimensions. Let me describe this for more clarity: imagine what happens if I scale the whole scene up 10 times. All dimensions increase, BUT the content of the images does not change and stays as it was before. So here is what I want: maybe you can suggest some method to describe camera movement in 3D space without absolute scene-dimension variables?
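(A minimal sketch of one common workaround, not something proposed in this thread: since image content is invariant to global scene scale, a network can predict only the translation *direction* as a unit vector, plus the rotation, and leave the absolute scale to be fixed later from an external reference. All names below are illustrative.)

```python
import numpy as np

def scale_free_pose(t, R):
    """Split a 6-DOF camera pose into scale-free parts.

    t: (3,) translation in arbitrary scene units
    R: (3, 3) rotation matrix (already scale-free)
    Returns the unit translation direction and the rotation; the norm of t
    (the absolute scale) is dropped, since images alone cannot determine it.
    """
    return t / np.linalg.norm(t), R

t = np.array([2.0, 0.0, 1.0])
R = np.eye(3)
d, _ = scale_free_pose(t, R)
d10, _ = scale_free_pose(10.0 * t, R)  # the whole scene scaled up 10 times
assert np.allclose(d, d10)             # the prediction target is unchanged
```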
Thanks @Baasje85 for pointing me here. @Dok11 have you considered using parametric constraints first? For example by angle, percentage, projection, or scale. If all your project image files are resolved, you could resolve the parametric constraints (maybe even including ground control point registration) towards absolute positions, then use the current pipeline to do the other tricks with respect to alignment.
@skinkie thanks for your participation! It seems you are right, but in my opinion any constraint makes the task less universal. I looked for a solution based on camera angles, but people here told me it is impossible :) well, okay. While searching for a solution I got a couple of ideas to make the neural network more robust. Maybe next week I will share my results and the details of the solution, if I succeed. One more thing: for a more accurate picture-to-picture distance result, I think we need a pipeline with two neural networks:
Bees track their position over time by the angle towards the moving sun, a distance metric based on the rocking of their wings, and a local heuristic based on smell. I would completely agree that using a single constraint doesn't work, just as using a single x or a single y doesn't. But this is not the case with parametric constraints: you can apply multiple (even contradictory) constraints based on angle and distance (offset), all relative to each other. Your approach using multiple steps is advisable for another reason: even with an unknown FOV you could infer a FOV in relative units, which is a new metric. I do wonder whether it wouldn't be better to do this with the existing approaches as opposed to neural networks. As I wrote in another ticket, even information such as the time and order of the photos might give you significant clues about the search space.
No. I plan to make several neural networks, including:
I thought camera position estimation doesn't overlap with FeatureExtraction tasks.
@Baasje85 and myself have a huge dataset available, created by multiple cameras, some even geotagged. If you are interested in using it, we can obviously provide it.
This may be true for Meshroom (and its depth maps), but not for other approaches like openMVG/openMVS, in which a high-density point cloud seems to be the thing used.
Would obviously be cool :-)
Sounds good! Can you show several examples of it?
I did not come across this. Maybe doing it for Meshroom would be a more approachable task.
@Dok11 by "repair" do you mean something analogous to LocalBundle adjustment?
I hadn't seen this term before, but it seems you are right. As I wrote before, my current NN can determine camera positions in the scene from an image (translation and rotation). In sum, I want to know the minimal set of NN capabilities that would be helpful to Meshroom.
If you can return the intrinsics for the cameras, I am sure that would significantly reduce the computing effort to find them.
@Dok11 https://arxiv.org/abs/1506.06825 Seen that one?
@skinkie not that one, but every year several works on this theme appear on arXiv. For example, a more recent article is https://arxiv.org/abs/2001.05036 (January 2020). I have seen it, but generally these works are about either creating only depth maps or generating new points of view. And I was mistaken to think that including a neural network in the process would be easy. Not rocket science, of course, but still. The neural network I wrote about above is not ready yet, but it will be useful for the neural network that will do camera pose estimation. Release day postponed xD
"DeepV2D: Video to Depth with Differentiable Structure from Motion" |
Hello everyone, as suggested at the top of this issue, I have started to implement DenseDepth as a Meshroom node (codebase). (I'm new to this project, so I wrote a node class that calls the Python code via the command line. I think it could be included among Meshroom's standard node classes, but I do not know enough yet; the dependencies are torch and numpy for now. If someone has advice, please help me with this point too.) Thank you in advance!
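(For readers following along: a rough sketch of the kind of wrapper node described here, based on Meshroom's node-description API at the time, `desc.CommandLineNode`. The class name, attribute names, and the `main.py` script are placeholders, not the actual plugin.)

```python
# Sketch of a Meshroom node description that shells out to a Python script.
# Placed under meshroom/nodes/, it would appear in the node editor.
from meshroom.core import desc


class DenseDepth(desc.CommandLineNode):
    # {inputValue}/{outputValue} are substituted from the attributes below.
    commandLine = 'python main.py --input {inputValue} --output {outputValue}'

    inputs = [
        desc.File(
            name='input',
            label='Images Folder',
            description='Folder with undistorted input images.',
            value='',
            uid=[0],
        ),
    ]
    outputs = [
        desc.File(
            name='output',
            label='Output Folder',
            description='Folder receiving the predicted depth maps.',
            value=desc.Node.internalFolder,
            uid=[],
        ),
    ]
```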
@nicolalandro that's great! I found a few OpenEXR examples that may be useful for you: there is aliceVision_utils_imageProcessing (called by https://github.com/alicevision/meshroom/blob/develop/meshroom/nodes/aliceVision/ImageProcessing.py), which can convert images to exr, but I don't know how useful this is in your case. Meshroom uses OIIO for image processing and even has its own OIIO plugin for Qt: https://github.com/alicevision/QtOIIO/.
Thank you for all the material! It is very interesting! My question is about the information that I should write in this file: should I write depth only, or RGB + depth?
Now I see what you were asking for. I think it is depth only; for the RGB information, the undistorted images will be used. Meshroom uses PIZ (wavelet compression) for exr. "Depth maps are stored in 32 bits floating point EXR"
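(Putting those format details together, a minimal sketch of writing such a file with OIIO's Python bindings. This is my assumption of the relevant calls; the channel name "Y" is taken from later in this thread, and the sizes and file name are placeholders.)

```python
import numpy as np
import OpenImageIO as oiio

depth = np.zeros((980, 720), dtype=np.float32)  # placeholder depth prediction
h, w = depth.shape

spec = oiio.ImageSpec(w, h, 1, "float")  # single 32-bit float channel
spec.channelnames = ("Y",)               # depth channel name used in the thread
spec.attribute("compression", "piz")     # PIZ wavelet compression, as Meshroom uses

out = oiio.ImageOutput.create("depthMap.exr")
out.open("depthMap.exr", spec)
out.write_image(depth)
out.close()
```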
Thank you! As soon as I have a minute I'll try to implement it and test!
As I expected, when I create the exr file I am missing some important header information. If someone wants to try to run it (code), I suggest using small images, around 720*980, to get a quick response (up to now the node does not use the GPU). Does anyone have some ideas? If not, I can get the data directly from the DepthMap node and change the Y value, which is the depth map, to reach quick results and ask the question "Can this AI be included in the standard pipeline?" (but I want to replace that node). Doing this I see that the node creates a depthMap but also a simMap; what is a simMap? I also tried to copy it, but I get the same error.
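(One hedged idea for the missing-header problem, a sketch assuming the `OpenEXR` pip package and a file whose only channel is the "Y" depth channel mentioned above: reuse the header from the EXR the original DepthMap node produced, since it carries the metadata downstream nodes expect, and swap in only the predicted pixels. File names are placeholders.)

```python
import numpy as np
import OpenEXR

src = OpenEXR.InputFile("orig_depthMap.exr")  # output of the original node
hdr = src.header()                            # keeps the node's metadata intact
dw = hdr["dataWindow"]
w = dw.max.x - dw.min.x + 1
h = dw.max.y - dw.min.y + 1

predicted = np.zeros((h, w), dtype=np.float32)  # replace with the network output

out = OpenEXR.OutputFile("new_depthMap.exr", hdr)
out.writePixels({"Y": predicted.tobytes()})     # assumes "Y" is the sole channel
out.close()
```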
@nicolalandro I just remembered there was the experimental masking node, which also uses the openexr library on the Meshroom side: https://github.com/alicevision/meshroom/pull/641/files L83. The discussion around this topic also contains some valuable information, like #566 (comment) or #566 (comment). Also #1361 (comment)
That is especially interesting, so I'll be happy to check out your code when I have some time! I hope this information is helpful for you. If you need more information, maybe @ALfuhrmann can give you some input on the openexr writer based on his node code I mentioned earlier, or @fabiencastan can give you a hint. The documentation on the EU project from which Meshroom originates has some valuable (but sometimes outdated!) information that is not documented anywhere else. The articles are publicly available, but I merged the relevant ones here (search for depth map / similarity map / exr).
@nicolalandro any progress?
I'm working a lot at the moment, so I haven't had time to continue: there is DenseDepthDeepMap.py, which runs the main.py script that extracts the depth map and saves it into files, but it seems it does not work within the pipeline. I'm stuck here.
@nicolalandro, do you still have problems writing exr? To write it, I simply use Python OpenCV:
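(A minimal sketch of that approach, not the commenter's exact code; the file name and data are placeholders, and some OpenCV builds gate the EXR codec behind an environment variable.)

```python
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # must be set before importing cv2

import cv2
import numpy as np

depth = np.zeros((980, 720), dtype=np.float32)  # the predicted depth map
cv2.imwrite("depthMap.exr", depth)              # codec picked from the .exr extension
```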
I think I write the exr correctly, but the problem is that the depth map is not the only output of the standard node, so the standard flow is broken.
@nicolalandro I'm doing something similar: creating my own depthMap.exr
I'll try removing the original and substituting mine; I will try with both, but I do not keep the _simMap. I will try when I can, but this way the computation is worse; maybe the good idea would be to change the whole workflow.
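(A sketch of the swap being described, under my assumptions: the standard `<viewId>_depthMap.exr` / `<viewId>_simMap.exr` naming of the DepthMap node, and placeholder paths. It keeps the original node's _simMap files and overwrites only the depth maps; note that a round trip through OpenCV drops the EXR header metadata discussed above.)

```python
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"

import shutil
from pathlib import Path

import cv2

src = Path("MeshroomCache/DepthMap/<uid>")  # original DepthMap output (placeholder)
dst = Path("my_depth_output")
dst.mkdir(exist_ok=True)

for sim in src.glob("*_simMap.exr"):
    shutil.copy(sim, dst / sim.name)  # keep the similarity maps unchanged

for orig in src.glob("*_depthMap.exr"):
    depth = cv2.imread(str(orig), cv2.IMREAD_UNCHANGED)  # stand-in: original values
    # ...replace `depth` with the network prediction for this view...
    cv2.imwrite(str(dst / orig.name), depth)
```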
I've quickly checked your code, and there is some stuff which doesn't seem to be mandatory, like adding the header or clipping the predictions.
https://github.com/alicevision/MeshroomResearch Deep-learning-based depth map estimation:
Optimization-based via NerfStudio (Upcoming):
Something like this https://github.com/ialhashim/DenseDepth#results
or this https://github.com/gautam678/Pix2Depth
Using this technology may help avoid some bugs on flat/mirror/shiny areas. And using a neural network would be faster than the current approach.
I wanted to see how the DepthMap node works at the moment, but I can't find the code except for the node description in
meshroom/nodes/aliceVision/