Reconstruction "grows" artifacts into neutral background #566
I think this error is caused by texture/pattern similarity resulting in inaccurate feature detection in this area.
I would try increasing the SGM: Nb Neighbour Cameras and Refine: Nb Neighbour Cameras values and compare the results. Basically, use the information provided in the wiki to increase the density of your reconstruction in that area. You can create a new node graph variation to compare the different outputs (Duplicate .. From here).
This is one of the strong reasons why you should probably avoid featureless backgrounds for scanning. Once the algorithm is looking really hard for variations in a surface in order to get an idea of where it is, tiny variations become enormously important to it. In most cases, you would be better off putting your target on a sheet of newspaper, with all of its texture and type, than putting it on a featureless white field. It's easy enough to clean up the model after the fact and remove a properly visible surface.
Point taken, but in this case I have reflective objects, which I can reconstruct only under a diffuse environment. I did this in Agisoft Photoscan by simply adding masks, which I can easily pull from the background.
(1) is a dirty hack, (2) does essentially what you proposed in your post, and (3) would probably fix the cause of the problem, but I am not sure if it works. If (3) worked, I could write a new node which takes masks, undistorts them, and subtracts them from the depth maps after filtering. Any suggestions? Would (3) be feasible?
I'm also curious about depth map masking -- if it plays nicely with the subsequent meshing steps, I think it could help a lot of use cases (e.g. automated background removal), considering how many more resources are available for automated image processing than for automated point-cloud/mesh processing.
It actually might be more effective to matte spray your reflective objects to remove their reflectivity. Reflectivity is going to be problematic for a lot of reasons anyway, so you'll be saving yourself a lot of reconstruction problems in the long run. A lightly tacky matte spray and some powder to actually give the system some detail to work with is always going to be better than trying to work with reflective objects. Theoretically, dropping an object mask on the original images would only provide data for the object. In actual practice, however, I find that the resulting reconstruction, even on near-ideal objects like wood bark and textured rock, is never quite as good at capturing camera positions as having a real background which provides parallax with the object. But that's my experience. Putting in a rendered background based on what's ultimately an approximated camera position (because the digital environment is never going to perfectly match up with the real-world camera) would seem to be borrowing the worst of all worlds. Even an empty mask has got to be better than that.
@SquidLord: For my application, modifying the object is not possible. I already tried white spray chalk to get a ground truth using a laser scanner, but removing it is a pain, which would not work in the proposed workflow. Additionally, I am using extreme close-up macro shots to get the necessary resolution and precision, so adding any kind of layer thickness does not help. Using cyclododecane spray would work, since it simply vanishes by sublimation, but you need a relatively thick, waxy layer which exhibits subsurface scattering; for tiny objects, that layer thickness is prohibitive. The reconstruction itself works fine; the only problems I get are with symmetric objects (naturally). Matching a rendered environment up with the real-world camera is probably not so difficult, since I would do this after SfM, where I already have a calibrated camera and undistorted images which I can use to align the render camera. But this would be a stupid workaround, not fixing the cause of the problem. And since masking works in Photoscan and Reality Capture, I think it would be a useful enhancement to the workflow. I just need to know if the depth maps are generated from the same point of view as the associated undistorted images. It would be nice to consider the masks for SfM, too, but I have no idea how.
@ALfuhrmann Yes, a featureless background can have its drawbacks, as @SquidLord explained. You could try enabling AKAZE in the FeatureMatching node. FeatureMatching matches the features from FeatureExtraction, so it can work like a filter. Modifying the Distance Ratio can reduce outliers. One other thing you could try: do not place the key on a flat surface; place something underneath it. This will enable you to better capture the edges. If this does not work, I think masking would be the best way to go in your case.
This would be a good and welcome contribution: #188
@natowi So now I just have to figure out WHICH maps to mask: what is the difference between depthMap, simMap, and nmodMap? Do I have to walk through the source code, or is this documented somewhere?
I think simMap is the SGM-optimized similarity volume, depthMap is the depth map itself, and nmodMap is used for filtering.
Great, thanks!
You can keep it open.
Weird things have happened for me when the foreground object occupies a small percentage of the image and the background is either featureless or has a consistent pattern. If possible, zoom in even closer for the pics.
Back from my holidays, I am looking into the mask issue again. I first tried simply masking the depth images using ImageMagick, but only got errors from the Meshing node. Now I am trying a Python script which reads the .EXR images and masks them. The only thing I need to know now is what values to write into the masked areas. Does someone know if this could work, or what other values to use?
Yes, -1.0 in the depth map means invalid.
So I wrote a short Python script which takes .PNG masks as input and masks the *_depthMap.exr output of DepthMapFilter. First results look promising. At the moment the script is called outside of Meshroom; I now have to convert it into a node. For now I just use mask files generated with ImageMagick from the source images: I put these into a separate folder and call my script with this folder, the depth map folder, and the .sfm file as parameters. I would like to build a node which does exactly this, meaning that the masks can be generated outside of Meshroom, or - later - with a Meshroom node using a more sophisticated masking approach (adaptive thresholding, GrabCut, or something discussed in #188). So, is there documentation on how to implement a new node, or do I have to extract this from the source?
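For reference, the core of such a script is only a few lines. The following is a simplified sketch rather than the actual code from the patch: the file-naming convention, the OpenCV-based EXR I/O, and the folder layout are assumptions, and the real AliceVision depth map EXRs carry header metadata that a robust implementation would probably need to preserve.

```python
# Simplified sketch: write -1.0 (invalid) into depth map pixels that are
# masked out by a binary PNG mask. Naming convention and EXR I/O are assumptions.
import os
import glob

os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # recent OpenCV builds need this before import
import cv2
import numpy as np

def mask_depth_maps(depth_dir, mask_dir, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    for depth_path in glob.glob(os.path.join(depth_dir, "*_depthMap.exr")):
        name = os.path.basename(depth_path)
        # Assumed mask naming: <viewId>.png matching <viewId>_depthMap.exr
        mask_path = os.path.join(mask_dir, name.replace("_depthMap.exr", ".png"))
        depth = cv2.imread(depth_path, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR)
        mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
        if depth is None or mask is None:
            continue
        depth = depth.astype(np.float32)
        depth[mask == 0] = -1.0  # -1.0 marks a pixel as invalid for Meshing
        cv2.imwrite(os.path.join(out_dir, name), depth)

# Example call (paths are placeholders):
# mask_depth_maps("MeshroomCache/DepthMapFilter/<uid>", "masks", "masked_depthmaps")
```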
I looked through the node sources and figured it out. Now I have a minimalistic node which has the filtered depth maps and the undistorted masks as input and outputs masked depth maps for meshing. The PrepareScene2 node above has the folder of the mask images as input. DepthMaskFilter has only the following attributes: the Masks Folder is the output from PrepareScene2; the other inputs are the same as for Meshing. Currently I build the masks externally with ImageMagick floodfill, but a node for creating masks with one of these methods would be the logical next step. Anyone interested in testing this?
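For anyone curious what a Python-only node looks like structurally, here is a rough, hypothetical outline using Meshroom's node description API; the class name, attribute names and processing body are illustrative, not the actual code of this node.

```python
# Hypothetical outline of a Python-only Meshroom node (not the actual DepthMaskFilter code).
from meshroom.core import desc

class DepthMaskFilter(desc.Node):
    inputs = [
        desc.File(name='input', label='SfMData',
                  description='Input SfMData file.', value='', uid=[0]),
        desc.File(name='depthMapsFilterFolder', label='Filtered Depth Maps Folder',
                  description='Output folder of DepthMapFilter.', value='', uid=[0]),
        desc.File(name='masksFolder', label='Masks Folder',
                  description='Folder containing the undistorted binary masks.',
                  value='', uid=[0]),
    ]
    outputs = [
        desc.File(name='output', label='Output Folder',
                  description='Masked depth maps.',
                  value=desc.Node.internalFolder, uid=[]),
    ]

    def processChunk(self, chunk):
        # Read each filtered depth map, apply the matching mask (set masked
        # pixels to -1.0) and write the result into chunk.node.output.value.
        pass
```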
Here is everything you should need for testing: copy the "lib" folder from the ZIP into your Meshroom root folder, merging it with the "lib" folder already in there. The ZIP also contains a sample Meshroom graph ("Meshroom-2019.1.0-win64") which shows the necessary nodes and connections. So you only need to drop your images into Meshroom and enter the folder where your masks are into the "DepthMaskFilter/Mask Folder" attribute. Please post your results in this thread.
@natowi: I now have a first implementation of a "Depthmask" node, as well as a quick implementation of the mask generation (Python w/ OpenCV) which I could also put into a node. I'd like to open a pull request, since the patch I posted above is not something one could reliably use. Since my node is at the moment written in Python, I need some information on how to add it to the build:
So, do I open a new issue, or is there someone I can contact directly? I'd prefer a forum, so that others get these answers, too.
Hi @ALfuhrmann,
I'm still not sure if we want to do that instead of just implementing it in C++ in AliceVision, but at least it's a good way to prototype. For small nodes that may evolve a lot, I agree that it could be cool to do it in Python, but I'm not sure whether this short-term simplification will become a long-term nightmare.
I would recommend using logging for now, even if it's not yet included in the log file.
I submitted a pull request. I think everything should be included there; please contact me if I messed up. Re: implementing in C++: the Mask node is only a few lines of Python, and since it only takes a few seconds to execute, I use it as it is. It would be easy to convert it to C++, especially as all the necessary libraries are already in AliceVision, so if the Python dependencies prove to be a pain, I could redo it in C++.
@ALfuhrmann I put together an ImageMasking node that auto-generates masks for images with a white background, based on ImageMagick. I have not tested it with the DepthMapFilter graph yet, as I do not have a suitable photogrammetry dataset with a white background, so there might still be some bugs. (I also noticed there is a GrabCut implementation WIP: https://github.com/alicevision/AliceVision/tree/imageMasking/src/aliceVision)
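For comparison, the basic idea of auto-masking a white background can be sketched in a few lines of Python with OpenCV; this is a hypothetical illustration (threshold value, morphology sizes and file names are guesses), not the ImageMagick-based node itself.

```python
# Hypothetical sketch: build a binary foreground mask for a photo taken against
# a bright white background. Threshold and kernel size will need tuning per dataset.
import cv2
import numpy as np

def white_background_mask(image_path, mask_path, threshold=240):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Pixels darker than the threshold are treated as foreground (white = background).
    mask = (gray < threshold).astype(np.uint8) * 255
    # Remove small speckles and close small holes in the foreground.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    cv2.imwrite(mask_path, mask)

# white_background_mask("images/key_001.jpg", "masks/key_001.png")
```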
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hi @ALfuhrmann, I am impressed by the detail on your scan of the key. I always struggle with metallic surfaces. Can you show some more details on your scanning setup and perhaps some more examples?
@daktylus Use a lightbox and a turntable. You need diffuse light to reduce reflections to a minimum.
Exactly what @natowi said: I've been using a white plastic dome for light diffusion. Try to avoid reflections as much as possible. You can use a chalk spray (the stuff you can use for outdoor markings), but if you use it on keys it might gunk up the lock afterwards. Additionally, I use a heavy macro lens, which makes the scratches and imperfections visible.
@natowi Is there a good explanation for the fact that sparse reconstructions can accurately reproduce objects in a scene, but further steps in the pipeline introduce noise? There are a handful of other issue reports pertaining to this, where objects disappear from a scene. You always give the same advice, to change the field of view and so on, but that is never actually necessary for the sparse reconstruction. Isn't there a way to use the almost always perfectly accurate SfM output to reject points outside of the sparse cloud?
I'm a little late to the party, but interested in this topic. After extracting the lib files from masking_patch.zip and merging them into Meshroom-2019.2.0-win64\Meshroom-2019.2.0\lib, I opened Meshroom and received the following error: Plugins loaded: CameraCalibration, CameraInit, CameraLocalization, CameraRigCalibration, CameraRigLocalization, ConvertSfMFormat, DepthMap, DepthMapFilter, DepthMaskFilter, ExportAnimatedCamera, ExportColoredPointCloud, ExportMaya, FeatureExtraction, FeatureMatching, ImageMatching, ImageMatchingMultiSfM, KeyframeSelection, LDRToHDR, MeshDecimate, MeshDenoising, MeshFiltering, MeshResampling, Meshing, PrepareDenseScene, Publish, SfMAlignment, SfMTransform, StructureFromMotion, Texturing
[00:24:04.159229][error] Invalid input database 'C:/Meshroom-2019.1.0/aliceVision/share/aliceVision/cameraSensors.db', please specify a valid file. Thanks in advance for your kind advice. PS: As soon as I posted this, I realized that I could edit the path to cameraSensors.db in the CameraInit node attributes window. I'll leave this edit here, just in case someone else needs to see it.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This is still a huge issue. There needs to be a way of masking artifacts in the depth map without using the default filter, which aggressively erases real features.
The latest version of Meshroom supports masking, which should provide a solution for this issue. If you can still observe the "growing" behaviour in the latest release, feel free to reopen the issue.
I am using a rotating stage in front of a diffusely lit white background to avoid specular highlights. I get excellent image quality of the objects in front of a white, saturated background.
While all the cameras get positioned correctly and the objects themselves get reconstructed without holes and with very little noise, they also exhibit strange artifacts "growing" from the end of the object into the background (image attached).
The artifacts "mirror" and continue part of the objects profile, even though the images show only evenly illuminated background.
I could avoid this easily by using an (automatically generated) mask for each image (see #188), but at the moment I can only post-process the model by removing any geometry with the background color.
So, how can I remove these artifacts?