
Reconstruction "grows" artifacts into neutral background #566

Closed
ALfuhrmann opened this issue Jul 30, 2019 · 34 comments
Labels
do not close, type:question

Comments

@ALfuhrmann

I am using a rotating stage in front of a diffusely lit white background to avoid specular highlights. I get excellent image quality of the objects in front of the saturated white background.

While all the cameras get positioned correctly and the objects themselves get reconstructed without holes and with very little noise, they also exhibit strange artifacts "growing" from the end of the object into the background (image attached).
image

The artifacts "mirror" and continue part of the objects profile, even though the images show only evenly illuminated background.

I could avoid this easily by using an (automatically generated) mask for each image (see #188), but at the moment I can only post-process the model by removing any geometry with the background color.

So, how can I remove these artifacts?

@natowi
Member

natowi commented Jul 30, 2019

I think this error is caused by texture/pattern similarity resulting in inaccurate feature detection in this area.
There will be a Features Viewer in the upcoming release which might help to find the cause for this problem.
Try with AKAZE and/or High settings to get more features detected and increase the accuracy.
You could also try changing the Min... values (cameras/image pairs) in some nodes to reduce outliers by using more cameras/image pairs for a more accurate reconstruction.

@ALfuhrmann
Author

ALfuhrmann commented Jul 30, 2019

I will try this, but I already get quite a large number of features in the sparse reconstruction, over 34,000, and none of them in the volume where the artifacts appear:
image
All this additional geometry seems to appear at the depth map stage:
image
It looks like a reconstruction of spurious features in the depth map of the background.

Edit: Which parameters do you mean by (Min values camera/pairs)? In the DepthMap node, the Neighbour Cameras?

@natowi
Member

natowi commented Jul 30, 2019

Which parameters do you mean by (Min values camera/pairs)? In the DepthMap node, the Neighbour Cameras?

I would try increasing the SGM: Nb Neighbour Cameras, Refine: Nb Neighbour Cameras values and compare the results.

Basically, use the information provided in the wiki under Dense reconstruction to increase the density of your reconstruction.

You can create a new node graph variation to compare the different outputs. (Duplicate .. From here)

@ALfuhrmann
Author

So I turned up the SfM with AKAZE and the High setting, and switched the SGM and Refine neighbour values to 10 and 6. Now I got 135,000 sparse points, a very nice model, and still the same artifacts.

In the picture below I compare one depth map with the resulting model from approximately the same camera. The artifacts growing from the tip of the key seem to be parallel to the scanline artifacts in the depth map:
image
My hypothesis is that the uniformly white background behind the key delivers no features, so the depth map calculation extends some features of the object into the background via belief propagation. And since those features correspond loosely to the tip of the rotating object, the meshing node produces these artifacts.
What do you think?

@SquidLord

This is one of the strong reasons you should probably avoid featureless backgrounds for scanning. Once the algorithm is looking really hard for variations in a surface in order to get an idea of where it is, tiny variations become enormously important to it.

In most cases, you would be better off putting your target on a sheet of newspaper, with all of its texture and type, than putting it on a featureless white field. It's easy enough to clean up the model after the fact and remove a clearly visible surface.

@ALfuhrmann
Author

Point taken, but in this case I have reflective objects, which I can reconstruct only under a diffuse environment. I did this in Agisoft Photoscan by simply adding masks, which I can easily pull from the background.
So I'm thinking of three options:

  1. color-key the background and remove everything colored from the final model
  2. put in a rendered textured background generated for the camera positions
  3. mask the depthmaps with masks generated from the undistorted images

(1) is a dirty hack, (2) does essentially what you proposed in your post, and (3) would probably fix the cause of the problem, but I am not sure whether it works. If (3) worked, I could write a new node which takes masks, undistorts them, and subtracts them from the depth maps after filtering.

Any suggestions? Would (3) be feasible?

@dmontagu

I'm also curious about depthmap masking -- if it plays nice with the subsequent meshing steps, I think it could help a lot of use cases (e.g. automated background removal) considering the extent of resources available for automated image processing relative to automated point-cloud/mesh processing.

@SquidLord

It actually might be more effective to matte spray your reflective objects to remove their reflectivity. That's going to be problematic for a lot of reasons anyway so you'll be saving yourself a lot of reconstruction problems in the long run. A lightly tacky matte spray and some powder to actually give the system some detail to work with is always going to be better than trying to work with reflective objects.

Theoretically, dropping an object mask on the original images would only provide data for the object. In actual practice, however, I find that the resulting reconstruction, even on near-ideal objects like wood bark and textured rock, is never quite as good at capturing camera positions as having a real background which provides parallax with the object.

But that's my experience.

Putting in a rendered background based on what's ultimately an approximated camera position (because the digital environment is never going to perfectly match up with the real world camera) would seem to be borrowing the worst of all worlds. Even an empty mask has got to be better than that.

@ALfuhrmann
Author

@SquidLord: For my application, a modification of the object is not possible. I already tried white spray chalk for getting a ground truth using a laser scanner, but removing it is a pain, which would not work in the proposed workflow. Additionally, I am using extreme close-up macro photography to get the necessary resolution and precision, so adding any kind of layer thickness does not help. Using cyclododecane spray would work, since it simply vanishes by sublimation, but you need a relatively thick, waxy layer which exhibits subsurface scattering. For tiny objects, that layer thickness is prohibitive.

The reconstruction itself works fine; the only problems I get are with symmetric objects (naturally). Matching a rendered environment up with the real-world camera is probably not so difficult, since I would do this after SfM, where I already have a calibrated camera and undistorted images which I can use to align the render camera. But this would be a stupid workaround, not fixing the cause of the problem.

And since masking works in Photoscan & Reality Capture, I think it would be a useful enhancement to the workflow. I just need to know whether the depth maps are generated from the same point of view as the associated undistorted images. It would be nice to consider the masks for SfM too, but I have no idea how.

@natowi
Member

natowi commented Jul 31, 2019

@ALfuhrmann Yes, a featureless background can have its drawbacks, as @SquidLord explained.

You could try enabling AKAZE in the FeatureMatching node. FeatureMatching matches the Features from FeatureExtraction, so it can work like a filter. Modifying the Distance Ratio can reduce outliers.
You can also experiment with the settings in DepthMapFilter.

One other thing you could try: do not place the key on a flat surface, place something underneath. This will enable you to better capture the edges.

If this does not work, I think masking would be the best way to go in your case.

I could write a new node which takes masks, undistorts them and subtracts them from the depthmaps after filtering.

This would be a good and welcome contribution #188

@ALfuhrmann
Author

@natowi So now I just have to figure out WHICH maps to mask: What is the difference between depthMap, simMap, and nmodMap? Do I have to walk through the source code, or is this documented somewhere?

@natowi
Member

natowi commented Aug 1, 2019

I think simMap = the SGM-optimized similarity volume, depthMap = the depth map itself, and nmodMap = used for filtering.
You can look into the references at https://alicevision.org/#photogrammetry/depth_maps_estimation

@ALfuhrmann
Author

Great, thanks!
I already tried masking all of them to zero, but it didn't work. I'll have to check out those references.
Shall I close the issue, or use this to pester you with further questions?

@natowi
Member

natowi commented Aug 1, 2019

You can keep it open.

@hargrovecompany

Weird things have happened for me when the foreground object occupies a small percentage of the image and the background is either featureless or has a consistent pattern. If possible, zoom in even closer for the pics.

@ALfuhrmann
Author

Back from my holidays, I am looking into the mask issue again.

I first tried simply masking the depth images using ImageMagick, but only got errors from the Meshing node. Now I am trying a Python script which reads the .EXR images and masks them.

The only thing I need to know now is which values to replace the masked areas with.
Both image types (depthMap & simMap) contain a border with constant values, I presume to compensate for the image edges when doing a convolution or comparison. These values (-1.0 and 1.0 respectively) are possible candidates for masking values.

Does someone know if this could work? Or any other values?

@fabiencastan
Member

Yes, -1.0 in the depth map means invalid.

@ALfuhrmann
Author

So I wrote a short Python script which takes .PNG masks as input and masks the *_depthMap.exr output of DepthMapFilter. First results look promising:
Source image and mask:
image
And here the result from Meshing, without and with masking:
image

At the moment the script is called outside of Meshroom; I now have to convert it into a node.

At the moment I just use mask files generated with ImageMagick from the source images. I put these into a separate folder and call my script with this folder, the depth map folder, and the .sfm file as parameters. For now, I would like to build a node which does exactly this, meaning that the masks can be generated outside of Meshroom, or - later - with a Meshroom node using a more sophisticated masking approach (adaptive thresholding, GrabCut, or something discussed in #188).
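For reference, here is a minimal sketch of what such a masking step could look like (this is not the script from this thread; the <viewId>_depthMap.exr naming, the single-channel EXR layout, and the OpenEXR/OpenCV usage are assumptions), writing the -1.0 "invalid" value into masked pixels:

```python
# Sketch only: mask AliceVision depth maps with per-image PNG masks by writing
# -1.0 (invalid) into masked pixels. Assumptions: depth maps are single-channel
# EXRs named <viewId>_depthMap.exr, the .sfm file is JSON with a "views" list
# mapping each viewId to its source image path, and masks are grayscale PNGs
# named after the source images (0 = masked).
import json
import os

import cv2      # only used to read and resize the PNG masks
import Imath
import numpy as np
import OpenEXR


def mask_depth_maps(sfm_file, depthmap_folder, mask_folder, out_folder):
    with open(sfm_file) as f:
        views = json.load(f)["views"]

    for view in views:
        view_id = view["viewId"]
        image_name = os.path.splitext(os.path.basename(view["path"]))[0]
        depth_path = os.path.join(depthmap_folder, "{}_depthMap.exr".format(view_id))
        mask_path = os.path.join(mask_folder, image_name + ".png")
        if not (os.path.isfile(depth_path) and os.path.isfile(mask_path)):
            continue

        exr = OpenEXR.InputFile(depth_path)
        header = exr.header()
        dw = header["dataWindow"]
        w, h = dw.max.x - dw.min.x + 1, dw.max.y - dw.min.y + 1
        channel = list(header["channels"].keys())[0]  # single depth channel
        pixel_type = Imath.PixelType(Imath.PixelType.FLOAT)
        depth = np.frombuffer(exr.channel(channel, pixel_type), dtype=np.float32)
        depth = depth.reshape(h, w).copy()

        # Depth maps are usually downscaled relative to the input images,
        # so resize the mask to the depth map resolution.
        mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
        mask = cv2.resize(mask, (w, h), interpolation=cv2.INTER_NEAREST)

        depth[mask == 0] = -1.0  # -1.0 marks invalid depth

        out_path = os.path.join(out_folder, os.path.basename(depth_path))
        out = OpenEXR.OutputFile(out_path, header)
        out.writePixels({channel: depth.tobytes()})
        out.close()
```

One caveat: writing the EXR back with the original header, as above, assumes the depth channel is stored as 32-bit float; any metadata the Meshing node expects would also need to survive the round trip.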

So, is there documentation on how to implement a new node, or do I have to extract this from the source?

@ALfuhrmann
Author

I looked through the node sources and figured it out.

Now I have a minimalistic node, which has the filtered depth maps and the undistorted masks as input and outputs masked depth maps for meshing:

image

The PrepareScene2 node above takes the folder of mask images as input. DepthMaskFilter has only the following attributes:

image

Masks Folder is the output from PrepareScene2, the other inputs are the same as for Meshing.

Currently I build the masks externally with ImageMagick floodfill, but a node for creating masks with one of these methods would be the logical next step.

Anyone interested in testing this?

@ALfuhrmann
Author

Here is everything you should need for testing:

masking_patch.zip

Copy the "lib" folder from the ZIP in your meshroom root folder, merging it with the "lib" in there.
At the moment it is only tested with Meshroom-2019.1.0-win64, but it probably works with 2019.2.0, too.

The ZIP also contains a sample Meshroom graph "Meshroom-2019.1.0-win64", which shows the necessary nodes and connections.

So you only need to drop your images into Meshroom and put the folder containing your masks into the "DepthMaskFilter/Mask Folder" attribute.
Masks are named like the source image they belong to and have to be grayscale .PNG files. Every pixel with value 0 (black) is masked (see here).

Please post your results in this thread.

@ALfuhrmann
Author

Reconstruction of a reflective key, real life case:
(taken with a turntable and diffuse illumination)

Input image (one of 40):
image

Without masks:
image

With masks (generated with Imagemagick):
image
(no post processing, only MeshFiltering node)

@ALfuhrmann
Author

@natowi: I now have a first implementation of a "Depthmask" node, as well as a quick implementation of the mask generation (Python with OpenCV), which I could also put into a node.
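For anyone curious, a rough sketch of what such a mask generation could look like (this is not the actual implementation; the flood-fill-from-corner approach, the tolerance value, and the folder names are assumptions):

```python
# Sketch only: auto-generate masks for images shot against a bright background.
# Flood-fills from the top-left corner across near-background pixels and writes
# a grayscale PNG where 0 = masked (background) and 255 = kept (object).
import glob
import os

import cv2
import numpy as np


def make_mask(image_path, out_path, tolerance=12):
    img = cv2.imread(image_path)
    h, w = img.shape[:2]

    # Flood fill from the top-left corner over pixels within `tolerance`
    # of the corner colour, marking them in a separate mask only.
    flood = np.zeros((h + 2, w + 2), dtype=np.uint8)
    cv2.floodFill(img, flood, seedPoint=(0, 0), newVal=(0, 0, 0),
                  loDiff=(tolerance,) * 3, upDiff=(tolerance,) * 3,
                  flags=4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8))

    # flood is 255 where the background was reached: invert it so the object
    # stays white (kept) and the background becomes black (masked).
    mask = np.where(flood[1:-1, 1:-1] > 0, 0, 255).astype(np.uint8)
    cv2.imwrite(out_path, mask)


if __name__ == "__main__":
    # Hypothetical folders; masks are named after the source images.
    for path in glob.glob("images/*.jpg"):
        name = os.path.splitext(os.path.basename(path))[0]
        make_mask(path, os.path.join("masks", name + ".png"))
```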

I'd like to generate a pull request, since the patch I posted above is not something one could reliably use.

Since my node is currently written in Python, I need some information on how to add it to the build:

  • How and where to add the additional python modules I need? (OpenEXR, numpy, etc.)
  • How to cleanly implement logging in a pure Python node (without a command-line executable in it)
  • and probably some other stuff, too.

So, do I open a new issue, or is there someone I can contact directly? I'd prefer a forum, so that others get these answers, too.

@fabiencastan
Member

Hi @ALfuhrmann,
It would be good to start a PR as "WIP" when you start, so we can see your implementation and propose modifications.

  • "How and where to add the additional python modules I need? (OpenEXR, numpy, etc.)"

I'm still not sure if we want to do that instead of just implementing it in C++ in AliceVision, but at least it's a good way to prototype. For small nodes that may evolve a lot, I agree that it could be cool to do it in Python, but I'm not sure whether this short-term simplification will become a long-term nightmare.
So as a start, I would recommend creating a new folder in meshroom/nodes/YourNodeGroup with a Python requirements.txt file in it. If it's painful for packaging, we could move it to C++ later.
What do you think?
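For illustration, a skeleton of what such a pure-Python node descriptor might look like (the attribute names and descriptor arguments below are assumptions modelled on the bundled nodes, not a confirmed API):

```python
# Hypothetical skeleton of a pure-Python Meshroom node. Descriptor names and
# arguments are assumptions based on the existing nodes, not a confirmed API.
from meshroom.core import desc


class DepthMaskFilter(desc.Node):
    inputs = [
        desc.File(name='input', label='SfMData',
                  description='Input SfMData file.', value='', uid=[0]),
        desc.File(name='depthMapsFolder', label='Depth Maps Folder',
                  description='Filtered depth maps to be masked.', value='', uid=[0]),
        desc.File(name='masksFolder', label='Masks Folder',
                  description='Grayscale PNG masks (0 = masked).', value='', uid=[0]),
    ]
    outputs = [
        desc.File(name='output', label='Output',
                  description='Folder with masked depth maps.',
                  value=desc.Node.internalFolder, uid=[]),
    ]

    def processChunk(self, chunk):
        # Pure-Python processing: read the attribute values and run the
        # masking code (e.g. the script sketched earlier in this thread)
        # on every depth map, writing results into chunk.node.output.value.
        ...
```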

  • "How to cleanly implement logging in a pure Python node (without command line executable in it) and probably some other stuff, too."

I would recommend using logging for now, even if it's not included in the log file yet.
This PR would be a good way to do it, but it needs to provide a standard logging object instead of a custom one.
In the end, you would be able to replace your calls to logging with this object.

@ALfuhrmann
Author

ALfuhrmann commented Sep 17, 2019

I submitted a pull request.

I think everything should be included there, please contact me if I messed up.

Re: implementing in C++

The Mask node is only a few lines of Python and, since it only takes a few seconds to execute, I use it as it is. It would be easy to convert it to C++, especially as all the necessary libraries are already in AliceVision, so if the Python dependencies prove to be a pain, I could do it again in C++.

@natowi
Member

natowi commented Nov 16, 2019

@ALfuhrmann I put together an ImageMasking node to auto-generate masks for images with a white background, based on ImageMagick. I have not tested this with the DepthMapFilter graph yet, as I do not have a suitable photogrammetry dataset with a white background, so there might still be some bugs.

//I noticed there is a GrabCut implementation WIP https://github.com/alicevision/AliceVision/tree/imageMasking/src/aliceVision

@stale

stale bot commented Mar 16, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

The stale bot added the stale label on Mar 16, 2020.
@daktylus

Hi @ALfuhrmann,

I am impressed by the detail on your scan of the key. I always struggle with metallic surfaces. Can you show some more details on your scanning setup and perhaps some more examples?

The stale bot removed the stale label on Mar 23, 2020.
@natowi
Member

natowi commented Mar 23, 2020

@daktylus Use a lightbox and a turntable. You need diffuse light to reduce reflections to a minimum.
(If possible, cover your metallic surfaces with a thin layer of paint or something similar.)
If you still have questions, please open a new issue with a detailed description of your current setup, and what object you are trying to capture, as your question is not directly related to this issue.

@ALfuhrmann
Author

Exactly what @natowi said: I've been using a white plastic dome for light diffusion. Try to avoid reflections as much as possible.

You can use a chalk spray (the stuff you can use for outdoor markings), but if you use it on keys it might gunk up the lock afterwards.

Additionally, I use a heavy macro lens, which makes the scratches and imperfections visible.

@tkircher

tkircher commented Apr 5, 2020

@natowi is there a good explanation for the fact that sparse reconstruction can accurately reproduce objects in a scene, but further steps in the pipeline introduce noise? There are a handful of other issue reports pertaining to this, where objects disappear from a scene. You always give the same advice, to change the field of view and so on, but that is never actually necessary for the sparse reconstruction. Isn't there a way to use the almost always perfectly accurate SfM output to reject points outside of the sparse cloud?
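As a rough illustration of that idea (not an existing Meshroom feature), dense points could be rejected based on their distance to the sparse SfM cloud, for example with a KD-tree; the file formats and threshold below are assumptions:

```python
# Sketch only: reject dense points that are far from every sparse SfM point.
# This illustrates the idea above; it is not an existing Meshroom feature.
import numpy as np
from scipy.spatial import cKDTree


def filter_by_sparse_cloud(dense_points, sparse_points, max_dist=0.05):
    """Keep only dense points within max_dist of some sparse SfM point."""
    tree = cKDTree(sparse_points)        # sparse_points: (N, 3) float array
    dist, _ = tree.query(dense_points)   # nearest-neighbour distance per point
    return dense_points[dist <= max_dist]


# Hypothetical usage with point clouds exported as plain xyz text files:
# sparse = np.loadtxt("sparse_cloud.xyz")[:, :3]
# dense = np.loadtxt("dense_cloud.xyz")[:, :3]
# kept = filter_by_sparse_cloud(dense, sparse, max_dist=0.02)
```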

@P3rf3ctDay

P3rf3ctDay commented Apr 24, 2020

Please post your results in this thread.

I'm a little late to the party, but interested in this topic.
I attempted to use the masking_patch.zip that @ALfuhrmann posted on Sept 4, 2019. I'm using Meshroom-2019.2.0-win64 and encountered two issues. I understand that this is under development and appreciate the work that has been put into this project. If there is a more recent development that I should be using, I'd love to look at those resources.

After extracting the lib files from masking_patch.zip and merging them into Meshroom-2019.2.0-win64\Meshroom-2019.2.0\lib, I opened Meshroom and received the following error:
Incompatible
The second issue arises when trying to load images, regardless of whether I upgrade the nodes to the current version. The path to the camera sensor database seems to have been hard-coded into the patch.

Plugins loaded: CameraCalibration, CameraInit, CameraLocalization, CameraRigCalibration, CameraRigLocalization, ConvertSfMFormat, DepthMap, DepthMapFilter, DepthMaskFilter, ExportAnimatedCamera, ExportColoredPointCloud, ExportMaya, FeatureExtraction, FeatureMatching, ImageMatching, ImageMatchingMultiSfM, KeyframeSelection, LDRToHDR, MeshDecimate, MeshDenoising, MeshFiltering, MeshResampling, Meshing, PrepareDenseScene, Publish, SfMAlignment, SfMTransform, StructureFromMotion, Texturing
WARNING:root:Compatibility issue detected for node 'PrepareDenseScene_1': VersionConflict
WARNING:root:Compatibility issue detected for node 'DepthMapFilter_1': VersionConflict
WARNING:root:Compatibility issue detected for node 'MeshFiltering_1': VersionConflict
WARNING:root:Compatibility issue detected for node 'Texturing_1': VersionConflict
WARNING:root:Compatibility issue detected for node 'Meshing_1': VersionConflict
WARNING:root:Compatibility issue detected for node 'PrepareDenseScene_2': VersionConflict
WARNING:root:Compatibility issue detected for node 'Meshing_2': VersionConflict
WARNING:root:Compatibility issue detected for node 'MeshFiltering_2': VersionConflict
Program called with the following parameters:

  • allowSingleView = 1
  • defaultCameraModel = "" (default)
  • defaultFieldOfView = 45
  • defaultFocalLengthPix = -1 (default)
  • defaultIntrinsic = "" (default)
  • groupCameraFallback = Unknown Type "enum EGroupCameraFallback"
  • imageFolder = "" (default)
  • input = "C:\Users\vr2\AppData\Local\Temp\tmpl9mjm681/CameraInit/f1e2eb85ff47e36189d984889fbb64df6e81993a//viewpoints.sfm"
  • output = "C:/Users/vr2/AppData/Local/Temp/tmpl9mjm681/CameraInit/f1e2eb85ff47e36189d984889fbb64df6e81993a/cameraInit.sfm"
  • sensorDatabase = "C:/Meshroom-2019.1.0/aliceVision/share/aliceVision/cameraSensors.db"
  • verboseLevel = "info"

[00:24:04.159229][error] Invalid input database 'C:/Meshroom-2019.1.0/aliceVision/share/aliceVision/cameraSensors.db', please specify a valid file.
ERROR:root:Error while building intrinsics : Traceback (most recent call last):
File "C:\Users\DEV\AppData\Local\Temp\meshroom\meshroom\ui\reconstruction.py", line 399, in buildIntrinsics
File "C:\Users\DEV\AppData\Local\Temp\meshroom\meshroom\nodes\aliceVision\CameraInit.py", line 192, in buildIntrinsicsRuntimeError: CameraInit failed with error code 1. Command was: "aliceVision_cameraInit --sensorDatabase "C:/Meshroom-2019.1.0/aliceVision/share/aliceVision/cameraSensors.db" --defaultFieldOfView 45.0 --groupCameraFallback folder --verboseLevel info --output "C:/Users/vr2/AppData/Local/Temp/tmpl9mjm681/CameraInit/f1e2eb85ff47e36189d984889fbb64df6e81993a/cameraInit.sfm" --allowSingleView 1 --input "C:\Users\vr2\AppData\Local\Temp\tmpl9mjm681/CameraInit/f1e2eb85ff47e36189d984889fbb64df6e81993a//viewpoints.sfm""

Thanks in advance for your kind advice.

PS. As soon as I posted this I realized that I could edit the path to cameraSensors.db in the CameraInit node attributes window. I'll leave this edit here, just in case someone else needs to see this.

@stale

stale bot commented Aug 22, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

The stale bot added the stale label on Aug 22, 2020.
@tkircher

This is still a huge issue. There needs to be a way of masking artifacts in the depth map without using the default filter that aggressively erases real features.

@natowi
Member

natowi commented Jul 25, 2023

The latest version of Meshroom supports Masking, which should provide a solution for the issue:
https://github.com/alicevision/Meshroom/wiki/New-Features-in-Meshroom-2023.1#1-image-masking

If you still observe the "growing" behaviour in the latest release, feel free to reopen the issue.

natowi closed this as completed on Jul 25, 2023.