How to select initial image pair in command line mode #1198

Open
kdthrive opened this issue Dec 27, 2020 · 15 comments

@kdthrive

How do I select the initial image pair in command-line mode? When I get a result, how can I get the absolute scale, and how can I specify a direction?

Also, why is the object sometimes curved when it should actually be straight?
Thanks!

@kdthrive (Author)

[image]

@natowi (Member) commented Dec 27, 2020

> How to select initial image pair in command line mode?

meshroom_photogrammetry: "--paramOverrides NODE.param=value" lets you set a parameter directly from the command line.
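
For example, to pin the initial pair from the command line (a sketch; `initialPairA`/`initialPairB` are the attribute names the StructureFromMotion node exposes in the GUI, so double-check them against your own graph, and the paths are placeholders):

```sh
# Override the initial image pair on the StructureFromMotion node.
meshroom_photogrammetry --input /data/images --output /data/out \
  --paramOverrides \
    StructureFromMotion.initialPairA=/data/images/IMG_0001.jpg \
    StructureFromMotion.initialPairB=/data/images/IMG_0002.jpg
```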

> When I get a result, how can I get the absolute scale?

At the moment this is only supported when CCTags are being used.

> and how to point a direction.

To apply the correct model orientation, use a new SfMTransform node with the Transformation Method set to "manual" or "from_single_camera".

CCTags can also be used to align two models using the SfMAlignment node.

> sometimes the object is curved (should actually be straight)

Probably due to lens distortion.
Make sure the parameters in the CameraInit node are correct.

To narrow the problem down, please provide more details on your capturing hardware and the object (colour, texture) you are trying to reconstruct.

@kdthrive (Author)

I photographed coal.

[image]

I photographed the coal pile from top to bottom. The camera has large distortion. Is this the reason why the result turns into an arc?

[image]

@natowi (Member) commented Dec 28, 2020

You need to make sure all your camera settings in Meshroom are correct and your camera is being recognized.
Check whether the CameraInit intrinsics are set to fisheye.

> how can I open the .abc files?

Blender, 3ds Max.
To convert abc to ply, add a ConvertSfMFormat node, select your output format, and add the "unknown" describer type.

The Meshing node outputs a dense point cloud (ply), if that is what you are looking for.
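
For reference, the same conversion can also be scripted with the standalone AliceVision binary (a sketch; the input path is a placeholder, and the flags should be verified against `aliceVision_convertSfMFormat --help` on your install):

```sh
# Convert the SfM result from Alembic to PLY; the output
# format is taken from the file extension.
aliceVision_convertSfMFormat \
  --input MeshroomCache/StructureFromMotion/<uid>/sfm.abc \
  --describerTypes unknown \
  --output sfm.ply
```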

@kdthrive (Author)

Please give me an example of which camera parameters need to be changed. Sometimes some cameras are not recognized, and sometimes cameras that clearly do not belong together are estimated together. The main problem now is the bending of the results. Looking forward to your reply!

@natowi (Member) commented Dec 28, 2020

To resolve the bending problem, Meshroom needs to know the camera and lens parameters to undistort the images.

What camera(s) are you using? Are they in the sensor database? Check the image metadata to get the details on the make and model.
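
If you have exiftool installed, it can read the relevant tags directly (assuming an image named IMG_0001.jpg):

```sh
# Print camera make, model and focal length from the image metadata.
exiftool -Make -Model -FocalLength -FocalLengthIn35mmFormat IMG_0001.jpg
```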

@kdthrive (Author)

I use a Dahua surveillance camera, and the collected images have lost their metadata. I did not find it in the database. I only know that its focal length is 2.8 mm.

@natowi (Member) commented Dec 28, 2020

In CameraInit -> Intrinsics, select Camera Type: fisheye.

You can try to get more detailed camera parameters by using the CameraCalibration node: https://meshroom-manual.readthedocs.io/en/latest/node-reference/nodes/CameraCalibration.html

1. Print the pattern https://github.com/artoolkit/artoolkit5/blob/master/doc/patterns/Calibration%20chessboard%20(A4).pdf and record a video with your camera while changing the pattern orientation, like in https://vimeo.com/141414129
2. Feed the images into the CameraCalibration node and select the correct calibration pattern settings.
3. The result is a calibration file in the node folder. Open the folder and read the parameters.
4. Copy the parameters into the CameraInit node's Intrinsics.

@kdthrive (Author)

I used MATLAB to calibrate the parameters and got the distortion parameters k1, k2. Is the focal length fx = F (the actual focal length) * sx, or just F?
It looks like fx?

[image]
[image]

Why are there three distortion parameters?

@natowi (Member) commented Jan 4, 2021

// int    #image width
// int    #image height
// double #focal length
// double #ppx principal point x-coord
// double #ppy principal point y-coord
// DistortionParams (the k coefficients are the radial components of the distortion model, here k1-k3):
// double #k1
// double #k2
// double #k3

A good overview on the topic can be found here: https://de.mathworks.com/help/vision/ug/camera-calibration.html
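
On the fx question (this is the standard pinhole-camera convention, not anything Meshroom-specific): MATLAB reports the focal length already scaled into pixel units,

```latex
f_x = F \cdot s_x = \frac{F\,[\mathrm{mm}]}{\text{pixel width}\,[\mathrm{mm}]}
```

so the value you read off is F multiplied by sx, not the raw focal length F. Three coefficients simply correspond to a third-order radial distortion model (k1, k2, k3); if your MATLAB calibration only produced k1 and k2, k3 can be left at 0.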

@kdthrive (Author) commented Jan 8, 2021

There is a question: why do the same photos give different results?

@natowi (Member) commented Jan 8, 2021

Good question, this should be added to the docs.

The choice of the initial image pair may change from one run to another. In that case, the reconstruction can start from a very different part of the scene, which may produce very different results.

> (...) you can recompute the same dataset with the same version and get different results. On most datasets the output will be almost the same, but on some datasets where there are multiple candidates for the initial pair with similar scores, you may start from a very different place and get very different results.
>
> And in the long run, of course, improve the repeatability of the initial pair choice, which is already somewhere in the todo list.

#495 (comment)

> The pipeline is quite repeatable, except for one sensitive aspect of the SfM pipeline: the selection of the initial image pair. The selection contains randomness in the robust two-view estimation. So if you have multiple candidates with a similar score, from one run to another the reconstruction may start in a completely different place of the scene, and then the results may change a lot. For instance, you can start in one part of the scene with a few images correctly connected together but poorly connected to the rest of the scene, which may explain your very different results between two runs.

#484 (comment)

@kdthrive (Author)

Is there any way to avoid this situation? There are holes in the surface, but incorrect redundant geometry is generated below it.

[image]

If my images have no EXIF information and I don't know the camera parameters, is there any way to solve the problem of the model becoming curved?

@kdthrive (Author)

Can I have your contact information? I would be very grateful.

@kdthrive (Author)

I specified the initial image pair in the configuration file, but the reconstruction result is still different!
