[ENHANCEMENT] Proposal to allow detection from different perspectives #60
Hi @TimWalter,
Not right now. I mean, you could have a bit of perspective, but the expectation is to shoot the chart as normal to the optical axis as possible, and to run distortion and vignette correction before detection if needed. I don't have many cycles to look into that right now, but any improvement is appreciated! I would like to roll in the proposal from #53 too! Cheers, Thomas
Hey, I believe that my revised strategy is far superior to the current implementation in terms of detection robustness, although it is slower (approximately 5x). Nonetheless, being able to detect multiple checkers in one image, and to detect them even when they are not strictly facing the camera or when not all swatches are detected, is vital for certain applications. I have done significant integration work on my local branch, including integrating it into segmentation.py, documenting it, and adapting the unit tests and the demo. The demo is a Jupyter notebook that I have attached to this comment for discussion; however, you may need to adapt the image and template paths to use it. The benefits of the revised strategy are significant, as it improves the robustness of checker detection. I would love to hear your feedback on the code and any further testing that you might be able to do. My hope is that we can integrate this into the library. Cheers, Tim
Hi @TimWalter, Thanks a lot! It is certainly on my radar to implement your updates; I will try to look at them in the coming weeks. Something I would like to do, maybe, is use your method as a fail-over:
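Roughly something along these lines; detect_with_failover and the fallback callable are placeholders for the idea, not actual library API:

from colour_checker_detection import detect_colour_checkers_segmentation


def detect_with_failover(image, fallback_detector):
    """Run the current segmentation-based detection first and only fall back
    to a slower, perspective-tolerant detector when nothing is found.

    "fallback_detector" stands in for the proposed method: any callable
    taking an image and returning a tuple of detections will do.
    """

    detections = detect_colour_checkers_segmentation(image)
    if len(detections) > 0:
        return detections

    return fallback_detector(image)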
Cheers, Thomas
Sounds reasonable for the robustness, for sure! But how would you handle the multiple-checkers-in-one-image approach? In any case, let me know if I can help.
The current code should be able to detect many already.
Or do you mean if one is detected but not the other?
Multiple in one image: do you have to call the function multiple times with the same image to get multiple detections? Or how would you detect multiple checkers with similar aspect ratios and discriminate between them to apply the correct mask? As an example, we specifically had the problem that some checkers (e.g. the Calibrite Passport 2) have two boards with basically the same aspect ratio, so discriminating between them with the current code is hard if not all swatches are detected (as swatch count is the only other metric left).
The code should already be doing swatch analysis to isolate them; we thus reject the white balance section of the passport.
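For reference, a single call already returns one entry per detected checker, roughly along these lines (the noise image is just a stand-in, nothing will actually be detected in it):

import numpy as np

from colour_checker_detection import detect_colour_checkers_segmentation

# Stand-in for a real photograph; with noise nothing is detected and the
# loop simply does not run.
image = np.random.random((1080, 1920, 3)).astype(np.float32)

# One set of swatch colours is returned per detected checker, so a single
# call handles multiple checkers in the same image.
for i, swatch_colours in enumerate(detect_colour_checkers_segmentation(image)):
    print(i, np.asarray(swatch_colours).shape)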
Hey @TimWalter, I haven't forgotten about that; I've been busy, very busy, on many different projects, so this got pushed to the bottom of the stack. I did look at your notebook though, it is great but will take some effort to get added. I also saw that there is a dependency on sklearn's DBSCAN; nothing dramatic, but I thought I would point it out! Cheers, Thomas
Yes, that is true; I just needed some clustering algorithm to remove outliers and DBSCAN seemed to work most robustly.
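The kind of filtering meant here is roughly the following; eps and min_samples are arbitrary illustration values, not the ones from the notebook:

import numpy as np

from sklearn.cluster import DBSCAN


def filter_outlier_centroids(centroids, eps=30.0, min_samples=4):
    """Keep only the centroids that belong to a dense cluster; DBSCAN labels
    sparse points as noise (-1) and those are discarded."""

    centroids = np.asarray(centroids, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(centroids)

    return centroids[labels != -1]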
Not sure if this is helpful, but here is a script I wrote a while back for a project. It is not production-ready by any means, but it might be useful as a reference. It is a method for finding Macbeth cards built on top of @KelSolaar's awesome library. We found it to work really well for our use case, allowing us to find rotated color cards (even upside down/flipped). Hopefully it helps?
Hey @TimWalter, I'm working on this and I have a few notes / questions:
Just deleted a bunch of comments, I was not thinking clearly, long week... :)
Looking at the are_three_collinear definition: note that this is a bit faster:

from itertools import combinations


def are_three_collinear_2(points):
    # Three points are collinear when the signed area of the triangle they
    # form (shoelace formula) is zero; the flag stays truthy only if every
    # triple has a non-zero area.
    non_collinear = True
    for p in combinations(points, 3):
        non_collinear *= (
            p[0][0] * (p[1][1] - p[2][1])
            + p[1][0] * (p[2][1] - p[0][1])
            + p[2][0] * (p[0][1] - p[1][1])
            != 0
        )

    return not bool(non_collinear)
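A quick sanity check of the behaviour, with made-up sample points:

# Three of these points lie on the line y = x, so the check returns True.
print(are_three_collinear_2([(0, 0), (1, 1), (2, 2), (5, 1)]))  # True

# No three of these points are collinear, so the check returns False.
print(are_three_collinear_2([(0, 0), (1, 0), (0, 1), (3, 5)]))  # False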
@TimWalter: I noted an issue in the definition that filters the stacked contours: it relies on the contours being ordered in a particular way by OpenCV. This version works irrespective of the ordering in my quick tests:

# Relies on the helpers already used in the module: cv2, numpy as np,
# contour_centroid, as_float32_array and as_int32_array.
def remove_stacked_contours(
    contours: ArrayLike, keep_smallest: bool = True
) -> Tuple[NDArrayInt]:
    """
    Remove and filter out the stacked contours from given contours, keeping
    either the smallest or the largest ones.

    Parameters
    ----------
    contours
        Stacked contours to filter.
    keep_smallest
        Whether to keep the smallest contours.

    Returns
    -------
    :class:`tuple`
        Filtered contours.

    References
    ----------
    :cite:`Walter2022`

    Examples
    --------
    >>> contours = np.array([
    ...     [[0, 0], [7, 0], [7, 7], [0, 7]],
    ...     [[0, 0], [8, 0], [8, 8], [0, 8]],
    ...     [[0, 0], [10, 0], [10, 10], [0, 10]],
    ... ])
    >>> remove_stacked_contours(contours)  # doctest: +ELLIPSIS
    (array([[0, 0],
           [7, 0],
           [7, 7],
           [0, 7]]...,)
    >>> remove_stacked_contours(contours, False)  # doctest: +ELLIPSIS
    (array([[ 0,  0],
           [10,  0],
           [10, 10],
           [ 0, 10]]...,)
    """

    filtered_contours = []

    for contour in contours:
        centroid = contour_centroid(contour)

        # Previously kept contours whose interior contains this contour's
        # centroid, i.e. the contours it is stacked with.
        stacked_contours = [
            filtered_contour
            for filtered_contour in filtered_contours
            if cv2.pointPolygonTest(filtered_contour, centroid, False) > 0
        ]

        if not stacked_contours:
            filtered_contours.append(contour)
        else:
            areas = as_float32_array(
                [
                    cv2.contourArea(stacked_contour)
                    for stacked_contour in stacked_contours
                ]
            )

            if keep_smallest:
                result = np.all(cv2.contourArea(contour) < areas)
                index = 0
            else:
                result = np.all(cv2.contourArea(contour) > areas)
                index = -1

            if result:
                # The current contour is a better candidate: replace the
                # corresponding previously kept contour with it.
                stacked_contour = as_int32_array(stacked_contours)[
                    np.argsort(areas)
                ][0]

                index = np.argwhere(
                    np.all(
                        as_int32_array(filtered_contours) == stacked_contour,
                        axis=(1, 2),
                    )
                )[index][0]

                filtered_contours[index] = contour

    return tuple(
        as_int32_array(filtered_contour)
        for filtered_contour in filtered_contours
    )
Hey @TimWalter, I released a new version. I haven't incorporated your changes as I focused on a machine learning inference approach. That being said, I overhauled the segmentation method and it should be easier to slot your approach in there now if you feel like it! Cheers, Thomas
Hey, here is the implementation of is_convex_quadrilateral
The greedy_heuristic value was set to 10. The templates were generated quite manually; as you've seen, the colours and corresponding centroids were hard-coded, and they do get quite massive, which is the main issue. One can filter and sort them somewhat so that the most promising candidates are evaluated first, which is done, but this only relieves the issue partially. Regarding the collinear function, you are definitely right that a valid correspondence is not allowed to include 3 collinear points. My function does check that as well, since a full-rank matrix confirms it, but the documentation and naming of the function are wrong. In any case, since yours is faster, all the better! Regarding the filtering, could you specify the issue with the ordering? Moreover, I am unsure how the filtering could be done just by size, since it isn't consistently true that the larger or smaller contour represents our swatches; there are usually invalid contours on both ends of the spectrum, which is why I went for a clustering method.
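For context, the generic convexity test goes roughly like this: four ordered corner points form a convex quadrilateral when the cross products of consecutive edges all share the same sign. This is only an illustrative sketch, not the attached implementation:

import numpy as np


def is_convex_quadrilateral_sketch(points):
    """Return True if the four 2D points, taken in order around the shape,
    form a convex quadrilateral, i.e. all edge cross products share a sign."""

    points = np.asarray(points, dtype=float)
    cross_products = []
    for i in range(4):
        a, b, c = points[i], points[(i + 1) % 4], points[(i + 2) % 4]
        ab, bc = b - a, c - b
        cross_products.append(ab[0] * bc[1] - ab[1] * bc[0])

    return all(z > 0 for z in cross_products) or all(z < 0 for z in cross_products)


print(is_convex_quadrilateral_sketch([(0, 0), (4, 0), (4, 3), (0, 3)]))  # True
print(is_convex_quadrilateral_sketch([(0, 0), (4, 0), (1, 1), (0, 3)]))  # False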
Thanks for the replies! Would be neat to also have your method :)
Hey,
I've been tinkering with the problem of detecting the colour checker when the shot was taken at an angle, so that the otherwise rectangular outline of the checker appears perspective-distorted.
I am unsure if the detection of such a contour is currently possible with this package, but I would like to contribute in order to enable such functionality.
As the masking of the checker should stay fixed, the idea is to warp the detected contour to fit the mask.
When the contour is warped, even the size of the mask can stay fixed.
This can be done as follows:
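With OpenCV this is roughly the following, where detected_contour stands for the four corner points of the checker and the output size is an arbitrary illustration value (a sketch of the idea, not the notebook code):

import cv2
import numpy as np


def warp_checker_to_mask(image, detected_contour, width=1440, height=960):
    """Warp the perspective-distorted checker region to a fixed-size,
    front-facing rectangle so that the existing mask can be applied unchanged."""

    source = np.asarray(detected_contour, dtype=np.float32)
    target = np.array(
        [[0, 0], [width, 0], [width, height], [0, height]], dtype=np.float32
    )

    transform = cv2.getPerspectiveTransform(source, target)

    return cv2.warpPerspective(image, transform, (width, height))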
As the checker orientation matters, it is necessary to ensure that the points of the detected contour are ordered in the same way as the target coordinates, e.g.
My attempt to ensure such an ordering consists of the following functions:
This works under the assumption that the checker is not flipped but was recorded as the aspect ratio suggests. The ordering has to be performed before the warping.
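One common way to obtain such an ordering, shown here only as an illustrative sketch and not as the functions from the notebook, is to sort the corners by the sum and difference of their coordinates:

import numpy as np


def order_quadrilateral_points_sketch(points):
    """Order four 2D corner points as top-left, top-right, bottom-right,
    bottom-left, assuming image coordinates with the y axis pointing down
    and a checker that is not flipped."""

    points = np.asarray(points, dtype=np.float32)
    ordered = np.zeros((4, 2), dtype=np.float32)

    sums = points.sum(axis=1)
    diffs = np.diff(points, axis=1).ravel()  # y - x for each point

    ordered[0] = points[np.argmin(sums)]   # top-left: smallest x + y
    ordered[2] = points[np.argmax(sums)]   # bottom-right: largest x + y
    ordered[1] = points[np.argmin(diffs)]  # top-right: smallest y - x
    ordered[3] = points[np.argmax(diffs)]  # bottom-left: largest y - x

    return ordered

With the corners ordered like this, the target coordinates of the warp can follow the same top-left, top-right, bottom-right, bottom-left convention.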
I've attached a notebook with a demonstration of the described pipeline:
Demo.zip