show "spiderweb" lines between auto-matched points in 2 overlapping images #2
Comments
That's an intense calculation, but such things are known and maybe even feasible in the browser. Question is whether that feature could be disabled on mobile devices? Or only enabled on request?
I've done some preliminary checking and haven't found a JS library that implements feature detection... I'd also be for making the feature opt-in, since it's a pretty big calculation.
I looked for JS libs for feature detection and got the same result. The closest I found was pixo-something-or-other. All the operations worked… It's more work, but we could find some handy published algorithm for image feature matching and implement it ourselves. The work of visualizing the match results would be on top of implementing the matching itself.
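For a sense of what implementing such an algorithm involves, here is a minimal sketch (not from the thread) of the core matching step: brute-force nearest-neighbor matching of binary feature descriptors (the kind BRIEF/ORB produce) by Hamming distance, with a ratio test to discard ambiguous matches. The descriptor format (plain arrays of 32-bit integers) and all names are assumptions.

```js
// Hamming distance between two binary descriptors,
// each stored as an array of 32-bit integers.
function hamming(a, b) {
  let dist = 0;
  for (let i = 0; i < a.length; i++) {
    let v = a[i] ^ b[i];
    // count set bits (Kernighan's trick)
    while (v) { v &= v - 1; dist++; }
  }
  return dist;
}

// Brute-force matcher with a Lowe-style ratio test: keep a match
// only if the best candidate is clearly better than the second best.
function matchDescriptors(descA, descB, ratio = 0.8) {
  const matches = [];
  descA.forEach((da, i) => {
    let best = Infinity, second = Infinity, bestJ = -1;
    descB.forEach((db, j) => {
      const d = hamming(da, db);
      if (d < best) { second = best; best = d; bestJ = j; }
      else if (d < second) { second = d; }
    });
    if (best < ratio * second) matches.push({ a: i, b: bestJ, dist: best });
  });
  return matches;
}
```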
This sounds like a cool Google Summer of Code project, or maybe an Outreach Program one.
This could be less challenging than it sounds. If we follow the typical workflow it would go something like this:

Back end: …

Frontend: …

You could make the frontend a bit smarter by having it reduce the number of points it shows, so your image isn't clogged with points. Also, you could perhaps have it do an initial best-guess transform when first run that the user adjusts afterwards. It would probably be smart to only allow two images to be searched simultaneously so you don't freeze up the client, though.
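As a sketch of the "initial best-guess transform" idea, assuming the match objects from the earlier sketch plus per-image keypoint arrays: a translation-only guess could be the median offset between matched points. A real version would fit a full homography with RANSAC; the median offset is just the simplest outlier-resistant starting point.

```js
// Rough initial "best guess" placement: estimate the translation
// between two images as the median offset of their matched keypoints.
function estimateOffset(matches, pointsA, pointsB) {
  if (matches.length === 0) return null;
  const dxs = matches.map(m => pointsB[m.b].x - pointsA[m.a].x).sort((p, q) => p - q);
  const dys = matches.map(m => pointsB[m.b].y - pointsA[m.a].y).sort((p, q) => p - q);
  const mid = Math.floor(matches.length / 2);
  return { dx: dxs[mid], dy: dys[mid] }; // user adjusts from here
}
```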
You could also do some sorting, like not showing matched points that are too close together.
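A minimal sketch of that kind of culling, assuming the same match/point structures as above: greedily keep the strongest matches while enforcing a minimum pixel distance between the points that get displayed.

```js
// Thin out matches so no two displayed points sit closer together
// than minDist pixels (greedy, strongest match first).
function thinMatches(matches, pointsA, minDist = 30) {
  const kept = [];
  const sorted = matches.slice().sort((m, n) => m.dist - n.dist);
  for (const m of sorted) {
    const p = pointsA[m.a];
    const tooClose = kept.some(k => {
      const q = pointsA[k.a];
      return Math.hypot(p.x - q.x, p.y - q.y) < minDist;
    });
    if (!tooClose) kept.push(m);
  }
  return kept;
}
```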
I think initializing would be the most expensive part of the operation. Although you could reduce time there by capping the matches to a certain number of points, I wouldn't suggest it for the simple reason that you have no guarantee that the points are really good. Let's say I find three points that have good descriptors. I have no guarantee that I'll be able to match them in the second image, or that my first match is correct. Effectively you'd have to search the second image completely anyway, and because you don't know beforehand where the overlap area is, it's hard to restrict the search area to that part.

You could get pretty close to your idea by searching both images, filtering for good descriptors, matching them and discarding outliers, and then looking for clusters and only showing matches from the middle of clusters. That would mean one initial search, some filtering, and then updating your spider webs for the remaining ones (perhaps with a defined maximum density) for everything else after the first guess.

I think that could be doable for most browsers, as long as the users are okay with the longer start-up time.
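A sketch of the "middle of clusters" filter, under the same assumed data structures: score each matched point by how many other matched points fall within a radius, and drop fringe points with few neighbors, since isolated matches are more likely to be outliers.

```js
// "Middle of a cluster" filter: keep only matches whose point has at
// least minNeighbors other matched points within `radius` pixels.
function clusterCores(matches, pointsA, radius = 80, minNeighbors = 3) {
  return matches.filter(m => {
    const p = pointsA[m.a];
    let neighbors = 0;
    for (const n of matches) {
      if (n === m) continue;
      const q = pointsA[n.a];
      if (Math.hypot(p.x - q.x, p.y - q.y) <= radius) neighbors++;
    }
    return neighbors >= minNeighbors;
  });
}
```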
That makes sense. I think initialization could be done on a low-res version of the images. Part of limiting the # shown is a user interface issue -- lots and lots of lines would just be visual noise.
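Low-res initialization is cheap to do in the browser. A minimal sketch using a standard canvas downscale (these are real DOM APIs; the maxSide value is an arbitrary choice):

```js
// Downscale an image with a canvas so keypoint detection runs on a
// low-res version first; full resolution is only touched for refinement.
function downscale(img, maxSide = 512) {
  const scale = maxSide / Math.max(img.width, img.height);
  const canvas = document.createElement('canvas');
  canvas.width = Math.round(img.width * scale);
  canvas.height = Math.round(img.height * scale);
  const ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
  return { canvas, scale }; // keep scale to map points back to full res
}
```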
I like the idea of storing the matches server side. I'm not sure about matching being cheaper than finding though. It's still not absolutely clear to me which images are matched. Do we match all to all at the beginning to find optimal candidates? That would be really expensive. Do we do some kind of preliminary matching, e.g. based on something simple like histogram similarity? Or do we find and store points for all images and only match them when the user drags them within a certain proximity or explicitly requests they be matched? Probably the last option would be less expensive, but it still won't be cheap, since the matches are most likely found in a different order. If you match them unsorted you're looking at something like O(n²) comparisons.

I have no experience with doing this kind of thing in the browser, and JS engines are getting a lot faster. Also, I don't want to over-optimize at the cost of quality when there's no need. But maybe it would be smarter to split this up more? Like have the client upload low-res key points and have the server match and try to iteratively build the best conglomerate picture possible, and then pass that back to the client as pre-placed and pre-stretched images. Then let the client do spider webs on images that are moved when in close proximity.

I understand though, that when we go that route it's almost as easy to just do the whole shebang with SfM and point clouds server side - I'm sure many users are interested in the third dimension as well, but I admit that it's a lot to ask from a server.
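A sketch of the "only match on proximity" option, assuming keypoints and descriptors are precomputed once per image (e.g. server side) and stored on the image objects; `matchDescriptors` is the brute-force matcher sketched earlier, and `renderMatches` is a hypothetical renderer.

```js
const matchCache = new Map();

// Axis-aligned bounding-box overlap test.
function boundsOverlap(a, b) {
  return a.left < b.right && b.left < a.right &&
         a.top < b.bottom && b.top < a.bottom;
}

// Lazy matching: actual matching only runs when the user drags one
// image into overlap with another; a cache avoids repeating work.
function onDrag(dragged, allImages) {
  for (const other of allImages) {
    if (other === dragged || !boundsOverlap(dragged.bounds, other.bounds)) continue;
    const key = [dragged.id, other.id].sort().join(':'); // one entry per pair
    if (!matchCache.has(key)) {
      matchCache.set(key, matchDescriptors(dragged.descriptors, other.descriptors));
    }
    renderMatches(dragged, other, matchCache.get(key)); // hypothetical renderer
  }
}
```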
If we could generate descriptors (ideally metrics that are linear in some feature space) we could compare them very cheaply. In most 3D shape matching systems, each object is converted into a linear descriptor that can be indexed and looked up quickly. I'm sure we could come up with (read: find research on) some kind of handy scheme like that for image features.

That's all very airy and not well defined. The devil is in the details.
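One toy version of such an index, under the same assumption of binary descriptors as earlier sketches: bucket descriptors by a fixed subset of their bits (a crude locality-sensitive hash for Hamming distance), so a lookup scans one bucket instead of every descriptor, trading a little recall for a lot of speed.

```js
// Build buckets keyed by the first `bits` bits of each descriptor.
function buildIndex(descriptors, bits = 16) {
  const index = new Map();
  descriptors.forEach((d, i) => {
    const key = d[0] >>> (32 - bits);
    if (!index.has(key)) index.set(key, []);
    index.get(key).push(i);
  });
  return index;
}

// Approximate nearest neighbor: only scan the query's own bucket.
function lookup(index, descriptors, query, bits = 16) {
  const bucket = index.get(query[0] >>> (32 - bits)) || [];
  let best = -1, bestDist = Infinity;
  for (const i of bucket) {
    const d = hamming(query, descriptors[i]); // from the earlier sketch
    if (d < bestDist) { bestDist = d; best = i; }
  }
  return { index: best, dist: bestDist };
}
```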
Regardless of implementation, you could probably cull the search space to just the images near the one being dragged. Why try to match the currently dragged image against images off the screen?
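A minimal sketch of that culling, assuming each image exposes an axis-aligned bounds rectangle in page coordinates:

```js
// Cull the candidate set to images intersecting the current viewport;
// off-screen images are never considered for matching at all.
function visibleImages(allImages, viewport) {
  return allImages.filter(img =>
    img.bounds.left < viewport.right && viewport.left < img.bounds.right &&
    img.bounds.top < viewport.bottom && viewport.top < img.bounds.bottom);
}
```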
I think we could easily start by culling matching to only images which are visible on screen.
Agreed. My worries were mostly from the perspective of automated image matching.
I also like it because it's a stepping stone towards more automation if we want it later.
I think the "more automation" route is the 3D alignment and point cloud approach. I really feel that the raison d'etre for MapKnitter is the hands-on, manual placement of images by the user.
if it's possible to identify matching interest points between two images, on the fly (client side would be AWESOME), then as someone drags an image and it overlaps a neighboring image, the interface could try to find matches between the two images, and could draw spiderwebby red lines between possible matches, to help the user. It might even be possible to make those matches slightly "magnetic" if you know what I mean?
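A sketch of what the spiderweb rendering and "magnetic" snapping could look like on an overlay canvas, assuming the match/point structures from the earlier sketches and page-space image positions (all names hypothetical):

```js
// Draw red "spiderweb" lines between matched points of two overlapping
// images. Keypoints are image-local; posA/posB are page-space offsets.
function drawSpiderwebs(ctx, matches, pointsA, pointsB, posA, posB) {
  ctx.strokeStyle = 'rgba(255, 0, 0, 0.6)';
  ctx.lineWidth = 1;
  for (const m of matches) {
    ctx.beginPath();
    ctx.moveTo(posA.x + pointsA[m.a].x, posA.y + pointsA[m.a].y);
    ctx.lineTo(posB.x + pointsB[m.b].x, posB.y + pointsB[m.b].y);
    ctx.stroke();
  }
}

// "Magnetic" behavior: when the residual offset implied by the matches
// (e.g. from estimateOffset above) drops below a threshold, nudge the
// dragged image the rest of the way onto the match.
function maybeSnap(dragged, offset, threshold = 12) {
  if (offset && Math.hypot(offset.dx, offset.dy) < threshold) {
    dragged.position.x += offset.dx;
    dragged.position.y += offset.dy;
  }
}
```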