Add ImageMonkey integration to existing annotation tool 'labelme'? #293
So you’ve tried to fork the labelme desktop tool to work with ImageMonkey; that does sound like a neat idea. I’ve only used labelme's web interface - I’ll try to give their desktop version a go for comparison. The best thing about it is that the polygons are organised in a tree, e.g. parent = whole car outline, children = wheel, headlight, etc. (this saves repeating the parent label per part and directly associates the parts).

As for how much effort you should put into it - what I like about ImageMonkey is that the data is there online, and anyone can contribute instantly... no need to download or install anything. You can always interoperate with other services and tools by importing/exporting data in their formats - e.g. I do think an "ImageMonkey to labelme format" export option would be useful. If you submitted ImageMonkey support as a fork of labelme (short of them accepting a pull request), might that go some way toward helping people discover your service?

Another idea for "different input to the ImageMonkey database" would be bitmap data - a way to associate a colour-coded overlay image. Any paint program with layers could then be used as an annotation tool (GIMP, Photoshop, iPad "Procreate" with its pen)... a web tool could manage the colour coding (verifying the colour mappings when you upload) and the associated files. Colour-coded annotations must be mutually exclusive, of course.

I would personally rather complement ImageMonkey with bitmap data than use another annotation tool, because I have the pen device. But I can look into converters for that sort of thing independently. I have focused on ImageMonkey annotating because I like the data being in that publicly accessible, extendable form, and I can get to it from any web-connected device.
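To make the colour-coded overlay idea more concrete, here is a minimal sketch of splitting such an overlay into mutually exclusive per-label masks. The colour-to-label mapping is a hypothetical example, not anything ImageMonkey defines:

```python
# Sketch only: the colour-to-label mapping is a hypothetical example,
# not part of ImageMonkey. Requires Pillow and NumPy.
import numpy as np
from PIL import Image

# Hypothetical mapping from overlay colours (R, G, B) to label names.
COLOUR_TO_LABEL = {
    (255, 0, 0): "car",
    (0, 255, 0): "wheel",
    (0, 0, 255): "headlight",
}

def overlay_to_masks(overlay_path):
    """Split a colour-coded overlay into one binary mask per label.

    Because each pixel carries exactly one colour, the resulting masks
    are mutually exclusive by construction.
    """
    rgb = np.array(Image.open(overlay_path).convert("RGB"))
    masks = {}
    for colour, label in COLOUR_TO_LABEL.items():
        masks[label] = np.all(rgb == colour, axis=-1)  # boolean HxW mask
    return masks

if __name__ == "__main__":
    for label, mask in overlay_to_masks("overlay.png").items():
        print(label, int(mask.sum()), "pixels")
```

A web tool verifying the colour mappings on upload could run essentially this check server-side and reject overlays containing unmapped colours.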
I think the tool and the site do not have anything in common besides the name (at least that's my impression from the GitHub page). So the labelme I've linked to is actually just an offline annotation tool: it allows you to load images from the filesystem and annotate them, and the annotations are stored alongside the image in a separate JSON file.

What I am actually looking for is a way to offload the "boring part" (i.e. the actual drawing functionality: drawing polygons, zooming in/out, drawing rectangles, moving poly points, copying/pasting polygons, etc.). I think designing a good annotation software/framework is almost a complete project on its own, so offloading that would help me focus on the service aspects of ImageMonkey. My "problem" is also that I do not know JavaScript/CSS/HTML well enough to be really productive with it. So although I have some great ideas in mind (e.g. I would really like to see a Photoshop-like application where you can individually arrange the different toolboxes to optimize your workflow), I often lack the technical skills to implement them. That's why I was thinking that maybe there are already some annotation tools out there that I could fork and build upon (instead of reinventing the wheel).

But I totally understand that offline applications aren't as accessible as online services (maybe it's possible to compile the desktop application to WebAssembly and make it accessible that way, but I haven't tried that yet). I also wouldn't see the offline application as a replacement for the web UI, but rather as an extension for power users. So in case you aren't satisfied with the unified mode because it doesn't give you enough flexibility, you could install the application and work on the ImageMonkey dataset that way.

At the moment it's just an idea - I am myself not really sure whether it's worth it or not. I think a native desktop application could have some benefits (probably faster than a web application; easier to extend, as one doesn't need to support that many devices and browsers; easier to customize via plugins & scripts; etc.), but it also has some drawbacks (external software dependencies, needs an installation, etc.). So if labelme is just slightly better than the existing unified mode, it's probably not worth it to add an ImageMonkey integration. But if the experience is much better than with the unified mode, I guess a port could be worth it. Right now I haven't invested that much time into the labelme fork, so if it turns out that an ImageMonkey integration isn't worth it, I can easily drop it and look into some other (web-based) alternatives.
Makes sense... you’re right that this kind of interactive geometric manipulation is indeed a project in its own right. But I think ImageMonkey does well by having a tool that’s “good enough” integrated. I agree you should be able to get the best of both worlds by exchanging data with other applications. Perhaps you could reduce the UI burden in ImageMonkey by consolidating the features you ended up with, and focus on labelme interop instead of the idea of an alternate “next gen” interface.

As you know, I had my own attempt at a JS labelling tool, but it wasn’t integrated with anything. Perhaps under this direction you’d just have a protocol, and someone like me could set up their own custom labelling UI, or use the labelme desktop tool as suggested. There’s also more that could be done with visualising and exploring the data you’ve accumulated... it would definitely be worth opening it up for that. In light of the stories about error-ridden databases, good “explore + verify” would be useful.
Many thanks for testing - very much appreciated!
But wouldn't that be pretty complicated then? In order to annotate something that's already in the dataset, you would first need to download the image, open it with the labelme application, do your annotations and then upload the JSON file again. My idea would be to integrate that transparently into the labelme fork. So when you open the labelme fork, it automatically loads a random image, with all the labels and annotations from the ImageMonkey dataset, and renders it in the application. You do your annotations, and when you press the "Save" button the data is pushed back to the ImageMonkey service and the next image is loaded (so you are really working on "live data"). In a next iteration we could even add some additional features, like:
- When you annotate something via the labelme fork, it still shows up in the "activity chart" on the front page.

I agree with you, the current labeling and annotation tools are good enough for now, so there's no immediate pressure to replace/extend the existing solution. But the last time I tried to add a bigger feature (the "limb system") it was a real pain in the a** to work with the existing code base. It's typical spaghetti code - grown over the years, no real structure, hard to extend and easy to break. So in the long run I would really like to slowly build up a replacement and put the unified mode into "maintenance mode" (i.e. just fix bugs, but don't add any more features to it - at least not big ones). I guess we have several options here:
We could just use labelme for new images in the immediate future: scrape new photos, annotate them, and upload them with their annotations. Downloading and having a local copy of the data would be handy. But you’re right... integrating with the service would give you the best of all worlds. Also, if you have duplicate image detection, you could just absorb new submissions (although this would admittedly waste upload bandwidth).
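On the duplicate detection point, a perceptual hash is one common way to catch re-uploads that are byte-for-byte different but visually identical. A minimal sketch, assuming the Pillow and imagehash packages; ImageMonkey's actual deduplication (if any) may work quite differently:

```python
# Sketch only: ImageMonkey's real deduplication mechanism is not documented
# here; this just illustrates the general idea. Requires Pillow and imagehash.
from PIL import Image
import imagehash

def is_duplicate(candidate_path, known_hashes, max_distance=4):
    """Return True if the candidate image is perceptually close to a known one.

    known_hashes is an iterable of previously computed imagehash values.
    max_distance is the Hamming-distance threshold; re-encodes and resizes
    usually stay well below it, while genuinely different images do not.
    """
    h = imagehash.phash(Image.open(candidate_path))
    return any(h - known <= max_distance for known in known_hashes)

# Usage sketch: build the hash index once, then test new submissions.
known = [imagehash.phash(Image.open(p)) for p in ["a.jpg", "b.jpg"]]
print(is_duplicate("new_upload.jpg", known))
```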
Another idea..

Advantages:

Disadvantages:
“Might as well just make the forked labelme tool do this” / “it’s a migration path, an easy way to test out how this will all work”. Personally I think I’d still go for manual labelme JSON upload to the site as the first step.
That's an interesting idea! We could go even one step further and write our own FUSE filesystem. Someone has implemented a filesystem for Twitter here: https://github.com/guilload/twitterfs We could do something similar for ImageMonkey. But I am not sure that would be a pleasant experience. The user would then have 100k images, each with a uuidv4 as its filename, lying around in a folder. For every labeled/annotated image there would be a JSON file containing all the labels and annotations. One of the biggest problems is probably finding the images that need work. I guess you could grep the JSON files, sort them somehow, and then open the images individually in the labelme application, but I think that would soon get pretty annoying. Another option would be, as you already suggested, to limit its scope and only allow the upload of new contributions (so that we don't sync changes back from the service to the filesystem). But I think that kills the collaboration spirit a bit.
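For illustration, here is a very rough read-only sketch of what such a filesystem could look like with fusepy. The API base URL, endpoints and file layout are purely hypothetical placeholders, not ImageMonkey's real API:

```python
# Sketch only: the API base URL and endpoints below are hypothetical,
# not a real ImageMonkey API. Requires fusepy and requests.
import errno
import stat
import requests
from fuse import FUSE, Operations, FuseOSError

API = "https://example.imagemonkey.invalid"  # placeholder, not a real endpoint

class ImageMonkeyFS(Operations):
    """Exposes each donated image as a read-only <uuid>.jpg file."""

    def __init__(self):
        # Hypothetical endpoint returning a JSON list of image UUIDs.
        self.uuids = set(requests.get(f"{API}/v1/images").json())
        self.cache = {}  # uuid -> image bytes

    def _fetch(self, uuid):
        if uuid not in self.cache:
            self.cache[uuid] = requests.get(f"{API}/v1/images/{uuid}").content
        return self.cache[uuid]

    def readdir(self, path, fh):
        return [".", ".."] + [f"{u}.jpg" for u in self.uuids]

    def getattr(self, path, fh=None):
        if path == "/":
            return dict(st_mode=stat.S_IFDIR | 0o755, st_nlink=2)
        uuid = path.lstrip("/").rsplit(".", 1)[0]
        if uuid not in self.uuids:
            raise FuseOSError(errno.ENOENT)
        return dict(st_mode=stat.S_IFREG | 0o444, st_nlink=1,
                    st_size=len(self._fetch(uuid)))

    def read(self, path, size, offset, fh):
        uuid = path.lstrip("/").rsplit(".", 1)[0]
        return self._fetch(uuid)[offset:offset + size]

if __name__ == "__main__":
    FUSE(ImageMonkeyFS(), "/mnt/imagemonkey", foreground=True, ro=True)
```

A writable version (accepting new images plus their JSON) would be where the "upload only new contributions" scoping would come in.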
That's also a good point. I am also not sure whether labelme guarantees backwards compatibility of their JSON format. So we have to be careful that they don't break the JSON format at some point, leading to a bunch of corrupt file uploads. After thinking a bit more about it, I think it's also a strategic decision about what we are aiming for. Do we want to create a new labeling/annotation tool primarily for our use case, i.e. ImageMonkey, or do we want to attract new contributors? For the latter, I think a sync daemon/FUSE filesystem implementation might do the trick. But to be honest, I am not sure whether I would trust some random guy on the internet to implement a proper file sync mechanism. I would be a bit worried that, due to a bug or my own mistake, personal images (or other personal data) get uploaded that weren't supposed to be uploaded.
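One way to reduce the risk of format drift would be a small sanity check before accepting an uploaded labelme file. A minimal sketch; the required keys and accepted versions here are assumptions about the labelme JSON layout, not a documented compatibility contract:

```python
# Sketch only: REQUIRED_KEYS and ACCEPTED_MAJOR_VERSIONS are assumptions,
# not guarantees made by labelme.
import json

REQUIRED_KEYS = {"version", "shapes", "imagePath", "imageHeight", "imageWidth"}
ACCEPTED_MAJOR_VERSIONS = {"4", "5"}  # hypothetical whitelist

def validate_labelme_json(path):
    """Return the parsed document if it looks like a labelme file we can ingest."""
    with open(path, encoding="utf-8") as f:
        doc = json.load(f)
    missing = REQUIRED_KEYS - doc.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if doc["version"].split(".")[0] not in ACCEPTED_MAJOR_VERSIONS:
        raise ValueError(f"unsupported labelme version {doc['version']}")
    for shape in doc["shapes"]:
        if "label" not in shape or "points" not in shape:
            raise ValueError("shape entry without label/points")
    return doc
```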
Short update: after some back and forth, I decided to rewrite the unified mode from scratch. Instead of using Semantic UI (which isn't maintained anymore) and jQuery as the frontend framework, I am now experimenting with Vue.js and Tailwind CSS. The first version will mostly be a rewrite of the existing functionality - I do not want to add any new features at this point. Until the new version is mature enough, both UIs will coexist - that makes it possible to fall back to the old UI in case the new one has a bug. My main goals for the new UI are:
In case you are interested, here's a short preview: At the moment everything is still in an alpha state and a lot of the functionality isn't ported over yet. Any suggestions and improvements are really appreciated!
Looks nice - that screen layout with the toolbar on the side does seem to give more work area.
I think I am now almost done rewriting the unified mode from scratch. So far I am quite pleased with the result - I think it doesn't look too bad, and the code quality is now much, much better (which is a big win for me). At the moment I am doing some extensive testing and fixing the remaining bugs. :)
Looks good.. I've been continuing with my 3D experiments (got some character animation going) and doing a WASM port; eventually that will end up online somewhere.
That sounds great - looking forward to hearing/seeing more about that! :)
I've just pushed a new update to production:
The past couple of weeks I was evaluating whether it would be possible to integrate the ImageMonkey backend into the labelme annotation tool. The idea is to use the existing annotation functionality of labelme, but instead of loading the image and the annotation data directly from the disk, the data will be loaded from the ImageMonkey web service.
Originally I had planned to rework the application so that it's possible to choose between different backends (filesystem and ImageMonkey) within the labelme application. That way it would have been possible to merge the changes upstream and let the original maintainer maintain the labeling/annotation part of the application. But unfortunately there isn't a clear separation between the filesystem backend and the business logic, so there's no easy way to swap the filesystem backend out for an ImageMonkey backend without some major refactoring. As this refactoring results in a huge diff (I already moved thousands of lines of code and added a bunch of abstraction layers for my PoC), I think it's unlikely that the original maintainer would accept such a change.
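To illustrate the kind of separation such a refactoring would aim for, here is a minimal sketch of a backend abstraction. The class and method names (and the ImageMonkey endpoints) are my own placeholders for illustration, not labelme's or ImageMonkey's actual API:

```python
# Sketch only: LabelStore, FilesystemBackend and ImageMonkeyBackend are
# hypothetical names; the ImageMonkey endpoints are placeholders.
import json
from abc import ABC, abstractmethod

import requests


class LabelStore(ABC):
    """What the annotation UI needs from a backend, and nothing more."""

    @abstractmethod
    def load_image(self, image_id):
        """Return the raw image bytes for image_id."""

    @abstractmethod
    def load_annotations(self, image_id):
        """Return a list of shape dicts (label, points, ...)."""

    @abstractmethod
    def save_annotations(self, image_id, shapes):
        """Persist the edited shapes for image_id."""


class FilesystemBackend(LabelStore):
    def load_image(self, image_id):
        with open(f"{image_id}.jpg", "rb") as f:
            return f.read()

    def load_annotations(self, image_id):
        with open(f"{image_id}.json", encoding="utf-8") as f:
            return json.load(f)["shapes"]

    def save_annotations(self, image_id, shapes):
        with open(f"{image_id}.json", "w", encoding="utf-8") as f:
            json.dump({"shapes": shapes}, f, indent=2)


class ImageMonkeyBackend(LabelStore):
    def __init__(self, base_url):
        self.base_url = base_url  # placeholder service URL

    def load_image(self, image_id):
        return requests.get(f"{self.base_url}/v1/images/{image_id}").content

    def load_annotations(self, image_id):
        return requests.get(f"{self.base_url}/v1/images/{image_id}/annotations").json()

    def save_annotations(self, image_id, shapes):
        requests.post(f"{self.base_url}/v1/images/{image_id}/annotations", json=shapes)
```

With the UI talking only to something like LabelStore, the filesystem and the web service become interchangeable, which is exactly the boundary that is currently missing.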
So, I think if we want to go that way we would have to create a "hard fork" of the project and maintain the application ourselves (maybe we can cherry pick some bug fixes from the original project). But I think it wouldn't be such a big problem, as labelme seems to be quite mature.
After spending some weeks with the source code, I think it should be possible to add an ImageMonkey backend to labelme. My PoC is already capable of loading images from the web. The next step would be to fix all the stuff I broke during the refactoring (as it's a Python application, it's mostly "run the application", "press some buttons", "check where the application segfaults" & "fix the crash"). After that, the next (and last?) big thing would be to load/persist the labels & annotations from/to the ImageMonkey service. At that point we would have a simple alternative to the unified mode (it lacks the properties system & browse functionality, but that could be added later).
For me the big question is now: Should I invest more time into that or is it not worth it?
@dobkeratops in case you have a minute to spare, I would really appreciate it, if you could give labelme a try and play with it a bit. Would really be interested to hear what you think of it.
At the moment I am pretty unhappy with the code quality of the unified mode. The unified mode has definitely grown over the past years, and that's also visible in the source code (i.e. a lot of spaghetti code; hard to extend & maintain; a bit slow when annotations with a lot of poly points are loaded; etc.). So in the long run I would like to either replace/extend it with a more powerful annotation tool (targeted towards power users) or completely rework the web UI.