
Convert detectors to factory pattern, ability to set different model for each detector #4635

Merged: 14 commits into blakeblackshear:dev on Dec 15, 2022

Conversation

@dennispg (Contributor) commented Dec 7, 2022

Just thought a bit of refactoring could be helpful for adding future detectors.

Hope it's helpful!

netlify bot commented Dec 7, 2022

Deploy Preview for frigate-docs ready!

Latest commit: 68e001a
Latest deploy log: https://app.netlify.com/sites/frigate-docs/deploys/639731103ae51d0009c4998b
Deploy Preview: https://deploy-preview-4635--frigate-docs.netlify.app

@NickM-27 (Sponsor, Collaborator) left a comment


A couple of things, and we need to take a look at the tests which are failing.

(Resolved review comments on frigate/detectors/tensorrt.py and frigate/object_detection.py)
@NateMeyer (Contributor) commented

Hey, I like this idea. I think we can improve on the current abstraction implementation with this "factory" design pattern. It would be great if all the changes for adding a new detector were limited to adding the new implementation Python file and a change to `__init__.py`. I'll add a few more comments inline.
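A minimal sketch of the factory pattern being discussed, assuming a registry dict and a `create_detector` entry point (`DetectionApi` and `detect_raw` are real Frigate names; the registry and the `CpuTfl` stub here are illustrative, not the exact merged code):

```python
from abc import ABC, abstractmethod


class DetectionApi(ABC):
    type_key: str  # each implementation names itself for the config "type"

    @abstractmethod
    def detect_raw(self, tensor_input):
        ...


class CpuTfl(DetectionApi):
    type_key = "cpu"

    def __init__(self, detector_config):
        self.detector_config = detector_config

    def detect_raw(self, tensor_input):
        raise NotImplementedError  # stub for illustration


# The registry maps the config "type" string to its implementation class,
# so adding a detector only means adding a class and a registry entry.
api_types = {cls.type_key: cls for cls in (CpuTfl,)}


def create_detector(detector_config):
    api = api_types.get(detector_config.type)
    if not api:
        raise ValueError(f"Unknown detector type: {detector_config.type}")
    return api(detector_config)
```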

@NickM-27 (Sponsor, Collaborator) commented Dec 8, 2022

@dennispg You need to run `python3 -m black frigate` so your code is formatted and the Python checks pass.

@dennispg (Contributor, Author) commented Dec 9, 2022

I have added some changes for the models that I hope will work. I have several questions/doubts around the unit tests though.

The biggest one being that I'm not sure what to do about the currently failing tests. There is a reference to a `/test/modelpath` file that doesn't exist anywhere, and I'm actually not sure how that test ever passed even before my changes.

Also, there are a few assertions that I think might not be needed, but I'm not sure of the original intention behind them. For example, while testing the CPU detector there's an assert that the edgetpu detector wasn't called, and vice versa. Probably some work to do for those unit tests.

@NateMeyer (Contributor) commented

> I have added some changes for the models that I hope will work. I have several questions/doubts around the unit tests though.
>
> The biggest one being that I'm not sure what to do about the currently failing tests. There is a reference to a `/test/modelpath` file that doesn't exist anywhere, and I'm actually not sure how that test ever passed even before my changes.
>
> Also, there are a few assertions that I think might not be needed, but I'm not sure of the original intention behind them. For example, while testing the CPU detector there's an assert that the edgetpu detector wasn't called, and vice versa. Probably some work to do for those unit tests.

The purpose of the object detector unit test is to verify that the correct detector is initialized and that it is passed the correct parameters. The dummy path `/test/modelpath` is just a test value that is checked; it doesn't need to be real. The asserts for edgetpu not being called are part of testing that the correct detector type was created. The `@patch` decorator is how we set up a mock interface for the given detector type in this test.

We likely need to tweak the tests with the changes to passing the model config in the initializer and the extra layer of the `create_detector` function. I think we should also add a few tests for the `create_detector` function itself; this could be a new test class.
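A self-contained toy showing the `@patch` mechanics described above; the class and module names are stand-ins rather than Frigate's actual test code:

```python
import sys
from unittest import TestCase, main
from unittest.mock import patch


class CpuTfl:
    def __init__(self, model_path):
        self.model_path = model_path


class EdgeTpuTfl:
    def __init__(self, model_path):
        self.model_path = model_path


def create_detector(det_type, model_path):
    # late-bound lookup of module globals, so the patches below take effect
    cls = CpuTfl if det_type == "cpu" else EdgeTpuTfl
    return cls(model_path)


class TestDetectorSelection(TestCase):
    # Each @patch swaps the named class for a MagicMock during the test;
    # the innermost decorator supplies the first mock argument.
    @patch.object(sys.modules[__name__], "EdgeTpuTfl")
    @patch.object(sys.modules[__name__], "CpuTfl")
    def test_cpu_selected(self, mock_cpu, mock_edgetpu):
        # "/test/modelpath" is only ever compared as a string;
        # the file never has to exist on disk.
        create_detector("cpu", "/test/modelpath")
        mock_cpu.assert_called_once_with("/test/modelpath")
        mock_edgetpu.assert_not_called()


if __name__ == "__main__":
    main()
```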

@dennispg (Contributor, Author) commented Dec 9, 2022

> > I have added some changes for the models that I hope will work. I have several questions/doubts around the unit tests though.
> >
> > The biggest one being that I'm not sure what to do about the currently failing tests. There is a reference to a `/test/modelpath` file that doesn't exist anywhere, and I'm actually not sure how that test ever passed even before my changes.
> >
> > Also, there are a few assertions that I think might not be needed, but I'm not sure of the original intention behind them. For example, while testing the CPU detector there's an assert that the edgetpu detector wasn't called, and vice versa. Probably some work to do for those unit tests.
>
> The purpose of the object detector unit test is to verify that the correct detector is initialized and that it is passed the correct parameters. The dummy path `/test/modelpath` is just a test value that is checked; it doesn't need to be real. The asserts for edgetpu not being called are part of testing that the correct detector type was created. The `@patch` decorator is how we set up a mock interface for the given detector type in this test.
>
> We likely need to tweak the tests with the changes to passing the model config in the initializer and the extra layer of the `create_detector` function. I think we should also add a few tests for the `create_detector` function itself; this could be a new test class.

I agree, I'll get started on some new tests for `create_detector`. I would assume the tests for `create_detector` would obviate the need for those other checks making sure the correct detector is initialized; is that a safe assumption? Because otherwise, as more detectors are added, that could grow unwieldy. Based on your feedback, I'll clean those up as well.

But about the dummy path: the tests are still failing for me, saying that the file cannot be found. I see that they somehow pass on the dev branch, so I suspect that something about the way the detectors are now created isn't working with the way they are mocked. I'll check into that, but any pointers would be appreciated.

For the config models, do those look OK? I left in the global detection model config and fill it into any detectors that don't have their own customized model defined, kind of like how the cameras can inherit global configuration.

@dennispg (Contributor, Author) commented Dec 9, 2022

I believe the mock patches aren't doing what's expected, because the `create_detector` function is using a dictionary which keeps references to the original, unmocked classes. I think fixing that should clear up the failing tests.
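A small illustration of that failure mode, with hypothetical names: the registry dict captures the real class at import time, so patching the module attribute never reaches what the factory reads.

```python
from unittest.mock import MagicMock, patch


class CpuTfl:
    pass


api_types = {"cpu": CpuTfl}  # binds the class object itself, once at import


def create_detector(det_type):
    return api_types[det_type]()  # reads the dict, not the module attribute


real_cls = CpuTfl  # keep a handle to the unpatched class for the checks below

# Patching the class attribute has no effect on what the dict already holds:
with patch(f"{__name__}.CpuTfl", MagicMock()):
    assert type(create_detector("cpu")) is real_cls  # still the real class!

# So the test must patch the dict entry (or the lookup must be late-bound):
with patch.dict(api_types, {"cpu": MagicMock()}):
    assert not isinstance(create_detector("cpu"), real_cls)
```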

@NateMeyer (Contributor) commented

> I agree, I'll get started on some new tests for `create_detector`. I would assume the tests for `create_detector` would obviate the need for those other checks making sure the correct detector is initialized; is that a safe assumption? Because otherwise, as more detectors are added, that could grow unwieldy. Based on your feedback, I'll clean those up as well.

Yes, I think you have that right.

> For the config models, do those look OK? I left in the global detection model config and fill it into any detectors that don't have their own customized model defined, kind of like how the cameras can inherit global configuration.

I agree it's best if the global model config is still possible. This will make it easier for people upgrading with an existing configuration. I'll have to digest the config changes a bit more.

One more thing that might be needed with the model config change: there are a few places where we use the model input format to prepare data for the detector (shape, order, etc.). We may want a common method of getting this data when there is a detector-specific model.

> I believe the mock patches aren't doing what's expected, because the `create_detector` function is using a dictionary which keeps references to the original, unmocked classes. I think fixing that should clear up the failing tests.

That could cause that issue, if the test is trying to actually create a CPU or EdgeTPU detector.

@dennispg (Contributor, Author) commented Dec 9, 2022

As you pointed out about the model input data for shape, order, and such, I wonder: does it make sense for all of the model's properties to be customizable per detector, or only a subset, such as the path? I am not 100% familiar with how all that works, so I'm genuinely asking from a place of ignorance.

In `FrigateApp.start_detectors`, it first creates some `SharedMemory` for each camera, and then it creates an `ObjectDetectProcess` for each defined detector. The `SharedMemory` size is currently based on the global model config's height and width. Unless we fixed the model height and width globally, wouldn't this mean we would need to create a `SharedMemory` for each camera/detector combination?

Also, `object_processing.py`'s `CameraState` references the global model config's colormap. Could this probably also be fixed globally?

If only some subset of parameters should be customizable per detector, I can create a new `DetectorModelConfig` class, instead of what it is right now, that contains only that set and fills them in from the global one when empty.

@NateMeyer (Contributor) commented

Yea, I think we shouldn't make any assumptions about the model properties, or that anything would be common between them.

The shm creation brings up a good question; maybe @NickM-27 can weigh in. If we have multiple detectors available, how does a camera get assigned to a detector? Is it a static mapping? Do we come up with a way to load-balance detectors?

@NickM-27 (Sponsor, Collaborator) commented Dec 9, 2022

> The shm creation brings up a good question; maybe @NickM-27 can weigh in. If we have multiple detectors available, how does a camera get assigned to a detector? Is it a static mapping? Do we come up with a way to load-balance detectors?

Currently all the cameras use all the detectors. It is basically round-robin: when a detector finishes an inference, it becomes available for another one. I think we would definitely want a way to have all cameras use all detectors, as well as a way to pair specific cameras with specific detectors.
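A minimal sketch of that round-robin behavior, assuming one shared queue that all cameras feed and idle detector processes drain; this mirrors the idea, not Frigate's actual plumbing:

```python
import multiprocessing as mp


def detector_worker(name, queue):
    while True:
        frame_id = queue.get()  # blocks until any camera submits work
        if frame_id is None:    # sentinel: shut down
            break
        # run inference on the shared-memory frame named frame_id here;
        # looping back is what makes this detector "available" again
        print(f"{name} handled {frame_id}")


if __name__ == "__main__":
    q = mp.Queue()
    workers = [mp.Process(target=detector_worker, args=(f"det{i}", q)) for i in range(2)]
    for w in workers:
        w.start()
    for frame in ("cam1-1", "cam2-1", "cam1-2", "cam2-2"):
        q.put(frame)
    for _ in workers:
        q.put(None)
    for w in workers:
        w.join()
```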

@NickM-27 NickM-27 changed the title refactor detectors Convert detectors to factory pattern, ability to set different model for each detector Dec 9, 2022
@NateMeyer (Contributor) commented

> > The shm creation brings up a good question; maybe @NickM-27 can weigh in. If we have multiple detectors available, how does a camera get assigned to a detector? Is it a static mapping? Do we come up with a way to load-balance detectors?
>
> Currently all the cameras use all the detectors. It is basically round-robin: when a detector finishes an inference, it becomes available for another one. I think we would definitely want a way to have all cameras use all detectors, as well as a way to pair specific cameras with specific detectors.

Would it work just to initialize the shm to the largest model size for now?

@NickM-27 (Sponsor, Collaborator) commented Dec 9, 2022

> Would it work just to initialize the shm to the largest model size for now?

That's probably a good way to go about it, AFAIK

@dennispg (Contributor, Author) commented Dec 9, 2022

> > The shm creation brings up a good question; maybe @NickM-27 can weigh in. If we have multiple detectors available, how does a camera get assigned to a detector? Is it a static mapping? Do we come up with a way to load-balance detectors?
>
> Currently all the cameras use all the detectors. It is basically round-robin: when a detector finishes an inference, it becomes available for another one. I think we would definitely want a way to have all cameras use all detectors, as well as a way to pair specific cameras with specific detectors.

I adjusted the `SharedMemory` to take the largest model shape as suggested, and fixed a few other places I had missed that needed the detector configuration passed around. But I'm not sure how best to proceed from here.
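A sketch of that largest-model sizing; the attribute names (`model.height`, `model.width`) are assumptions for illustration:

```python
from multiprocessing import shared_memory


def largest_frame_nbytes(detectors):
    # assume 3 bytes per pixel for the input tensor, whatever the color order
    return max(d.model.height * d.model.width * 3 for d in detectors.values())


def create_detection_shm(camera_name, detectors):
    # one segment per camera, big enough for the largest configured model
    return shared_memory.SharedMemory(
        name=camera_name, create=True, size=largest_frame_nbytes(detectors)
    )
```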

As you pointed out above, the detectors are currently completely decoupled from the frame processing, with a queue in between them, and the frame processing has no notion of which detector will ultimately do the detection. But it needs that information if the model config is going to be configurable per detector.

Maybe the tensor input creation can be moved into the `LocalObjectDetector.detect` phase, where it does know its detector's model config? That seems like it would solve half the problems, but I think it could also tie up the detector more than necessary and eat away at performance? Also, I only just barely know what I am talking about, lol.

And then after detection we need the labelmap from the detector too, but that's also decoupled away.

I feel a bit out of my depth, but maybe with some direction I could keep going. Or maybe, just for now, we can fix the model size and labelmap to be global and deal with making them configurable later?

@NickM-27 (Sponsor, Collaborator) commented Dec 9, 2022

> Maybe the tensor input creation can be moved into the `LocalObjectDetector.detect` phase, where it does know its detector's model config? That seems like it would solve half the problems, but I think it could also tie up the detector more than necessary and eat away at performance?

Take a look at the previous discussion #3656 (comment)

I would prefer that each model implementation defines its own function to prepare the input data in shared memory and then translates the outputs into a common format. Keep in mind that the detection process is extremely sensitive to small increases in processing time. When using a Coral, an extra copy of a frame in memory is enough to increase the total inference time by a significant percentage.
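A sketch of that division of labor. Frigate's detectors return rows of `[label_id, score, y_min, x_min, y_max, x_max]`; the `preprocess` method name here is an assumption:

```python
import numpy as np


class DetectionApi:
    def preprocess(self, frame: np.ndarray) -> np.ndarray:
        """Reshape/reorder the shared-memory frame for this model, avoiding
        extra copies; with a Coral, one extra frame copy measurably
        increases total inference time."""
        raise NotImplementedError

    def detect_raw(self, tensor_input: np.ndarray) -> np.ndarray:
        """Run inference and return an (N, 6) array of
        [label_id, score, y_min, x_min, y_max, x_max] detections."""
        raise NotImplementedError
```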

@NateMeyer (Contributor) commented

Have we scope-creeped this change too far? I think the original detector change can stand alone, relying on a global model for now. The detector-specific model configuration could come later to allow mixed multiple-detector support.

@NickM-27 (Sponsor, Collaborator) commented

> Have we scope-creeped this change too far? I think the original detector change can stand alone, relying on a global model for now. The detector-specific model configuration could come later to allow mixed multiple-detector support.

I think a bit; I agree it may be better to do it in pieces. We might have clearer answers to some of these questions once TensorRT is implemented.

@dennispg (Contributor, Author) commented Dec 10, 2022

> > Have we scope-creeped this change too far? I think the original detector change can stand alone, relying on a global model for now. The detector-specific model configuration could come later to allow mixed multiple-detector support.
>
> I think a bit; I agree it may be better to do it in pieces. We might have clearer answers to some of these questions once TensorRT is implemented.

I think that is a really good idea!

So, since I believe it is still important to be able to configure a custom model path per detector at minimum, I started down the road of removing the full model config per detector and replacing it with just the path. But I got to thinking, and went ahead and implemented what I hope is something of a compromise instead.

I am leaving the full model config per detector in place, merging in any missing config from the global model config, using the detector model config where we can, and leaving the places where we can't as-is for now. The compromise is that I log an "unsupported" warning if a user tries to customize anything other than the path in the detector model config. (The language of the warning probably needs some wordsmithing.)
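A sketch of that merge-and-warn behavior, with assumed function and field names rather than the PR's exact code:

```python
import logging
from typing import Optional

logger = logging.getLogger(__name__)

PER_DETECTOR_SUPPORTED = {"path"}


def merge_model_config(global_model: dict, detector_model: Optional[dict]) -> dict:
    merged = dict(global_model)  # start from the global defaults
    if detector_model:
        unsupported = set(detector_model) - PER_DETECTOR_SUPPORTED
        if unsupported:
            logger.warning(
                "Customizing %s per detector is not yet supported; "
                "only `path` takes effect everywhere.",
                ", ".join(sorted(unsupported)),
            )
        merged.update(detector_model)  # detector values override the globals
    return merged
```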

Thoughts?

@dennispg dennispg force-pushed the detectors_refactor branch 3 times, most recently from b999185 to 490717c Compare December 10, 2022 14:46
@dennispg (Contributor, Author) commented

I've made some new changes that now allow for a fully pluggable detector mechanism. To add a detector, you need only add a module under `detectors/plugins` with a class that inherits from `DetectionApi` and exposes a string attribute `type_key`. No other changes are required anywhere else; everything should just be picked up from there.
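One plausible way to implement that discovery, sketched with `pkgutil`; the import paths are assumptions, not necessarily the merged code:

```python
import importlib
import pkgutil

import frigate.detectors.plugins as plugins
from frigate.detectors.detection_api import DetectionApi  # assumed location


def discover_detectors():
    api_types = {}
    # import every module under detectors/plugins and register each
    # DetectionApi subclass by its type_key
    for info in pkgutil.iter_modules(plugins.__path__):
        module = importlib.import_module(f"{plugins.__name__}.{info.name}")
        for obj in vars(module).values():
            if (
                isinstance(obj, type)
                and issubclass(obj, DetectionApi)
                and obj is not DetectionApi
            ):
                api_types[obj.type_key] = obj
    return api_types
```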

I'll leave the re-architecting to allow for fully configurable detector-specific model configurations for a later time.

If this is acceptable, should I go ahead and add docs about being able to configure the model path on the detector?

@dennispg (Contributor, Author) commented

Finished the docs!

(Resolved review comments on docs/docs/configuration/detectors.md, docs/docs/configuration/index.md, frigate/app.py, and frigate/config.py)
@herostrat (Contributor) commented

Hey @dennispg, I really like this refactoring.

Just brainstorming here (which is probably a bit, or a lot, too outside of the box): for external/manual triggers (#1063 or #3184), I think your plugins would be a good fit. E.g. an external_trigger plugin (MQTT, web API) or an audio_trigger plugin (detection in a camera's audio stream)? Is this too far-fetched? On first sight I think this would be doable and kind of reasonable from an architecture point of view.

@dennispg (Contributor, Author) commented

> Hey @dennispg, I really like this refactoring.
>
> Just brainstorming here (which is probably a bit, or a lot, too outside of the box): for external/manual triggers (#1063 or #3184), I think your plugins would be a good fit. E.g. an external_trigger plugin (MQTT, web API) or an audio_trigger plugin (detection in a camera's audio stream)? Is this too far-fetched? On first sight I think this would be doable and kind of reasonable from an architecture point of view.

To be fully honest, I only very recently started getting acquainted with this codebase. I have been using it for a while at home, but with detection turned off, since I couldn't find a Coral. I had been hoping that a way to use my GPU would eventually be implemented, but I finally decided to poke around and tinker myself to see if there's anything I could contribute. That's what led me to doing this.

All that is just to say that I'm not sure I'm qualified to give a proper answer with my limited exposure. But still, I think that what I've done here is too specific to the detectors, and I don't think it would translate well to what you're suggesting.

@dennispg (Contributor, Author) commented

@NickM-27 @NateMeyer I am toying with a mechanism that builds on this pull request and enables detection requests to be sent off to a remote server for processing.

I have been playing around with trying to get TensorRT working, but have been going in circles trying to get a container image built with the proper dependencies. I was thinking of setting it up as a separate tag or something, so as not to clutter up the main image with too much junk you may not need.

With this in place, you can build an image from Nvidia's base images with TensorRT that only runs a server listening for detection requests. But more than that, it also enables using many machines together to distribute detection.

I've kind of got the client/server mechanism working now, and would love to run it by you guys for some feedback. Since it builds on this pull request, though, should I wait until this is merged and start a new pull request? Or maybe just add the commits here?

@NateMeyer (Contributor) commented

@dennispg I have also been playing with updating the TensorRT patches for the last couple of weeks. I think I have the libraries in place, but I agree that it really bloats the amd64 image if you aren't using it. I was considering how to make it a separate component along the same lines you are thinking, but using the shm to move data between containers. Although I am wary that such a setup would be too difficult for someone who isn't an experienced Docker user.

That said, my first goal is to get something working on the amd64 image; then we can optimize it from there. I'll push what I have to GitHub tomorrow if you want to take a look. I'm still going through and fixing some CUDA memory buffers, and it doesn't yet run without crashing.

I would keep this PR focused on the detector changes you have here.

@NickM-27 (Sponsor, Collaborator) commented

> I'll push what I have to GitHub tomorrow if you want to take a look

Looking forward to it

> I would keep this PR focused on the detector changes you have here.

100% agree; I think this PR should stay focused on what it is now.

@dennispg (Contributor, Author) commented

@NateMeyer I actually also built in an option to share the data either via shared memory or via streaming the bytes over a socket. The socket is much easier to configure and can be used even across machines, but it is slower. The shm requires being on the same machine but, like you said, maybe isn't so straightforward to set up for a regular user.

Anyway, I agree with you both to keep this PR focused. I will push what I have to a branch in my fork soon, and we can maybe take a look from there. I'd also love to take a look at your TensorRT work once you've pushed that.

@blakeblackshear (Owner) commented

I had considered TensorRT in a separate container with shared shm when I looked into it originally, a long time ago. It all got so complicated that I decided it wasn't worth the extra complexity. I'm not saying to stop going down that path, but we will have to think carefully about any additional support burdens it could bring. Also, @NickM-27 and I have been discussing adding ZeroMQ in a future version to replace all the interprocess communication and allow certain processes to be written in more performant languages where beneficial. That could make sense for things like this.

@blakeblackshear blakeblackshear merged commit 420bcd7 into blakeblackshear:dev Dec 15, 2022
@dennispg dennispg deleted the detectors_refactor branch December 15, 2022 13:28