Camera Stream Stops #1037
Can you see if setting …
I added that variable. Streams still stop occasionally, but they do seem to start back up. When I noticed the issue, I would be in Home Assistant viewing my streams and 1 or 2 cameras wouldn't load. I'd then check the web UI for Wyze Bridge and the camera wouldn't be streaming; it would just have the pause icon next to it. The stream wouldn't come back unless I restarted the container or clicked the pause icon in the web UI. I'll keep an eye on the cameras now that I have ON_DEMAND set to False. Here were the logs.
I had this issue a while ago. I added …

For the past few days, every time I visit my Dashboard the streams have stopped. I did install updates to both Supervisor and Core around the time I started noticing this, but who knows? The only way to get them going again is to visit Docker Wyze Bridge > Controls > (camera) > Enable + Start.
Is it possible to Enable + Start the streams using an hourly automation?
You should be able to via the API, but setting ON_DEMAND to false should be doing that already. Potentially related to some underlying change in the updated MediaMTX v2.5.x (#1036).
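The hourly automation suggested above can be sketched with the bridge's REST API. This is a minimal, hedged sketch: the `/api/<camera>/state/start` endpoint is the one a later comment in this thread uses, while the bridge address and camera names here are placeholders for your own setup.

```python
# Sketch: ask the Wyze Bridge API to (re)start each stream.
# BRIDGE and CAMERAS are placeholders -- substitute your own values.
from urllib.request import urlopen
from urllib.error import URLError

BRIDGE = "http://192.168.1.XX:5000"      # bridge host (placeholder)
CAMERAS = ["basement-hall", "driveway"]  # your camera names (placeholder)

def start_url(bridge: str, camera: str) -> str:
    """Build the start-stream URL for one camera."""
    return f"{bridge}/api/{camera}/state/start"

def kick_all() -> None:
    """Request a (re)start of every configured stream."""
    for cam in CAMERAS:
        try:
            with urlopen(start_url(BRIDGE, cam), timeout=10) as resp:
                print(cam, resp.status)
        except URLError as err:
            print(cam, "unreachable:", err)
```

Calling `kick_all()` from a cron job or a Home Assistant shell/REST command once an hour would approximate the automation asked about, assuming your bridge exposes that endpoint.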
I was still having the cameras stop streaming with On_Demand=false. I have since deleted that variable and the cameras are still working 24 hours later. I have debug logging enabled and will be watching to see what happens.
That absolutely does not make sense, but I tried it myself just now, and it seems to be a fix on this end, too. None of my streams were working (unless I hit "Enable" and "Start" for each one). But after removing …
I've had the same problem with ON_DEMAND enabled and disabled. I have four V3 Wyzecams and one V2 Wyzecam. Cameras will randomly stop recording after several hours.

I've written a script to detect whether a camera has stopped recording; it restarts the Docker container if any of my cameras haven't created a file in the last 11 minutes. I run the script via a cronjob every 9 minutes. A restart of the container always fixes the problem (temporarily).

I have a feeling that this might be caused by poor WiFi connectivity to one of my cameras. However, I've tried excluding cameras in the docker-compose.yml and I still have problems with recordings stopping. I've had debug enabled for a while, and my script moves the old log file so I have a snapshot of the log right up to the point where the container is restarted. I can't see anything obvious in the log file or a pattern that would explain why the recording stops. I've attached an example debug file that shows a 40-minute run from restart to restart.
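The watchdog described above can be sketched roughly as follows. This is an assumption-laden sketch, not the commenter's actual script: the recording directory, container name, and the 11-minute threshold come from the comment, but the exact layout of your recordings will differ.

```python
# Sketch: restart the bridge container when no new recording has
# appeared recently. RECORD_DIR and CONTAINER are placeholders.
import subprocess
import time
from pathlib import Path

RECORD_DIR = Path("/media/wyze")  # where recordings land (placeholder)
STALE_AFTER = 11 * 60             # seconds without a new file
CONTAINER = "wyze-bridge"         # docker container name (placeholder)

def newest_mtime(root: Path) -> float:
    """Most recent modification time of any file under root (0.0 if none)."""
    files = (p for p in root.rglob("*") if p.is_file())
    return max((p.stat().st_mtime for p in files), default=0.0)

def is_stale(last_mtime: float, now: float, threshold: float = STALE_AFTER) -> bool:
    """True when the newest recording is older than the threshold."""
    return (now - last_mtime) > threshold

def check_and_restart() -> None:
    """Restart the container if recordings have gone stale."""
    if is_stale(newest_mtime(RECORD_DIR), time.time()):
        subprocess.run(["docker", "restart", CONTAINER], check=False)
```

Run from cron every few minutes (e.g. `*/9 * * * *`), this approximates the "no file in 11 minutes" check, at the cost of a full container restart when it trips.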
Still going without any streams stopping after removing that variable. |
I have done the same and it seems to have fixed the issue. |
Hey @mrlt8, thanks for the quick update (2.5.1) to address this issue.
I was using 2.5.1 when I noticed the issue. Removing On_Demand fixed it for me. |
Hmm it should be fixed in 2.5.1. Can you try rebuilding the container? |
I see that removing On_Demand is the same as setting On_Demand=true. I am using the RTSP feeds in tinyCam. If I set On_Demand=false, two of my cameras stop responding.
Adding to this. I have a few cameras and they have been hanging a lot more often. Many times a hung camera can be brought back with a REST call: http://192.168.1.XX:5000/api/basement-hall/state/start

My logs are here. Running 2.6.0 on Home Assistant with ON_DEMAND=true (I tried false and also have problems). All cameras go to Frigate, so the stream should never be brought down. Is there any way to have it restart automatically after a TUTK error? Sometimes it comes back by itself, sometimes not. I plan to try a Home Assistant automation to reset them. REST API calls work; MQTT does not for me. Below you can see the errors, then me running the restart REST call to bring them back.

I also found my V3 Pan needs quality HD120 or it will complain about bitrate.

[basement-hall] [CONTROL] ERROR - error=TutkError(-20018), cmd=('param_info', '1,2,3,4,5,6,7,21,22,27,50')

edit: Until this is fixed, I am brute forcing it. I have Home Assistant send a REST command every minute to all cameras to start. Looks like it works.
I noticed yesterday, one of my cameras was 20 minutes behind. I saw a car pull in my driveway. Then I saw me getting out. 😂 |
Docker Container v2.5.0
All cameras in my configuration run fine for a while, and then a camera will no longer be streaming after 12+ hours. This has occurred twice since upgrading to 2.5.0. The cameras are still connected on WiFi and are live streaming in the Wyze app. Restarting the container brings them back online.
I didn't have debug logging enabled and only had the following info in my log for when it stopped. I changed logging to debug and will share those logs when the cameras go offline again.