
Barriers starts eating up all my RAM fast! #470

Open
wari opened this issue Oct 17, 2019 · 26 comments
Labels
bug Something isn't working

Comments


wari commented Oct 17, 2019

Operating Systems

Server: Manjaro Linux
Client: Mac OS Catalina/Windows 10 (Not the problem)

Barrier Version

2.3.2 Stable Release as well as tested on current git master.

Steps to reproduce bug

Just run barrier as a server (no TLS configured) and click Start.

I noticed that barrier runs hot on all my CPUs and quickly eats up RAM until the computer freezes. Below is a snapshot from htop before I killed it; it was taking up 51% of my 16 GB of RAM, and CPU usage jumps from 100% to over 500%.

  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
 8897 wwahab     20   0 8242M 8159M  6280 S 148. 51.2  7:20.44 /usr/bin/barriers -f --no-tray --debug INFO --name dealio -c /tmp/Barrier.duDERV --address :24800

Other info

  • When did the problem start to occur? When I upgraded to this stable release.
  • Is there a way to work around it? Not that I know of
  • Does this bug prevent you from using Barrier entirely? No, I'll try an older version instead, or go back to Synergy.
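A quick way to confirm this kind of leak from a terminal is to poll the process's resident set size from /proc. A hedged sketch: the pgrep lookup in the comment is only illustrative, and the demo reads the current shell's own RSS so the snippet is self-contained.

```shell
# Print a process's resident set size (kB) from /proc/<pid>/status.
rss_kb() {
    awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

# Against a leaking barriers process one would run something like:
#   pid=$(pgrep -o barriers); while sleep 5; do rss_kb "$pid"; done
# Demo on the current shell instead, so the snippet is self-contained:
rss=$(rss_kb $$)
echo "current shell VmRSS: ${rss} kB"
```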
@AdrianKoshka AdrianKoshka added the bug Something isn't working label Oct 17, 2019
@AdrianKoshka

Thanks, memory leaks are known about.


wari commented Oct 21, 2019

2.3.2-alpha was bad as well; 2.3.1 was good. So I started a git bisect and got:

[I]  wwahab@dealio ~/s/barrier  $ git bisect bad                                                                                                                   13 changed files  a841b285 
a841b2858f178a0da41efa520e87b73fb8b24189 is the first bad commit
commit a841b2858f178a0da41efa520e87b73fb8b24189
Author: Povilas Kanapickas <[email protected]>
Date:   Sat Aug 17 16:17:50 2019 +0300

    Make ownership of SocketMultiplexerJob explicit

 src/lib/net/ISocketMultiplexerJob.h       | 31 +++++++++----
 src/lib/net/SecureSocket.cpp              | 43 +++++++++---------
 src/lib/net/SecureSocket.h                | 12 ++----
 src/lib/net/SocketMultiplexer.cpp         | 72 +++++++++++++++----------------
 src/lib/net/SocketMultiplexer.h           |  7 ++-
 src/lib/net/TCPListenSocket.cpp           | 27 ++++++------
 src/lib/net/TCPListenSocket.h             |  6 +--
 src/lib/net/TCPSocket.cpp                 | 61 ++++++++++++++------------
 src/lib/net/TCPSocket.h                   | 17 +++-----
 src/lib/net/TSocketMultiplexerMethodJob.h | 18 +++-----
 10 files changed, 148 insertions(+), 146 deletions(-)

After that I don't really know how to proceed from there other than using the last good commit.
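A bisect session like the one above can also be automated end to end with `git bisect run`, which marks commits good or bad from a test script's exit code. A self-contained sketch against a throwaway repo (the four-commit history is hypothetical, not Barrier's):

```shell
# Demo of `git bisect run` in a scratch repo: a commit is "bad" iff the
# file `state` contains the word "leak".
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email bisect@example.com
git config user.name bisect-demo
git commit -q --allow-empty -m good1
echo ok > state && git add state && git commit -q -m good2
echo leak > state && git add state && git commit -q -m bad1   # first bad commit
git commit -q --allow-empty -m bad2
# Mark HEAD bad and the root commit good, then let git drive the search.
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)"
git bisect run sh -c '! grep -q leak state 2>/dev/null' >/dev/null
first_bad=$(git log -1 --format=%s refs/bisect/bad)
echo "first bad commit: $first_bad"
```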

@AdrianKoshka

Thanks, wasn't aware we had a regression with 2.3.2-alpha.


wjtk4444 commented Oct 23, 2019

It gets pretty terrible; I've hit almost 30 GiB today...
I think it happens after disconnecting from a client, but further tests are needed. Tried both barrier-headless 2.3.1 and current git master.

[animated gif of memory usage]

This gif looks pretty hilarious when sized down to fit a GitHub comment, lol. Click on it to see the full size for more readability. This is 1x speed; it leaks at around 25 MiB/s.

EDIT:
Pretty sure it didn't happen ~2 weeks ago (before I updated my system), so on whatever version the Arch Linux repositories had by then (2.3.1?). There's a chance that I misread 2.3.2-1 as 2.3.1 when I reported earlier in this post.
I'm basing my assumption on the fact that I had my PC running for a good two or three weeks straight (with barriers running all the time), and I connected to and disconnected from clients at least twice a day during that period. If the previous version had been leaking on disconnect, it would have crashed my PC numerous times.

EDIT2:
It seems like using the GUI to start barrier, rather than calling barriers from a script, prevents the leak from happening. I tried it only a few times, but the same thing happened each time.

EDIT3:
Nope, it started leaking again even after being started from the GUI. Not every time, but it still happens.

[screenshot]


wari commented Oct 29, 2019

Before the a841b28 commit, the CPU does not hit more than 9% for me, and RAM consumption is manageable, starting at 9 K and going up to 200 MB after a week; I can easily restart barriers. When I apply that patch, CPU consumption immediately shoots up and RSS can hit 8 GB within 7 minutes. So even if there's a memory leak before that commit, at least it's a "sane" creep, not like what it is now.

@maxiberta

I've had this issue several times recently, mostly when waking the laptop running barriers from sleep. Last time, barriers' RSS reached 17 GB in a matter of minutes and kept growing fast. I managed to run a quick strace (via sudo htop) and captured a screenshot of each of the 3 threads. One thread was stuck, while the other two were in a seemingly infinite loop. Hope it helps.

Running barrier snap on channel edge, Ubuntu 19.10.

[strace screenshots attached]

@galkinvv

The same problem appears for me while using 2.3.2 with SSL disabled in the config. I'm not sure if it appears while SSL is used.

Earlier in this issue a bisection was done, and it showed:

2.3.2-alpha was bad as well, using 2.3.1, it was good. So I started git bisect and I got:

[I]  wwahab@dealio ~/s/barrier  $ git bisect bad                                                                                                                   13 changed files  a841b285 
a841b2858f178a0da41efa520e87b73fb8b24189 is the first bad commit
commit a841b2858f178a0da41efa520e87b73fb8b24189
Author: Povilas Kanapickas <[email protected]>
Date:   Sat Aug 17 16:17:50 2019 +0300

    Make ownership of SocketMultiplexerJob explicit

After that I don't really know how to proceed from there other than using the last good commit.

I used another method, and it confirms that the issue is related to SocketMultiplexerJob. I attached gdb to the barriers process that was eating memory and 150% of a CPU core, and found the call stacks of the active threads.

Thread 3 (Thread 0x7f6b5e071700 (LWP 32346)): EATS 100% of core

#0  __GI___writev (iovcnt=3, iov=0x7f6b5e070670, fd=3) at ../sysdeps/unix/sysv/linux/writev.c:26
#1  __GI___writev (fd=3, iov=0x7f6b5e070670, iovcnt=3) at ../sysdeps/unix/sysv/linux/writev.c:24
#2  0x00007f6b5ff12fdd in ?? () from /usr/lib/x86_64-linux-gnu/libxcb.so.1
#3  0x00007f6b5ff133b1 in ?? () from /usr/lib/x86_64-linux-gnu/libxcb.so.1
#4  0x00007f6b5ff1343d in xcb_writev () from /usr/lib/x86_64-linux-gnu/libxcb.so.1
#5  0x00007f6b61ee197e in _XSend () from /usr/lib/x86_64-linux-gnu/libX11.so.6
#6  0x00007f6b61ee1cf0 in _XFlush () from /usr/lib/x86_64-linux-gnu/libX11.so.6
#7  0x00007f6b61ec35ea in XFlush () from /usr/lib/x86_64-linux-gnu/libX11.so.6
#8  0x000056473f53e88a in XWindowsImpl::XFlush (this=0x5647401377e0, display=0x564740166440) at /home/sealion/lapa/barrier/src/lib/platform/XWindowsImpl.cpp:165
#9  0x000056473f55e673 in XWindowsEventQueueBuffer::flush (this=0x5647401845e0) at /home/sealion/lapa/barrier/src/lib/platform/XWindowsEventQueueBuffer.cpp:290
#10 0x000056473f55e47a in XWindowsEventQueueBuffer::addEvent (this=0x5647401845e0, dataID=3972640) at /home/sealion/lapa/barrier/src/lib/platform/XWindowsEventQueueBuffer.cpp:245
#11 0x000056473f51a484 in EventQueue::addEventToBuffer (this=0x7ffd9774bfc0, event=...) at /home/sealion/lapa/barrier/src/lib/base/EventQueue.cpp:323
#12 0x000056473f51a417 in EventQueue::addEvent (this=0x7ffd9774bfc0, event=...) at /home/sealion/lapa/barrier/src/lib/base/EventQueue.cpp:310
#13 0x000056473f5cb2e8 in TCPSocket::sendEvent (this=0x5647401cdc10, type=48) at /home/sealion/lapa/barrier/src/lib/net/TCPSocket.cpp:444
#14 0x000056473f5cb7ba in TCPSocket::serviceConnected (this=0x5647401cdc10, job=0x5647401d8750, read=true, write=false, error=true) at /home/sealion/lapa/barrier/src/lib/net/TCPSocket.cpp:549
#15 0x000056473f5ccba9 in TSocketMultiplexerMethodJob<TCPSocket>::run (this=0x5647401d8750, read=true, write=false, error=true) at /home/sealion/lapa/barrier/src/./lib/net/TSocketMultiplexerMethodJob.h:78
(does not exit this frame, loops here) #16 0x000056473f5c55a5 in SocketMultiplexer::serviceThread (this=0x564740164e70) at /home/sealion/lapa/barrier/src/lib/net/SocketMultiplexer.cpp:219
#17 0x000056473f5c97a1 in TMethodJob<SocketMultiplexer>::run (this=0x564740165020) at /home/sealion/lapa/barrier/src/./lib/base/TMethodJob.h:66
#18 0x000056473f5d5767 in Thread::threadFunc (vjob=0x564740165020) at /home/sealion/lapa/barrier/src/lib/mt/Thread.cpp:157
#19 0x000056473f513cc5 in ArchMultithreadPosix::doThreadFunc (this=0x7ffd9774c2e8, thread=0x564740164bf0) at /home/sealion/lapa/barrier/src/lib/arch/unix/ArchMultithreadPosix.cpp:718
#20 0x000056473f513c4d in ArchMultithreadPosix::threadFunc (vrep=0x564740164bf0) at /home/sealion/lapa/barrier/src/lib/arch/unix/ArchMultithreadPosix.cpp:698
#21 0x00007f6b626a1182 in start_thread (arg=<optimized out>) at pthread_create.c:486
#22 0x00007f6b610dbb1f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 2 (Thread 0x7f6b5e872700 (LWP 32345)): Does not eat cpu

#0  0x00007f6b61002ebc in __GI___sigtimedwait (set=set@entry=0x7f6b5e871bf0, info=info@entry=0x7f6b5e871b20, timeout=timeout@entry=0x0) at ../sysdeps/unix/sysv/linux/sigtimedwait.c:29
#1  0x00007f6b626abb0c in __sigwait (set=0x7f6b5e871bf0, sig=0x7f6b5e871bec) at ../sysdeps/unix/sysv/linux/sigwait.c:28
#2  0x000056473f513e12 in ArchMultithreadPosix::threadSignalHandler () at /home/sealion/lapa/barrier/src/lib/arch/unix/ArchMultithreadPosix.cpp:776
#3  0x00007f6b626a1182 in start_thread (arg=<optimized out>) at pthread_create.c:486
#4  0x00007f6b610dbb1f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 1 (Thread 0x7f6b5e8bef80 (LWP 32344)): EATS 50% of core

#0  __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
#1  0x00007f6b626a3945 in __GI___pthread_mutex_lock (mutex=0x7ffd9774bfd0) at ../nptl/pthread_mutex_lock.c:80
#2  0x000056473f511ca4 in __gthread_mutex_lock (__mutex=0x7ffd9774bfd0) at /usr/include/x86_64-linux-gnu/c++/8/bits/gthr-default.h:748
#3  0x000056473f511cf4 in std::mutex::lock (this=0x7ffd9774bfd0) at /usr/include/c++/8/bits/std_mutex.h:103
#4  0x000056473f511d50 in std::lock_guard<std::mutex>::lock_guard (this=0x7ffd9774bba0, __m=...) at /usr/include/c++/8/bits/std_mutex.h:162
#5  0x000056473f51a1d7 in EventQueue::getEvent (this=0x7ffd9774bfc0, event=..., timeout=-1) at /home/sealion/lapa/barrier/src/lib/base/EventQueue.cpp:262
(does not exit this frame, loops here) #6  0x000056473f519b2d in EventQueue::loop (this=0x7ffd9774bfc0) at /home/sealion/lapa/barrier/src/lib/base/EventQueue.cpp:130
#7  0x000056473f537394 in ServerApp::mainLoop (this=0x7ffd9774bf20) at /home/sealion/lapa/barrier/src/lib/barrier/ServerApp.cpp:787
#8  0x000056473f5378ea in ServerApp::standardStartup (this=0x7ffd9774bf20, argc=2, argv=0x7ffd9774c4d8) at /home/sealion/lapa/barrier/src/lib/barrier/ServerApp.cpp:858
#9  0x000056473f5390fa in standardStartupStatic (argc=2, argv=0x7ffd9774c4d8) at /home/sealion/lapa/barrier/src/lib/barrier/unix/AppUtilUnix.cpp:33
#10 0x000056473f537720 in ServerApp::runInner (this=0x7ffd9774bf20, argc=2, argv=0x7ffd9774c4d8, outputter=0x0, startup=0x56473f5390c1 <standardStartupStatic(int, char**)>) at /home/sealion/lapa/barrier/src/lib/barrier/ServerApp.cpp:831
#11 0x000056473f53914c in AppUtilUnix::run (this=0x7ffd9774bf60, argc=2, argv=0x7ffd9774c4d8) at /home/sealion/lapa/barrier/src/lib/barrier/unix/AppUtilUnix.cpp:39
#12 0x000056473f53285f in App::run (this=0x7ffd9774bf20, argc=2, argv=0x7ffd9774c4d8) at /home/sealion/lapa/barrier/src/lib/barrier/App.cpp:109
#13 0x000056473f510998 in main (argc=2, argv=0x7ffd9774c4d8) at /home/sealion/lapa/barrier/src/cmd/barriers/barriers.cpp:49

Log from before barriers went into the "bad loop" state:

 % ./barriers -f
[2019-12-09T02:06:52] DEBUG: opening configuration "/home/sealion/.local/share/barrier/.barrier.conf"
        /home/sealion/lapa/barrier/src/lib/barrier/ServerApp.cpp,226
[2019-12-09T02:06:52] DEBUG: configuration read successfully
        /home/sealion/lapa/barrier/src/lib/barrier/ServerApp.cpp,237
[2019-12-09T02:06:52] DEBUG: XOpenDisplay(":0")
        /home/sealion/lapa/barrier/src/lib/platform/XWindowsScreen.cpp,888
[2019-12-09T02:06:52] DEBUG: xscreensaver window: 0x00000000
        /home/sealion/lapa/barrier/src/lib/platform/XWindowsScreenSaver.cpp,350
[2019-12-09T02:06:52] DEBUG: screen shape: 0,0 2560x1440 
        /home/sealion/lapa/barrier/src/lib/platform/XWindowsScreen.cpp,122
[2019-12-09T02:06:52] DEBUG: window is 0x01600004
        /home/sealion/lapa/barrier/src/lib/platform/XWindowsScreen.cpp,123
[2019-12-09T02:06:52] DEBUG: adopting new buffer
        /home/sealion/lapa/barrier/src/lib/base/EventQueue.cpp,179
[2019-12-09T02:06:52] DEBUG: opened display
        /home/sealion/lapa/barrier/src/lib/barrier/Screen.cpp,49
[2019-12-09T02:06:52] DEBUG: registered hotkey ScrollLock (id=ef14 mask=0000) as id=1
        /home/sealion/lapa/barrier/src/lib/platform/XWindowsScreen.cpp,718
[2019-12-09T02:06:52] NOTE: started server (IPv4), waiting for clients
        /home/sealion/lapa/barrier/src/lib/barrier/ServerApp.cpp,558
[2019-12-09T02:06:52] DEBUG: event queue is ready
        /home/sealion/lapa/barrier/src/lib/base/EventQueue.cpp,117
[2019-12-09T02:06:52] DEBUG: add pending events to buffer
        /home/sealion/lapa/barrier/src/lib/base/EventQueue.cpp,119
[2019-12-09T02:06:52] DEBUG: screen "skala" shape changed
        /home/sealion/lapa/barrier/src/lib/server/Server.cpp,1195
[2019-12-09T02:06:52] DEBUG: Opening new socket: 401A40E0
        /home/sealion/lapa/barrier/src/lib/net/TCPSocket.cpp,69
[2019-12-09T02:06:52] NOTE: accepted client connection
        /home/sealion/lapa/barrier/src/lib/server/ClientListener.cpp,152
[2019-12-09T02:06:52] DEBUG: received client "win10-on160gb-nbdisk" info shape=0,0 2560x1440 at -1344,485
        /home/sealion/lapa/barrier/src/lib/server/ClientProxy1_0.cpp,416
[2019-12-09T02:06:52] NOTE: client "win10-on160gb-nbdisk" has connected
        /home/sealion/lapa/barrier/src/lib/server/Server.cpp,336
[2019-12-09T02:06:57] INFO: switch from "skala" to "win10-on160gb-nbdisk" at 1005,1439
        /home/sealion/lapa/barrier/src/lib/server/Server.cpp,463
[2019-12-09T02:06:57] INFO: leaving screen
        /home/sealion/lapa/barrier/src/lib/barrier/Screen.cpp,131
[2019-12-09T02:06:57] DEBUG: open clipboard 0
        /home/sealion/lapa/barrier/src/lib/platform/XWindowsClipboard.cpp,328
[2019-12-09T02:06:57] DEBUG: ICCCM fill clipboard 0
        /home/sealion/lapa/barrier/src/lib/platform/XWindowsClipboard.cpp,508
[2019-12-09T02:06:57] DEBUG:   available targets: TIMESTAMP (455), TARGETS (453), SAVE_TARGETS (449), MULTIPLE (446)
        /home/sealion/lapa/barrier/src/lib/platform/XWindowsClipboard.cpp,527
[2019-12-09T02:06:57] DEBUG: added format 0 for target UTF8_STRING (311) (6 bytes)
        /home/sealion/lapa/barrier/src/lib/platform/XWindowsClipboard.cpp,570
[2019-12-09T02:06:57] DEBUG: close clipboard 0
        /home/sealion/lapa/barrier/src/lib/platform/XWindowsClipboard.cpp,363
[2019-12-09T02:06:57] INFO: screen "skala" updated clipboard 0
        /home/sealion/lapa/barrier/src/lib/server/Server.cpp,1543
[2019-12-09T02:06:57] DEBUG: open clipboard 1
        /home/sealion/lapa/barrier/src/lib/platform/XWindowsClipboard.cpp,328
[2019-12-09T02:06:57] DEBUG: ICCCM fill clipboard 1
        /home/sealion/lapa/barrier/src/lib/platform/XWindowsClipboard.cpp,508
[2019-12-09T02:06:57] DEBUG:   available targets: text/plain (499), UTF8_STRING (311), STRING (31), TEXT (454), text/html (498)
        /home/sealion/lapa/barrier/src/lib/platform/XWindowsClipboard.cpp,527
[2019-12-09T02:06:57] DEBUG: added format 1 for target text/html (498) (136 bytes)
        /home/sealion/lapa/barrier/src/lib/platform/XWindowsClipboard.cpp,570
[2019-12-09T02:06:57] DEBUG: added format 0 for target UTF8_STRING (311) (28 bytes)
        /home/sealion/lapa/barrier/src/lib/platform/XWindowsClipboard.cpp,570
[2019-12-09T02:06:57] DEBUG: close clipboard 1
        /home/sealion/lapa/barrier/src/lib/platform/XWindowsClipboard.cpp,363
[2019-12-09T02:06:57] INFO: screen "skala" updated clipboard 1
        /home/sealion/lapa/barrier/src/lib/server/Server.cpp,1543
[2019-12-09T02:06:57] DEBUG: sending clipboard 0 to "win10-on160gb-nbdisk"
        /home/sealion/lapa/barrier/src/lib/server/ClientProxy1_6.cpp,58
[2019-12-09T02:06:57] DEBUG: sent clipboard size=18
        /home/sealion/lapa/barrier/src/lib/barrier/StreamChunker.cpp,156
[2019-12-09T02:06:57] DEBUG: sending clipboard 1 to "win10-on160gb-nbdisk"
        /home/sealion/lapa/barrier/src/lib/server/ClientProxy1_6.cpp,58
[2019-12-09T02:06:57] DEBUG: sent clipboard size=252
        /home/sealion/lapa/barrier/src/lib/barrier/StreamChunker.cpp,156
[2019-12-09T02:07:03] NOTE: client "win10-on160gb-nbdisk" has disconnected
        /home/sealion/lapa/barrier/src/lib/server/ClientProxy1_0.cpp,213
[2019-12-09T02:07:03] DEBUG: Closing socket: 401A40E0
        /home/sealion/lapa/barrier/src/lib/net/TCPSocket.cpp,104
[2019-12-09T02:07:03] INFO: jump from "win10-on160gb-nbdisk" to "skala" at 1280,720
        /home/sealion/lapa/barrier/src/lib/server/Server.cpp,2254
[2019-12-09T02:07:03] INFO: entering screen
        /home/sealion/lapa/barrier/src/lib/barrier/Screen.cpp,113
[2019-12-09T02:07:07] DEBUG: Opening new socket: 401A40E0
        /home/sealion/lapa/barrier/src/lib/net/TCPSocket.cpp,69
[2019-12-09T02:07:07] NOTE: accepted client connection
        /home/sealion/lapa/barrier/src/lib/server/ClientListener.cpp,152
[2019-12-09T02:07:07] DEBUG: received client "win10-on160gb-nbdisk" info shape=0,0 2560x1440 at 11424,686
        /home/sealion/lapa/barrier/src/lib/server/ClientProxy1_0.cpp,416
[2019-12-09T02:07:07] NOTE: client "win10-on160gb-nbdisk" has connected
        /home/sealion/lapa/barrier/src/lib/server/Server.cpp,336
[2019-12-09T02:07:16] DEBUG: Opening new socket: 401D3EE0
        /home/sealion/lapa/barrier/src/lib/net/TCPSocket.cpp,69
[2019-12-09T02:07:19] NOTE: accepted client connection
        /home/sealion/lapa/barrier/src/lib/server/ClientListener.cpp,152

@galkinvv

Note that near 2019-12-09T02:06:59 the Windows client started shutting down (via the Start menu), and the last "accepted client connection" message looks to have been sent by a client process that appears for only 1-2 seconds on Windows after logoff and before shutdown. So this may be an issue with a client suddenly disappearing during connection.


fuhry commented Jan 29, 2020

I can confirm this is affecting me with SSL enabled, and the circumstances leading up to the extreme memory leak are similar to what @galkinvv describes. For now I'm running barriers under systemd as a user service with memory limited to 128MB.
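The memory-capped user service described above can be sketched as a systemd unit. The path, unit name, and barriers flags below are illustrative assumptions, not the exact setup:

```ini
# ~/.config/systemd/user/barriers.service (hypothetical path and flags)
[Unit]
Description=Barrier KVM server (memory-capped as a leak workaround)

[Service]
ExecStart=/usr/bin/barriers -f --no-tray --name myhost
# Hard cap: the kernel kills the service when it exceeds 128M,
# containing the leak described in this issue.
MemoryMax=128M
Restart=on-failure

[Install]
WantedBy=default.target
```

Enable it with `systemctl --user enable --now barriers.service`. Note that MemoryMax requires cgroup v2; on cgroup-v1 systems the older MemoryLimit= directive applies instead.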

@wjtk4444

@fuhry
Now that's pretty smart. Did you run into any problems when the limit was reached? I'd expect at least clipboard sharing to stop working.

galkinvv pushed a commit to galkinvv/barrier that referenced this issue Feb 9, 2020
Commit a841b28 changed the condition for removing a job from processing: the new flag MultiplexerJobStatus::continue_servicing is used instead of checking the pointer for NULL. However, for the cases where TCPSocket::newJob() returns nullptr, the behaviour changed: previously the job was removed, but after the change it is called again, since a MultiplexerJobStatus equal to {true, nullptr} means "run this job again".

This leads to the CPU- and RAM-eating problem on Linux:
debauchee#470

There is a similar Windows problem, but I'm not sure it is related:
debauchee#552

Since it looks like the goal of a841b28 was only to clarify object ownership, not to change job-deletion behaviour, this commit tries to restore the original behaviour and fix the bugs above by returning {false, nullptr} instead of {true, nullptr} when TCPSocket::newJob() returns nullptr.

kreezxil commented Mar 5, 2020

I just experienced this today on 2.3.2 in

Operating System: Manjaro Linux 
KDE Plasma Version: 5.17.5
KDE Frameworks Version: 5.66.0
Qt Version: 5.14.1
Kernel Version: 5.4.23-1-MANJARO
OS Type: 64-bit
Processors: 8 × AMD FX(tm)-8350 Eight-Core Processor
Memory: 31.3 GiB of RAM


wjtk4444 commented Mar 5, 2020

@kreezxil
#557 fixed it for me; try building from source or using barrier-git from the AUR.


kreezxil commented Mar 5, 2020 via email


chewi commented Mar 11, 2020

This happened to my Linux system when my Windows 10 client started rebooting for updates. It's pretty bad, so I'd like to patch the Gentoo Linux package I just published, but perhaps it warrants a new release?


kreezxil commented Mar 12, 2020 via email

@galkinvv

I agree. This issue lacks a description of the user-visible consequences of this problem. For me it was a lost-work situation:

  • I was using barriers with a Debian server and a Windows 10 client.
  • I shut down the Windows 10 machine.
  • I continued using only the Debian machine and made some unsaved progress.
  • After several minutes the Debian machine suddenly slowed down, and within about 5 seconds, before I understood what had happened, it became completely unresponsive due to extreme swapping-to-disk activity. I tried to save my unfinished work, but the editor didn't respond within 5 minutes, so I just hit reset.
    • The time before the hang depends on the machine's RAM size; roughly, it hangs in N minutes if the machine has N GB of RAM.
    • I'm not sure why the OOM killer doesn't help here; maybe I have some unusual settings from occasional Wine gaming.
  • The next week the problem reappeared in the same situation with the same consequences.
  • So I fixed it with the PR above :)


ascii78 commented Mar 17, 2020

Barrier had stopped working today on my setup with a Manjaro server and a Windows 10 VM with GPU passthrough; my guess is that a Windows update changed something. This fix resolved exactly that problem for me.


kreezxil commented Mar 18, 2020 via email


fuhry commented Apr 15, 2020

@galkinvv I have pretty standard OOM-killer settings and still did not see barriers get selected for reaping by the OOM killer when the memory leak triggered.

Anyway, the issue is resolved since I integrated the patch from #557 into my PKGBUILD. Thank you!
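On the OOM-killer point: one way to make the kernel reap a runaway process first is to raise its oom_score_adj. A hedged sketch: the pgrep lookup in the comment is illustrative, and the self-contained demo adjusts the current shell's own score, which needs no privileges.

```shell
# oom_score_adj ranges from -1000 (never kill) to 1000 (kill first).
# Against a real leaking process one would run something like:
#   echo 1000 | sudo tee /proc/"$(pgrep -o barriers)"/oom_score_adj
# Demo on the current shell, since raising one's own score is unprivileged:
echo 500 > "/proc/$$/oom_score_adj"
adj=$(cat "/proc/$$/oom_score_adj")
echo "oom_score_adj is now $adj"
```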

@github-actions

Is this issue still an issue for you? Please do comment and let us know! Alternatively, you may close the issue yourself if it is no longer a problem.

@kreezxil

lol, the bot should ping the person that opened it. The solution provided here works, but is it in the main branch though?

@galkinvv

#557 was merged, so yes, it is in the main branch and was released in 2.3.3.
Since this thread has stopped getting reports of the problem since then, I think it should be closed.


kreezxil commented Oct 15, 2020 via email

@darmbrust

Just adding some comments to help searchers....

For me, I would notice (what I hope is) this issue with my barrier server running on Linux/KDE: when I moused over to my Windows 10 client and clicked the shutdown button, it would immediately hang my mouse and keyboard, as barrier still seemed to think they were on the Windows system that was now disconnected.

I could still Ctrl+Alt+F-key over to a command console, notice that barriers was eating CPU and growing in RAM usage, and kill -9 it, at which point my KDE session would recover (though KDE would warn of various things being restarted).

I'll upgrade from 2.3.2 in hopes that the issue goes away with the fix above.

Thanks for maintaining this open-source tool, by the way. I missed it when the old one went paid... until I found it here.

@darmbrust

Unfortunately, this hasn't made it into the Ubuntu package repositories for the LTS releases yet. I've filed a bug there, hoping that they will backport it. There is a PPA with the 2.3.3 release, which I'm testing now.


bjohas commented Dec 11, 2020

I had the same problem when installing on Ubuntu 20.04 with apt (barrier 2.3.2). Installing the snap gives barrier 2.3.3.
