Fixes for the journal #16
Commits on Jan 11, 2019
- Revert "units: use systemctl exit to kill the user manager (#8648)"
  This reverts commit add384d. The problem is that using systemctl to send the signal is nice, but it requires a working dbus. #10414 fixes this in a nicer way. See systemd/systemd#10414 (comment) and https://bugzilla.redhat.com/show_bug.cgi?id=1664491.
  Commit 2e99526
- Commit a655cb0
- journald: do not store the iovec entry for process commandline on stack
  This fixes a crash where we would read the commandline, whose length is under the control of the sending program, and then crash when trying to create a stack allocation for it. CVE-2018-16864, https://bugzilla.redhat.com/show_bug.cgi?id=1653855. The message actually doesn't get written to disk, because journal_file_append_entry() returns -E2BIG. (cherry picked from commit 084eeb8)
  Commit 2cdbbae
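  A minimal sketch of the hazard this commit removes, not the actual journald code; the function names are made up. The point is that a stack allocation whose size the sender controls can blow the stack, while a heap allocation fails cleanly.

    /* Illustrative only -- not systemd's code. */
    #include <alloca.h>
    #include <errno.h>
    #include <stdlib.h>
    #include <string.h>

    /* Unsafe: the sender controls strlen(cmdline), so a huge value blows the stack. */
    static void add_cmdline_field_unsafe(const char *cmdline) {
            size_t n = strlen("_CMDLINE=") + strlen(cmdline);
            char *field = alloca(n + 1);        /* stack allocation of attacker-chosen size */
            strcpy(field, "_CMDLINE=");
            strcat(field, cmdline);
            /* ... point an iovec at 'field' and append the entry ... */
    }

    /* Safer: allocate on the heap, where a huge size fails with ENOMEM instead of crashing. */
    static int add_cmdline_field(const char *cmdline, char **ret) {
            char *field = malloc(strlen("_CMDLINE=") + strlen(cmdline) + 1);
            if (!field)
                    return -ENOMEM;
            strcpy(field, "_CMDLINE=");
            strcat(field, cmdline);
            *ret = field;                       /* caller frees */
            return 0;
    }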
- basic/process-util: limit command line lengths to _SC_ARG_MAX
  This affects systemd-journald and systemd-coredump. Example entry:

    $ journalctl -o export -n1 'MESSAGE=Something logged'
    __CURSOR=s=976542d120c649f494471be317829ef9;i=34e;b=4871e4c474574ce4a462dfe3f1c37f06;m=c7d0c37dd2;t=57c4ac58f3b98;x=67598e942bd23dc0
    __REALTIME_TIMESTAMP=1544035467475864
    __MONOTONIC_TIMESTAMP=858200964562
    _BOOT_ID=4871e4c474574ce4a462dfe3f1c37f06
    PRIORITY=6
    _UID=1000
    _GID=1000
    _CAP_EFFECTIVE=0
    _SELINUX_CONTEXT=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
    _AUDIT_SESSION=1
    _AUDIT_LOGINUID=1000
    _SYSTEMD_OWNER_UID=1000
    [email protected]
    _SYSTEMD_SLICE=user-1000.slice
    _SYSTEMD_USER_SLICE=-.slice
    _SYSTEMD_INVOCATION_ID=1c4a469986d448719cb0f9141a10810e
    _MACHINE_ID=08a5690a2eed47cf92ac0a5d2e3cf6b0
    _HOSTNAME=krowka
    _TRANSPORT=syslog
    SYSLOG_FACILITY=17
    SYSLOG_IDENTIFIER=syslog-caller
    MESSAGE=Something logged
    _COMM=poc
    _EXE=/home/zbyszek/src/systemd-work3/poc
    _SYSTEMD_CGROUP=/user.slice/user-1000.slice/[email protected]/gnome-terminal-server.service
    _SYSTEMD_USER_UNIT=gnome-terminal-server.service
    SYSLOG_PID=4108
    SYSLOG_TIMESTAMP=Dec 5 19:44:27
    _PID=4108
    _CMDLINE=./poc AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA>
    _SOURCE_REALTIME_TIMESTAMP=1544035467475848

    $ journalctl -o export -n1 'MESSAGE=Something logged' --output-fields=_CMDLINE|wc
          6    2053 2097410

  2MB might be hard for some clients to use meaningfully, but OTOH, it is important to log the full commandline sometimes. For example, when the program is crashing, the exact argument list is useful. (cherry picked from commit 2d5d2e0)
  Commit d9fd283
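  A sketch of the general idea, under assumed names (the real change lives in basic/process-util.c): cap how many bytes of /proc/<pid>/cmdline are read, using the kernel's argument-size limit as the upper bound.

    /* Sketch only; the function name and fallback size are illustrative. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    static ssize_t read_cmdline_capped(pid_t pid, char **ret) {
            long arg_max = sysconf(_SC_ARG_MAX);            /* typically 2 MiB on Linux */
            size_t limit = arg_max > 0 ? (size_t) arg_max : 2u * 1024u * 1024u;

            char path[64];
            snprintf(path, sizeof path, "/proc/%d/cmdline", (int) pid);

            FILE *f = fopen(path, "re");
            if (!f)
                    return -1;

            char *buf = malloc(limit);
            if (!buf) {
                    fclose(f);
                    return -1;
            }

            size_t n = fread(buf, 1, limit, f);             /* never read past the cap */
            fclose(f);

            *ret = buf;                                     /* NUL-separated argv, caller frees */
            return (ssize_t) n;
    }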
- coredump: fix message when we fail to save a journald coredump
  If creation of the message failed, we'd write a bogus entry:

    systemd-coredump[1400]: Cannot store coredump of 416 (systemd-journal): No space left on device
    systemd-coredump[1400]: MESSAGE=Process 416 (systemd-journal) of user 0 dumped core.
    systemd-coredump[1400]: Coredump diverted to

  (cherry picked from commit f0136e0)
  Commit 4459d86
- journald: set a limit on the number of fields (1k)
  We allocate an iovec entry for each field, so with many short entries, our memory usage and processing time can be large, even with a relatively small message size. Let's refuse overly long entries. CVE-2018-16865, https://bugzilla.redhat.com/show_bug.cgi?id=1653861. From what I can see, the problem is not an alloca, despite what the CVE description says, but the attack multiplication that comes from creating many very small iovecs: (void* + size_t) for every three bytes of input message. (cherry picked from commit 052c57f)
  Commit 9ba92b6
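  A sketch of the shape of the fix, with made-up names: since every field costs one struct iovec (a pointer plus a length, 16 bytes on 64-bit), a hard cap on the number of fields bounds the multiplication.

    /* Illustrative only; ENTRY_FIELD_MAX here stands in for the 1k limit. */
    #include <errno.h>
    #include <stddef.h>
    #include <sys/uio.h>

    #define ENTRY_FIELD_MAX 1024u

    static int append_field(struct iovec *iov, size_t *n_iov, size_t n_allocated,
                            void *data, size_t length) {
            if (*n_iov >= ENTRY_FIELD_MAX)
                    return -E2BIG;              /* too many fields: refuse the whole entry */
            if (*n_iov >= n_allocated)
                    return -ENOBUFS;            /* caller's iovec array is full */

            iov[(*n_iov)++] = (struct iovec) { .iov_base = data, .iov_len = length };
            return 0;
    }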
- journald: when processing a native message, bail more quickly on overbig messages
  We'd first parse all or most of the message, and only then consider whether it was too large. Also, when encountering a single field over the limit, we'd still process the preceding part of the message. Let's be stricter: check the size limits early, and refuse the whole message if it fails any of them. (cherry picked from commit 964ef92)
  Commit 8d27c62
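  The same idea, sketched with an assumed function name and an illustrative constant: check the overall size before doing any parsing work, and reject the whole message rather than only the part after the first oversized field.

    /* Illustrative sketch; the constant and function name are not the real ones. */
    #include <errno.h>
    #include <stddef.h>

    #define NATIVE_MESSAGE_SIZE_MAX (8u * 1024u * 1024u)    /* made-up cap for the example */

    static int process_native_message(const void *buffer, size_t buffer_size) {
            if (buffer_size > NATIVE_MESSAGE_SIZE_MAX)
                    return -ENOBUFS;            /* refuse up front, before parsing anything */

            /* ... parse field by field; any individual field over its own limit also
             *     causes the whole message to be refused, not just the tail ... */
            return 0;
    }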
- journald: lower the maximum entry size limit to ½ for non-sealed fds
  We immediately read the whole contents into memory, making things much more expensive. Sealed fds should be used instead, since they are more efficient on our side. (cherry picked from commit 6670c9d)
  Commit e499ddb
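  A sketch of the rule this commit introduces, with illustrative names and sizes: a message passed as a non-sealed fd has to be copied into journald's own memory immediately, so it gets only half of the normal entry size budget.

    /* Illustrative only; the real limit macro and helper differ. */
    #include <stdbool.h>
    #include <stddef.h>

    #define ENTRY_SIZE_MAX (64u * 1024u * 1024u)    /* made-up number for the example */

    static size_t entry_size_limit(bool fd_is_sealed) {
            /* Sealed memfds can be mapped and used in place; anything else is read
             * into our own memory right away, so allow only half as much. */
            return fd_is_sealed ? ENTRY_SIZE_MAX : ENTRY_SIZE_MAX / 2;
    }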
- µhttpd: use a cleanup function to call MHD_destroy_response
  (cherry picked from commit d101fb2)
  Commit a0bd383
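  A sketch of the cleanup-attribute pattern the commit switches to, in the style systemd uses elsewhere; the macro and helper names here are illustrative, not the ones from the tree.

    #include <microhttpd.h>
    #include <stddef.h>
    #include <string.h>

    /* Destructor helper suitable for __attribute__((cleanup)). */
    static inline void destroy_responsep(struct MHD_Response **p) {
            if (*p)
                    MHD_destroy_response(*p);
    }
    #define _cleanup_response_ __attribute__((cleanup(destroy_responsep)))

    static int respond_text(struct MHD_Connection *connection,
                            unsigned code, const char *text) {
            _cleanup_response_ struct MHD_Response *response =
                    MHD_create_response_from_buffer(strlen(text), (void *) text,
                                                    MHD_RESPMEM_MUST_COPY);
            if (!response)
                    return MHD_NO;

            /* The response is unreferenced automatically on every return path,
             * instead of needing a manual MHD_destroy_response() per branch. */
            return MHD_queue_response(connection, code, response);
    }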
- journal-remote: verify entry length from header
  Calling mhd_respond(), which ultimately calls MHD_queue_response(), is ineffective at this point, because MHD_queue_response() immediately returns MHD_NO, signifying an error: the connection is in state MHD_CONNECTION_CONTINUE_SENT. As Christian Grothoff kindly explained:

  > You are likely calling MHD_queue_response() too late: once you are
  > receiving upload_data, HTTP forces you to process it all. At this time,
  > MHD has already sent "100 continue" and cannot take it back (hence you
  > get MHD_NO!).
  >
  > In your request handler, the first time when you are called for a
  > connection (and when hence *upload_data_size == 0 and upload_data ==
  > NULL) you must check the content-length header and react (with
  > MHD_queue_response) based on this (to prevent MHD from automatically
  > generating 100 continue).

  If we ever encounter this kind of error, print a warning and immediately abort the connection. (The alternative would be to keep reading the data, but ignore it, and return an error after we get to the end of the data. That is possible, but of course it puts additional load on both the sender and the receiver, and doesn't seem important enough just to return a good error message.) Note that sending the error does not work with libµhttpd 0.59 (the connection is always aborted when MHD_queue_response is used with MHD_RESPMEM_MUST_FREE, as in this case), but works with 0.61: https://src.fedoraproject.org/rpms/libmicrohttpd/pull-request/1 (cherry picked from commit 7fdb237)
  Commit 193f92f
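  A sketch of the early check described above, using the libmicrohttpd access-handler callback shape; the names and the size constant are simplified, not the real journal-remote code.

    /* Illustrative sketch, not the actual journal-remote handler. */
    #include <microhttpd.h>
    #include <stdlib.h>

    #define ENTRY_SIZE_MAX (1024u * 1024u)      /* made-up cap for the example */

    static int request_handler(void *cls, struct MHD_Connection *connection,
                               const char *url, const char *method, const char *version,
                               const char *upload_data, size_t *upload_data_size,
                               void **con_cls) {
            if (!*con_cls) {
                    /* First call for this connection: headers are available, no body yet,
                     * and MHD has not sent "100 Continue", so queueing a response still works. */
                    const char *length = MHD_lookup_connection_value(connection, MHD_HEADER_KIND,
                                                                     MHD_HTTP_HEADER_CONTENT_LENGTH);
                    if (length && strtoull(length, NULL, 10) > ENTRY_SIZE_MAX) {
                            struct MHD_Response *response =
                                    MHD_create_response_from_buffer(0, (void *) "",
                                                                    MHD_RESPMEM_PERSISTENT);
                            int r = MHD_queue_response(connection, MHD_HTTP_PAYLOAD_TOO_LARGE,
                                                       response);
                            MHD_destroy_response(response);
                            return r;
                    }
                    *con_cls = (void *) 1;      /* mark the connection as seen */
                    return MHD_YES;
            }

            /* Later calls deliver upload_data; at this point an error can only be
             * reported by aborting the connection (returning MHD_NO). */
            *upload_data_size = 0;
            return MHD_YES;
    }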
- journal-remote: set a limit on the number of fields in a message
  Existing use of E2BIG is replaced with ENOBUFS (entry too long), and E2BIG is reused for the new error condition (too many fields). This matches the change done for systemd-journald, hence forming the second part of the fix for CVE-2018-16865 (https://bugzilla.redhat.com/show_bug.cgi?id=1653861). (cherry picked from commit ef4d6ab)
  Commit d6686f4
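  A small sketch of the error split described above (a hypothetical reporting helper, not the real code): ENOBUFS now means the entry as a whole is too long, while E2BIG means it has too many fields.

    /* Illustrative helper only. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    static void report_entry_error(int r) {
            if (r == -ENOBUFS)
                    fprintf(stderr, "Entry is too long, refusing.\n");
            else if (r == -E2BIG)
                    fprintf(stderr, "Entry has too many fields, refusing.\n");
            else if (r < 0)
                    fprintf(stderr, "Failed to process entry: %s\n", strerror(-r));
    }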