
salt-master process leaks memory when running in a container #50313

Closed
mikeadamz opened this issue Oct 30, 2018 · 38 comments · Fixed by #60386
Labels: Bug (broken, incorrect, or confusing behavior) · P1 (Priority 1) · Regression (a bug that breaks functionality known to work in previous releases) · severity-high (2nd top severity, seen by most users, causes major problems) · Silicon (v3004.0 release code name) · ZD (related to a Zendesk customer support ticket)

Comments

@mikeadamz
Contributor

Description of Issue/Question

When running in Docker/Kubernetes, the salt-master process leaks memory over time.

[graph: salt-master memory usage climbing steadily over time]

I have confirmed the same behavior with or without our custom engine installed

Setup

Dockerfile:

FROM centos:latest as base

RUN yum -y update && \
    yum -y install python-ldap python-setproctitle epel-release git && \
    yum -y install https://repo.saltstack.com/yum/redhat/salt-repo-2018.3-1.el7.noarch.rpm  && \
    yum clean all

FROM base

RUN yum -y install salt-master virt-what python-pygit2 python-pip && \
    yum clean all

RUN pip install pika

ADD Dockerfiles/salt-master/entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh

RUN useradd saltapi
RUN echo "salt" | passwd --stdin saltapi

EXPOSE 4505/tcp 4506/tcp

ENTRYPOINT ["/entrypoint.sh"]

CMD ["salt-master", "-l", "info"]

entrypoint.sh:

#!/bin/bash
# Sync gitfs
/usr/bin/salt-run saltutil.sync_all

# This may be redundant, but ensure we sync the
# engines after we've got the latest code from gitfs
/usr/bin/salt-run saltutil.sync_engines

touch /tmp/entrypoint_ran

# Ensure that the saltapi password matches the
# $SALTAPI_PASSWORD environment variable
stty -echo
if [ -n "$SALTAPI_PASSWORD" ]; then
    echo "${SALTAPI_PASSWORD}" | passwd --stdin saltapi
fi
stty echo

# Run command
exec "$@"

master.conf:

hash_type: sha512
state_aggregate: True
log_level_logfile: info

fileserver_backend:
  - roots
  - git
gitfs_remotes:
  - https://states.gitfs.repo:
    - user: secret
    - password: secret
ext_pillar:
  - git:
    # HTTPS authentication
    - master https://pillar.gitfs.repo:
      - user: secret
      - password: secret
external_auth:
  pam:
    saltapi:
      - .*
      - '@runner'
      - '@wheel'
custom:
  rabbitmq:
    user: salt
    password: super secret
    server: superdupersecret
    exchange: salt-events
    vhost: salt-events
    queue: salt-events
engines:
  - custom-salt: {}

Steps to Reproduce Issue

  1. Launch container
  2. Add one or more minions
  3. Watch as the memory graph slowly rises (one way to track it is sketched below)
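
One simple way to track that growth from outside the container (a sketch only; the pod name is taken from the versions report below, and kubectl top assumes metrics-server is installed in the cluster):

# Print the salt-master pod's memory usage every 5 minutes
while true; do
    kubectl top pod saltstack-54c9c75cc4-6mlf9 --containers
    sleep 300
done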

Versions Report

$ kubectl exec -it saltstack-54c9c75cc4-6mlf9 -c salt-master -- salt --versions-report
Salt Version:
           Salt: 2018.3.3

Dependency Versions:
           cffi: 1.6.0
       cherrypy: Not Installed
       dateutil: Not Installed
      docker-py: Not Installed
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: Not Installed
         Jinja2: 2.7.2
        libgit2: 0.26.3
        libnacl: Not Installed
       M2Crypto: Not Installed
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.5.6
   mysql-python: Not Installed
      pycparser: 2.14
       pycrypto: 2.6.1
   pycryptodome: Not Installed
         pygit2: 0.26.4
         Python: 2.7.5 (default, Jul 13 2018, 13:06:57)
   python-gnupg: Not Installed
         PyYAML: 3.11
          PyZMQ: 15.3.0
           RAET: Not Installed
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.1.4

System Versions:
           dist: centos 7.5.1804 Core
         locale: ANSI_X3.4-1968
        machine: x86_64
        release: 4.4.0-133-generic
         system: Linux
        version: CentOS Linux 7.5.1804 Core
@doesitblend doesitblend added the ZD The issue is related to a Zendesk customer support ticket. label Oct 30, 2018
@doesitblend
Collaborator

ZD-2933

@dwoz dwoz added Bug broken, incorrect, or confusing behavior severity-medium 3rd level, incorrect or bad functionality, confusing and lacks a work around labels Oct 30, 2018
@dwoz dwoz added this to the Approved milestone Oct 30, 2018
@dwoz dwoz added the P2 Priority 2 label Oct 30, 2018
@goodwillcoding

I have a nearly identical issue, though not in a container. I have a completely vanilla Salt 2018.3.2 deployed with only 2 minions; the only configuration change from the default is the hostname of the salt master.

After a week of NOTHING running at all from the master, the memory usage is nearly 5GB. At this point we are considering installing a cron task that restarts the service just to avoid this, which is frankly an insane solution. Is this really a P2 and not a P1?
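
For reference, that stopgap can be as small as a single cron entry; a minimal sketch, assuming a systemd-managed salt-master service and an arbitrary nightly schedule:

# /etc/cron.d/salt-master-restart -- hypothetical stopgap, not a fix
# Restart salt-master every night at 03:00 to release leaked memory
0 3 * * * root systemctl restart salt-master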

@goodwillcoding

Wow, I've been using salt since 0.14 ... had all kinds of issues, but this is literally the WORST. @rallytime @DmitryKuzmenko do you have any instructions on what to do or how to fix it ... or should we all just abandon all hope ... because currently salt is just unusable. How can a DEFAULT install on a widely used OS (Ubuntu 16.04 LTS) have such issues?

@mikeadamz
Contributor Author

I'm not sure if this helps, but I have two salt masters running on two different kubernetes clusters in two different datacenters in two different states. One master has a single minion and the other master has zero minions. The memory leak is exactly the same.

[graph: memory usage of both masters rising in parallel]

It's interesting to me that regardless of which cluster, or whether the master has a minion or not, the leak grows at the same rate. The bars are parallel, on the exact same trajectory.

@goodwillcoding

wow, the silence on this bug is completely deafening ... despite the obvious P1 status

@DmitryKuzmenko
Contributor

Sorry for the silence guys. I'm here and working on it.

@DmitryKuzmenko
Contributor

@mikeadamz is it possible to get the output of ps aux | grep salt on one of the VMs where the master has already grown enough, to see which subprocess of the master has the problem?

@mikeadamz
Contributor Author

[root@saltstack-54c9c75cc4-6mlf9 /]# ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.3 310484 52384 ?        Ss   Oct26   0:37 /usr/bin/python /usr/bin/salt-master -l info ProcessManager
root       135  0.0  0.1 207264 22372 ?        S    Oct26   0:00 /usr/bin/python /usr/bin/salt-master -l info MultiprocessingLoggingQueue
root       140  0.0  0.2 389680 39788 ?        Sl   Oct26   0:00 /usr/bin/python /usr/bin/salt-master -l info ZeroMQPubServerChannel
root       143  0.0  0.2 308548 39520 ?        S    Oct26   0:00 /usr/bin/python /usr/bin/salt-master -l info EventPublisher
root       144  0.0  0.2 314232 46728 ?        S    Oct26   3:15 /usr/bin/python /usr/bin/salt-master -l info
root       145  0.5  1.6 608720 270652 ?       S    Oct26 102:29 /usr/bin/python /usr/bin/salt-master -l info Maintenance
root       146  0.0  0.2 310348 40676 ?        S    Oct26   0:36 /usr/bin/python /usr/bin/salt-master -l info ReqServer_ProcessManager
root       147  0.0  0.2 687344 42700 ?        Sl   Oct26   0:10 /usr/bin/python /usr/bin/salt-master -l info MWorkerQueue
root       148  0.0  0.5 583584 85080 ?        Sl   Oct26   0:30 /usr/bin/python /usr/bin/salt-master -l info MWorker-0
root       153  0.0  0.5 583816 85500 ?        Sl   Oct26   0:31 /usr/bin/python /usr/bin/salt-master -l info MWorker-1
root       156  0.0  0.5 434684 83908 ?        Sl   Oct26   0:31 /usr/bin/python /usr/bin/salt-master -l info MWorker-2
root       157  0.0  0.5 584080 85508 ?        Sl   Oct26   0:32 /usr/bin/python /usr/bin/salt-master -l info MWorker-3
root       158  0.0  0.5 583628 85044 ?        Sl   Oct26   0:30 /usr/bin/python /usr/bin/salt-master -l info MWorker-4
root       159  0.1  1.5 657024 250480 ?       Sl   Oct26  29:22 /usr/bin/python /usr/bin/salt-master -l info FileserverUpdate
root     29856  0.0  0.0  11832  3048 pts/0    Ss   13:13   0:00 /bin/bash
root     29877  0.0  1.5 607692 261288 ?       R    13:13   0:00 /usr/bin/python /usr/bin/salt-master -l info Maintenance
root     29878  0.0  0.0  51720  3536 pts/0    R+   13:13   0:00 ps aux
[root@saltstack-54c9c75cc4-6mlf9 /]#
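
A variant of that command that may make the growing child easier to spot (sorting on RSS, which is column 6 of ps aux output):

# List salt-master processes sorted by resident memory, largest last
ps aux | grep '[s]alt-master' | sort -n -k6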

@DmitryKuzmenko
Contributor

@mikeadamz thank you! It's very helpful.

@dwoz
Contributor

dwoz commented Nov 14, 2018

@mikeadamz Are you guys using any gitfs backends and if so, which library is being used (GitPython or pygit2)?

@mikeadamz
Contributor Author

mikeadamz commented Nov 14, 2018 via email

@goodwillcoding

@dwoz : no backend

@isbm
Contributor

isbm commented Nov 16, 2018

@mikeadamz how big is your git repo? How many branches, refs, etc.? If you flip the library from pygit2 to GitPython, what happens?
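
For anyone trying that flip, the provider is selected in the master config; a minimal sketch (gitfs_provider is a standard master option):

# master.conf: use the GitPython provider instead of pygit2
gitfs_provider: gitpython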

@doesitblend
Collaborator

doesitblend commented Nov 16, 2018

@isbm I am able to reproduce this issue without using Git in my environment at all.

I have created https://github.com/doesitblend/quicklab to help provide an environment where you can reproduce the issue. Just build the images as described in the repo and then let it run. Use docker stats and watch memory usage grow over about an hour (see the loop sketched below).

This was only tested on macOS, so on Linux you may need to add the /sys mount to run systemd.
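
A sample loop for capturing that growth over time (the container name here is an assumption; adjust it to match the quicklab setup):

# Log the container's memory usage once a minute
while true; do
    docker stats --no-stream --format '{{.Name}} {{.MemUsage}}' salt-master
    sleep 60
done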

@doesitblend
Collaborator

I know that @DmitryKuzmenko has been working on this issue and has reproduced it in his lab environment. I believe he is making progress, but there is no solution just yet.

@DmitryKuzmenko
Contributor

My results so far:
I reviewed the code and found nothing in the core logic. The issue is probably in the specifics of some module, or more likely I just haven't found it yet.
Trying to reproduce it in various ways, I got some positive results running salt over the weekend in the quicklab environment, but as far as I saw it stopped growing after one day of running. I've modified the configuration and am running it again to check that.
Tomorrow, if I can confirm the reproduction, I'll re-run it with memory monitoring tools.

@goodwillcoding

@DmitryKuzmenko anything new on this? We are still working around it via a cron job that restarts the master.

@waynew
Contributor

waynew commented Mar 7, 2019

I've started poking at this. While I'm not seeing the same results you're getting, there does appear to be an increase so far. My docker is running on a Mac, if that makes a difference 🤷‍♂️

My increase doesn't appear to be as drastic... I did launch via docker-compose rather than kubernetes, but that doesn't seem like it should cause a problem.

@waynew
Contributor

waynew commented Mar 8, 2019

I tried an experiment without the git repos that this issue's config enables. Here's what I'm seeing (after over an hour of running):

[graph: memory hovering around 300 MiB with regular GC-like dips]

That's just hovering around 300 MiB RAM, with pretty clear GC cycles. At this point I have some suspicions about the gitfs/ext_pillar backends - will look into it further tomorrow.

@waynew
Contributor

waynew commented Mar 9, 2019

Oh yeah. That'll do it. It looks like the pygit2 backend also has a memory leak 😢

[graph: memory climbing steadily with the pygit2 backend enabled]

@waynew
Contributor

waynew commented Mar 11, 2019

Interesting follow-up after leaving it running all weekend:

[graph: memory leveling out around 800 MB over the weekend]

Memory use leveled out around 800 MB yesterday and pretty much just stayed there. I'm not sure exactly why it leveled off. I'm really curious whether a larger repository will change the amount of memory usage.

@vin01
Contributor

vin01 commented Aug 22, 2019

I just ran into this with gitfs and the pygit2 provider.

I believe it is not specific to salt running in containers, but to salt running with gitfs enabled using pygit2 (as the comments above also indicate). I have not tested GitPython yet; however, before enabling gitfs memory usage was stable, and now with gitfs enabled it just keeps going up.

I will be happy to share a core dump and memory profile from my vagrant box if that can help.

@DmitryKuzmenko it would be great to hear an update on this.
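
For anyone else trying to isolate the leak, a minimal master config sketch that exercises only the gitfs/pygit2 path (the remote URL is a placeholder; any reachable repo should do):

fileserver_backend:
  - gitfs
gitfs_provider: pygit2
gitfs_remotes:
  - https://example.com/states.git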

@sagetherage sagetherage added the Aluminium Release Post Mg and Pre Si label Oct 6, 2020
@sagetherage
Contributor

This one is too big to get into Magnesium now and needs more attention; removing it from the Magnesium scope and back to planning for Aluminium.

@sagetherage
Contributor

The Core team will not be able to get to all of this in Aluminium, but we will review a PR if anyone in the community wants to submit one. Moving to Silicon.

@sagetherage sagetherage modified the milestones: Aluminium, Silicon Feb 17, 2021
@sagetherage sagetherage added Silicon v3004.0 Release code name and removed Aluminium Release Post Mg and Pre Si labels Feb 17, 2021
@sagetherage sagetherage added the P1 Priority 1 label May 18, 2021
dwoz added a commit to dwoz/salt that referenced this issue Jun 22, 2021
At this time we do not have the ability to fix the upstream memory leaks
in the gitfs backend providers. Work around their limitations by
periodically restarting the file server update process. This will at
least partially address saltstack#50313
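
The idea behind that workaround, reduced to a sketch (illustrative only, not the actual Salt code): run the leaky update worker under a supervisor that kills and relaunches it on a fixed interval, so leaked memory is returned to the OS when the process exits.

#!/bin/bash
# Illustrative supervisor loop; "update-worker" and the interval are placeholders
INTERVAL=3600
while true; do
    update-worker &           # stand-in for the leaky update process
    pid=$!
    sleep "$INTERVAL"
    kill "$pid" 2>/dev/null   # recycle the worker to reclaim leaked memory
    wait "$pid" 2>/dev/null
done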
Ch3LL pushed a commit that referenced this issue Jun 23, 2021
dwoz added a commit to dwoz/salt that referenced this issue Jul 12, 2021
garethgreenaway pushed a commit that referenced this issue Jul 20, 2021
garethgreenaway added a commit that referenced this issue Sep 23, 2021

* Merge 3002.6 bugfix changes (#59822)
...
* Address leaks in fileserver caused by git backends

At this time we do not have the ability to fix the upstream memory leaks
in the gitfs backend providers. Work around their limitations by
periodically restarting the file server update process. This will at
least partially address #50313