Description of the aurman dependency solving

Overview

aurman's dependency solving differs strongly from that of other dependency solvers.

deep_search

aurman is able to calculate solutions for dependency problems independently of the currently installed system. This makes the solving much more powerful, since it can find solutions that are not influenced by the user's system.

This may be useful if already installed packages conflict with the packages you wish to install and thus prevent their installation.

Such conflicts can only be solved adequately if the dependency solver calculates the complete dependency chains, and hence the complete solutions, starting from the assumption that zero packages are installed.

Notice: Since this is not always needed, it is not activated by default, but may be enabled with --deep_search

bootstrapping of packages

There are packages which cannot be installed by a naive dependency solving approach.

e.g. mingw-w64-gcc (https://aur.archlinux.org/packages/mingw-w64-gcc)

To install this package, mingw-w64-gcc-base first needs to be installed to allow building mingw-w64-crt, but it must be removed afterwards, since it conflicts with mingw-w64-gcc (the package one actually wants to install).
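Roughly, the resulting install order for this case looks as follows (simplified to just the packages mentioned above):

  • build and install mingw-w64-gcc-base
  • build and install mingw-w64-crt against it
  • remove mingw-w64-gcc-base, since it conflicts with mingw-w64-gcc
  • build and install mingw-w64-gcc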

aurman is capable of solving those dependency chains.

hypothetical appending of packages

To validate solutions as a final step, aurman implements hypothetical appending of packages.

This allows aurman to calculate the resulting system without actually installing anything.

Benchmarks - these results are on my machine, your mileage may vary

normal

  • one package
    • -S ros-indigo-desktop-full: 0.13 seconds (valid solution)
    • -S plasma-git-meta: 0.09 seconds (invalid solution)
    • -S mingw-w64-gcc: 0.05 seconds (valid solution)
  • multiple packages
    • -S ros-indigo-desktop-full plasma-git-meta mingw-w64-gcc: 0.72 seconds (invalid solution)
    • -S ros-indigo-desktop-full ros-lunar-desktop mingw-w64-gcc: 0.19 seconds (valid solution)

deep_search

  • one package
    • -S --deep_search ros-indigo-desktop-full: 2.73 seconds (valid solution)
    • -S --deep_search plasma-git-meta: 1.41 seconds (invalid solution)
    • -S --deep_search mingw-w64-gcc: 0.02 seconds (valid solution)
  • multiple packages
    • -S --deep_search ros-indigo-desktop-full plasma-git-meta mingw-w64-gcc: 4.81 seconds (invalid solution)
    • -S --deep_search ros-indigo-desktop-full ros-lunar-desktop mingw-w64-gcc: 6.74 seconds (valid solution)

Algorithm description

  • Two expac queries yield the names of all installed packages and the names of all known repository packages.
repo_packages_names = set(expac("-S", ('n',), ()))
installed_packages_names = set(expac("-Q", ('n',), ()))
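The expac() function used here wraps the expac command line tool. It is not shown on this page, so here is a minimal sketch matching the calls above; the field delimiter and the exact output handling are assumptions, not necessarily aurman's actual implementation:

from subprocess import PIPE, run
from typing import List, Sequence


def expac(option: str, formatting: Sequence[str], targets: Sequence[str]) -> List[str]:
    # hypothetical sketch: ask expac for the requested format chars,
    # e.g. expac("-S", ('n',), ()) lists the names of all repo packages
    delimiter = "?!"  # assumption: a delimiter unlikely to appear in package metadata
    format_string = delimiter.join("%{}".format(spec) for spec in formatting)
    result = run(["expac", option, format_string] + list(targets),
                 stdout=PIPE, universal_newlines=True)
    # one line per package
    return result.stdout.strip().splitlines()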

Hence I am able to classify:

installed_repo_packages_names = installed_packages_names & repo_packages_names
unclassified_installed_names = installed_packages_names - installed_repo_packages_names

Three expac queries and one AUR RPC query yield all needed information:

# installed repo packages
if installed_repo_packages_names:
    return_list.extend(
        Package.get_packages_from_expac("-Q", list(installed_repo_packages_names), PossibleTypes.REPO_PACKAGE))

# installed aur packages
installed_aur_packages_names = set(
    [package.name for package in Package.get_packages_from_aur(list(unclassified_installed_names))])

if installed_aur_packages_names:
    return_list.extend(
        Package.get_packages_from_expac("-Q", list(installed_aur_packages_names), PossibleTypes.AUR_PACKAGE))

unclassified_installed_names -= installed_aur_packages_names

# installed not repo not aur packages
if unclassified_installed_names:
    return_list.extend(Package.get_packages_from_expac("-Q", list(unclassified_installed_names),
                                                       PossibleTypes.PACKAGE_NOT_REPO_NOT_AUR))

In general, the following fields are received and saved for every package:

self.name = name  # %n
self.version = version  # %v
self.depends = depends  # %D
self.conflicts = conflicts  # %H
self.optdepends = optdepends  # %o
self.provides = provides  # %P
self.replaces = replaces  # %R
self.pkgbase = pkgbase  # %e
self.install_reason = install_reason  # %w (only with -Q)
self.makedepends = makedepends  # aur only
self.checkdepends = checkdepends  # aur only
self.type_of = type_of  # PossibleTypes Enum value
self.repo = repo  # %r (only useful for upstream repo packages)
self.groups = groups  # %G
  • Now for the upstream repo packages: a single expac query fetches all known packages from the repositories.
Package.get_packages_from_expac("-S", (), PossibleTypes.REPO_PACKAGE)

Easy, isn't it? Now for the upstream AUR packages: in the beginning I collect "names": the names the user put on the command line plus the names of the known (installed) AUR packages. I want to receive information about packages just once, so I fetch everything I might possibly need at once; no fetching is needed during dependency solving. The algo for that is quite short and intuitive:

packages_names = set([strip_versioning_from_name(name) for name in packages_names])
packages_names_to_fetch = [name for name in packages_names if name not in self.all_packages_dict]

while packages_names_to_fetch:
    fetched_packages = Package.get_packages_from_aur(packages_names_to_fetch)
    self.append_packages(fetched_packages)

    deps_of_the_fetched_packages = []
    for package in fetched_packages:
        deps_of_the_fetched_packages.extend(package.relevant_deps())

    relevant_deps = list(set([strip_versioning_from_name(dep) for dep in deps_of_the_fetched_packages]))

    packages_names_to_fetch = [dep for dep in relevant_deps if dep not in self.all_packages_dict]

Notice: Since I request everything I might possibly need, aurman is able to solve the dependency chain of ros-indigo-desktop-full: "the missing dep" is listed elsewhere, and hence aurman knows of its existence.
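strip_versioning_from_name is used in the snippet above but not shown; conceptually it just cuts a version constraint such as ">=1.2" off a dependency string. A minimal sketch of the idea (not necessarily aurman's exact implementation):

import re


def strip_versioning_from_name(name: str) -> str:
    # "python>=3.6" -> "python", "libglvnd" -> "libglvnd"
    return re.split(r'[<>=]', name)[0]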

That's it, all needed information about packages, great. Now for the interesting part: data structures for performance. The main concept I came up with is the "System". A "System" is nothing more than a collection of packages, but if packages "belong together" you can calculate many things before you actually need them. A system consists of:

self.all_packages_dict = {}  # names as keys and packages as values
self.repo_packages_list = []  # list containing the repo packages
self.aur_packages_list = []  # list containing the aur but not devel packages
self.devel_packages_list = []  # list containing the aur devel packages
self.not_repo_not_aur_packages_list = []  # list containing the packages that are neither repo nor aur packages

# reverse dict for finding providings. names of providings as keys and providing packages as values in lists
self.provides_dict = {}
# same for conflicts
self.conflicts_dict = {}

Every time a package is added to a system, it is classified as repo, AUR but not devel, devel, or neither AUR nor repo. The relevant entries are also added to the reverse dicts. Notice: appending packages does not delete any of this information, hence nothing of it has to be recalculated if you only append. Here is the code for appending to the reverse dicts, since that may be interesting:

def __append_to_x_dict(self, packages: Sequence['Package'], dict_name: str):
    dict_to_append_to = getattr(self, "{}_dict".format(dict_name))

    for package in packages:
        relevant_package_values = getattr(package, dict_name)

        for relevant_value in relevant_package_values:
            value_name = strip_versioning_from_name(relevant_value)
            if value_name in dict_to_append_to:
                dict_to_append_to[value_name].append(package)
            else:
                dict_to_append_to[value_name] = [package]

called like self.__append_to_x_dict(packages, 'provides')
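The counterpart to filling these reverse dicts is looking them up. conflicting_with, which is used heavily further below, is not shown on this page; a simplified, name-level sketch of such a System method (version constraints are ignored here, and the real implementation may differ):

def conflicting_with(self, new_package: 'Package') -> List['Package']:
    # hypothetical sketch: which packages of this system conflict with new_package?
    conflicting = []

    # a package with the same name will be replaced, so it is treated as conflicting
    if new_package.name in self.all_packages_dict:
        conflicting.append(self.all_packages_dict[new_package.name])

    # names the new package offers: its own name plus everything it provides
    offered_names = {new_package.name}
    offered_names |= set(strip_versioning_from_name(prov) for prov in new_package.provides)

    # system packages declaring a conflict with something the new package offers
    for name in offered_names:
        conflicting.extend(self.conflicts_dict.get(name, []))

    # system packages offering something the new package conflicts with
    for conflict in new_package.conflicts:
        conflict_name = strip_versioning_from_name(conflict)
        if conflict_name in self.all_packages_dict:
            conflicting.append(self.all_packages_dict[conflict_name])
        conflicting.extend(self.provides_dict.get(conflict_name, []))

    return list(set(conflicting))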

Checking whether e.g. all deps of a package are fulfilled is hence done quite easily:

for dep in package.relevant_deps(only_make_check=only_make_check, only_depends=only_depends):
    if not self.provided_by(dep):
        return False
else:
    return True

self.provided_by(dep) just searches the reverse dict, which is very performant. Notice: differentiating between depends, makedepends and checkdepends is no problem, since we already differentiated when receiving the packages.
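provided_by itself is also not shown here. A simplified sketch of the lookup as a System method (again ignoring the version constraints which the real implementation checks on top of this):

def provided_by(self, dep: str) -> List['Package']:
    # hypothetical sketch: find all packages of this system providing "dep"
    dep_name = strip_versioning_from_name(dep)
    providers = []

    # a package whose name equals the dep provides it
    if dep_name in self.all_packages_dict:
        providers.append(self.all_packages_dict[dep_name])

    # plus everything listed in the reverse provides dict
    providers.extend(self.provides_dict.get(dep_name, []))

    return list(set(providers))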

Before we get to the dep solving algorithm itself, one more important thing: hypothetical appending of packages to a system. It allows calculating what happens when packages are installed, without actually having to install them. Seems like a trivial idea, which it is, but this really is one of the most important parts of aurman. The algo for that works like this (plain English first, then Python):

Split the packages to append into chunks: AUR packages are installed one by one, repo packages together. E.g. if you want to install Repo1, AUR1, Repo2, Repo3, Repo4, AUR2 in that order, the 4 chunks will be: [Repo1], [AUR1], [Repo2, Repo3, Repo4], [AUR2].
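calc_install_chunks is used in the Python code below but not shown on this page; a minimal sketch of the chunking idea (assuming repo packages can be recognized via their type_of attribute; the actual implementation may differ):

@staticmethod
def calc_install_chunks(packages_to_chunk: Sequence['Package']) -> List[List['Package']]:
    # hypothetical sketch: consecutive repo packages are grouped together,
    # every other package gets a chunk of its own
    chunks = []
    current_repo_chunk = []

    for package in packages_to_chunk:
        if package.type_of is PossibleTypes.REPO_PACKAGE:
            current_repo_chunk.append(package)
        else:
            if current_repo_chunk:
                chunks.append(current_repo_chunk)
                current_repo_chunk = []
            chunks.append([package])

    if current_repo_chunk:
        chunks.append(current_repo_chunk)

    return chunks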

For every chunk do the following:

  • Check if the packages in the chunk conflict with each other (only really relevant for chunks containing more than one package). If they conflict, do not append them; continue with the next chunk.
  • Calculate all packages of the system we append to that conflict with the chunk. This includes packages with the same names; they will be "upgraded" but are technically removed first. Avoiding special cases keeps the code sane.
  • Remove the conflicting packages and append the packages of the chunk.
  • Now check whether there are packages in the system whose deps are no longer fulfilled, and remove them. This is repeated until no such packages remain.

Python code:

new_system = System(list(self.all_packages_dict.values()))
if not packages:
    return new_system

chunked_packages = System.calc_install_chunks(packages)
last_index = len(chunked_packages) - 1

for i, package_chunk in enumerate(chunked_packages):
    # check if packages in chunk conflict each other
    package_chunk_system = System(())
    for package in package_chunk:
        if package_chunk_system.conflicting_with(package):
            break
        package_chunk_system.append_packages((package,))
    # no conflicts
    else:
        # calculate conflicting packages
        conflicting_new_system_packages = []
        for package in package_chunk:
            # print why packages will be removed
            if packages_names_print_reason is not None:
                will_be_deleted = new_system.conflicting_with(package)
                for package_to_be_removed in will_be_deleted:
                    if package_to_be_removed.name in packages_names_print_reason:
                        aurman_note(
                            "Package {} will be removed due to a conflict with {}".format(
                                Colors.BOLD(Colors.LIGHT_MAGENTA(package_to_be_removed.name)),
                                Colors.BOLD(Colors.LIGHT_MAGENTA(package.name))))

            # save the found packages for later deletion
            conflicting_new_system_packages.extend(new_system.conflicting_with(package))

        # remove duplicates
        conflicting_new_system_packages = set(conflicting_new_system_packages)

        # print what will be done
        if print_way:
            to_delete_packages_names = set()
            to_upgrade_packages_names = set()
            to_reinstall_packages_names = set()
            packages_chunk_names = set([package.name for package in package_chunk])

            for package in conflicting_new_system_packages:
                if package.name not in packages_chunk_names:
                    to_delete_packages_names.add(package.name)
                else:
                    old_package = new_system.all_packages_dict[package.name]
                    new_package = [chunk_pack for chunk_pack in
                                   package_chunk if package.name == chunk_pack.name][0]
                    if old_package.version == new_package.version:
                        to_reinstall_packages_names.add(package.name)
                    else:
                        to_upgrade_packages_names.add(package.name)

            if to_upgrade_packages_names:
                print("   {}   : {}"
                      "".format(Colors.BOLD(Colors.LIGHT_CYAN("Upgrade"))
                                , ", ".join([Colors.BOLD(Colors.LIGHT_MAGENTA(name))
                                             for name in sorted(to_upgrade_packages_names)])))

            if to_reinstall_packages_names:
                print("   {} : {}"
                      "".format(Colors.BOLD(Colors.LIGHT_MAGENTA("Reinstall"))
                                , ", ".join([Colors.BOLD(Colors.LIGHT_MAGENTA(name))
                                             for name in sorted(to_reinstall_packages_names)])))

            if to_delete_packages_names:
                print("   {}    : {}"
                      "".format(Colors.BOLD(Colors.LIGHT_RED("Remove"))
                                , ", ".join([Colors.BOLD(Colors.LIGHT_MAGENTA(name))
                                             for name in sorted(to_delete_packages_names)])))

            to_install_packages_names = packages_chunk_names - set.union(
                *[to_upgrade_packages_names, to_reinstall_packages_names])

            if to_install_packages_names:
                print("   {}   : {}"
                      "".format(Colors.BOLD(Colors.LIGHT_GREEN("Install"))
                                , ", ".join([Colors.BOLD(Colors.LIGHT_MAGENTA(name))
                                             for name in sorted(to_install_packages_names)])))

        # remove conflicting packages
        if conflicting_new_system_packages:
            deleted_packages = True

            for package in conflicting_new_system_packages:
                del new_system.all_packages_dict[package.name]
            new_system = System(list(new_system.all_packages_dict.values()))
        else:
            deleted_packages = False

        # append packages
        new_system.append_packages(package_chunk)

        # last exit brooklyn
        # final check for sanity of the whole solution
        # we do not accept mistakes!
        if not deleted_packages and not (i == last_index):
            continue

        # delete packages whose deps are not fulfilled anymore
        while True:
            to_delete_packages = []
            for package in new_system.all_packages_dict.values():
                if packages_names_print_reason is not None and package.name in packages_names_print_reason:
                    if not new_system.are_all_deps_fulfilled(package, only_depends=True, print_reason=True):
                        to_delete_packages.append(package)
                else:
                    if not new_system.are_all_deps_fulfilled(package, only_depends=True):
                        to_delete_packages.append(package)

            if not to_delete_packages:
                break

            # print what will be done
            if print_way:
                packages_names_to_del = set([package.name for package in to_delete_packages])

                print("   {}    : {}"
                      "".format(Colors.BOLD(Colors.LIGHT_RED("Remove"))
                                , ", ".join([Colors.BOLD(Colors.LIGHT_MAGENTA(name))
                                             for name in sorted(packages_names_to_del)])))

            # actually delete the packages
            for package in to_delete_packages:
                del new_system.all_packages_dict[package.name]
            new_system = System(list(new_system.all_packages_dict.values()))

return new_system

Notice: "last exit brooklyn" should not be needed, as stated in the comment, but there may be cases where the user's installed system is in an inconsistent state: although nothing gets deleted, there are still packages whose deps are not all fulfilled. So: better safe than sorry... @rmarquis ;)

Finally: the dependency solving itself... When are you able to say that there really is no possible way of finding a solution to a dependency problem? A really important observation: since there are depends, makedepends and checkdepends, you HAVE to differentiate between them to be able to tell whether there is no solution. Since I am a mathematician at heart, I tried to solve a simpler problem first... So, forget about check- and makedepends for now. Everything is a normal dependency.

Another problem: "Normal" graph theory does not provide a useful representation (as far as I know), because there may be multiple providers for dependencies.

But one more thing first: what are dependencies? At first I tried to classify them: package, library, ... but as I said before: special cases suck. Hence dependencies are names. Strings. Nothing more. It may be the case that a dependency is provided by a package with the name of the dependency itself, great, but we do not really care.

Let us simplify even further. There are no problems in our simplified world: no dep-cycles, no conflicts, no deps which cannot be provided. This is simple enough to state the following: you want to install packages A, B, C, ... The dependencies of these packages are A_a, A_b, A_c, ..., B_a, B_b, ... (no duplicates, no deps of deps). The number of possible providers for the dep X_x is denoted as |X_x|. Hence there are |A_a| * |A_b| * ... * |B_a| * ... solutions for our dep problem; e.g. if A_a has two possible providers and A_b has three, that alone already accounts for six combinations.

Now let's go back, step by step, to the real world... That there may be deps of deps and duplicates is not a real problem, since we take a recursive approach and track for which deps we have already chosen a provider.

So, there are "real problems" now. We do not differentiate between the different types of problems yet. A valid solution for the dependency problem consists of an ordered list of packages: you install the packages in order and you are done. No chunking yet (which is anyway only needed because of dep-cycles in the repos, like the one between mesa and libglvnd - damnit).

Let us assume you have a partial solution consisting of two packages A and B. You want to append C and a problem occurs. We do not know anything about problems with concrete packages yet, there are just problems. What can be done? Well, this is pretty straightforward. All dependencies need to be fulfilled for the solution to be valid: simple boolean AND. A single dependency needs to be provided by at least one provider: simple boolean OR between the providers. So e.g. we have the deps X, Y and Z in total; X is provided by A or A_1, Y by B or B_1 and Z by C or C_1. Hence we get: (A OR A_1) AND (B OR B_1) AND (C OR C_1). We already tried A, B, C, which yields a problem, hence we can try the finite number of other possibilities.

This is fine, but impractical. Let's say we know between which packages the problem exists, e.g. we know the problem is between B and C. We can conclude that trying A_1 instead of A will not change anything, hence we are able to reduce the number of other solutions to try. Seems trivial, but this is an important observation.

Back to dependencies of dependencies. Now this trivial observation becomes powerful. For every package in the partial solution it is clear why it is in the solution: the package has been chosen as a provider for a dependency. Hence we are able to track the chain of decisions which led to this exact package being in the partial solution, and we are able to conclude which dependencies are worth trying other providers for. If a dependency is not included in the decision chain which led to the problem packages being in the solution, changing anything there will in fact not change anything about the problem. In practice this drastically reduces the number of other solutions to try.

Simplest case: you have a package which depends on something for which no provider exists. The only way to solve this problem is to track the path backwards and check whether there is a possibility of not installing this package at all.

Second case: conflict. You check the decision trees of all conflicting packages to see whether you can decide differently so that no conflict occurs.

Third, and to be honest the coolest case, since it is about one package but possibly two decision trees: the dep cycle. When finding the dep-cycle, it is possible that you reached the dep via different paths. E.g.

A -> B -> C -> D -> E -> C

The paths are A -> B -> C and D -> E -> C

Fun to track recursively, but possible, and the solution is quite nice.

Notice: hence it is indeed possible to solve dep-cycles... If, for example, E can also be provided by F, and F depends on nothing, we have found a valid solution.

Great, we have almost covered everything, just one more thing: depends, makedepends and checkdepends. This really required quite a bit of thinking, well, the other things too, but this one... meh... The solution is: get rid of the make- and checkdepends. That means: assume you only want to install a single package. If there is no solution for that, fine, then we are done. If there is, we are able to BUILD that package and hence no longer have to worry about its make- and checkdepends. Do that for every package. When you have done that, guess what? Only depends matter... Now the algo in Python. I have commented everything, so you will surely find the puzzle pieces discussed above in the code.

Calling of the algo

@staticmethod
def dep_solving(packages: Sequence['Package'], installed_system: 'System', upstream_system: 'System') -> List[
    List['Package']]:
    """
    Solves deps for packages.

    :param packages:                The packages in a sequence
    :param installed_system:        The system containing the installed packages
    :param upstream_system:         The system containing the known upstream packages
    :return:                        A list containing the solutions.
                                    Every inner list contains the packages for the solution topologically sorted
    """

    deps_to_deep_check = set()
    single_first = False

    while True:
        current_solutions = [DepAlgoSolution([], [], set())]
        found_problems = set()

        # calc solutions
        # for every single package first
        if single_first:
            for package in packages:
                new_solutions = []
                for solution in current_solutions:
                    solution.dict_call_as_needed = {package.name: True}
                    new_solutions.extend(
                        package.solutions_for_dep_problem(solution, found_problems, installed_system,
                                                          upstream_system, deps_to_deep_check))
                current_solutions = new_solutions

        # now for all packages together
        for solution in current_solutions:
            solution.dict_call_as_needed = {}
            for package in packages:
                solution.dict_call_as_needed[package.name] = True
        for package in packages:
            new_solutions = []
            for solution in current_solutions:
                new_solutions.extend(
                    package.solutions_for_dep_problem(solution, found_problems, installed_system, upstream_system,
                                                      deps_to_deep_check))
            current_solutions = new_solutions

        # delete invalid solutions
        current_solutions = [solution for solution in current_solutions if solution.is_valid]

        # in case of at least one solution, we are done
        if current_solutions:
            break

        deps_to_deep_check_length = len(deps_to_deep_check)
        for problem in found_problems:
            problem_packages_names = set([package.name for package in problem.relevant_packages])
            deps_to_deep_check |= problem_packages_names

        # if there are no new deps to deep check, we are done, too
        if len(deps_to_deep_check) == deps_to_deep_check_length and single_first:
            break
        elif len(deps_to_deep_check) == deps_to_deep_check_length:
            if len(packages) > 1:
                single_first = True
            else:
                break

    # output for user
    if found_problems and not current_solutions:
        aurman_error("While searching for solutions the following errors occurred:\n"
                     "{}\n".format("\n".join([aurman_note(problem, False, False) for problem in found_problems])),
                     True)

    return [solution.packages_in_solution for solution in current_solutions]

Algo itself

def solutions_for_dep_problem(self, solution: 'DepAlgoSolution', found_problems: Set['DepAlgoFoundProblems'],
                              installed_system: 'System', upstream_system: 'System',
                              deps_to_deep_check: Set[str]) -> List['DepAlgoSolution']:
    """
    Heart of this AUR helper. Algorithm for dependency solving.
    Also checks for conflicts, dep-cycles and topologically sorts the solutions.

    :param solution:                The current solution
    :param found_problems:          A set containing found problems while searching for solutions
    :param installed_system:        The currently installed system
    :param upstream_system:         The system containing the known upstream packages
    :param deps_to_deep_check:      Set containing deps to check all possible dep providers of
    :return:                        The found solutions
    """

    def filter_solutions(solutions: Sequence['DepAlgoSolution']) -> List['DepAlgoSolution']:
        """
        Filter given solutions so that only valid solutions are left
        or in case of no valid solutions only one invalid solution

        :param solutions:   The solutions to filter
        :return:            The filtered solutions
        """
        return_solutions: List['DepAlgoSolution'] = []

        for solution in solutions:
            if not return_solutions:
                return_solutions.append(solution)
                continue

            first_solution = return_solutions[0]
            if first_solution.is_valid and solution.is_valid:
                return_solutions.append(solution)
            elif first_solution.is_valid:
                continue
            elif solution.is_valid:
                return_solutions = [solution]

        return return_solutions

    if self in solution.installed_solution_packages:
        return [solution.solution_copy()]

    # dep cycle
    # dirty... thanks to dep cycle between mesa and libglvnd
    if self in solution.visited_packages and not (self.type_of is PossibleTypes.REPO_PACKAGE):
        # problem only relevant
        # if the solution is not already invalid
        if solution.is_valid:
            index_of_self = solution.visited_packages.index(self)
            cycle_packages = []
            for i in range(index_of_self, len(solution.visited_packages)):
                cycle_packages.append(solution.visited_packages[i])
            cycle_packages.append(self)

            # create the problem
            cycle_problem = DepAlgoCycle(cycle_packages)
            for package in cycle_packages:
                cycle_problem.relevant_packages.add(package)
                cycle_problem.relevant_packages |= set(solution.dict_to_way.get(package.name, []))
            found_problems.add(cycle_problem)
        invalid_sol = solution.solution_copy()
        invalid_sol.is_valid = False
        return [invalid_sol]

    # pacman has to handle dep cycles between repo packages
    elif self in solution.visited_packages:
        return [solution.solution_copy()]

    # copy solution and add self to visited packages
    solution: 'DepAlgoSolution' = solution.solution_copy()
    is_build_available: bool = self in solution.packages_in_solution
    own_way: List['Package'] = solution.dict_to_way.get(self.name, [])
    own_not_to_delete_deps: Set[str] = set()
    solution.visited_packages.append(self)
    current_solutions: List['DepAlgoSolution'] = [solution]

    # filter not fulfillable deps
    relevant_deps = self.relevant_deps()
    for dep in relevant_deps[:]:

        # skip since already provided
        if installed_system.provided_by(dep):
            continue

        # skip since built package available and dep is not a normal dependency
        # so it's make and/or check dep
        if is_build_available and dep not in self.relevant_deps(only_depends=True):
            continue

        # dep not fulfillable, solutions not valid
        if not upstream_system.provided_by(dep):
            for solution in current_solutions:
                solution.is_valid = False

            # create problem
            dep_problem = DepAlgoNotProvided(dep, self)
            dep_problem.relevant_packages.add(self)
            dep_problem.relevant_packages |= set(own_way)
            found_problems.add(dep_problem)

            relevant_deps.remove(dep)

    # AND - every dep has to be fulfilled
    # we filtered the unfulfillable deps,
    # hence at least one dep provider is available
    for dep in relevant_deps:

        # skip since already provided
        if installed_system.provided_by(dep):
            continue

        # skip since built package available and dep is not a normal dependency
        # so it's make and/or check dep
        if is_build_available and dep not in self.relevant_deps(only_depends=True):
            continue

        # fetch dep providers
        dep_providers = upstream_system.provided_by(dep)
        dep_providers_names = [package.name for package in dep_providers]
        dep_stripped_name = strip_versioning_from_name(dep)

        # we only need relevant dep providers
        # deps_to_deep_check will be filled
        # when we encounter problems as dep-cycle, conflicts ...
        if dep_stripped_name in dep_providers_names and dep not in deps_to_deep_check:
            dep_providers = [package for package in dep_providers if package.name == dep_stripped_name]

        # OR - at least one of the dep providers needs to provide the dep
        finished_solutions = [solution for solution in current_solutions if dep in solution.visited_names]
        not_finished_solutions = [solution for solution in current_solutions if dep not in solution.visited_names]

        # check if dep provided by one of the packages already in a solution
        new_not_finished_solutions = []
        for solution in not_finished_solutions:
            if System(list(solution.installed_solution_packages)).provided_by(dep):
                finished_solutions.append(solution)
            else:
                new_not_finished_solutions.append(solution)
        not_finished_solutions = new_not_finished_solutions

        # track deps which may not be deleted
        for solution in current_solutions:
            if dep not in solution.not_to_delete_deps:
                solution.not_to_delete_deps.add(dep)
                own_not_to_delete_deps.add(dep)

        # calc and append new solutions
        current_solutions = finished_solutions
        # used for tracking problems
        new_problems_master: List[Set['DepAlgoFoundProblems']] = []
        found_problems_copy: Set['DepAlgoFoundProblems'] = set(found_problems)
        for solution in not_finished_solutions:

            # add dep to visited names
            # and create another container
            # for problem tracking
            solution.visited_names.add(dep)
            new_problems: List[Set['DepAlgoFoundProblems']] = []

            for dep_provider in dep_providers:
                # way to the package being called in the current solution
                if dep_provider.name not in solution.dict_to_way:
                    way_added = True
                    solution.dict_to_way[dep_provider.name] = own_way[:]
                    solution.dict_to_way[dep_provider.name].append(self)
                else:
                    way_added = False
                # tracking for which deps the package being called has been chosen as provider
                if dep_provider.name not in solution.dict_to_deps:
                    solution.dict_to_deps[dep_provider.name] = set()
                solution.dict_to_deps[dep_provider.name].add(dep)

                # call this function recursively on the dep provider
                # and yield an empty found_problems set instance
                found_problems.clear()
                current_solutions.extend(
                    dep_provider.solutions_for_dep_problem(solution, found_problems, installed_system,
                                                           upstream_system, deps_to_deep_check))
                # save the new problems
                new_problems.append(set(found_problems))
                # remove added things
                solution.dict_to_deps[dep_provider.name].remove(dep)
                if way_added:
                    del solution.dict_to_way[dep_provider.name]

            # reset the problems to the problems
            # we had before calling the dep
            found_problems.clear()
            for problem in found_problems_copy:
                found_problems.add(problem)

            # if there is at least one valid solution
            # problems are not relevant
            # hence add an empty set containing no problems
            for problems in new_problems:
                if not problems:
                    new_problems_master.append(set())
                    break

            # if there are problems contained in all return values
            # show them to the user
            # will most likely be unfulfillable deps in general
            else:
                prob_in_all_ret = set.intersection(*new_problems)
                if prob_in_all_ret:
                    new_problems_master.append(prob_in_all_ret)
                # otherwise append all found problems
                else:
                    new_problems_master.append(set.union(*new_problems))

        # again - at least one valid solution
        # means new problems are not relevant
        if not_finished_solutions:
            for problems in new_problems_master:
                if not problems:
                    break
            else:
                for problem in set.union(*new_problems_master):
                    found_problems.add(problem)

        # filter solutions so that irrelevant solutions are not being
        # used anymore
        # great impact on the performance
        current_solutions = filter_solutions(current_solutions)

    # conflict checking
    for solution in current_solutions:
        # as with dep cycles,
        # conflicts are only relevant
        # if the solution is not already invalid
        if not solution.is_valid:
            continue

        # generate hypothetic system containing the packages of the current solution
        # and check for conflicts with that system
        installed_packages = list(solution.installed_solution_packages)
        conf_system = System(installed_packages).conflicting_with(self)

        # if there are no conflicts, nothing will get deleted, so we may
        # safely assume that we do not get an invalid solution
        if not conf_system:
            continue

        # append the whole current solution to the currently
        # installed system
        # may be empty in case of deep_search
        packages_to_append = solution.packages_in_solution[:]
        packages_to_append.append(self)
        new_system = installed_system.hypothetical_append_packages_to_system(packages_to_append)

        # prepare message for conflict
        additional_message = ""

        # if self cannot be added, this solution
        # is clearly not valid
        if self.name not in new_system.all_packages_dict:
            additional_message = "Tried to install {}, " \
                                 "but it was not possible." \
                                 "".format(Colors.BOLD(Colors.LIGHT_MAGENTA(self.name)))
            is_possible = False
        else:
            is_possible = True

        # these deps have to remain provided,
        # since they are needed for a package which
        # has not been installed yet
        # e.g. A needs B and C, B has been solved with this algo
        # but C not, hence B must remain provided
        # otherwise A cannot be installed
        for dep in solution.not_to_delete_deps:
            if not is_possible:
                break
            if not new_system.provided_by(dep):
                additional_message = "While trying to install {}, " \
                                     "the needed dependency {} has been removed." \
                                     "".format(Colors.BOLD(Colors.LIGHT_MAGENTA(self.name))
                                               , Colors.BOLD(Colors.LIGHT_MAGENTA(dep)))
                is_possible = False
                break

        # same for packages which have to remain installed
        for package in installed_packages:
            if not is_possible:
                break
            if solution.dict_call_as_needed.get(package.name, False) \
                    and package.name not in new_system.all_packages_dict:
                additional_message = "The package {} had to remain installed, " \
                                     "but has been removed.\n" \
                                     "The package which lead to the removal is {}" \
                                     "".format(Colors.BOLD(Colors.LIGHT_MAGENTA(package.name))
                                               , Colors.BOLD(Colors.LIGHT_MAGENTA(self.name)))

                break

        # solution possible at this point if there are no installed packages
        else:
            # check which packages have been removed
            # due to adding the packages
            for package in installed_packages:
                # remove all remainings of the package
                # besides the knowledge that the package
                # has already been built
                if package.name not in new_system.all_packages_dict:
                    solution.installed_solution_packages.remove(package)
                    if package.name in solution.dict_to_deps:
                        for dep in solution.dict_to_deps[package.name]:
                            solution.visited_names.remove(dep)
                        del solution.dict_to_deps[package.name]
                    if package.name in solution.dict_to_way:
                        del solution.dict_to_way[package.name]

            # for the case that there are no installed packages
            if is_possible:
                continue

        # solution not possible!
        solution.is_valid = False
        conflicting_packages = set(conf_system)
        conflicting_packages.add(self)
        ways_to_conflict = []
        for package in conflicting_packages:
            way_to_conflict = solution.dict_to_way.get(package.name, [])[:]
            way_to_conflict.append(package)
            ways_to_conflict.append(way_to_conflict)

        # create the problem
        conflict_problem = DepAlgoConflict(conflicting_packages, ways_to_conflict)
        conflict_problem.additional_message = additional_message
        for way_to_conflict in ways_to_conflict:
            for package in way_to_conflict:
                conflict_problem.relevant_packages.add(package)
        found_problems.add(conflict_problem)

    # we have valid solutions left, so the problems are not relevant
    if [solution for solution in current_solutions if solution.is_valid]:
        found_problems.clear()

    # add self to packages in solution, those are always topologically sorted
    for solution in current_solutions:
        solution.not_to_delete_deps -= own_not_to_delete_deps
        solution.installed_solution_packages.add(self)
        solution.packages_in_solution.append(self)
        solution.visited_packages.remove(self)

    # may contain invalid solutions !!!
    # but also filtered
    return filter_solutions(current_solutions)