
feat: prepare and pool processes #87

Merged: 19 commits merged into main from process-prime on Apr 21, 2024

Conversation

@WinPlay02 (Contributor) commented Apr 17, 2024

Closes #85

Summary of Changes

  • Use a process pool to keep started processes waiting
  • The maximum number of pipeline processes is now set to 4.
  • Reuse started processes. This should be safe: the same pipeline process cannot be used by multiple pipelines at the same time, and because the meta path is reset to remove the custom generated Safe-DS pipeline code, only global library imports (and settings) remain between runs. If this is a concern, maxtasksperchild can be set to 1, in which case pipeline processes are not reused.
  • Reuse the shared memory location for saving placeholders if the memoization infrastructure has already attached such a location to the object being saved.


github-actions bot commented Apr 17, 2024

🦙 MegaLinter status: ✅ SUCCESS

| Descriptor    | Linter   | Files | Fixed | Errors | Elapsed time |
|---------------|----------|-------|-------|--------|--------------|
| ✅ PYTHON     | black    | 2     | 0     | 0      | 0.78s        |
| ✅ PYTHON     | mypy     | 2     | -     | 0      | 2.24s        |
| ✅ PYTHON     | ruff     | 2     | 0     | 0      | 0.03s        |
| ✅ REPOSITORY | git_diff | yes   | -     | no     | 0.02s        |

See detailed report in MegaLinter reports
Set VALIDATE_ALL_CODEBASE: true in mega-linter.yml to validate all sources, not only the diff

MegaLinter is graciously provided by OX Security


codecov bot commented Apr 17, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 100.00%. Comparing base (50d831f) to head (fa53a52).

Additional details and impacted files
@@            Coverage Diff            @@
##              main       #87   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files           14        14           
  Lines          733       750   +17     
=========================================
+ Hits           733       750   +17     

☔ View full report in Codecov by Sentry.

@lars-reimann (Member)

The max. amount of pipeline processes is now set to the amount of available CPU cores.

We probably don't need that many:

[screenshot]

Most of the time, the processes will idle. As long as the runner is local, 2 workers should be enough. I don't think the user would fire off pipeline runs without waiting for completion of the previous one.

@WinPlay02 (Contributor, Author)

Most of the time, the processes will idle. As long as the runner is local, 2 workers should be enough. I don't think the user would fire off pipeline runs without waiting for completion of the previous one.

I see, that's a lot of memory wasted. But two processes are easily saturated if two tables are opened in the EDA view (both waiting for stats). A normal execution would then only be queued and would have to wait for any running pipelines to complete first.
From my observations, the statistical analysis of the EDA view performs another pipeline execution after the table has been fetched.
I'd say 4 would probably be a good middle ground to limit the processes to.

@WinPlay02 (Contributor, Author)

Also, coverage seems broken again. It doesn't seem to like multiprocessing pools very much.

@lars-reimann (Member) commented Apr 17, 2024

I'd say 4 would probably be a good middle ground to limit the processes to.

Sure, seems good to me. Can easily be tweaked later.

The statistical analysis of the EDA view does another pipeline execution after the table has been fetched, according to my observations.

But then the original worker is already free again, no?

@lars-reimann (Member) commented Apr 17, 2024

Also, coverage seems broken again. It doesn't seem to like multiprocessing pools very much.

I would not consider this a blocker for this PR. I'll investigate whether that can be fixed later.

Edit: Maybe this already fixes the issue.

@WinPlay02 (Contributor, Author)

But then the original worker is already free again, no?

When viewing only one table, this is true. But everything would break, as soon as two tables are waiting for stats. This might be too cautious, but since gathering the stats for the EDA view takes a bit of time, this could be irritating.

@lars-reimann (Member)

But everything would break, as soon as two tables are waiting for stats.

Break in what way?

@WinPlay02 (Contributor, Author)

But everything would break, as soon as two tables are waiting for stats.

Break in what way?

Maybe break is not the right description.
Manual executions wait for as long as stats and plots are being calculated. For large tables, this can take some time.
As a user, I'd think something broke, as nothing (obvious) is happening anymore.

@lars-reimann (Member)

But everything would break, as soon as two tables are waiting for stats.

Break in what way?

Maybe break is not the right description. Manual executions wait for as long as stats and plots are being calculated. For large tables, this can take some time. As a user, I'd think something broke, as nothing (obvious) is happening anymore.

I see, that's an issue for another day (and someone else 😉).

@WinPlay02 (Contributor, Author)

This PR should lead to a modest improvement in startup time and a small improvement in runtime (depending mostly on the content).

I don't know what the target is, or how much potential for startup optimization is left, but I was able to get small and medium tables within a few hundred milliseconds of pressing the "Explore" lens.

@WinPlay02 WinPlay02 marked this pull request as ready for review April 17, 2024 18:36
@WinPlay02 WinPlay02 requested a review from a team as a code owner April 17, 2024 18:36
@lars-reimann (Member)

This PR should lead to a modest improvement in startup time and a small improvement in runtime (depending mostly on the content).

I don't know what the target is, or how much potential for startup optimization is left, but I was able to get small and medium tables within a few hundred milliseconds of pressing the "Explore" lens.

It already feels a lot better. The last idea I'd have is to add an initializer to the process pool doing something like

def _init():
    from safeds.data.tabular.containers import Table

    Table()

Then all processes can immediately display a Table.
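For illustration, the effect of such a pool initializer can be checked end-to-end. In this hedged sketch, the stdlib module decimal stands in for importing safeds' Table, and the helper names (`_init`, `_module_preloaded`, `demo`) are made up:

```python
import multiprocessing
import sys


def _init() -> None:
    # Stand-in for `from safeds.data.tabular.containers import Table`:
    # warm each worker's import cache before any real task arrives.
    import decimal  # noqa: F401


def _module_preloaded(_: int) -> bool:
    # Runs inside a worker; the initializer has already executed there.
    return "decimal" in sys.modules


def demo() -> list[bool]:
    # Every worker reports that the module was imported before its first task.
    with multiprocessing.Pool(processes=2, initializer=_init) as pool:
        return pool.map(_module_preloaded, range(2))
```

Because Python caches imports in sys.modules, later imports of the same module inside a task are then near-free.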

@lars-reimann (Member)

There seems to be an issue with the memoization of block & expression lambdas now:

package test

pipeline whoSurvived {
    val titanic = Table
        .fromCsvFile("titanic.csv")
        .removeColumns(["id", "ticket", "cabin", "port_embarked", "fare"]);

    val filtered = titanic.filterRows((row) ->
        row.getValue("age") as Float <= 100
    );
}

If you change the 100 and explore filtered a couple of times, you eventually stop getting updates.

@WinPlay02 (Contributor, Author)

There seems to be an issue with the memoization of block & expression lambdas now:

package test

pipeline whoSurvived {
    val titanic = Table
        .fromCsvFile("titanic.csv")
        .removeColumns(["id", "ticket", "cabin", "port_embarked", "fare"]);

    val filtered = titanic.filterRows((row) ->
        row.getValue("age") as Float <= 100
    );
}

If you change the 100 and explore filtered a couple of times, you eventually stop getting updates.

I can't reproduce this right now (but I'm seeing another, somewhat related error during exploration: Error during pipeline execution: Document not found).
I suppose this is the problem from above: exploring causes the stats collection to fill up all available pipeline processes, which makes everything after that stall.
Considering that, I don't know what would be special about the lambdas that breaks the memoization/runner.

@lars-reimann (Member) commented Apr 21, 2024

I've started looking into this a little: it seems like inspect.getsource itself returns outdated values:

2024-04-21 20:57:34.616 [debug] root:Received Message: {"type":"program","id":"166d653e-e280-4455-946c-374a60f4e2a1","data":{"code":{"demo":{"gen_titanic":"# Imports ----------------------------------------------------------------------\r\n\r\nimport safeds_runner\r\nfrom safeds.data.tabular.containers import Column\r\n\r\n# Pipelines --------------------------------------------------------------------\r\n\r\ndef example3():\r\n    column = safeds_runner.memoized_static_call(\"safeds.data.tabular.containers.Column\", lambda *_ : Column('test', data=[1, 2, 3]), ['test', [1, 2, 3]], [])\r\n    safeds_runner.save_placeholder('column', column)\r\n    def __gen_lambda_0(param1):\r\n        return (param1) < (2)\r\n    allMatch = safeds_runner.memoized_dynamic_call(\"all\", None, [column, __gen_lambda_0], [])\r\n    safeds_runner.save_placeholder('allMatch', allMatch)\r\n","gen_titanic_example3":"from .gen_titanic import example3\r\n\r\nif __name__ == '__main__':\r\n    example3()\r\n"}},"main":{"modulepath":"demo","module":"titanic","pipeline":"example3"},"cwd":"c:\\Users\\Lars\\OneDrive\\Desktop\\test"}}
2024-04-21 20:57:34.621 [debug] root:Looking up value for key ('safeds.data.tabular.containers._column.Column.all', (ExplicitIdentityWrapperLazy(_value=Column('test', [1, 2, 3]), memory=SharedMemory('wnsm_78221081', size=4096), id=UUID('ae7dbd79-497e-4ebb-ac2f-6aaf47184952'), hash=504026043636193121), '    def __gen_lambda_0(param1):\n        return (param1) < (4)\n'), ())

In the program message, the lambda code is

def __gen_lambda_0(param1):
    return (param1) < (2)

but in the key it's

def __gen_lambda_0(param1):
    return (param1) < (4)

I've also had it throw an exception after adding more lines to a pipeline that contain lambdas:

2024-04-21 21:00:18.067 [debug] [Runner] [9fd84cfe-e31c-4201-b80f-86cef175ae0a] lineno is out of bounds
	at E:\Repositories\safe-ds\Runner\src\safeds_runner\server\_pipeline_manager.py line 298
	at <frozen runpy> line 226
	at <frozen runpy> line 98
	at <frozen runpy> line 88
	at demo/gen_titanic_example3 line 4
	at demo/gen_titanic line 19 (mapped to 'titanic.sds' line 7)
	at E:\Repositories\safe-ds\Runner\src\safeds_runner\server\_pipeline_manager.py line 419
	at E:\Repositories\safe-ds\Runner\src\safeds_runner\server\_memoization_map.py line 146
	at E:\Repositories\safe-ds\Runner\src\safeds_runner\server\_memoization_utils.py line 396
	at E:\Repositories\safe-ds\Runner\src\safeds_runner\server\_memoization_utils.py line 343
	at E:\Repositories\safe-ds\Runner\src\safeds_runner\server\_memoization_utils.py line 343
	at E:\Repositories\safe-ds\Runner\src\safeds_runner\server\_memoization_utils.py line 345
	at C:\Users\Lars\AppData\Local\Programs\Python\Python312\Lib\inspect.py line 1282
	at C:\Users\Lars\AppData\Local\Programs\Python\Python312\Lib\inspect.py line 1264
	at C:\Users\Lars\AppData\Local\Programs\Python\Python312\Lib\inspect.py line 1128
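One plausible mechanism behind the stale source (hedged, not confirmed by the thread): inspect.getsource reads file contents through the linecache module. For ordinary files, inspect invalidates the cache against the file's mtime, but the runner's generated pipeline modules have no backing file to check against, so cached source can linger. The caching layer itself is easy to demonstrate by calling linecache directly on a throwaway file:

```python
import linecache
import pathlib
import tempfile

# Write a file, read it through linecache, then change it on disk:
# the cached lines are served until the cache is explicitly cleared.
path = pathlib.Path(tempfile.mkdtemp()) / "gen_demo.py"
path.write_text("x = 2\n")

first = linecache.getlines(str(path))   # reads the file and caches it
path.write_text("x = 4\n")              # change the file on disk
stale = linecache.getlines(str(path))   # cache hit: still the old text
linecache.clearcache()                  # drop all cached source lines
fresh = linecache.getlines(str(path))   # re-read: now the new text
```

In a setting like the runner's, clearing (or selectively invalidating) the cache when new pipeline code arrives would force inspect.getsource to see the updated lambda source.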

@lars-reimann (Member)

Looks like this Python bug is back. Got it working now 🎉.

@lars-reimann (Member) left a comment

Execution time feels great now, awesome stuff!

@lars-reimann lars-reimann merged commit e5e7011 into main Apr 21, 2024
8 checks passed
@lars-reimann lars-reimann deleted the process-prime branch April 21, 2024 20:30
@WinPlay02 (Contributor, Author)

Thanks for investigating further, and great that it is now working 🎉

lars-reimann pushed a commit that referenced this pull request Apr 22, 2024
## [0.12.0](v0.11.0...v0.12.0) (2024-04-22)

### Features

* handle list of filenames in `absolute_path` and `file_mtime` ([#89](#89)) ([50d831f](50d831f)), closes [#88](#88)
* prepare and pool processes ([#87](#87)) ([e5e7011](e5e7011)), closes [#85](#85)
@lars-reimann (Member)

🎉 This PR is included in version 0.12.0 🎉

The release is available on:

Your semantic-release bot 📦🚀

@lars-reimann lars-reimann added the released Included in a release label Apr 22, 2024
Labels
released Included in a release
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Improve startup time of workers
3 participants