Performance regression in 2.7.0 / 3.0.0 (maybe caused by #626?) #719

Closed
sratz opened this issue Mar 4, 2022 · 4 comments · Fixed by #724
sratz commented Mar 4, 2022

I've noticed that in 2.7.0 the dependency and classpath resolution phases are slower than in 2.6.0.

For a non-parallelized build this adds about 7-10 minutes to our build time (727 artifacts):
2.6.0: ~ 25 minutes
2.7.0: ~ 35 minutes

I am currently trying to get some more concrete numbers with hot caches and a populated ~/.m2/repository.

Initial bisecting shows that this could be caused by #626.

sratz commented Mar 4, 2022

So I did some testing comparing 2.7.0 vs. 2.7.0 with 774e135 from #626 reverted:
Same machine, first a build using `mvn clean verify` to warm up the caches, then a second run using
`mvn clean verify -o` to force offline mode and work on the caches only.

Here are the numbers from the 2nd runs:

|                              | 2.7.0    | 2.7.1-SNAPSHOT (2.7.0 + reverted 774e135844b20b46ef71b6de4463126aa0f6da9b) |
| ---------------------------- | -------- | -------- |
| dependency resolution phase  | 00:08:17 | 00:03:33 |
| build phase (clean verify)   | 00:19:25 | 00:18:58 |
| total                        | 00:27:42 | 00:22:31 |

laeubi commented Mar 4, 2022

Can you set a breakpoint to see how many units are queried? I think we might use some caching, because most of the units should be the same for all projects.

laeubi added a commit to laeubi/tycho that referenced this issue Mar 7, 2022
… to classpath

Currently a very simple approach is used that has very bad
runtime complexity. This is now improved in the following way:

- whether or not an IU is a fragment is cached
- while iterating, matched fragments are removed
- if the list of fragments is empty (all matched), break out of the loop

Signed-off-by: Christoph Läubrich <[email protected]>
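
For context, a minimal Java sketch of the three points above, using made-up placeholder types (`Unit`, `isFragment()`, `matches()`) rather than the actual Tycho/p2 classes:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Placeholder types and names for illustration only -- not the real Tycho/p2 API.
interface Unit {
    boolean isFragment();       // potentially expensive check in the real resolver
    boolean matches(Unit host); // does this fragment attach to the given host unit?
}

class FragmentMatching {

    // 1) Cache whether a unit is a fragment so the check runs at most once per IU.
    private final Map<Unit, Boolean> fragmentCache = new HashMap<>();

    private boolean isFragment(Unit unit) {
        return fragmentCache.computeIfAbsent(unit, Unit::isFragment);
    }

    /** Assign every fragment to the first host unit it matches. */
    Map<Unit, List<Unit>> addMatchingFragments(Collection<Unit> allUnits, List<Unit> hosts) {
        List<Unit> remainingFragments = new ArrayList<>();
        for (Unit unit : allUnits) {
            if (isFragment(unit)) {
                remainingFragments.add(unit);
            }
        }
        Map<Unit, List<Unit>> fragmentsByHost = new HashMap<>();
        for (Unit host : hosts) {
            // 2) Remove fragments as soon as they are matched, so later hosts
            //    only scan the fragments that are still unassigned.
            Iterator<Unit> iterator = remainingFragments.iterator();
            while (iterator.hasNext()) {
                Unit fragment = iterator.next();
                if (fragment.matches(host)) {
                    fragmentsByHost.computeIfAbsent(host, h -> new ArrayList<>()).add(fragment);
                    iterator.remove();
                }
            }
            // 3) All fragments matched: stop scanning the remaining hosts early.
            if (remainingFragments.isEmpty()) {
                break;
            }
        }
        return fragmentsByHost;
    }
}
```

The idea is simply to do less work per host (only unmatched fragments are scanned) and to stop as soon as nothing is left to match.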
laeubi commented Mar 7, 2022

@sratz can you check if that reduces the execution times in your case?

Besides that, you should really try to enable parallel execution; as we have enabled parallel compilation again in Tycho 2.7, this should reduce the build times significantly when building a lot of modules.
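
For reference, Maven's parallel build mode is enabled with the `-T` flag, e.g. `mvn -T 1C clean verify` for one build thread per CPU core; how much it helps depends on the shape of the module graph.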

sratz commented Mar 7, 2022

Already better with #721, but it still bottlenecks in the loop over addMatchingFragments().

I've added another improvement on top of your change in #724.

With that, the performance is almost on par with 2.6.0 again.
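
Not necessarily what #724 actually implements, but as an illustration of how such a per-unit loop can be avoided altogether: fragments can be indexed by their host once, turning the matching into a map lookup (placeholder names again, not real Tycho/p2 API):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative placeholder; getHostName() is not a real Tycho/p2 method.
interface FragmentUnit {
    String getHostName();
}

class FragmentIndex {

    private final Map<String, List<FragmentUnit>> byHost = new HashMap<>();

    // Build the index once up front instead of scanning every fragment per unit.
    FragmentIndex(Collection<FragmentUnit> fragments) {
        for (FragmentUnit fragment : fragments) {
            byHost.computeIfAbsent(fragment.getHostName(), k -> new ArrayList<>()).add(fragment);
        }
    }

    // Matching fragments for a given host are then a single lookup.
    List<FragmentUnit> matching(String hostName) {
        return byHost.getOrDefault(hostName, List.of());
    }
}
```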

laeubi added a commit that referenced this issue Mar 7, 2022
sratz pushed a commit to sratz/tycho that referenced this issue Mar 28, 2022
… to classpath

sratz pushed a commit to sratz/tycho that referenced this issue Mar 30, 2022
… to classpath
laeubi added a commit that referenced this issue Mar 31, 2022
laeubi added this to the 3.0 milestone Sep 21, 2022