
clang-tidy: fix for empty config. #3202

Merged 3 commits into espressomd:python on Oct 12, 2019
Conversation

KaiSzuttor (Member)

We currently only have a CI job that runs clang-tidy with the maxset config. This PR fixes the issues clang-tidy found with the empty config.

KaiSzuttor (Member, Author)

Maybe we should replace the image of the empty-config job with a clang container and build with clang-tidy?
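The suggestion above could look roughly like the following GitLab CI job. This is a sketch under stated assumptions: the job name, the container image, and the build commands are all placeholders for illustration, not the project's actual configuration; only `CMAKE_CXX_CLANG_TIDY` is a real CMake variable (available since CMake 3.6).

```yaml
# Hypothetical GitLab CI job: build the empty config in a clang container
# and run clang-tidy through CMake's built-in integration.
# Job name, image, and paths are assumptions, not the project's actual setup.
empty-clang-tidy:
  image: espressomd/clang:latest   # assumed clang container image
  script:
    - mkdir build && cd build
    - cmake .. -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_CXX_CLANG_TIDY="clang-tidy;--warnings-as-errors=*"
    - make -j4
```

With `CMAKE_CXX_CLANG_TIDY` set, CMake invokes clang-tidy on every translation unit as it compiles, so the job doubles as both a build check and a static-analysis check.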

jngrad (Member) commented Sep 25, 2019

I wouldn't mind, as long as the compilation time remains within ~20 min (currently it's 7 min).

codecov bot commented Sep 25, 2019

Codecov Report

Merging #3202 into python will decrease coverage by <1%.
The diff coverage is 71%.


@@          Coverage Diff           @@
##           python   #3202   +/-   ##
======================================
- Coverage      85%     85%   -1%     
======================================
  Files         530     530           
  Lines       26025   26024    -1     
======================================
- Hits        22158   22157    -1     
  Misses       3867    3867
Impacted Files                                   Coverage     Δ
src/core/observables/MagneticDipoleMoment.hpp    100% <100%>  (ø) ⬆️
src/core/observables/Current.hpp                 100% <100%>  (ø) ⬆️
src/core/observables/DipoleMoment.hpp            100% <100%>  (ø) ⬆️
src/core/constraints/ShapeBasedConstraint.cpp    87% <50%>    (-1%) ⬇️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 295c398...20a9d23.

RudolfWeeber (Contributor) commented Sep 25, 2019 via email

@jngrad jngrad added this to the Espresso 5 milestone Sep 26, 2019
jngrad (Member) commented Oct 12, 2019

bors r+

bors bot added a commit that referenced this pull request Oct 12, 2019
3202: clang-tidy: fix for empty config. r=jngrad a=KaiSzuttor

we only have a CI job with maxset and clang-tidy. This PR fixes issues found with empty config.

Co-authored-by: Kai Szuttor <[email protected]>
jngrad (Member) commented Oct 12, 2019

bors r-
(another merge fail in the staging branch)

bors bot (Contributor) commented Oct 12, 2019

Canceled

jngrad (Member) commented Oct 12, 2019

bors r+

bors bot added a commit that referenced this pull request Oct 12, 2019
3202: clang-tidy: fix for empty config. r=jngrad a=KaiSzuttor

we only have a CI job with maxset and clang-tidy. This PR fixes issues found with empty config.

Co-authored-by: Kai Szuttor <[email protected]>
jngrad (Member) commented Oct 12, 2019

@KaiSzuttor we need to investigate GitLab CI on Monday: canceling a bors job and restarting it later causes a random merge commit to start CI (see the latest GitLab pipelines, reproduced below).

Status   Pipeline Commit
-------- -------- --------------------
running  #9620    staging 0b7b3573     <--- real one
skipped  #9619    staging.tmp 987d70e0
running  #9618    staging 128d59c9     <--- from the 2nd `bors r+`, but already in python
failed   #9617    staging 75d4715c     <--- from the 1st `bors r+`
running  #9616    PR-3245 fb3c8fe1
passed   #9614    python 128d59c9

mkuron (Member) commented Oct 12, 2019

canceling a bors job and restarting it later will cause a random merge commit to start CI

The problem is that GitLab doesn't (and can't) know that a bors job was cancelled. Whenever something is force-pushed to the staging branch, the previously running pipeline is cancelled and one is started for the latest commit.
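For context, the auto-cancel behavior described above corresponds to GitLab's project-level "Auto-cancel redundant pipelines" setting; since GitLab 12.3 it can also be tuned per job with the `interruptible` keyword. A minimal sketch (the job name and script are placeholders, not this project's CI):

```yaml
# Hypothetical .gitlab-ci.yml snippet: mark a job as safe to cancel
# when a newer commit is pushed to the same branch.
build:
  interruptible: true   # may be auto-cancelled by a newer pipeline
  script:
    - ./run_build.sh    # placeholder build script
```

Jobs without `interruptible: true` are allowed to finish even when the rest of the redundant pipeline is cancelled.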

bors bot (Contributor) commented Oct 12, 2019

Build succeeded

@bors bors bot merged commit 20a9d23 into espressomd:python Oct 12, 2019
jngrad (Member) commented Oct 13, 2019

Whenever something is force-pushed to the staging branch, the previously running pipeline is cancelled and one is started for the latest commit.

Didn't notice it until two days ago. This defeats the purpose of automation. Thanks for the fast reply. I've added a note in our CI documentation.

KaiSzuttor (Member, Author) commented Oct 13, 2019

Whenever something is force-pushed to the staging branch, the previously running pipeline is cancelled and one is started for the latest commit.

Why? I thought it only gets canceled if the pipeline is not running; at least that's how it worked before.

mkuron (Member) commented Oct 13, 2019

i thought it only gets canceled if the pipeline is not running.

That's the automatic cancellation of redundant pipelines, which GitLab has had for at least a year now. Even if we disabled it (which we shouldn't, because it substantially reduces the number of pipelines we build for pull requests), one couldn't expect the pipeline to continue after a force-push to staging, because the automatic garbage collection will eventually remove the dangling commit.

This sounds more like a design flaw in bors. It can't reasonably force-push to staging and expect all intermediate pipelines to finish.
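For reference, bors-ng reads its settings from a `bors.toml` in the repository root; the status checks it waits on and the timeout are the main knobs relevant to the pipeline races discussed above. A minimal sketch (the status-check name is a placeholder, not this project's actual setting):

```toml
# Hypothetical bors.toml sketch; the status name below is a placeholder.
status = ["ci/gitlab/staging"]  # checks bors waits on before merging
timeout_sec = 7200              # give the staging pipeline two hours
```

A longer `timeout_sec` does not prevent the force-push race itself, but it reduces the chance that bors retries (and force-pushes staging again) while an earlier pipeline is still running.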

@jngrad jngrad removed this from the Espresso 5 milestone Nov 20, 2019