(Re) Introduce an artifact caching proxy for ci.jenkins.io #2752
Comments
First of all, read #938 (reverted by #2047); I am not sure offhand which infra repo had the actual proxy configuration that you could use as a starting point. You would need to do a bit of digging. I recall it being nginx configured with a simple LRU cache of 2xx results, i.e., successful retrievals of release or snapshot artifacts.
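For illustration, here is a minimal sketch of that kind of setup, assuming plain nginx with `proxy_cache` in front of repo.jenkins-ci.org; the paths, zone name, sizes, and retention values are placeholders, not the configuration from the old infra repo (the hostname is taken from the settings.xml example later in this thread):

```bash
# Sketch only: write an nginx site config that caches successful responses
# from repo.jenkins-ci.org. All names, sizes, and retention values are illustrative.
cat > /etc/nginx/conf.d/artifact-caching-proxy.conf <<'EOF'
# max_size makes nginx evict least-recently-used entries once the cache is full.
proxy_cache_path /var/cache/nginx/artifacts levels=1:2 keys_zone=artifacts:50m
                 max_size=20g inactive=30d use_temp_path=off;

server {
    listen 8080;
    server_name repo.do.jenkins.io;  # placeholder hostname

    location / {
        proxy_pass https://repo.jenkins-ci.org/;
        proxy_cache artifacts;
        proxy_cache_valid 200 206 30d;                # only cache successful retrievals
        proxy_cache_use_stale error timeout updating; # serve stale entries if upstream is down
        add_header X-Cache-Status $upstream_cache_status;
    }
}
EOF
nginx -s reload
```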
As a first approximation, revert jenkins-infra/pipeline-library#135 + jenkins-infra/pipeline-library#216 + jenkins-infra/pipeline-library#219 (while keeping some positive things from those PRs, such as the removal of obsolete JDK 7 support).
Many thanks for the pointers @jglick! We've started refreshing https://github.com/jenkins-infra/docker-repo-proxy (jenkins-infra/docker-repo-proxy#5), which has the behavior you describe, so it sounds like we are headed in the right direction! (I'm currently trying this with a local build of a plugin before deploying to production.) With the information you gave, we should have enough for a first version soon.
Oh, https://github.com/jenkins-infra/docker-repo-proxy, I see. If you get the service running, I can help draft a
Yeah, you can access it via

I was wondering if we would have a mirror per cloud, and then determine which cloud we were running on, to minimise bandwidth use? But I guess that can be added on top.
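One possible shape for the per-cloud selection mentioned above, purely as a sketch: assume the agent image exposes a hypothetical `CLOUD_PROVIDER` environment variable and the build picks the matching proxy before invoking Maven. Only repo.do.jenkins.io appears elsewhere in this thread; the other hostnames and the template file are invented for illustration.

```bash
#!/usr/bin/env bash
# Sketch: pick the artifact caching proxy closest to the agent to avoid
# cross-cloud bandwidth. CLOUD_PROVIDER is a hypothetical variable that the
# agent image would set; hosts other than repo.do.jenkins.io are made up.
set -euo pipefail

case "${CLOUD_PROVIDER:-}" in
  digitalocean) PROXY_URL="https://repo.do.jenkins.io/public/" ;;
  aws)          PROXY_URL="https://repo.aws.jenkins.io/public/" ;;    # illustrative
  azure)        PROXY_URL="https://repo.azure.jenkins.io/public/" ;;  # illustrative
  *)            PROXY_URL="https://repo.jenkins-ci.org/public/" ;;    # no proxy: go direct
esac

# Substitute the chosen URL into a settings.xml template, then build with it.
sed "s|@PROXY_URL@|${PROXY_URL}|" settings.xml.template > settings.xml
mvn -s settings.xml clean verify
```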
Putting this on pause (not enough bandwidth for the team for now) + JFrog works again as expected.
Slow again today AFAICT.
All the successful plugin bill of materials jobs run over the weekend were run with the artifact caching proxy disabled. When the artifact caching proxy is enabled for plugin bill of materials jobs, there is a high overall failure rate of the job. The failure often does not become visible until 90 minutes or more into the job. Some examples are visible at:
In particular, search for
MNG-714 would be helpful. I was hoping to use this trick but it did not seem to work. Created

```xml
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <mirrors>
    <mirror>
      <id>proxy</id>
      <url>https://repo.do.jenkins.io/public/</url>
      <mirrorOf>*,!repo.jenkins-ci.org</mirrorOf>
    </mirror>
  </mirrors>
  <profiles>
    <profile>
      <id>fallback</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <repositories>
        <repository>
          <id>repo.jenkins-ci.org</id>
          <url>https://repo.jenkins-ci.org/public/</url>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>repo.jenkins-ci.org</id>
          <url>https://repo.jenkins-ci.org/public/</url>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
</settings>
```

where the mirror is expected to fail (since I am providing no authentication), and ran

```bash
docker run --rm -ti --entrypoint bash \
  -v /tmp/settings.xml:/usr/share/maven/conf/settings.xml \
  maven:3-eclipse-temurin-17 \
  -c 'git clone --depth 1 https://github.com/jenkinsci/build-token-root-plugin /src && cd /src && mvn -Pquick-build install'
```

but it fails immediately and does not fall back.
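Since Maven mirrors do not fall back on their own (which is essentially what MNG-714 asks for), one alternative, sketched here only as an idea and not as anything the infra team implemented, is to probe the proxy before the build and inject the mirror settings only when it answers:

```bash
#!/usr/bin/env bash
# Sketch: choose at build time whether to use the caching proxy, falling back
# to repo.jenkins-ci.org when the proxy does not respond. The settings file
# names are illustrative.
set -euo pipefail

PROXY_URL="https://repo.do.jenkins.io/public/"

if curl --silent --fail --head --max-time 10 "${PROXY_URL}" > /dev/null; then
  MAVEN_SETTINGS="settings-proxy.xml"     # mirror pointing at the caching proxy
else
  MAVEN_SETTINGS="settings-default.xml"   # plain settings hitting repo.jenkins-ci.org
fi

mvn -s "${MAVEN_SETTINGS}" -Pquick-build install
```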
After clearing the cache of the DigitalOcean provider, a BOM build running exclusively on DigitalOcean finished with success: https://ci.jenkins.io/job/Tools/job/bom/job/master/1564/ The fact that the BOM builds failed only on DO, with "Premature end of Content-Length delimited message body" each time, and passed after clearing the cache on this provider makes me think the error came from corrupted cache data. I'll check whether I can find a way to clear the cache for a specific artifact, or else reduce the cache retention, currently set to one month.
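If the proxy is nginx with `proxy_cache` (an assumption carried over from the sketch near the top of this thread, not a confirmed detail of this deployment), one way to evict a single artifact without wiping the whole cache is to locate its cache file by key and delete it:

```bash
#!/usr/bin/env bash
# Sketch: evict one artifact from an nginx proxy_cache without clearing everything.
# Assumes the cache lives under /var/cache/nginx/artifacts (placeholder path) and
# that the cache key contains the request URI, as with the default key.
set -euo pipefail

# Placeholder artifact path; replace with the corrupted artifact's repository path.
ARTIFACT_PATH="org/example/some-plugin/1.0/some-plugin-1.0.jar"

# Each cached entry starts with a "KEY: <cache key>" header; find matching files and remove them.
grep -Rl "KEY: .*${ARTIFACT_PATH}" /var/cache/nginx/artifacts | xargs -r rm -v
```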
@MarkEWaite @basil could you try your next BOM builds without the
Can try jenkinsci/bom#1916
FYI https://issues.apache.org/jira/browse/MNG-7708 (probably not relevant if the cache errors were persistent).
Closing as the "unreliable" behavior (which is BOM-only) is tracked in #3481 |
Service
ci.jenkins.io
Summary
As part of #2733, the subject of hosting a caching proxy for ci.jenkins.io builds (at least; maybe also for trusted.ci, release.ci, and infra.ci) has been raised again in https://groups.google.com/g/jenkins-infra/c/laSsgPOH9qs.
This issue tracks the work related to deploying this service.
Why
What
We want each build run by ci.jenkins.io (and eventually trusted.ci and release.ci) that involves Maven (and eventually Gradle) to use our caching proxy service instead of directly hitting repo.jenkins-ci.org.
As per https://maven.apache.org/settings.html#mirrors, we should be able to use the user-level settings.xml for Maven. There are different methods to provide this settings.xml to the build:

- Add it to the agent images in jenkins-infra/packer-images (assuming we have finished the "Docker and VMs" tasks, ref. Release Docker Image for Linux (Ubuntu 20.04) packer-images#282 for Linux and Add Docker Windows Containers packer-images#285); a provisioning sketch is shown after this list.
- Use the Jenkins "config-file-provider" plugin, which supports Pipeline (https://plugins.jenkins.io/config-file-provider/#plugin-content-using-the-configuration-files-in-jenkins-pipelines), so we could set it up in the jenkins-infra/pipeline-library (easier to opt out of and faster to disable in case of an outage).
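As a sketch of the first option, assuming a provisioning step in the packer-images build (the source file name is illustrative; the targets are the standard Maven settings locations, the global one being the same path mounted in the docker example earlier in this thread):

```bash
#!/usr/bin/env bash
# Sketch for the packer-images approach: bake a proxy-aware settings.xml into
# the agent image so Maven picks it up by default. File names are illustrative.
set -euo pipefail

# Global Maven settings, shared by every user of the image:
sudo install -D -m 0644 artifact-caching-proxy-settings.xml \
  /usr/share/maven/conf/settings.xml

# Or per-user settings for the agent user (user settings take precedence):
install -D -m 0644 artifact-caching-proxy-settings.xml \
  "${HOME}/.m2/settings.xml"
```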
The main challenge is to provide multiple caching proxies, one per cloud region that we use.
The rationale is that if we only have a single proxy, then we'll have to pay for cross-cloud and/or cross-region bandwidth, which we do not want. We could either:
Definition of Done
How
See the associated PRs as they come.