Document inter-build caching strategies #52
Luckily, we have the good fortune to be leveraging K8s abstractions, which means we can also access persistent volumes. We are entering territory I have yet to experiment with, so take this with a grain of salt!

The general idea is that if a Build wants to leverage a persistent cache, it would mount it, e.g.:

```yaml
spec:
  steps:
  - image: super-builder:latest
    volumeMounts:
    - name: persistent-cache
      mountPath: /var/super-builder/.cache
  volumes:
  - name: persistent-cache
    # Fill in your favorite persistent volume.
    persistentVolumeClaim:
      claimName: mattmoor-cache
```

We can potentially use this in interesting ways that make caching optional, e.g.:

```yaml
# === BuildTemplate ===
spec:
  parameters:
  - name: CACHE
    description: The name of the volume to mount for caching artifacts.
    default: intra-build
  steps:
  - image: super-builder:latest
    volumeMounts:
    # Allow the user to override the volume we use as a cache.
    - name: "${CACHE}"
      mountPath: /var/super-builder/.cache
  volumes:
  # By default we provide intra-build caching via an emptyDir.
  - name: intra-build
    emptyDir: {}

# === Build ===
spec:
  template:
    name: what-is-above
  arguments:
  - name: CACHE
    value: persistent-cache
  volumes:
  - name: persistent-cache
    # Fill in your favorite persistent volume.
    persistentVolumeClaim:
      claimName: mattmoor-cache
```
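For reference, the `mattmoor-cache` claim above could be declared as an ordinary PVC. This is only a sketch; the access mode, storage class, and size are illustrative, not prescriptive:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mattmoor-cache
spec:
  accessModes:
  - ReadWriteOnce    # attachable to a single node at a time
  resources:
    requests:
      storage: 10Gi  # size the cache for your builder's artifacts
```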
It is notable, when choosing a persistent volume option, that the time it takes to attach that storage to the node may be non-zero.
If a node had previously attached a PD for another pod which has since
finished, is the PD still attached to the node for a future pod? Will k8s
schedule another pod that wants that PD on that node? Or is the PD attached
and detached each time to simplify scheduling?
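One knob that bears on this (a sketch, assuming a cluster recent enough to support it; the names are illustrative): a StorageClass with `volumeBindingMode: WaitForFirstConsumer` delays binding until a consuming pod is scheduled, so volume placement and pod placement are decided together rather than the pod chasing a pre-attached disk:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: build-cache                 # illustrative name
provisioner: kubernetes.io/gce-pd   # e.g. GCE persistent disks
# Bind (and attach) the volume only once a consumer pod has been scheduled.
volumeBindingMode: WaitForFirstConsumer
```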
@mattmoor yeah i think it's useful. it's a concept we've wanted to add to openshift builds for a while, basically two things have kept us from doing it:
@imjasonh I have no sense for how smart the K8s scheduler is about persistent volumes.

@bparees Ack on the multi-write problem. I believe the "write-once" PVC is somewhat smart about this, and IIUC the Pod will sit as Pending until the volume is available.

For explicitly parallel builds (e.g. Matrix), having the concept of a PVC "pool" would be neat, where the write tenancy would be modeled as the pool size. I haven't bothered looking at whether this exists, since I can imagine very few workloads that might want that kind of abstraction. :)
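To make the hypothetical "pool" concrete (nothing like this exists in the Build API; the claim names and parallelism mapping are invented for illustration): write tenancy could be modeled as N independent `ReadWriteOnce` claims, with each parallel leg of a matrix selecting its own claim via the template's `CACHE` parameter:

```yaml
# Hypothetical: leg N of a 3-way matrix binds build-cache-N, so each
# claim has exactly one writer (pool size == write tenancy).
spec:
  template:
    name: what-is-above
  arguments:
  - name: CACHE
    value: persistent-cache
  volumes:
  - name: persistent-cache
    persistentVolumeClaim:
      claimName: build-cache-0   # build-cache-1 and build-cache-2 for the other legs
```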
This issue is intended to track documenting (and if necessary designing / implementing) facilities for inter-build caching.