This repository has been archived by the owner on Sep 5, 2019. It is now read-only.

Using service for build #436

Open
kameshsampath opened this issue Oct 20, 2018 · 4 comments

Comments

@kameshsampath

Currently the build pods are spun up as normal pods. Could we not expose a Service for builds and then use the service URL to kick off new builds, following a pattern similar to Knative Serving?

Classify what kind of issue this is:

/kind question

@imjasonh
Member

The Knative Serving controller is also deployed as a Deployment, which listens for updates to Configuration and the other CRDs, and responds accordingly.

I'm not sure what advantage there would be in exposing a (k8s) Service; the main API endpoint is expected to be creation/update/deletion of CRD resources, rather than direct HTTP/RPC API requests. This enables k8s-native resource management using kubectl or other tools like Helm, ksonnet, generated k8s client libraries, etc.
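For concreteness, here's roughly what that k8s-native flow looks like; the names and repo URL below are placeholders, and the exact fields may differ across releases:

```yaml
# A minimal Build resource (placeholder names/URL), managed with ordinary
# kubectl verbs rather than a custom HTTP API:
#   kubectl apply -f build.yaml
#   kubectl get build example-build -o yaml
#   kubectl delete build example-build
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: example-build
spec:
  source:
    git:
      url: https://github.com/example/repo.git   # placeholder repo
      revision: master
  steps:
  - name: build-and-push
    image: gcr.io/kaniko-project/executor        # any builder image works here
    args: ["--dockerfile=/workspace/Dockerfile"]
```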

@kameshsampath
Author

I know this is in the scope of Pipelines; I'm thinking more from the perspective of restarting a failed build, instead of deleting and recreating objects. In particular, I was wondering how a build embedded within a service definition could be deleted and recreated without deleting all the service objects, or whether I'd need to induce a config change to trigger a new revision.
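To make the scenario concrete, roughly what I mean (approximate Knative Serving v1alpha1 shape, which varied between releases; names and URL are placeholders):

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: example-service
spec:
  runLatest:
    configuration:
      build:                      # inline BuildSpec; re-runs only when a new Revision is stamped out
        source:
          git:
            url: https://github.com/example/repo.git   # placeholder
            revision: master
        steps:
        - name: build
          image: gcr.io/kaniko-project/executor
      revisionTemplate:
        spec:
          container:
            image: gcr.io/example/app:latest           # placeholder
```

If that embedded build fails, there is no object to retry in place; you have to change the Configuration (or delete and recreate the Service) to get a new Revision, and with it a new build.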

@imjasonh
Member

I think in Pipelines' model, the way you'd retry a failed build would be to create a new PipelineRun or TaskRun from the original Pipeline/Task definition, with the same PipelineParams. Related to this is tektoncd/pipeline#50, which covers designing partial execution, especially as it concerns resuming a failed pipeline halfway through.
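For illustration, a retry could be as simple as submitting a fresh TaskRun that points at the unchanged Task. A sketch using the Tekton v1alpha1 shape (the API group was still evolving at the time; all names and params here are hypothetical):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  name: my-build-run-2        # a new name; the failed run is left intact for inspection
spec:
  taskRef:
    name: my-build-task       # the original Task definition, unchanged
  inputs:
    params:
    - name: imageTag          # hypothetical param, same values as the failed run
      value: v1.0.1
```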

In either case, I think having users express this in terms of CRUD operations on a custom resource, via kubectl or existing (k)native tooling, is the strongly preferred UX at this point. If we want to enable a higher-level "retry this failed build" experience, it would probably come from custom client surfaces that drive CRD APIs (e.g., knctl), rather than custom APIs handled by k8s services.

Does that make sense? Nothing is set in stone, obviously.

@kameshsampath
Author

> I think in Pipelines' model, the way you'd retry a failed build would be to create a new PipelineRun or TaskRun from the original Pipeline/Task definition, with the same PipelineParams. Related to this is tektoncd/pipeline#50, which covers designing partial execution, especially as it concerns resuming a failed pipeline halfway through.

+1

> Does that make sense? Nothing is set in stone, obviously.

👍

Just throwing out a few thoughts that made me ask for this:

  • Considering simple builds, IMHO using a pipeline for just one task (a build with steps) would be overkill. In those scenarios it would be ideal if we could just start individual builds.
  • As explained in my scenarios above, where I'm embedding a build in a service/config, re-triggering a failed build is hard.
  • When someone is working on individual builds (tasks), they might want to test each one before wiring them into a pipeline (see the sketch after this list).
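On that last point, the nice property would be that the same Task you exercised standalone with a TaskRun drops into a Pipeline unchanged. A sketch with hypothetical names, Tekton v1alpha1 shape:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: example-pipeline
spec:
  tasks:
  - name: build               # pipeline step name
    taskRef:
      name: my-build-task     # the Task already validated standalone via a TaskRun
```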
