Consider pykube-ng? #15

Closed
nolar opened this issue Apr 2, 2019 · 3 comments · Fixed by #110
Labels: enhancement (New feature or request)



nolar commented Apr 2, 2019

Originally by @hjacobs:

I see that you are currently using the "official" Kubernetes client (Swagger Codegen).
I forked the old pykube to https://github.com/hjacobs/pykube as I'm rather unhappy about the complexity and size of the official Kubernetes client.
See also hjacobs/pykube#12
Not sure if this would even work, or whether you use something specific to the Kubernetes Python client.


nolar commented Jun 2, 2019

After #71, almost all Kubernetes-related code is consolidated in one package: kopf.k8s, where it can easily be replaced by any other implementation. The only thing outside of this package is authentication (kopf.config.login()).

However, the whole codebase of Kopf assumes that the objects are manipulated as dicts, not even as the Kubernetes client's "models". Lines like .get('metadata', {}).get('something') and .setdefault('status', {}).setdefault('kopf', {}) are all around. It will be difficult to change that and to support both dicts and the client's classes; it is better not to do so.
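
For illustration, this is the kind of dict-style access spread across the codebase (a contrived sketch; the helper names and fields are made up for this example):

    def get_label(body, name):
        # Nested .get() chains tolerate missing keys at any level.
        return body.get('metadata', {}).get('labels', {}).get(name)

    def get_kopf_status(body):
        # Creates body['status']['kopf'] in place if absent, and returns it.
        return body.setdefault('status', {}).setdefault('kopf', {})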

In addition, Kopf promises to hide implementation details from the user ⟹ which means that the user should not know which Kubernetes client is used under the hood ⟹ which means that the internal models/classes must not be exposed ⟹ which means they have to be converted to dicts on arrival.


Another tricky part will be watch-streaming. The official Kubernetes client does that in a while True cycle and reconnects all the time. Pykube-ng exits after the first disconnection, which happens after roughly ~5s, or as specified by a timeout query arg. The connection cannot stay open forever, so either the client or Kopf has to handle the reconnections. Some partial workarounds can be found in #96 (pre-listing and resourceVersion usage).
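
For illustration only, roughly the kind of reconnection loop that would have to live on Kopf's side (a sketch assuming pykube-ng's Query.watch(since=...) API and requests-level exceptions on disconnects; not actual Kopf code):

    import pykube
    import requests

    api = pykube.HTTPClient(pykube.KubeConfig.from_env())

    resource_version = None
    while True:  # pykube-ng exits after every disconnection, so the loop is ours
        try:
            for event in pykube.Pod.objects(api).watch(since=resource_version):
                resource_version = event.object.obj['metadata']['resourceVersion']
                ...  # dispatch event.type / event.object to the handlers
        except requests.exceptions.RequestException:
            continue  # reconnect and resume from the last seen resourceVersion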

nolar self-assigned this Jun 2, 2019

nolar commented Jun 2, 2019

So far, a list of issues detected while trying to switch to pykube:

  • A pykube.HTTPClient object is needed on every call; there is no implicit config (as in the official client). It has to be stored globally and reused all over the code.
  • pykube.HTTPClient has a default timeout of 10s for any connection, including the watching. It can be overridden explicitly with timeout=None, but that requires a separate pykube.HTTPClient instance.
  • pykube.HTTPClient raises requests' exceptions on timeouts, not its own.
  • The watch call terminates when the connection is lost for any reason; there is no internal reconnection or while True loop. It has to be caught and repeated. In the official K8s client, this is done internally: the watch is eternal.
  • object_factory() prints a list of discovered resources to stdout, which is visual garbage.
  • object_factory() assumes that the resource always exists, and fails on resource['namespaced'] when resource is None.
  • object_factory() requires a kind, not a plural name; it would be better if plural, singular, kind, and all aliases were accepted.
  • apiextensions.k8s.io/v1beta1/customresourcedefinitions does not appear in the cluster's listing of resources, though it is accessible. Pykube should assume that the developer knows what they are doing, and create the classes properly (but: only with the plural name).
  • Patching is implemented as obj.update(), where the whole body of the object is used as the patch, and this involves resourceVersion checks for non-conflicting patches. We need partial patches of the status field only (or finalizers, or annotations), not of the whole body, and we need no conflict resolution (a sketch of such a partial patch follows this list).
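
For illustration, a partial JSON-merge patch can be issued on top of pykube's HTTP layer roughly like this (a sketch assuming pykube-ng's APIObject helpers obj.api, api_kwargs(), set_obj() and HTTPClient.raise_for_status(); not necessarily how kopf.k8s will do it):

    import json

    def patch_object(obj, patch):
        # PATCH only the given fields, e.g. {'status': {'kopf': {...}}},
        # without sending the whole body and without resourceVersion checks.
        response = obj.api.patch(**obj.api_kwargs(
            headers={'Content-Type': 'application/merge-patch+json'},
            data=json.dumps(patch),
        ))
        obj.api.raise_for_status(response)
        obj.set_obj(response.json())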

On the good side:

  • Pykube was able to handle a custom resource KopfExample and a built-in resource Pod via the same code; no {resource => classes+methods} mapping was needed. Both the event-spy handler and the regular cause handlers worked on pods.

Basically, the whole trick is achieved by this snippet (not doable in the official K8s client library):

        version = kwargs.pop("version", "v1")
        if version == "v1":
            base = kwargs.pop("base", "/api")
        elif "/" in version:
            base = kwargs.pop("base", "/apis")
        else:
            # (reconstructed continuation) no default base for this version:
            if "base" not in kwargs:
                raise TypeError("unknown API version; base kwarg must be specified.")
            base = kwargs.pop("base")

This alone justifies the effort to continue switching.
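
For illustration, this mechanism lets a class be built at runtime for any resource, custom or built-in alike (a sketch assuming pykube-ng's object_factory(api, api_version, kind) helper; the zalando.org/v1 group for KopfExample is assumed from Kopf's examples):

    import pykube
    from pykube.objects import object_factory

    api = pykube.HTTPClient(pykube.KubeConfig.from_env())

    # Built-in and custom resources go through the same code path:
    Pod = object_factory(api, 'v1', 'Pod')
    KopfExample = object_factory(api, 'zalando.org/v1', 'KopfExample')

    for example in KopfExample.objects(api, namespace='default'):
        print(example.name, example.obj.get('spec', {}))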

Preview branch: https://github.com/nolar/kopf/tree/pykube (based on the not-yet-merged "resume-handlers" branch).
Diff: nolar/kopf@wip/master/20190606...pykube


nolar commented Jun 11, 2019

So far so good. The switch to pykube-ng is now fully implemented. The legacy official kubernetes client library is optionally supported (if installed) for auto-authentication, but is not used anywhere else, and I am considering removing it completely.

The missing pykube-ng parts are simulated inside kopf.k8s.classes (e.g. the obj.patch() method) and should eventually move into pykube-ng itself.

The codebase seems functional. And clean. Arbitrary k8s resources (custom and built-in) are supported transparently, as prototyped above. The k8s events are sent; all is fine.

What is left: all the preceding PRs on which it is based (all pending review); some general cleanup plus the remaining TODO marks (to be sure nothing is forgotten); and maybe a test drive for a few days in our testing infrastructure with real tasks.

Diff (still the same): nolar/kopf@wip/master/20190606...pykube. The diff is huge mostly because of the massive changes in tests.
