This repository has been archived by the owner on Jun 19, 2024. It is now read-only.

upgrade to 3.x of the kubernetes-model #1067

Merged
merged 7 commits into master on Nov 8, 2017

Conversation

jstrachan
Contributor

No description provided.

@jstrachan
Contributor Author

one change with the 3.x schema is that the generated apiVersion values for Route and DeploymentConfig are now route/v1 and apps/v1. I'm not sure if that's to be expected now and we need to amend the integration tests, or if it's a regression ;) e.g.

```yaml
---
apiVersion: apps/v1
kind: DeploymentConfig
---
apiVersion: route/v1
kind: Route
```

cc @hrishin
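For comparison, the grouped forms that the discussion below settles on as correct (with the full .openshift.io group suffix) would look like:

```yaml
---
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
---
apiVersion: route.openshift.io/v1
kind: Route
```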

@codecov

codecov bot commented Oct 10, 2017

Codecov Report

Merging #1067 into master will decrease coverage by 0.29%.
The diff coverage is 12.5%.

```diff
@@             Coverage Diff             @@
##             master    #1067     +/-   ##
===========================================
- Coverage     24.63%   24.33%   -0.3%
  Complexity      682      682
===========================================
  Files           176      176
  Lines          9196     9306    +110
  Branches       1590     1610     +20
===========================================
  Hits           2265     2265
- Misses         6675     6785    +110
  Partials        256      256
```

```diff
@@ -83,6 +83,8 @@ private HTTPGetAction getHTTPGetAction(String getUrl) {
 }

 private TCPSocketAction getTCPSocketAction(String port) {
     // TODO what to do about host?
```
Member


How about this?
probhandeler

Contributor Author


sounds good to me. Fancy updating this PR? :)

```diff
@@ -89,7 +89,7 @@ public void visit(PodSpecBuilder builder) {
 }
 String jksSecretVolumeName = getConfig(Config.jksVolumeName);
 if (!isVolumeAlreadyExists(builder.buildVolumes(), jksSecretVolumeName)) {
-    builder.addNewVolume().withName(jksSecretVolumeName).withNewEmptyDir("Memory").endVolume();
+    builder.addNewVolume().withName(jksSecretVolumeName).withNewEmptyDir().withMedium("Memory").endEmptyDir().endVolume();
```
Member


Is this change needed?

Contributor Author


there are no parameters on withNewEmptyDir() any more - withMedium() was the only place I could find a string parameter ;) no clue really what we should do with the "Medium" string

Member

@hrishin hrishin Oct 11, 2017


strange, in my case builder.addNewVolume().withName(jksSecretVolumeName).withNewEmptyDir("Memory").endVolume(); is still valid, without any compilation error.
I must be doing something wrong here then.

Contributor Author

@jstrachan jstrachan Oct 11, 2017


whoah - maybe it's something wrong with my setup; I tried that and it didn't compile for me. Wanna have a go at a PR yourself? :)

Member


Sure
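The nested-builder change at the heart of this thread can be sketched with stand-in classes (illustrative only, not the real kubernetes-model API): in the 3.x model, withNewEmptyDir() no longer takes the medium as an argument, so the medium is set inside a nested builder that is closed with endEmptyDir() before endVolume().

```java
// Stand-in sketch of the fluent nested-builder pattern used by
// kubernetes-model; class and method names mirror the diff above but are
// simplified stand-ins, not the real library types.
public class VolumeSketch {
    static class EmptyDir {
        String medium;
    }

    static class Volume {
        String name;
        EmptyDir emptyDir;
    }

    // Nested builder opened by withNewEmptyDir(); closes back into the parent.
    static class EmptyDirNested {
        private final VolumeNested parent;
        private final EmptyDir emptyDir = new EmptyDir();

        EmptyDirNested(VolumeNested parent) {
            this.parent = parent;
        }

        EmptyDirNested withMedium(String medium) {
            this.emptyDir.medium = medium;
            return this;
        }

        VolumeNested endEmptyDir() {
            parent.volume.emptyDir = this.emptyDir;
            return parent;
        }
    }

    static class VolumeNested {
        final Volume volume = new Volume();

        VolumeNested withName(String name) {
            volume.name = name;
            return this;
        }

        EmptyDirNested withNewEmptyDir() {
            return new EmptyDirNested(this);
        }

        Volume endVolume() {
            return volume;
        }
    }

    public static void main(String[] args) {
        // Mirrors the new 3.x call chain from the diff above.
        Volume v = new VolumeNested()
                .withName("jks-volume")
                .withNewEmptyDir().withMedium("Memory").endEmptyDir()
                .endVolume();
        System.out.println(v.name + " -> " + v.emptyDir.medium);
    }
}
```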

- fixed api version in integration tests
- fixed ProbeHandler constructor
@hrishin
Member

hrishin commented Oct 11, 2017

#1068

@jstrachan @nicolaferraro please review it if it's good to go.

@jstrachan
Contributor Author

LGTM are you happy @nicolaferraro or @ro14nd ?

Member

@nicolaferraro nicolaferraro left a comment


Just a minor issue about possible NPE on the code.

I did not follow the evolution of kubernetes-model and kubernetes-client, but this does not seem to work with the latest stable version of OpenShift (3.6.0). I've not tried with 3.7.0.alpha1, but should we release a new major version of the plugin that works only with an alpha version of OpenShift?

Failure executing: POST at: https://192.168.42.200:8443/apis/image.openshift.io/v1/namespaces/fis/imagestreams. Message: ImageStream in version "v1" cannot be handled as a ImageStream: no kind "ImageStream" is registered for version "image/v1"

```diff
@@ -593,7 +593,7 @@ public int compare(Pod p1, Pod p2) {
 public static Date getCreationTimestamp(HasMetadata hasMetadata) {
     ObjectMeta metadata = hasMetadata.getMetadata();
     if (metadata != null) {
-        return parseTimestamp(metadata.getCreationTimestamp());
+        return parseTimestamp(metadata.getCreationTimestamp().getTime());
```
Member


CreationTimestamp is null for client-generated objects. This will throw a NullPointerException.
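A null-safe variant of that line could look like this (a sketch with stand-in types: the real method lives on a helper class and takes ObjectMeta from kubernetes-model; here java.util.Date stands in directly):

```java
import java.util.Date;

// Sketch of the null guard suggested in the review comment above.
// Client-generated objects have no creationTimestamp, so dereference
// the Date only after a null check. Stand-in code, not the real helper.
public class TimestampSketch {
    public static Date getCreationTimestamp(Date creationTimestamp) {
        if (creationTimestamp == null) {
            // Avoid the NPE from creationTimestamp.getTime() on a null value.
            return null;
        }
        return new Date(creationTimestamp.getTime());
    }

    public static void main(String[] args) {
        System.out.println(getCreationTimestamp(null));         // prints "null"
        System.out.println(getCreationTimestamp(new Date(0L))); // the epoch date
    }
}
```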

@jstrachan
Contributor Author

@nicolaferraro ideally the kube-model + kube-client should work for 3.5.x, 3.6.x and 3.7.x.

I wonder if there's a bug and it should be using apiVersion: "v1" rather than apiVersion: "image/v1". I grok the new URI scheme of /apis/image.openshift.io/v1/namespaces/fis/imagestreams along with the different kind, but I'm not sure yet what the apiVersion should be for the new *.openshift.io kinds, I'm afraid

@hrishin
Member

hrishin commented Oct 12, 2017

@jstrachan @nicolaferraro yeah, it seems to be some kind of bug.
In the OpenShift 3.7 alpha the client is throwing exceptions with "ImageStream in version "v1" cannot be handled as a ImageStream: no kind "ImageStream" is registered for version "image/v1"" while building the images.

While querying the imagestream endpoint, the HTTP response shows the API version as "apiVersion":"image.openshift.io/v1"

@jstrachan
Contributor Author

jstrachan commented Oct 14, 2017

yeah - I remember now the apiVersion stuff. So the old version was apiVersion: v1 for OpenShift resources. The new version is apiVersion: (app|build|route|...).openshift.io/v1. So the failing test cases lately have been due to the generation of apiVersion: route/v1 rather than apiVersion: route.openshift.io/v1 - so that's definitely a bug - the new apiVersion should end with .openshift.io/v1

We may want to add a property on fabric8-maven-plugin to override the new version mechanism so that folks can generate YAML for old OpenShift clusters using the old "v1". IIRC the code in kubernetes-client uses the new version mechanism if it detects the new API groups are present on the cluster when applying resources; but I guess we can't know which approach to use when generating resources as YAML as part of mvn fabric8:resource, so we may need a flag to indicate which flavour is required
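The flavour flag being discussed could be as simple as a group-to-apiVersion mapping keyed on the requested flavour. A hypothetical sketch (the names are illustrative, not real fabric8-maven-plugin configuration):

```java
// Hypothetical sketch of the flavour flag discussed above: pick between the
// legacy ungrouped "v1" and the new "<group>.openshift.io/v1" scheme when
// generating resources. Illustrative only, not the real f-m-p config.
public class ApiVersionSketch {
    public static String apiVersionFor(String group, boolean legacyCluster) {
        if (legacyCluster) {
            // Old OpenShift clusters (before API groups) accepted plain "v1".
            return "v1";
        }
        // New clusters expect the grouped form, e.g. route.openshift.io/v1.
        return group + ".openshift.io/v1";
    }

    public static void main(String[] args) {
        System.out.println(apiVersionFor("route", true));  // v1
        System.out.println(apiVersionFor("route", false)); // route.openshift.io/v1
        System.out.println(apiVersionFor("apps", false));  // apps.openshift.io/v1
    }
}
```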

@nicolaferraro
Member

Yes, that's a problem that needs to be solved with a flag, or we could detect which version schema to use based on the OpenShift version, so that users can specify the target version of OpenShift and generate compatible yml/json artifacts. This can be done in f-m-p but also in other layers.

But there's more than resource generation. I've seen issues also in Apache Camel when we upgraded to latest kubernetes-client 3.x. There isn't yaml generation in camel-kubernetes, but the client was not able to read and update a configmap, so we had to downgrade to version 2.x.

@jstrachan
Contributor Author

@nicolaferraro yeah, I had to downgrade fabric8-generator too (the 4.x version of forge), which creates/updates BuildConfigs. The fabric8-maven-plugin tests broke too, due to the apiVersion issue.

Somehow the upgrade to the new schema has had an impact on the support of apiVersion which is a little odd really - as we moved to supporting the new API groups and the apiVersion build.openshift.io/v1 stuff a few months ago on the 2.3.x client and we didn't have these failures.

@jstrachan
Contributor Author

I've fixed the underlying issue with incorrect OpenShift API group versions (so apps/v1 => apps.openshift.io/v1).

@nicolaferraro I've tested the new master of the 3.x kubernetes-client and it's able to load/create/update ConfigMaps just fine too: fabric8io/kubernetes-client#892

Once the new release is out I'll update this PR to use it (and fix up the tests to use the correct apps.openshift.io/v1 version)

@jstrachan
Contributor Author

jstrachan commented Oct 23, 2017

@hrishin I've just updated to the latest kubernetes-model / client and fabric8 release which fixes the bad api versions; they now use the proper apps.openshift.io/v1 values.

@nicolaferraro any chance you could retest kubernetes-client 3.0.1 sometime to see if things are working better now?

@nicolaferraro
Member

Yes @jstrachan will do some tests today

@hrishin
Member

hrishin commented Oct 24, 2017

@jstrachan looks good, thanks!

@nicolaferraro

We have tested these changes with OpenShift 3.7, but the build request is throwing an exception.

Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://192.168.42.225:8443/apis/build.openshift.io/v1/namespaces/myproject/buildconfigs/fabric8-maven-sample-spring-boot-s2i/instantiatebinary?commit=. Message: no kind is registered for the type build.BinaryBuildRequestOptions. Received status: Status(apiVersion=v1, code=400, details=null, kind=Status, message=no kind is registered for the type build.BinaryBuildRequestOptions, metadata=ListMeta(resourceVersion=null, selfLink=null, additionalProperties={}), reason=BadRequest, status=Failure, additionalProperties={}).

The issue is instantiatebinary?commit=. If we remove commit, the API works fine. We need to fix the request URI.
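One possible shape for the URI fix is to drop query parameters whose value is empty, such as commit=. A sketch only; the real change would belong in kubernetes-client's request building:

```java
// Sketch of the request-URI fix discussed above: strip query parameters with
// empty values (e.g. "commit=") before issuing the POST. Illustrative only.
public class QuerySketch {
    public static String stripEmptyParams(String uri) {
        int q = uri.indexOf('?');
        if (q < 0) {
            return uri; // no query string at all
        }
        StringBuilder kept = new StringBuilder();
        for (String param : uri.substring(q + 1).split("&")) {
            int eq = param.indexOf('=');
            // "commit=" has '=' as its last character: an empty value.
            boolean emptyValue = eq == param.length() - 1;
            if (!param.isEmpty() && !emptyValue) {
                kept.append(kept.length() == 0 ? "?" : "&").append(param);
            }
        }
        return uri.substring(0, q) + kept;
    }

    public static void main(String[] args) {
        System.out.println(stripEmptyParams("/instantiatebinary?commit="));
        // -> /instantiatebinary
        System.out.println(stripEmptyParams("/instantiatebinary?commit=abc&x="));
        // -> /instantiatebinary?commit=abc
    }
}
```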

@jstrachan
Contributor Author

Yeah - we hit a few similar issues a while back when we first started supporting the new API Groups APIs; some query parameters are no longer supported. Let's raise an issue and fix it in KubernetesClient maybe?

@jstrachan
Contributor Author

@nicolaferraro on your comment about the new client/model causing issues updating a ConfigMap - I hit that today ;) fabric8io/kubernetes-model#241 should be fixed now though

@nicolaferraro
Member

@jstrachan I think we can merge this. It seems to be working now with latest kubernetes-client. Is it ok for you?

@jstrachan
Contributor Author

@nicolaferraro yeah - am totally happy to merge this

@nicolaferraro
Member

[merge]

