Upgraded flannel, kubeadm version and StorageClass settings #326
Conversation
Looks good to me, only a few comments. I suggest @andersla review as well. Not sure if we can get this in before the release.
@@ -29,4 +29,4 @@ modprobe dm_thin_pool

 echo "Try to join master..."
 # shellcheck disable=SC2154
-kubeadm join --token "${kubeadm_token}" "${master_ip}:6443"
+kubeadm join --discovery-token-unsafe-skip-ca-verification --token "${kubeadm_token}" "${master_ip}:6443"
Please remove the --discovery-token-unsafe-skip-ca-verification
As far as I understand, if we use --discovery-token-unsafe-skip-ca-verification, then we get the same behaviour as before: the default in Kubernetes 1.7 and earlier. This mode relies only on the symmetric token for signing (HMAC-SHA256). Please consult https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/ for more information regarding this security update.
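For reference, since kubeadm 1.8 the node can pin the cluster CA instead of skipping verification, by passing --discovery-token-ca-cert-hash with the sha256 digest of the CA's DER-encoded public key. A minimal sketch follows; the throwaway CA generated below only stands in for the master's real /etc/kubernetes/pki/ca.crt, and the token and endpoint are placeholder values, so the final command is printed rather than executed:

```shell
#!/bin/sh
set -e

# Generate a throwaway CA certificate so the hash computation is runnable
# anywhere; on a real master you would read /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -nodes -subj "/CN=demo-ca" 2>/dev/null

# kubeadm expects the sha256 digest of the CA's DER-encoded public key.
ca_hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | sed 's/^.* //')

# Print (rather than execute) the CA-verified join command; the token and
# master endpoint below are placeholders.
echo kubeadm join --token "abcdef.0123456789abcdef" "10.0.0.1:6443" \
  --discovery-token-ca-cert-hash "sha256:${ca_hash}"
```

With the hash pinned this way, the joining node verifies the API server's identity instead of trusting the endpoint blindly.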
Good.
@@ -10,3 +10,7 @@ provisioner: kubernetes.io/glusterfs
 parameters:
   resturl: "http://{{ heketi_endpoint }}"
   volumetype: {{volumetype}}
+  volumeoptions: "performance.stat-prefetch off"
Please give an explanation for the options that you have changed, and please link some documentation so I can look it up.
These options were added to make a shared instance of Minio consistent with our GlusterFS backend. Regarding the options changed, consult: https://www.mankier.com/8/mount.glusterfs and https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/configuring_red_hat_openstack_with_red_hat_storage/sect-setting_up_red_hat_storage_trusted_storage_pool
These links don't say anything about the options; they just state that these settings exist. I'd like to understand what happens when these settings are applied.
Well, the main point of these options is that multiple shared Minio instances can do consistent and safe I/O operations on a single PV provided by GlusterFS-Heketi.
I know, but we don't just support Minio. If that's the only reason, these options should be parametrized.
Otherwise, you need to explain why it is always better this way.
@mcapuccini Well, I think that in this case you cannot say "it can always be better". But regarding performance, the only option I touched is performance.stat-prefetch: it pre-fetches and caches file stat information when a directory is read, which speeds up operations like ls -l.
...
Could you parametrize the options with ansible?
Yes, looks good! Parameterizing it in Ansible like volumetype sounds like a good option. I don't know what the default should be. What are the costs vs. benefits of performance.stat-prefetch on vs. off?
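One way the parametrization could look, sketched against the existing template; the variable name gluster_volumeoptions and its default value are hypothetical, while resturl and volumetype mirror the template under review:

```yaml
# Defaults (e.g. in group_vars/all) -- hypothetical variable name:
#   gluster_volumeoptions: "performance.stat-prefetch off"
#
# StorageClass template, templated the same way volumetype already is:
parameters:
  resturl: "http://{{ heketi_endpoint }}"
  volumetype: {{volumetype}}
  volumeoptions: "{{ gluster_volumeoptions }}"
```

Deployments that don't need the Minio-oriented behaviour could then override the variable or leave volumeoptions empty.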
Brilliant! The storage class should be called "storage-class-minio" IMO. Also, before merging, we need to use an image release. Please do this and then I'll merge!
There is a v050b1 image release; just change the boot_image in the templates.
Good job!
Upgraded flannel, kubeadm version, and StorageClass settings. This update requires k8s 1.9.2 onwards from the image repo.