Hello group,
Really quick question regarding adding search domains to /etc/resolv.conf: when I add domains to searchdomains in my inventory's k8s-cluster.yml file and run the cluster.yml playbook, the additional domains are not added and, consequently, I am not able to SSH into my cluster node and the installation fails. Can anyone point me to how to properly set the additional domains?
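For reference, this is roughly the kind of block I am trying to add to the inventory (the nameserver and domain names below are placeholders, not my real values):

nameservers:
  - 10.0.0.53
searchdomains:
  - corp.example.com
  - lab.example.com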
For convenience, I am dropping my k8s-cluster.yml file below:
# Kubernetes configuration dirs and system namespace.
# Those are where all the additional config stuff goes
# the kubernetes normally puts in /srv/kubernetes.
# This puts them in a sane location and namespace.
# Editing those values will almost surely break something.
kube_config_dir: /etc/kubernetes
kube_script_dir: "{{ bin_dir }}/kubernetes-scripts"
kube_manifest_dir: "{{ kube_config_dir }}/manifests"
# This is where all the cert scripts and certs will be located
kube_cert_dir: "{{ kube_config_dir }}/ssl"
# This is where all of the bearer tokens will be stored
kube_token_dir: "{{ kube_config_dir }}/tokens"
kube_api_anonymous_auth: true
# Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.24.6
# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
local_release_dir: "/tmp/releases"
# Random shifts for retrying failed ops like pushing/downloading
retry_stagger: 5
# This is the user that owns the cluster installation.
kube_owner: kube
# This is the group that the cert creation scripts chgrp the
# cert files to. Not really changeable...
kube_cert_group: kube-cert
# Cluster Loglevel configuration
kube_log_level: 2
# Directory where credentials will be stored
credentials_dir: "{{ inventory_dir }}/credentials"
# It is possible to activate / deactivate selected authentication methods (oidc, static token auth)
kube_oidc_auth: false
kube_token_auth: false
# Variables for OpenID Connect Configuration https://kubernetes.io/docs/admin/authentication/
# To use OpenID you have to additionally deploy an OpenID Provider (e.g. Dex, Keycloak, ...)
kube_oidc_url: https:// ...
kube_oidc_client_id: kubernetes
# Optional settings for OIDC
kube_oidc_ca_file: "{{ kube_cert_dir }}/ca.pem"
kube_oidc_username_claim: sub
kube_oidc_username_prefix: 'oidc:'
kube_oidc_groups_claim: groups
kube_oidc_groups_prefix: 'oidc:'
# Variables to control webhook authn/authz
kube_webhook_token_auth: false
kube_webhook_token_auth_url: https://...
kube_webhook_token_auth_url_skip_tls_verify: false
# For webhook authorization, authorization_modes must include Webhook
kube_webhook_authorization: false
kube_webhook_authorization_url: https://...
kube_webhook_authorization_url_skip_tls_verify: false
# Choose network plugin (cilium, calico, kube-ovn, weave or flannel. Use cni for generic cni plugin)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: calico
# Setting multi_networking to true will install Multus: https://github.com/intel/multus-cni
kube_network_plugin_multus: false
# Kubernetes internal network for services, unused block of space.
kube_service_addresses: 10.233.0.0/18
# internal network. When used, it will assign IP
# addresses from this range to individual pods.
# This network must be unused in your network infrastructure!
kube_pods_subnet: 10.233.64.0/18
# internal network node size allocation (optional). This is the size allocated
# to each node for pod IP address allocation. Note that the number of pods per node is
# also limited by the kubelet_max_pods variable which defaults to 110.
# Example:
# Up to 64 nodes and up to 254 or kubelet_max_pods (the lowest of the two) pods per node:
#  - kube_pods_subnet: 10.233.64.0/18
#  - kube_network_node_prefix: 24
#  - kubelet_max_pods: 110
# Example:
# Up to 128 nodes and up to 126 or kubelet_max_pods (the lowest of the two) pods per node:
#  - kube_pods_subnet: 10.233.64.0/18
#  - kube_network_node_prefix: 25
#  - kubelet_max_pods: 110
kube_network_node_prefix: 24
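# (Worked example of the sizing comments above: with kube_pods_subnet
#  10.233.64.0/18 and kube_network_node_prefix 24 there are 2^(24-18) = 64
#  per-node /24 blocks, and each /24 holds 2^(32-24) - 2 = 254 usable pod IPs,
#  further capped by kubelet_max_pods.)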
# Configure Dual Stack networking (i.e. both IPv4 and IPv6)
enable_dual_stack_networks: false
# Kubernetes internal network for IPv6 services, unused block of space.
# This is only used if enable_dual_stack_networks is set to true
# This provides 4096 IPv6 IPs
kube_service_addresses_ipv6: fd85:ee78:d8a6:8607::1000/116
# Internal network. When used, it will assign IPv6 addresses from this range to individual pods.
# This network must not already be in your network infrastructure!
# This is only used if enable_dual_stack_networks is set to true.
# This provides room for 256 nodes with 254 pods per node.
kube_pods_subnet_ipv6: fd85:ee78:d8a6:8607::1:0000/112
# IPv6 subnet size allocated to each node for pods.
# This is only used if enable_dual_stack_networks is set to true
# This provides room for 254 pods per node.
kube_network_node_prefix_ipv6: 120
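# (Same arithmetic on the IPv6 side: a /112 pod subnet split into /120
#  per-node blocks yields 2^(120-112) = 256 node blocks, each with
#  2^(128-120) - 2 = 254 usable pod addresses.)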
# The port the API Server will be listening on.
kube_apiserver_ip: "{{ kube_service_addresses|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
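# (If I read the ipaddr filters right, with kube_service_addresses
#  10.233.0.0/18 this renders 10.233.0.1, the first address of the service range.)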
kube_apiserver_port: 443 # (https)
# Kube-proxy proxyMode configuration.
# Can be ipvs, iptables
kube_proxy_mode: ipvs
# configure arp_ignore and arp_announce to avoid answering ARP queries from kube-ipvs0 interface
# must be set to true for MetalLB, kube-vip(ARP enabled) to work
kube_proxy_strict_arp: false
# A string slice of values which specify the addresses to use for NodePorts.
# Values may be valid IP blocks (e.g. 1.2.3.0/24, 1.2.3.4/32).
# The default empty string slice ([]) means to use all local addresses.
# kube_proxy_nodeport_addresses_cidr is retained for legacy config
kube_proxy_nodeport_addresses: >-
  {%- if kube_proxy_nodeport_addresses_cidr is defined -%}
  [{{ kube_proxy_nodeport_addresses_cidr }}]
  {%- else -%}
  []
  {%- endif -%}
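# (Example of how I understand the template above: setting a hypothetical
#  kube_proxy_nodeport_addresses_cidr: 10.0.0.0/24 would render
#  kube_proxy_nodeport_addresses as [10.0.0.0/24], so NodePorts listen only on
#  that range; leaving it undefined renders [] and keeps all local addresses.)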
# If non-empty, will use this string as identification instead of the actual hostname
kube_override_hostname: >-
  {%- if cloud_provider is defined and cloud_provider in [ 'aws' ] -%}
  {%- else -%}
  {{ inventory_hostname }}
  {%- endif -%}
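# (As I read this template: with cloud_provider set to 'aws' it renders an
#  empty string, i.e. no hostname override; otherwise it renders the Ansible
#  inventory_hostname.)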
# Encrypting Secret Data at Rest
kube_encrypt_secret_data: false
# Graceful Node Shutdown (Kubernetes >= 1.21.0), see https://kubernetes.io/blog/2021/04/21/graceful-node-shutdown-beta/
# kubelet_shutdown_grace_period has to be greater than kubelet_shutdown_grace_period_critical_pods to allow
# non-critical pods to also terminate gracefully
kubelet_shutdown_grace_period: 60s
kubelet_shutdown_grace_period_critical_pods: 20s
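# (With these values a node shutdown gets 60s in total, of which the final
#  20s is reserved for critical pods, leaving about 40s for ordinary pods
#  to terminate.)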
# DNS configuration.
# Kubernetes cluster name, also will be used as DNS domain
cluster_name: cluster.local
# Subdomains of DNS domain to be resolved via /etc/resolv.conf for hostnet pods
ndots: 2
dns_timeout: 2
dns_attempts: 2
# Nameservers for BNYM
nameservers:
# Custom search domains to be added in addition to the default cluster search domains
searchdomains:
# Remove default cluster search domains (default.svc.{{ dns_domain }}, svc.{{ dns_domain }}).
remove_default_searchdomains: true
# Can be coredns, coredns_dual, manual or none
dns_mode: coredns
# Set manual server if using a custom cluster DNS server
manual_dns_server: 10.x.x.x
# Enable nodelocal dns cache
enable_nodelocaldns: true
enable_nodelocaldns_secondary: false
nodelocaldns_ip: 169.254.25.10
nodelocaldns_health_port: 9254
nodelocaldns_second_health_port: 9256
nodelocaldns_bind_metrics_host_ip: false
nodelocaldns_secondary_skew_seconds: 5
nodelocaldns_external_zones:
- zones:
  - example.com
  - example.io:1053
  nameservers:
  - 1.1.1.1
  - 2.2.2.2
  cache: 5
- zones:
  - https://mycompany.local:4453
  nameservers:
  - 192.168.0.53
  cache: 0
- zones:
  - mydomain.tld
  nameservers:
  - 10.233.0.3
  cache: 5
  rewrite:
  - name website.tld website.namespace.svc.cluster.local
# Enable k8s_external plugin for CoreDNS
enable_coredns_k8s_external: false
coredns_k8s_external_zone: k8s_external.local
# Enable endpoint_pod_names option for kubernetes plugin
enable_coredns_k8s_endpoint_pod_names: false
# Can be docker_dns, host_resolvconf or none
resolvconf_mode: docker_dns
# Deploy netchecker app to verify DNS resolve as an HTTP service
deploy_netchecker: false
# Ip address of the kubernetes skydns service
skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
skydns_server_secondary: "{{ kube_service_addresses|ipaddr('net')|ipaddr(4)|ipaddr('address') }}"
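# (With the default kube_service_addresses 10.233.0.0/18 these should render
#  10.233.0.3 and 10.233.0.4 respectively.)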
dns_domain: "{{ cluster_name }}"
# Container runtime
# docker for docker, crio for cri-o and containerd for containerd.
# Default: containerd
container_manager: containerd
# Additional container runtimes
kata_containers_enabled: false
kubeadm_certificate_key: "{{ lookup('password', credentials_dir + '/kubeadm_certificate_key.creds length=64 chars=hexdigits') | lower }}"
# K8s image pull policy (imagePullPolicy)
k8s_image_pull_policy: IfNotPresent
# audit log for kubernetes
kubernetes_audit: false
# define kubelet config dir for dynamic kubelet
kubelet_config_dir:
default_kubelet_config_dir: "{{ kube_config_dir }}/dynamic_kubelet_dir"
# pod security policy (RBAC must be enabled either by having 'RBAC' in authorization_modes or kubeadm enabled)
podsecuritypolicy_enabled: false
# Custom PodSecurityPolicySpec for restricted policy
podsecuritypolicy_restricted_spec: {}
# Custom PodSecurityPolicySpec for privileged policy
podsecuritypolicy_privileged_spec: {}
# Make a copy of kubeconfig on the host that runs Ansible in {{ inventory_dir }}/artifacts
kubeconfig_localhost: false
# Use ansible_host as external api ip when copying over kubeconfig.
kubeconfig_localhost_ansible_host: false
# Download kubectl onto the host that runs Ansible in {{ bin_dir }}
kubectl_localhost: false
# A comma separated list of levels of node allocatable enforcement to be enforced by kubelet.
# Acceptable options are 'pods', 'system-reserved', 'kube-reserved' and ''. Default is "".
kubelet_enforce_node_allocatable: pods
# Optionally reserve resources for OS system daemons.
system_reserved: true
# Uncomment to override default values
system_memory_reserved: 512Mi
system_cpu_reserved: 500m
system_ephemeral_storage_reserved: 2Gi
# Reservation for master hosts
system_master_memory_reserved: 256Mi
system_master_cpu_reserved: 250m
system_master_ephemeral_storage_reserved: 2Gi
# Eviction Thresholds to avoid system OOMs
# https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#eviction-thresholds
eviction_hard: {}
eviction_hard_control_plane: {}
# An alternative flexvolume plugin directory
kubelet_flexvolumes_plugins_dir: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
# Supplementary addresses that can be added in kubernetes ssl keys.
# That can be useful for example to setup a keepalived virtual IP
supplementary_addresses_in_ssl_keys: [10.0.0.1, 10.0.0.2, 10.0.0.3]
# Running on top of openstack vms with cinder enabled may lead to unschedulable pods due to NoVolumeZoneConflict restriction in kube-scheduler.
# See #2141
# Set this variable to true to get rid of this issue
volume_cross_zone_attachment: false
# Add Persistent Volumes Storage Class for corresponding cloud provider (supported: in-tree OpenStack, Cinder CSI,
# AWS EBS CSI, Azure Disk CSI, GCP Persistent Disk CSI)
persistent_volumes_enabled: false
# Container Engine Acceleration
# Enable container acceleration feature, for example use gpu acceleration in containers
nvidia_accelerator_enabled: true
# Nvidia GPU driver install. Install will be done by a (init) pod running as a daemonset.
# Important: if you use Ubuntu then you should set in all.yml 'docker_storage_options: -s overlay2'
# Array with nvidia_gpu_nodes, leave empty or comment if you don't want to install drivers.
# Labels and taints won't be set to nodes if they are not in the array.
nvidia_gpu_nodes:
- kube-gpu-001
nvidia_driver_version: "384.111"
# flavor can be tesla or gtx
nvidia_gpu_flavor: gtx
# NVIDIA driver installer images. Change them if you have trouble accessing gcr.io.
nvidia_driver_install_centos_container: atzedevries/nvidia-centos-driver-installer:2
nvidia_driver_install_ubuntu_container: gcr.io/google-containers/ubuntu-nvidia-driver-installer@sha256:7df76a0f0a17294e86f691c81de6bbb7c04a1b4b3d4ea4e7e2cccdc42e1f6d63
# NVIDIA GPU device plugin image.
nvidia_gpu_device_plugin_container: "registry.k8s.io/nvidia-gpu-device-plugin@sha256:0842734032018be107fa2490c98156992911e3e1f2a21e059ff0105b07dd8e9e"
# Support tls min version, Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13.
tls_min_version: "VersionTLS12"
# Support tls cipher suites.
tls_cipher_suites: {}
#   - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
#   - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
#   - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
#   - TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
#   - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
#   - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
#   - TLS_ECDHE_ECDSA_WITH_RC4_128_SHA
#   - TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
#   - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
#   - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
#   - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
#   - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
#   - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
#   - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
#   - TLS_ECDHE_RSA_WITH_RC4_128_SHA
#   - TLS_RSA_WITH_3DES_EDE_CBC_SHA
#   - TLS_RSA_WITH_AES_128_CBC_SHA
#   - TLS_RSA_WITH_AES_128_CBC_SHA256
#   - TLS_RSA_WITH_AES_128_GCM_SHA256
#   - TLS_RSA_WITH_AES_256_CBC_SHA
#   - TLS_RSA_WITH_AES_256_GCM_SHA384
#   - TLS_RSA_WITH_RC4_128_SHA
# Amount of time to retain events. (default 1h0m0s)
event_ttl_duration: "1h0m0s"
# Automatically renew K8S control plane certificates on first Monday of each month
auto_renew_certificates: true
# First Monday of each month
auto_renew_certificates_systemd_calendar: "Mon *-*-1,2,3,4,5,6,7 03:{{ groups['kube_control_plane'].index(inventory_hostname) }}0:00"
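# (Example rendering, if I am reading the calendar right: the first host in
#  the kube_control_plane group (index 0) gets "Mon *-*-1,2,3,4,5,6,7 03:00:00",
#  i.e. the first Monday of the month at 03:00; the second host gets 03:10,
#  so renewals are staggered per control-plane node.)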