This repository has been archived by the owner on May 16, 2023. It is now read-only.

[elasticsearch] master pods not deploying #253

Closed
rewt opened this issue Aug 9, 2019 · 9 comments
Labels
bug Something isn't working triage/stale

Comments

@rewt

rewt commented Aug 9, 2019

Anyone else having an issue with master pods not deploying?

Release "elastic-master" does not exist. Installing it now.
NAME:   elastic-master
LAST DEPLOYED: Fri Aug  9 14:50:50 2019
NAMESPACE: elastic
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                       DATA  AGE
dev-cluster-master-config  1     1s

==> v1/Service
NAME                         TYPE       CLUSTER-IP   EXTERNAL-IP  PORT(S)            AGE
dev-cluster-master           ClusterIP  10.0.242.25  <none>       9200/TCP,9300/TCP  1s
dev-cluster-master-headless  ClusterIP  None         <none>       9200/TCP,9300/TCP  1s

==> v1beta1/PodDisruptionBudget
NAME                    MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
dev-cluster-master-pdb  N/A            1                0                    1s

==> v1beta1/StatefulSet
NAME                READY  AGE
dev-cluster-master  0/3    1s


NOTES:
1. Watch all cluster members come up.
  $ kubectl get pods --namespace=elastic -l app=dev-cluster-master -w
2. Test cluster health using Helm test.
  $ helm test elastic-master
@tfluehmann

Can you share the values.yml with us?
Which node-affinity did you choose?

@rewt
Author

rewt commented Aug 12, 2019

nodeAffinity is undefined

nodeAffinity: {}

---
clusterName: "dev-cluster"
nodeGroup: "master"

# The service that non master groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: "dev"

# Elasticsearch roles that will be applied to this nodeGroup
# These will be set as environment variables. E.g. node.master=true
roles:
  master: "true"
  ingest: "false"
  data: "false"

replicas: 3
minimumMasterNodes: 2

esMajorVersion: ""

# Allows you to add any config files in /usr/share/elasticsearch/config/
# such as elasticsearch.yml and log4j2.properties
esConfig:
#  elasticsearch.yml: |
#    key:
#      nestedkey: value
  log4j2.properties: |
    status = error
    appender.console.type = Console
    appender.console.name = console
    appender.console.layout.type = PatternLayout
    appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
    rootLogger.level = info
    rootLogger.appenderRef.console.ref = console
    logger.searchguard.name = com.floragunn
    logger.searchguard.level = info

# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
#  - name: MY_ENVIRONMENT_VAR
#    value: the_value_goes_here

# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []
#  - name: elastic-certificates
#    secretName: elastic-certificates
#    path: /usr/share/elasticsearch/config/certs
image: "myrepo.azurecr.io/elasticsearch-docker"
imageTag: "7.2.0"
imagePullPolicy: "always"

# image: "myrepo.azurecr.io/elasticsearch-docker"
# imageTag: "7.2.0"
# imagePullPolicy: "always"

podAnnotations: {}
  # iam.amazonaws.com/role: es-cluster

# additionals labels
labels: {}

esJavaOpts: "-Xmx512m -Xms512m"

resources:
  requests:
    cpu: "25m"
    memory: "512Mi"
  limits:
    # cpu: "2"
    # memory: "32Gi"

initResources: {}
  # limits:
  #   cpu: "25m"
  #   # memory: "128Mi"
  # requests:
  #   cpu: "25m"
  #   memory: "128Mi"

sidecarResources: {}
  # limits:
  #   cpu: "25m"
  #   # memory: "128Mi"
  # requests:
  #   cpu: "25m"
  #   memory: "128Mi"

networkHost: "0.0.0.0"

volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "managed-premium"
  resources:
    requests:
      storage: 10Gi

persistence:
  enabled: true
  annotations: {}

extraVolumes: []
  # - name: extras
  #   emptyDir: {}

extraVolumeMounts: []
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

extraInitContainers: []
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""

# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"

# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "soft"

# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
nodeAffinity: {}

# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"

protocol: http
httpPort: 9200
transportPort: 9300

service:
  type: ClusterIP
  nodePort:
  annotations: {}

updateStrategy: RollingUpdate

# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1

podSecurityContext:
  fsGroup: 1000

# The following value is deprecated,
# please use the above podSecurityContext.fsGroup instead
fsGroup: ""

securityContext:
  capabilities:
    drop:
    - ALL
  # readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000

# How long to wait for elasticsearch to stop gracefully
terminationGracePeriod: 120

sysctlVmMaxMapCount: 262144

readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5

# https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html#request-params wait_for_status
clusterHealthCheckParams: "wait_for_status=green&timeout=1s"

## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""

imagePullSecrets: []
nodeSelector: {}
tolerations: []

# Enabling this will publically expose your Elasticsearch instance.
# Only enable this if you have security enabled on your cluster
ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - chart-example.local
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

nameOverride: ""
fullnameOverride: ""

# https://github.com/elastic/helm-charts/issues/63
masterTerminationFix: false

lifecycle: {}
  # preStop:
  #   exec:
  #     command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
  # postStart:
  #   exec:
  #     command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]

sysctlInitContainer:
  enabled: true

@Crazybus
Contributor

==> v1beta1/StatefulSet
NAME                READY  AGE
dev-cluster-master  0/3    1s

It looks like they deployed just fine. Did they fail to start up properly?

Could you include the output of helm get elastic-master and kubectl get pods -l release=elastic-master?
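
For reference, those diagnostics can be gathered like this (a quick sketch using the release and namespace shown in the output above; the describe command is an extra suggestion on top of what was requested):

  $ helm get elastic-master
  $ kubectl get pods --namespace=elastic -l release=elastic-master
  # If no pods show up at all, the StatefulSet events usually say why:
  $ kubectl describe statefulset dev-cluster-master --namespace=elastic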

minimumMasterNodes: 2

This setting isn't used for Elasticsearch 7 and doesn't do anything (https://github.com/elastic/helm-charts/tree/master/elasticsearch#configuration)
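
For Elasticsearch 7.x the override can simply be omitted (a sketch, assuming the chart derives bootstrapping from replicas via cluster.initial_master_nodes):

  replicas: 3
  # minimumMasterNodes: 2   # only meaningful for Elasticsearch 6.x clusters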

clusterName: "dev-cluster"
nodeGroup: "master"

# The service that non master groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: "dev"

As the comment says, this should be set to clusterName + "-" + nodeGroup, which would be dev-cluster-master. However, since you are using the default nodeGroup: master, you should just leave this blank.
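
In values.yaml terms that would be roughly the following (a sketch of the two options described above, not a tested config):

  clusterName: "dev-cluster"
  nodeGroup: "master"

  # Option 1: leave it empty and let the chart derive clusterName + "-" + nodeGroup
  masterService: ""

  # Option 2: set it explicitly
  # masterService: "dev-cluster-master"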

@jmlrt added the bug (Something isn't working) label on Sep 23, 2019
@kvignesh1420

kvignesh1420 commented Oct 25, 2019

Any update on this issue? I am facing the same problem: the pods are not being created. The values.yaml file is the standard one from the master branch.

$ helm install --namespace elastic --name elasticsearch elastic/elasticsearch --set imageTag=7.4.0 --set volumeClaimTemplate.storageClassName=nfs
NAME:   elasticsearch
LAST DEPLOYED: Fri Oct 25 19:01:17 2019
NAMESPACE: elastic
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME                           TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)            AGE
elasticsearch-master           ClusterIP  10.100.237.73  <none>       9200/TCP,9300/TCP  0s
elasticsearch-master-headless  ClusterIP  None           <none>       9200/TCP,9300/TCP  0s

==> v1/StatefulSet
NAME                  READY  AGE
elasticsearch-master  0/3    0s

==> v1beta1/PodDisruptionBudget
NAME                      MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
elasticsearch-master-pdb  N/A            1                0                    0s


NOTES:
1. Watch all cluster members come up.
  $ kubectl get pods --namespace=elastic -l app=elasticsearch-master -w
2. Test cluster health using Helm test.
  $ helm test elasticsearch
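
When the StatefulSet stays at 0/3 like this, a few commands usually narrow down whether pods are failing to be created, failing to schedule, or waiting on storage (a general-purpose sketch rather than anything from this thread; the namespace and names are taken from the output above):

  # Are any pods created at all? Pending vs. absent points to different causes.
  $ kubectl get pods --namespace=elastic -l app=elasticsearch-master

  # Events on the StatefulSet explain failures to create pods (e.g. an invalid spec).
  $ kubectl describe statefulset elasticsearch-master --namespace=elastic

  # Unbound PersistentVolumeClaims are a common reason pods stay Pending,
  # especially with a custom storageClassName such as "nfs".
  $ kubectl get pvc --namespace=elastic
  $ kubectl get events --namespace=elastic --sort-by=.metadata.creationTimestamp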

@rewt
Author

rewt commented Oct 26, 2019 via email

@kvignesh1420

Turns out there was an issue with my kube-controller as well. Anyhow, the pods have been deployed now.

@fatmcgav
Contributor

fatmcgav commented Nov 8, 2019

@rewt Is this still an issue for you?

@botelastic

botelastic bot commented Feb 6, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@botelastic

botelastic bot commented Mar 7, 2020

This issue has been automatically closed because it has not had recent activity since being marked as stale.

botelastic bot closed this as completed on Mar 7, 2020