
There is no leader for this topic-partition as we are in the middle of a leadership election #516

Closed
thusfar7 opened this issue Jul 2, 2019 · 10 comments



thusfar7 commented Jul 2, 2019

I'm using Kafka with KafkaJS. When I do docker-compose up, everything works fine. But if I change any of the Kafka configs, or rather, whenever the Kafka container restarts, my KafkaJS Node.js code can neither send messages to a topic nor consume from it, and it throws the following error.

{"level":"ERROR","timestamp":"2019-07-02T18:27:28.585Z","logger":"kafkajs","message":"[Connection] Response Metadata(key: 3, version: 5)","broker":"kafka:9092","clientId":"test-2","error":"There is no leader for this topic-partition as we are in the middle of a leadership election","correlationId":5,"size":129}

But if I do docker-compose down and then docker-compose up, it works. More importantly, if I just do docker rm -f project-name_zookeeper_1 followed by docker-compose up, which recreates the ZooKeeper container, everything works fine again.

I also use KafkaHQ, which works whenever my KafkaJS/Node.js code works and fails whenever it fails. So I'd say the problem is not in that part of my app.

You can try to reproduce it yourself with the compose file below. The KafkaHQ password is password.

  zookeeper:
    image: wurstmeister/zookeeper:latest
    ports:
      - "2181:2181"
    networks:
      - nw_kafka

  # Kafka
  kafka:
    image: wurstmeister/kafka:2.11-1.1.1
    ports:
      - "9092:9092"
    networks:
      - nw_kafka
    environment:
      KAFKA_ADVERTISED_HOST_NAME: ${HOST_IP}
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_CREATE_TOPICS: "test-2:2:1,test-1:3:1"
      KAFKA_TOPIC_METADATA_REFRESH_INTERVAL_MS: "60000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  # KafkaHQ
  kafkahq:
    image: tchiotludo/kafkahq
    depends_on:
      - kafka
    environment:
      KAFKAHQ_CONFIGURATION: |
        kafkahq:
          connections:
            docker-kafka-server:
              properties:
                bootstrap.servers: "kafka:9092"
          security:
            basic-auth:
              root:
                password: '5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8'
                roles:
                  - topic/read
                  - topic/insert
                  - topic/delete
                  - topic/config/update
                  - node/read
                  - node/config/update
                  - topic/data/read
                  - topic/data/insert
                  - topic/data/delete
                  - group/read
                  - group/delete
                  - group/offsets/update
                  - registry/read
                  - registry/insert
                  - registry/update
                  - registry/delete
                  - registry/version/delete
            default-roles:
              - node/read
    ports:
      - 8089:8080
    networks:
      - nw_kafka

I searched through other people's issues but haven't found a solution to this.

Any idea?

sscaling (Collaborator) commented Jul 3, 2019

As per the README, broker IDs are generated automatically. So if you restart the container, it gets the next auto-generated ID when it re-registers with ZooKeeper. This should be evident in the logs (you'll see an initial broker ID of 1001, then 1002, 1003, etc.). However, any topics you have already created remain assigned to the now-nonexistent broker IDs, so their leader will be unavailable.
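One way to avoid the regenerating IDs, using the KAFKA_BROKER_ID variable the wurstmeister image documents, is to pin the broker ID in the compose file so a restarted container re-registers under the same identity. A sketch for the single-broker setup above:

```yaml
  kafka:
    image: wurstmeister/kafka:2.11-1.1.1
    environment:
      # Pin the broker ID so a restarted container reuses the same
      # registration instead of walking through 1001, 1002, 1003, ...
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```

Note that with multiple brokers, each one needs its own distinct pinned ID.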

thusfar7 (Author) commented Jul 3, 2019

@sscaling Thanks a bunch. This helped.

@thusfar7 thusfar7 closed this as completed Jul 3, 2019
@crobinson42

@sscaling I'm dealing with this issue right now and looking for a little insight if you have it. I have 2 brokers up and a topic is created (auto-create topics = true); then 1 broker goes down, which causes the producers to constantly fail when sending a message to that topic until the broker comes back up. Is this a config issue, or am I missing something in my setup?
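A likely factor, assuming the topic was auto-created with the broker default replication factor of 1: a single-replica partition has no follower to take over leadership, so when its broker dies the partition has no leader until that broker returns. In the wurstmeister KAFKA_CREATE_TOPICS format (name:partitions:replication-factor), a 2-broker cluster could get failover by raising the last field, e.g.:

```yaml
    environment:
      # "name:partitions:replication-factor": with 2 brokers, a
      # replication factor of 2 gives each partition a standby
      # replica that can be elected leader if one broker goes down
      KAFKA_CREATE_TOPICS: "test-2:2:2,test-1:3:2"
```

Auto-created topics would additionally need default.replication.factor raised on the brokers; treat the exact values here as illustrative.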


Alqio commented Jun 8, 2020

So what can I do to fix this problem? I also get this problem with KafkaJS on AWS MSK (and locally with this image).

@fabiankaimer

I'm having the same issue. Currently, I manually resolve it by completely removing zookeeper and restarting docker-compose.

Any assistance is welcome!


Alqio commented Jun 10, 2020

A solution I found was to follow these instructions:
Because of how cluster state is stored in zookeeper you should either configure a consistent broker ID, use the --no-recreate option if restarting kafka or make sure the zookeeper and kafka containers are torn down by running docker-compose rm -vfs.


michalklym commented Nov 25, 2020

docker-compose rm kafka-container-name does the job, thanks @Alqio

lispc added a commit to fluidex/dingir-exchange that referenced this issue Feb 1, 2021
… not works, so I have to use local nginx & grpcgateway in the host; (2) wurstmeister/kafka-docker#516 occurs sometimes
lispc added a commit to fluidex/dingir-exchange that referenced this issue Feb 1, 2021
Known issues: (1) host.docker.internal not works on macos (2) kafka 'in the middle of a leadership election' sometimes <wurstmeister/kafka-docker#516>
@alexey-sh

docker-compose down
docker rm -f $(docker ps -a -q)
docker volume rm $(docker volume ls -q)

didn't help me 💁🏼

@rphansen91

Removing the volume is not an option on our end as it would blow away all of our critical streams. What is the underlying issue causing no leader to be elected?


fabiankaimer commented Jul 8, 2022

Alqio recommended an appropriate solution to prevent eternal and unresolved leadership elections:

A solution I found was to follow these instructions:
Because of how cluster state is stored in zookeeper you should either configure a consistent broker ID, use the --no-recreate option if restarting kafka or make sure the zookeeper and kafka containers are torn down by running docker-compose rm -vfs.

Alternatively, a quick fix for us was to stop docker-compose, remove ZooKeeper completely (and also kafka-docker if that alone doesn't help), and restart docker-compose.
