

Automatically restart failed tasks in Kafka Connect

By : Aleks Brro
Date : September 15 2020, 09:00 PM
There isn't any way other than using the REST API to find a failed task and submit a restart request, and then running this on a periodic basis. For example:
code :
curl -s "http://localhost:8083/connectors?expand=status" | \
  jq -c -M 'map({name: .status.name } +  {tasks: .status.tasks}) | .[] | {task: ((.tasks[]) + {name: .name})}  | select(.task.state=="FAILED") | {name: .task.name, task_id: .task.id|tostring} | ("/connectors/"+ .name + "/tasks/" + .task_id + "/restart")' | \
  xargs -I{connector_and_task} curl -v -X POST "http://localhost:8083"\{connector_and_task\}
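The same periodic check can be scripted without jq. The sketch below is one way to do it with only the Python standard library; the endpoint URL is an assumption (the default Connect REST port), and it uses the documented `GET /connectors?expand=status` and `POST /connectors/<name>/tasks/<id>/restart` endpoints:

```python
import json
import urllib.request

CONNECT_URL = "http://localhost:8083"  # assumption: default Connect REST endpoint

def failed_task_restart_paths(status):
    """Given the payload of GET /connectors?expand=status, return the
    REST paths that restart every task currently in the FAILED state."""
    paths = []
    for name, body in status.items():
        for task in body["status"]["tasks"]:
            if task["state"] == "FAILED":
                paths.append(f"/connectors/{name}/tasks/{task['id']}/restart")
    return paths

def restart_failed_tasks():
    """Fetch all connector statuses and POST a restart for each failed task.
    Run this from cron or a simple loop for the periodic behaviour."""
    with urllib.request.urlopen(f"{CONNECT_URL}/connectors?expand=status") as resp:
        status = json.load(resp)
    for path in failed_task_restart_paths(status):
        req = urllib.request.Request(CONNECT_URL + path, method="POST")
        urllib.request.urlopen(req).close()
```

Scheduling `restart_failed_tasks()` every minute or so gives the same effect as the curl/jq pipeline above.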


Kafka connect tasks die with NPE randomly

By : Bridget Huang
Date : March 29 2020, 07:55 AM
The NPE originates here: https://github.com/apache/kafka/blob/ Do you have multiple partitions in your config topic? Only one partition is allowed for the config topic. This requirement is strictly enforced starting in Kafka 0.11, so later versions shouldn't hit this issue.

Build a combined docker image for snowflake-kafka-connector with cp-kafka-connect-base to deploy on kafka connect cluster

By : user3278991
Date : March 29 2020, 07:55 AM
Have you tried mounting external volumes with Docker and mapping the location where the Snowflake Connector jar is stored? https://docs.confluent.io/current/installation/docker/operations/external-volumes.html#
For example:
code :
  connect:
    image: confluentinc/kafka-connect-datagen:latest
    build:
      context: .
      dockerfile: Dockerfile
    hostname: connect
    container_name: connect
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - "8083:8083"
    volumes:
      - ~/my-location:/etc/kafka-connect/jars
    environment:
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components,/etc/kafka-connect/jars"
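Once the worker is up, you can confirm the mounted jar was actually picked up by querying the worker's `GET /connector-plugins` endpoint. A small sketch, assuming the default REST port (the class-name substring is illustrative):

```python
import json
import urllib.request

CONNECT_URL = "http://localhost:8083"  # assumption: default Connect REST endpoint

def has_plugin(plugins, class_substring):
    """True if any entry of GET /connector-plugins mentions the class."""
    return any(class_substring in p.get("class", "") for p in plugins)

def snowflake_connector_installed():
    """Query the worker and check whether the Snowflake connector loaded."""
    with urllib.request.urlopen(f"{CONNECT_URL}/connector-plugins") as resp:
        return has_plugin(json.load(resp), "SnowflakeSinkConnector")
```

If `snowflake_connector_installed()` returns False, the jar is not on `CONNECT_PLUGIN_PATH` inside the container.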

Failed to connect to and describe Kafka cluster. Apache kafka connect

By : Sprokt
Date : March 29 2020, 07:55 AM
I set up an MSK cluster in AWS and created an EC2 instance in the same VPC. After adding the following SSL config it worked.
code :
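A minimal client-side SSL configuration for an MSK TLS listener is sketched below; the truststore path follows the AWS pattern of copying the JVM's default cacerts, and both values are assumptions rather than the poster's exact settings:

```properties
# Sketch: SSL settings for a client connecting to an Amazon MSK TLS listener.
# The truststore path is an assumption (a copy of the JVM's default cacerts);
# adjust it to wherever your truststore actually lives.
security.protocol=SSL
ssl.truststore.location=/tmp/kafka.client.truststore.jks
```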


Can kafka connect - mongo source run as cluster (tasks.max > 1)

By : user3509960
Date : March 29 2020, 07:55 AM
Mongo-source doesn't support tasks.max > 1. Even if you set it greater than 1, only one task will be pulling data from Mongo into Kafka.
How many tasks are created depends on the particular connector. The method List<Map<String, String>> Connector::taskConfigs(int maxTasks) (which should be overridden when implementing your connector) returns a list whose size determines the number of tasks. If you check the mongo-kafka source connector you will see that it returns a singletonList.
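In Python-style pseudocode mirroring the Java `taskConfigs` contract (the config keys are illustrative, not the connector's real ones), the mongo source's behaviour amounts to:

```python
def task_configs(max_tasks):
    """Sketch of the mongo-kafka source's taskConfigs: it ignores
    max_tasks and returns a one-element list (Java's singletonList),
    so the framework starts exactly one task."""
    return [{"connection.uri": "mongodb://localhost:27017"}]  # illustrative settings

# Even with tasks.max=8, only one task config is produced:
assert len(task_configs(8)) == 1
```

A connector that wanted real parallelism would instead split its work into up to `max_tasks` config maps, one per task.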

kafka Connect: Tasks.max more than # of partitions but the status says RUNNING

By : Rajesh kannan
Date : March 29 2020, 07:55 AM
There may be idle tasks, but that does not necessarily mean they are in the UNASSIGNED or FAILED state. They are active and running as part of a consumer group (assuming a sink connector).
If you had a source connector, then there are simply 50 running producer threads sending data to all 40 partitions. There isn't a 1:1 limitation on how many producers there can be, as there is for consumers.
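For the sink case, the arithmetic can be sketched: with more tasks (consumers) than partitions, the surplus tasks receive no partitions yet remain RUNNING members of the group. A toy round-robin assignment (an illustration, not Kafka's actual assignor) shows this:

```python
def round_robin_assign(num_partitions, num_consumers):
    """Toy round-robin partition assignment. Consumers whose list stays
    empty are idle group members, yet the framework reports them as
    RUNNING rather than UNASSIGNED or FAILED."""
    assignment = {c: [] for c in range(num_consumers)}
    for p in range(num_partitions):
        assignment[p % num_consumers].append(p)
    return assignment

assignment = round_robin_assign(40, 50)   # 40 partitions, tasks.max = 50
idle = [c for c, parts in assignment.items() if not parts]
# len(idle) == 10: ten tasks hold no partitions but still show RUNNING
```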