Tag Archives: kafka

[Solved] Kafka Startup Error: Error: VM option 'UseG1GC' is experimental and must be enabled via -XX:+UnlockExperimentalVMOptions.

After installing Kafka on a Kylin OS server, the broker failed to start the next day.

The error message was: VM option 'UseG1GC' is experimental and must be enabled via -XX:+UnlockExperimentalVMOptions

Find this JVM flag and delete it.
The configuration path is:
/app/kafka/kafka_2.12-2.8.0/bin/kafka-run-class.sh
After opening the file, find:
KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20

Delete -XX:+UseG1GC from that line, then restart the ZooKeeper cluster and start the Kafka cluster.

The service is normal.
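Alternatively, instead of deleting the flag, the JVM's own hint can be followed: unlock experimental options before enabling G1. A minimal sketch of the edited line in kafka-run-class.sh (only the flags quoted in the post are shown; the stock script carries additional options):

```shell
# In kafka-run-class.sh: place -XX:+UnlockExperimentalVMOptions before
# -XX:+UseG1GC (alternative to deleting the G1 flag on older JDKs).
KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -XX:MaxGCPauseMillis=20"
export KAFKA_JVM_PERFORMANCE_OPTS
```

The order matters: the unlock flag must come before the experimental flag it unlocks.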

Kafka creates topic error: org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 1 larger than available brokers: 0.

Executing the following statement in Kafka to create a topic fails:
[root@node01 kafka_2.11-1.0.0]# bin/kafka-topics.sh --create --topic streaming-test --replication-factor 1 --partitions 3 --zookeeper node01:2181,node02:2181,node03:2181
Error while executing topic command : Replication factor: 1 larger than available brokers: 0.
Error:[2019-10-15 20:23:25,461] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 1 larger than available brokers: 0.
Reason: the zookeeper.connect item in the broker's server.properties specifies a chroot as the root directory for Kafka's ZooKeeper data (zookeeper.connect=node01:2181,node02:2181,node03:2181/kafka), so the topic command was looking at the wrong ZooKeeper path and saw 0 brokers.
Solution: the value of the command-line parameter --zookeeper must include the same root directory, as follows:
bin/kafka-topics.sh --create --topic streaming-test --replication-factor 1 --partitions 3 --zookeeper node01:2181,node02:2181,node03:2181/kafka
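To avoid this mismatch in general, the --zookeeper argument can be derived from the broker's own configuration so the chroot always matches. A small sketch (a sample server.properties is written here purely for illustration; hostnames are from the post):

```shell
# Read zookeeper.connect (including its /kafka chroot) straight from the
# broker config, so the CLI argument can never drift from it.
props=$(mktemp)
printf 'zookeeper.connect=node01:2181,node02:2181,node03:2181/kafka\n' > "$props"  # sample config
zk=$(grep '^zookeeper.connect=' "$props" | cut -d'=' -f2-)
echo "bin/kafka-topics.sh --create --topic streaming-test --replication-factor 1 --partitions 3 --zookeeper $zk"
rm -f "$props"
```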

Kafka startup error & problem solving


Early in the morning at work, I was notified by O&M colleagues that a physical machine had gone down, taking the virtual machine with it, and the Kafka server had to be restarted.

1. Start

Start zookeeper

bin/zkServer.sh start conf/zoo.cfg &

Start Kafka

bin/kafka-server-start.sh config/server.properties &

2. Test

question 1

After starting Kafka, it keeps printing the WARN below, and ps -ef | grep kafka shows no Kafka process. Clearly Kafka failed to start:

Resetting first dirty offset of __consumer_offsets

The repeated message shows that the log-cleaner thread keeps hitting the same problem. The fastest fix is to empty Kafka's data directory. Alternatively, ignore the WARN: once a large amount of data flows in and produces segments that can be cleaned, the warning stops. Reference: https://blog.csdn.net/define_us/article/details/80537186
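Before emptying the data directory, check where it actually lives: it is whatever log.dirs points at in server.properties (often /tmp/kafka-logs by default). A sketch using a sample config file:

```shell
# Locate the broker data directory named by log.dirs before clearing it.
# A sample server.properties is written here purely for illustration.
props=$(mktemp)
printf 'log.dirs=/tmp/kafka-logs\n' > "$props"
datadir=$(grep '^log.dirs=' "$props" | cut -d'=' -f2-)
echo "data directory: $datadir"   # clear with: rm -rf "$datadir"/* (broker stopped first!)
rm -f "$props"
```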

question 2

After Kafka starts normally, run a quick end-to-end test to check that it works.

Create topic

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

Produce a message

[apps@erp-computation-4 kafka_2.11-1.1.0]$ bin/kafka-console-producer.sh --broker-list 10.17.156.8:9092 --topic test

my name is xiaoqiang

Consume the message

[apps@erp-computation-4 kafka_2.11-1.1.0]$ bin/kafka-console-consumer.sh --bootstrap-server 10.17.156.8:9092 --topic test --from-beginning

my name is xiaoqiang

At this point Kafka starts and works normally. I pressed Ctrl+C, closed the Xshell window, and went back to the code. When testing the application, a pile of errors all pointed at Kafka. Logging back into the Kafka server showed the Kafka process was gone.

Kafka starts normally

The application starts normally

ctrl+c

Application connection Kafka error

The Kafka process was killed

Finally, the problem was found: when leaving the Kafka server session, do not press Ctrl+C; exit the shell with the exit command instead.

No Kafka server to stop error handling of Kafka

Stopping the Kafka service with the kafka-server-stop.sh command fails: the process is not killed and the script reports "No kafka server to stop".

Edit kafka-server-stop.sh. Find:

PIDS=$(ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}')

Change to

PIDS=$(jps -lm | grep -i 'kafka.Kafka'| awk '{print $1}')

Command details: jps -lm lists all Java processes with their main class and arguments; grep -i 'kafka.Kafka' filters out the Kafka broker process; finally awk '{print $1}' extracts the process ID.
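To see what the pipeline actually extracts, it can be run against canned jps -lm-style output (the PIDs and process list below are made up):

```shell
# Feed sample 'jps -lm' output through the same grep/awk pipeline
# to show that only the broker's PID comes out.
sample='12345 kafka.Kafka config/server.properties
23456 sun.tools.jps.Jps -lm
34567 org.apache.zookeeper.server.quorum.QuorumPeerMain conf/zoo.cfg'
PIDS=$(printf '%s\n' "$sample" | grep -i 'kafka.Kafka' | awk '{print $1}')
echo "$PIDS"   # 12345
```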

 

After making this change, run kafka-server-stop.sh again to test: the broker now stops successfully. OK!

Solution to the FailedToSendMessageException in Kafka

These two days I have been learning Kafka. I deployed the official demo to my own virtual machine and it ran normally.

Then I deployed it to the company's R&D host and found the producer could not send messages.

Some of the error logs are as follows:

[2014-11-13 09:58:09,660] WARN Error while fetching metadata [{TopicMetadata for topic mor ->
No partition metadata for topic mor due to kafka.common.LeaderNotAvailableException}] for topic [mor]: class kafka.common.LeaderNotAvailableException (kafka.producer.BrokerPartitionInfo)
[2014-11-13 09:58:09,660] ERROR Failed to send requests for topics mor with correlation ids in [17,24] (kafka.producer.async.DefaultEventHandler)
[2014-11-13 09:58:09,660] ERROR Error in handling batch of 17 events (kafka.producer.async.ProducerSendThread)
kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:104)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:87)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:67)
at scala.collection.immutable.Stream.foreach(Stream.scala:526)
at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:66)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:44)

The procedure is exactly the same; only ZooKeeper's listening port was changed on the R&D host, and the commands were entered with the corresponding port, so the problem was not caused by the parameters.

Comparing the configuration on the R&D host and the virtual machine, I found no difference apart from this port.

In server.properties, I changed the host.name line to its commented-out form:

#host.name=localhost

and then ran it again, and found the problem was solved.

(But it works fine in the virtual machine without changing this;

I suspect the difference comes from some internal configuration that differs between the two distributions;

Virtual machine:

Distributor ID: Ubuntu
Description: Ubuntu 14.04.1 LTS
Release: 14.04
Codename: trusty

R&D Line:

LSB Version: :
Distributor ID: RedHatEnterpriseServer
Description: Red Hat Enterprise Linux Server release 5.4 (Tikanga)
Release: 5.4
Codename: Tikanga

)

 


While solving the problem I also found other issues, though so far they do not appear to affect operation.

Once ZooKeeper and the broker are up, creating the producer, the topic, and the consumer each cause ZooKeeper to report exceptions; part of the log is as follows:

[2014-11-13 09:12:13,486] INFO Got user-level KeeperException when processing sessionid:0x149a6b3e36c0001 type:setData cxid:0x3 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/config/topics/morning Error:KeeperErrorCode = NoNode for /config/topics/morning (org.apache.zookeeper.server.PrepRequestProcessor)

[2014-11-13 09:12:13,506] INFO Got user-level KeeperException when processing sessionid:0x149a6b3e36c0001 type:create cxid:0x4 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics (org.apache.zookeeper.server.PrepRequestProcessor)

[2014-11-13 09:12:13,535] INFO Processed session termination for sessionid: 0x149a6b3e36c0001 (org.apache.zookeeper.server.PrepRequestProcessor)

[2014-11-13 09:19:50,958] INFO Got user-level KeeperException when processing sessionid:0x149a6bbeaf50000 type:create cxid:0x4 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers (org.apache.zookeeper.server.PrepRequestProcessor)
[2014-11-13 09:19:50,982] INFO Got user-level KeeperException when processing sessionid:0x149a6bbeaf50000 type:create cxid:0xa zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config (org.apache.zookeeper.server.PrepRequestProcessor)
[2014-11-13 09:19:50,998] INFO Got user-level KeeperException when processing sessionid:0x149a6bbeaf50000 type:create cxid:0x10 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin (org.apache.zookeeper.server.PrepRequestProcessor)
[2014-11-13 09:19:51,295] INFO Got user-level KeeperException when processing sessionid:0x149a6bbeaf50000 type:setData cxid:0x19 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/controller_epoch Error:KeeperErrorCode = NoNode for /controller_epoch (org.apache.zookeeper.server.PrepRequestProcessor)
[2014-11-13 09:19:51,374] INFO Got user-level KeeperException when processing sessionid:0x149a6bbeaf50000 type:delete cxid:0x27 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election (org.apache.zookeeper.server.PrepRequestProcessor)

[2014-11-13 10:31:50,651] INFO Got user-level KeeperException when processing sessionid:0x149a6bbeaf5001a type:setData cxid:0x19 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/consumers/test-consumer-group/offsets/mor/0 Error:KeeperErrorCode = NoNode for /consumers/test-consumer-group/offsets/mor/0 (org.apache.zookeeper.server.PrepRequestProcessor)
[2014-11-13 10:31:50,661] INFO Got user-level KeeperException when processing sessionid:0x149a6bbeaf5001a type:create cxid:0x1a zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/consumers/test-consumer-group/offsets Error:KeeperErrorCode = NoNode for /consumers/test-consumer-group/offsets (org.apache.zookeeper.server.PrepRequestProcessor)

 

At first, since the producer could not send messages, I thought it was related to these exceptions; but the same exceptions appeared when running in the virtual machine, where they did not stop the producer from sending messages.

Searching online, some said it was caused by shutting down ZooKeeper and the broker incorrectly; others said it came from not deleting the ZooKeeper and Kafka logs under /tmp. I tried all of these methods, but the exceptions are still reported.

If anyone knows what causes these exceptions, please tell me; thank you very much.

 


In addition, when a consumer written in Java on my local machine connected to the R&D host, the connection was closed almost immediately and it never received the messages sent by the producer.

The reason is that the configured timeout is too short: the consumer dropped the connection before ZooKeeper finished reading the consumer's data. Part of the log is as follows:

[2014-11-13 10:28:47,989] INFO Accepted socket connection from /192.168.50.33:2676 (org.apache.zookeeper.server.NIOServerCnxn)
[2014-11-13 10:28:47,989] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x0, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn)

 

The solution is to configure a longer timeout, as follows:

props.put("zookeeper.session.timeout.ms", "400000");

 


The Java producer example on the official Kafka website has one place that is not written very clearly:

props.put("metadata.broker.list","broker1:9092,broker2:9092");

broker1 and broker2 here are the brokers' hostnames, not their broker IDs. For example, with two brokers on the local machine:

props.put("metadata.broker.list","localhost:9092,localhost:9093");

 

Official kafka-server-stop.sh cannot stop the Kafka process: solution


If you use the kafka-server-stop.sh script in the kafka/bin directory to stop the Kafka process, you may find that it reports:

No Kafka server to stop

Open the script (vi kafka-server-stop.sh) and you will see:

#!/bin/sh
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
PIDS=$(ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}')
#PIDS=$(jps -lm | grep -i 'kafka.Kafka' | awk '{print $1}')

if [ -z "$PIDS" ]; then
  echo "No kafka server to stop"
  exit 1
else
  kill -s TERM $PIDS
fi

At this time, change

PIDS=$(ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}')

to

PIDS=$(jps -lm | grep -i 'kafka.Kafka' | awk '{print $1}')

and Kafka can then be stopped successfully.

Sometimes we write our own Kafka shutdown script that calls this one; without this modification, such a script will not stop Kafka either.