
I. Goal

Implement producer and consumer code in Java to realize Kafka's message-subscription (publish/subscribe) pattern.

II. Environment

1. CentOS 6.4
2. kafka_2.11-0.8.2.1
3. zookeeper-3.4.5-cdh5.7.0

III. Create and test a Kafka topic on Linux

1. Start Kafka
See: https://blog.csdn.net/u010886217/article/details/82973573

2. Create the topic

bin/kafka-topics.sh --create --zookeeper hadoop01:2181/kafka08 --replication-factor 1 --partitions 1 --topic subscribe_topic1
[root@hadoop01 kafka_2.11-0.8.2.1]# bin/kafka-topics.sh --describe --zookeeper hadoop01:2181/kafka08
Topic:subscribe_topic1    PartitionCount:1    ReplicationFactor:1    Configs:
    Topic: subscribe_topic1    Partition: 0    Leader: 0    Replicas: 0    Isr: 0

3. Console test
(1) Producer

bin/kafka-console-producer.sh --broker-list hadoop01:9092 --topic subscribe_topic1

(2) Consumer

bin/kafka-console-consumer.sh --zookeeper hadoop01:2181/kafka08 --topic subscribe_topic1 --from-beginning

4. Test the producer and consumer together
(1) Producer sends messages

[root@hadoop01 kafka_2.11-0.8.2.1]# bin/kafka-console-producer.sh --broker-list hadoop01:9092 --topic subscribe_topic1
[2019-09-08 13:50:13,579] WARN Property topic is not valid (kafka.utils.VerifiableProperties)
sdfs we 
sdf ewr dsf 
1 23 4

(2) Consumer receives the messages

[root@hadoop01 kafka_2.11-0.8.2.1]# bin/kafka-console-consumer.sh --zookeeper hadoop01:2181/kafka08 --topic subscribe_topic1 --from-beginning
sdfs we 
sdf ewr dsf 
1 23 4

IV. Java implementation

1. pom.xml dependencies. The kafka-clients artifact provides the new producer API used below, while the kafka_2.11 artifact provides the old high-level consumer API (kafka.javaapi.consumer):

    <!-- kafka api-->
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>0.8.2.0</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka -->
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.11</artifactId>
        <version>0.8.2.1</version>
        <!-- exclude slf4j-log4j12 to avoid a duplicate SLF4J binding -->
        <exclusions>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
        </exclusions>
    </dependency>

2. Add a hosts entry on the local Windows machine. The client bootstraps from the IP, but the broker advertises itself by the hostname hadoop01 in its metadata, and the client then resolves that hostname to locate the broker, so the following entry is required (a quick resolution check follows it):

192.168.130.3 hadoop01
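
Before running the client, it can be worth confirming that the entry is actually picked up. A minimal sketch, assuming the hostname and IP above:

import java.net.InetAddress;

public class HostCheck {
    public static void main(String[] args) throws Exception {
        // Resolve hadoop01 the same way the Kafka client will;
        // this should print 192.168.130.3 if the hosts entry is in effect.
        InetAddress addr = InetAddress.getByName("hadoop01");
        System.out.println("hadoop01 -> " + addr.getHostAddress());
    }
}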

3. Producer

(1) Code

package com.example.kafkaMQ;

import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class KafkaProducer {

    private static org.apache.kafka.clients.producer.KafkaProducer<String, String> producer;
    private final static String TOPIC = "subscribe_topic1";

    public KafkaProducer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.130.3:9092");
        // Wait for the full ISR to acknowledge each record.
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // The default partitioner assigns each record a partition from its key.
        producer = new org.apache.kafka.clients.producer.KafkaProducer<String, String>(props);
    }

    public void produce() {
        for (int i = 30; i < 40; i++) {
            String key = String.valueOf(i);
            String data = "hello kafka message:" + key;
            producer.send(new ProducerRecord<String, String>(TOPIC, key, data));
            System.out.println(data);
        }
        // Flush buffered records and release the producer's resources.
        producer.close();
    }

    public static void main(String[] args) {
        new KafkaProducer().produce();
    }
}
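
Note that send() is asynchronous and, as written, any broker error is silently dropped. A sketch of the same send with a delivery report, using the Callback interface that kafka-clients 0.8.2 provides (the logging is my addition, not part of the original code):

// Drop-in replacement for the producer.send(...) line in produce() above.
// Requires two extra imports:
//   import org.apache.kafka.clients.producer.Callback;
//   import org.apache.kafka.clients.producer.RecordMetadata;
producer.send(new ProducerRecord<String, String>(TOPIC, key, data), new Callback() {
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        if (exception != null) {
            exception.printStackTrace(); // delivery failed
        } else {
            System.out.println("delivered to partition " + metadata.partition()
                    + ", offset " + metadata.offset());
        }
    }
});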

(2) Run; log output (excerpt):

(expectResponse=true, payload=null, request=RequestSend(header={api_key=3,api_version=0,correlation_id=0,client_id=producer-1}, body={topics=[subscribe_topic1]})) to node -1
17:33:09.150 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.internals.Metadata - Updated cluster metadata version 2 to Cluster(nodes = [Node(0, hadoop01, 9092)], partitions = [Partition(topic = subscribe_topic1, partition = 0, leader = 0, replicas = [0,], isr = [0,]])
hello kafka message:30
hello kafka message:31
hello kafka message:32
17:33:09.165 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 0 at hadoop01:9092.
hello kafka message:33
hello kafka message:34
hello kafka message:35
hello kafka message:36
hello kafka message:37
hello kafka message:38
hello kafka message:39

(3) Start the console consumer in a shell; it consumes the messages successfully:

hello kafka message:30
hello kafka message:31
hello kafka message:32
hello kafka message:33
hello kafka message:34
hello kafka message:35
hello kafka message:36
hello kafka message:37
hello kafka message:38
hello kafka message:39

4. Consumer
(1) Code

package com.example.kafkaMQ;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class KafkaConsumer {

    private static final String TOPIC = "subscribe_topic1";

    public void exec() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "192.168.130.3:2181/kafka08");
        // Start from the earliest available offset when the group has no committed offset.
        props.put("auto.offset.reset", "smallest");
        props.put("group.id", "test-group");
        // The old high-level consumer's property name is auto.commit.enable
        // ("enable.auto.commit" belongs to the new consumer API and would be ignored).
        props.put("auto.commit.enable", "true");
        props.put("zookeeper.session.timeout.ms", "400");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000");

        ConsumerConfig consumerConfig = new ConsumerConfig(props);
        ConsumerConnector consumerConnector = Consumer.createJavaConsumerConnector(consumerConfig);

        // Request one stream (one consumer thread) for the topic.
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        int localConsumerCount = 1;
        topicCountMap.put(TOPIC, localConsumerCount);

        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap =
                consumerConnector.createMessageStreams(topicCountMap);
        List<KafkaStream<byte[], byte[]>> streams = consumerMap.get(TOPIC);
        streams.forEach(stream -> {
            ConsumerIterator<byte[], byte[]> it = stream.iterator();
            // hasNext() blocks until the next message arrives.
            while (it.hasNext()) {
                System.out.println(new String(it.next().message()));
            }
        });
    }

    public static void main(String[] args) throws Exception {
        new KafkaConsumer().exec();
    }
}
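
As written, exec() never returns and the ZooKeeper session stays open until the JVM dies. A minimal sketch of a cleaner shutdown, assuming a JVM shutdown hook is acceptable (this hook is my addition, not part of the original code):

// Inside exec(), after createJavaConsumerConnector(...):
// register a shutdown hook so Ctrl+C commits offsets and
// releases the ZooKeeper session before the process exits.
final ConsumerConnector connector = consumerConnector;
Runtime.getRuntime().addShutdownHook(new Thread() {
    public void run() {
        connector.shutdown(); // unblocks the ConsumerIterator and disconnects
    }
});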

(2) Run the test; the messages are received successfully (the log below shows the consumer's ZooKeeper session staying alive):

18:03:37.883 [main-SendThread(192.168.130.3:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x16d0eff58b1002a after 1ms
18:03:39.216 [main-SendThread(192.168.130.3:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x16d0eff58b1002a after 1ms
18:03:40.550 [main-SendThread(192.168.130.3:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x16d0eff58b1002a after 1ms
18:03:41.883 [main-SendThread(192.168.130.3:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x16d0eff58b1002a after 1ms
18:03:43.217 [main-SendThread(192.168.130.3:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x16d0eff58b1002a after 1ms

Reference: https://segmentfault.com/a/1190000006053486
