The Way of Spark (Advanced Series) — From Beginner to Expert: Section 15 — Kafka 0.8.2.1 Cluster Setup
Author: 周志湖
WeChat: zhouzhihubeyond
This section lays the groundwork for the next one, which covers Kafka with Spark Streaming.
Main contents
1. Kafka cluster setup
1. Kafka Cluster Setup
1 Kafka installation and configuration
Download Scala 2.10 - kafka_2.10-0.8.2.1.tgz from the following address:
http://kafka.apache.org/downloads.html
After the download completes, extract the archive with the command
tar -zxvf kafka_2.10-0.8.2.1.tgz
and have a look at the extracted directory.
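A quick way to confirm the layout (a sketch; it assumes the archive was extracted under /hadoopLearning, the install path used by the commands later in this section):
cd /hadoopLearning/kafka_2.10-0.8.2.1
ls
# the standard Kafka distribution layout: bin/  config/  libs/  LICENSE  NOTICE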
Go into the config directory and edit server.properties so that it contains the following:
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################
# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=sparkmaster

# ... settings in between omitted; the defaults are fine ...

############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=sparkmaster:2181,sparkslave01:2181,sparkslave02:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
Copy the entire installation directory to the other machines:
root@sparkmaster:/hadoopLearning# scp -r kafka_2.10-0.8.2.1/ sparkslave01:/hadoopLearning/
root@sparkmaster:/hadoopLearning# scp -r kafka_2.10-0.8.2.1/ sparkslave02:/hadoopLearning/
On sparkslave01, set server.properties as follows (only broker.id and host.name differ from the master's configuration):
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

############################# Socket Server Settings #############################
# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=sparkslave01

# ... settings in between omitted; the defaults are fine ...

############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=sparkmaster:2181,sparkslave01:2181,sparkslave02:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
On sparkslave02, set server.properties as follows:
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2

############################# Socket Server Settings #############################
# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=sparkslave02

# ... settings in between omitted; the defaults are fine ...

############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=sparkmaster:2181,sparkslave01:2181,sparkslave02:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
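Each broker registers itself in ZooKeeper, so the ensemble listed in zookeeper.connect must be reachable before the brokers are started. A quick sanity check, sketched here under the assumption that the ZooKeeper ensemble from the earlier cluster-setup sections is already running and that nc is installed:
echo ruok | nc sparkmaster 2181     # a healthy ZooKeeper server answers "imok"
echo ruok | nc sparkslave01 2181
echo ruok | nc sparkslave02 2181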
2 Start the Kafka cluster
root@sparkslave02:/hadoopLearning/kafka_2.10-0.8.2.1# bin/kafka-server-start.sh config/server.properties
root@sparkslave01:/hadoopLearning/kafka_2.10-0.8.2.1# bin/kafka-server-start.sh config/server.properties
root@sparkmaster:/hadoopLearning/kafka_2.10-0.8.2.1# bin/kafka-server-start.sh config/server.properties
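The commands above run each broker in the foreground, one terminal per node. If you would rather keep the terminals free, here is a minimal sketch for starting a broker in the background and checking that it came up (the log file name is only an example):
nohup bin/kafka-server-start.sh config/server.properties > kafka-server.log 2>&1 &
jps    # a "Kafka" process should appear on each node once the broker is up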
3 Create a topic
Run the following command on sparkmaster to create a topic:
root@sparkmaster:/hadoopLearning/kafka_2.10-0.8.2.1# bin/kafka-topics.sh --create --topic kafkatopictest --replication-factor 3 --partitions 2 --zookeeper sparkmaster:2181
Created topic "kafkatopictest".
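To confirm the topic exists and to see how its partitions and replicas were assigned across the three brokers, the same kafka-topics.sh script can be used:
bin/kafka-topics.sh --list --zookeeper sparkmaster:2181
bin/kafka-topics.sh --describe --topic kafkatopictest --zookeeper sparkmaster:2181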
4 Send messages to Kafka
Run the following command on sparkslave01 and send a message to Kafka:
root@sparkslave01:/hadoopLearning/kafka_2.10-0.8.2.1# bin/kafka-console-producer.sh --broker-list sparkslave01:9092 --sync --topic kafkatopictest
Hello Kafka, I will test Spark Streaming on you next lesson
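The console producer reads messages from standard input, one per line, so instead of typing interactively you can also pipe in a file. A small sketch (messages.txt is a hypothetical file of test messages):
cat messages.txt | bin/kafka-console-producer.sh --broker-list sparkslave01:9092 --topic kafkatopictest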
5 Receive messages from Kafka
Run the following command on sparkslave02 to consume the messages sent to Kafka:
root@sparkslave02:/hadoopLearning/kafka_2.10-0.8.2.1# bin/kafka-console-consumer.sh --zookeeper sparkmaster:2181 --topic kafkatopictest --from-beginning
Hello Kafka, I will test Spark Streaming on you next lesson
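The console consumer keeps running until it is stopped with Ctrl-C. Once testing is finished, each broker can be shut down cleanly with the stop script shipped in the distribution:
bin/kafka-server-stop.sh    # run on each node to stop the local broker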
At this point the Kafka cluster setup and testing are complete.
In the next section, we will demonstrate how to use Kafka together with Spark Streaming.