Installing Kafka on Alibaba Cloud

Source: Internet · 程序博客网 · 2024/05/21 00:54


This walkthrough sets up a simple test environment using Kafka kafka_2.11-0.9.0.1, ZooKeeper zookeeper-3.4.6, and JDK jdk1.7.0_79.

1. Install and start ZooKeeper

1. Download zookeeper-3.4.6.tar.gz from the official site.
2. Extract it and set the ZooKeeper environment variable (ZOOKEEPER_HOME).
After the edit, the relevant part of /etc/profile looks like this:

```shell
JAVA_HOME=/usr/java/jdk1.7.0_79
JRE_HOME=/usr/java/jdk1.7.0_79/jre
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export JAVA_HOME JRE_HOME PATH CLASSPATH
export MAVEN_HOME=/usr/maven/apache-maven-3.5.0
export PATH=${MAVEN_HOME}/bin:${PATH}
export ZOOKEEPER_HOME=/usr/zookeeper/zookeeper-3.4.6
export PATH=${ZOOKEEPER_HOME}/bin:${PATH}
export PATH
```

Don't forget to make the changes take effect immediately: source /etc/profile
3. In ZooKeeper's conf directory, rename zoo_sample.cfg to zoo.cfg (the default client port in the downloaded file is 2181).
4. Start ZooKeeper from its bin directory:

```shell
cd ${ZOOKEEPER_HOME}/bin    # the ZooKeeper install directory
./zkServer.sh start
```

A successful start prints something like:

```
JMX enabled by default
Using config: /usr/zookeeper/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
```

Check the ZooKeeper process ID:

```shell
ps -ef | grep zookeeper
```

If a ZooKeeper Java process shows up in the output, the start succeeded.
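A small refinement of the check above, as a sketch: bracketing one letter of the pattern keeps grep from matching its own command line, so an empty result really means ZooKeeper is not running.

```shell
# ps output normally includes the grep command itself; '[z]ookeeper' still
# matches a running ZooKeeper JVM but not the literal "[z]ookeeper" in
# grep's own command line.
ps -ef | grep '[z]ookeeper' || echo "zookeeper is not running"
```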


2. Install and start Kafka

  1. Download kafka_2.11-0.9.0.1.tgz from the official site.
  2. Extract it to /home/kafka/kafka_2.11-0.9.0.1 and change config/server.properties to:
```properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

listeners=PLAINTEXT://:9092

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=x.x.x.x

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured.  Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
advertised.host.name=x.x.x.x

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>

# The number of threads handling network requests
num.network.threads=3

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma seperated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to exceessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
```

The settings that need attention here: host.name must be the Alibaba Cloud instance's private (intranet) address, and advertised.host.name its public (internet) address. The broker binds to the private interface, while producers and consumers outside Alibaba Cloud connect through the advertised public address.
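For illustration, the two address lines might look like this (the IPs below are placeholders, not values from this setup; substitute your instance's actual private and public addresses):

```properties
# Bind to the instance's private (intranet) interface
host.name=172.16.0.10
# Address published to clients via ZooKeeper; must be reachable from outside
advertised.host.name=203.0.113.10
```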

3. Start Kafka: go to the bin directory and run:

```shell
./kafka-server-start.sh ../config/server.properties
```

Note: because this Alibaba Cloud instance has relatively little memory, the heap setting inside kafka-server-start.sh has to be lowered, from export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G" to export KAFKA_HEAP_OPTS="-Xmx512M -Xms512M".
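The heap edit above can be scripted with sed. Here the substitution is shown on the original line as a string; with sed -i it would edit bin/kafka-server-start.sh in place.

```shell
# Rewrite the 1G heap flags to 512M; this prints the modified line.
echo 'export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"' \
  | sed 's/-Xmx1G -Xms1G/-Xmx512M -Xms512M/'
```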

4. After a normal start the log prints INFO KafkaConfig values: …; confirm the process is up with:

```shell
ps -ef | grep kafka
```

If the Kafka process appears, the start succeeded.


3. Create a topic and run a simple test

  1. Leave the Kafka service running, open another terminal, and create a topic:
    ./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
  2. List all topics:
    ./kafka-topics.sh --list --zookeeper localhost:2181
    This should print test.
  3. Show the details of a single topic:
    ./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
  4. Producer test: send messages
    In this terminal, run:
    ./kafka-console-producer.sh --broker-list localhost:9092 --topic test
    Each line ended with Enter sends one message to Kafka; this example sent 2 messages.
  5. Consumer test: receive messages
    Open yet another terminal and run:
    ./kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
    You should see the 2 messages that were just sent.
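The interactive steps above can also be scripted end to end. This is a sketch assuming the broker and ZooKeeper from the earlier steps are still running on localhost; run it from Kafka's bin directory.

```shell
# Pipe two messages into the console producer instead of typing them.
printf 'hello\nkafka\n' | ./kafka-console-producer.sh --broker-list localhost:9092 --topic test

# Read them back; --max-messages makes the consumer exit after two
# messages instead of blocking forever.
./kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning --max-messages 2
```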

4. Manually delete a topic

  1. Delete the topic's directories under Kafka's storage directory (the log.dirs setting in server.properties, default /tmp/kafka-logs).
  2. If the server.properties that Kafka was started with does not set delete.topic.enable=true, this is not a real delete; the topic is only flagged as marked for deletion.
  3. Log in with the ZooKeeper client: cd $ZOOKEEPER_HOME/bin and run ./zkCli.sh
  4. Locate the topic znodes: ls /brokers/topics
  5. Remove the one you want to delete: rmr /brokers/topics/test. The topic is now completely gone. (This step deletes the metadata.)
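For completeness, when delete.topic.enable=true is set in server.properties before the broker starts, the CLI handles the whole cleanup and none of the manual steps above are needed (a sketch, run from Kafka's bin directory against the running cluster):

```shell
# Supported deletion route with delete.topic.enable=true
./kafka-topics.sh --delete --zookeeper localhost:2181 --topic test
```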