Flume Setup and Testing
Source: Internet · Editor: 程序博客网 · Date: 2024/06/07 17:07
Log Collection with Flume
Case 1: exec source → logger sink

1. wget http://archive.apache.org/dist/flume/1.6.0/apache-flume-1.6.0-bin.tar.gz
2. tar -zxvf apache-flume-1.6.0-bin.tar.gz
3. mv apache-flume-1.6.0-bin flume
4. cd conf, then vi commands.conf:

```
# example.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = echo 'hello'

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
# The maximum number of events stored in the channel
a1.channels.c1.capacity = 1000
# The maximum number of events the channel will take from a source
# or give to a sink per transaction
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```

5. cd .. and start the agent:

```
bin/flume-ng agent --conf conf --conf-file ./conf/commands.conf --name a1 -Dflume.root.logger=INFO,console
```

Case 2: netcat source → logger sink

0. Check for and install telnet:

```
rpm -qa | grep telnet    # check whether telnet is already installed
yum list | grep telnet   # check whether telnet is in the yum repositories
# install
yum install xinetd                  # telnet depends on xinetd
sudo yum -y install telnet          # telnet client
sudo yum -y install telnet-server   # telnet server
# start the service
service xinetd restart
service iptables stop    # stop the firewall so port 44444 is reachable
```

1. cd conf, then vi example.conf:

```
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
# netcat (nc) creates TCP/IP connections and is the standard tool for working
# with TCP/UDP sockets; this source listens on a port like a netcat server
a1.sources.r1.type = netcat
a1.sources.r1.bind = hd
a1.sources.r1.port = 44444

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```

2. cd .. and start the agent:

```
bin/flume-ng agent --conf conf --conf-file conf/example.conf --name a1 -Dflume.root.logger=INFO,console
```

3. In another terminal, connect and type some test lines:

```
telnet xxxx 44444
hello,world
hi,China
```

Case 3: spooldir source → logger sink
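Instead of telnet, the netcat source in case 2 can also be exercised with a few lines of Python. This is a hypothetical helper of my own, not part of the tutorial; it assumes the agent is listening on the host and port configured above (`hd`, 44444) and that the source's default acknowledgement ("OK" per event) is enabled:

```python
#!/usr/bin/env python3
# Hypothetical test client for the Flume netcat source: each newline-terminated
# string sent over the TCP connection becomes one Flume event.
import socket

def send_lines(host, port, lines):
    """Open a TCP connection and send each line as one event."""
    with socket.create_connection((host, port), timeout=5) as sock:
        for line in lines:
            sock.sendall((line + "\n").encode("utf-8"))
            # the netcat source acknowledges each event with "OK" by default
            print(sock.recv(16).decode("utf-8", "replace").strip())

# e.g. send_lines("hd", 44444, ["hello,world", "hi,China"])
```

Each line should then appear as an event in the agent's console log, just as with the telnet session above.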
1. cd conf, then set up flume-env.sh:

```
mv flume-env.sh.template flume-env.sh
vi flume-env.sh
```

```
export JAVA_HOME=/usr/java/jdk1.7.0_51
JAVA_OPTS="-Xms8192m -Xmx8192m -Xss256k -Xmn2g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit"
# The heap is the region of memory reserved for the JVM on behalf of the Java
# program. These options set the initial and maximum heap size, the per-thread
# stack size, a 2 GB young generation collected by the parallel ParNew
# collector, the concurrent CMS collector for the old generation, and disable
# the limit on the share of time the JVM may spend in GC before throwing an
# OutOfMemoryError.
```

2. vi messages.conf:

```
agent.sources = s1
agent.channels = c1
agent.sinks = sk1

# Set up the spooling-directory source. Note that spoolDir must point at a
# directory, not a file (/var/log/messages is usually a file), and files must
# not be modified after they are placed there.
agent.sources.s1.type = spooldir
agent.sources.s1.spoolDir = /var/log/messages
agent.sources.s1.fileHeader = true
agent.sources.s1.channels = c1

agent.sinks.sk1.type = logger
agent.sinks.sk1.channel = c1

# In-memory channel
agent.channels.c1.type = memory
agent.channels.c1.capacity = 10004
agent.channels.c1.transactionCapacity = 100
```

3. cd .. and start the agent:

```
bin/flume-ng agent --conf conf --conf-file conf/messages.conf --name agent -Dflume.root.logger=INFO,console
```

Case 4: RabbitMQ source → logger sink

0. Check for and install the build dependencies (package names here are for yum-based systems, consistent with the earlier steps):

```
sudo yum -y install make gcc gcc-c++ kernel-devel m4 ncurses-devel openssl-devel unixODBC unixODBC-devel wxBase wxGTK SDL wxGTK-gl
sudo yum -y install epel-release
```

1. RabbitMQ depends on Erlang:

```
sudo yum -y install erlang
```

2. Install RabbitMQ:

```
sudo yum -y install rabbitmq-server
```

3. Enable the management plugin and start the service:

```
sudo rabbitmq-plugins enable rabbitmq_management
sudo service rabbitmq-server restart
```

4. Check that the service is up by opening the management UI:

http://node01:15672
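The spooldir source in case 3 only ingests files that are complete and never modified afterwards, so the usual pattern is to write to a temporary name and rename into the spooling directory when finished. A small sketch of that pattern (my own illustration; the directory path and file naming are assumptions, not from the tutorial):

```python
#!/usr/bin/env python3
# Hypothetical helper: atomically place a finished log file into the
# spooling directory watched by the Flume spooldir source.
import os
import time
import tempfile

def drop_log_file(spool_dir, lines):
    """Write the lines to a temp file, then rename it into spool_dir."""
    fd, tmp_path = tempfile.mkstemp(dir=spool_dir, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        f.write("\n".join(lines) + "\n")
    final_path = os.path.join(spool_dir, "messages-%d.log" % int(time.time()))
    # rename is atomic on the same filesystem, so the source never sees a
    # half-written file
    os.rename(tmp_path, final_path)
    return final_path

# e.g. drop_log_file("/path/to/spooldir", ["line one", "line two"])
```

If the source might race with the temporary files, an ignorePattern matching `.tmp` names can be set on the source as an extra safeguard.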
5. Test the RabbitMQ work queue with a producer and a consumer script. Install pika (the client library both scripts depend on) first:

```
sudo yum -y install python-pip
sudo pip install pika
```

gedit new_task.py:

```python
#!/usr/bin/env python
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)

message = ' '.join(sys.argv[1:]) or "Hello World!"
channel.basic_publish(exchange='',
                      routing_key='task_queue',
                      body=message,
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make message persistent
                      ))
print(" [x] Sent %r" % message)
connection.close()
```

gedit pro_task.py:

```python
#!/usr/bin/env python
import pika
import time

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)
print(' [*] Waiting for messages. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
    time.sleep(body.count(b'.'))  # simulate work: one second per dot
    print(" [x] Done")
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback, queue='task_queue')
channel.start_consuming()
```

Check: run `python new_task.py`, and in another terminal run `python pro_task.py`.
6. In flume/conf, gedit rabbitmq.conf:

```
# example.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = org.apache.flume.source.rabbitmq.RabbitMQSource
a1.sources.r1.hostname = node01
a1.sources.r1.port = 5672
a1.sources.r1.queuename = task_queue
a1.sources.r1.threads = 2

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```

7. Start the agent:

```
bin/flume-ng agent --conf conf --conf-file ./conf/rabbitmq.conf --name a1 -Dflume.root.logger=INFO,console
```

In another terminal:

```
python new_task.py
python new_task.py "Hi,China"
```

Case 5: RabbitMQ source → HDFS sink

1. Create the target directory in HDFS:

```
hdfs dfs -mkdir -p /flume/event
```

2. vi hdfs.conf:

```
# example.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = org.apache.flume.source.rabbitmq.RabbitMQSource
a1.sources.r1.hostname = localhost
a1.sources.r1.port = 5672
a1.sources.r1.queuename = task_queue
a1.sources.r1.threads = 2

# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /flume/event/standardlog/%Y/%m/%d
a1.sinks.k1.hdfs.filePrefix = standardlog-%Y-%m-%d-%H
a1.sinks.k1.hdfs.fileSuffix = .log
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
# use the agent's local time for the %Y/%m/%d escapes when events
# carry no timestamp header
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.rollInterval = 60
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.batchSize = 50000

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```

3. Start the agent:

```
bin/flume-ng agent --conf conf --conf-file ./conf/hdfs.conf --name a1 -Dflume.root.logger=INFO,console
```

In another terminal:

```
python new_task.py
python new_task.py "Hi,China"
```
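The %Y/%m/%d escapes in hdfs.path are filled from the event timestamp, and with round = true, roundValue = 10, roundUnit = minute the timestamp is rounded down to a 10-minute bucket before substitution. A rough illustration of that bucketing (my own sketch of the idea, not Flume's code):

```python
#!/usr/bin/env python3
# Illustration of how the HDFS sink's round/roundValue/roundUnit settings
# bucket an event timestamp before the %Y/%m/%d path escapes are expanded.
from datetime import datetime

def bucketed_path(ts, round_value_minutes=10,
                  pattern="/flume/event/standardlog/{:%Y/%m/%d}"):
    """Round ts down to the nearest round_value_minutes, then expand the path."""
    floored = ts.replace(
        minute=(ts.minute // round_value_minutes) * round_value_minutes,
        second=0, microsecond=0)
    return pattern.format(floored), floored

path, floored = bucketed_path(datetime(2024, 6, 7, 17, 7, 33))
print(path)     # /flume/event/standardlog/2024/06/07
print(floored)  # 2024-06-07 17:00:00
```

Because rollCount and rollSize are 0 here, only rollInterval = 60 closes files, so each bucket directory accumulates one new file per minute of activity.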