ssjs - Notes 1
Source: Internet | Editor: 程序博客网 | Posted: 2024/06/06 04:35
Software versions
jdk1.6.0_45
zookeeper-3.4.5
hadoop-2.2.0
hbase-0.96.2-hadoop2
apache-flume-1.5.2
kafka_2.9.2-0.8.1.1
sbt-0.13.5
apache-storm-0.9.1-incubating
Role assignment
bigdata0 Zookeeper, Kafka Server, supervisor
bigdata1 Zookeeper, Kafka Server, supervisor
bigdata2 Zookeeper, Flume, Kafka Server, supervisor
bigdata3 Kafka Server, Kafka Monitor, nimbus, core
1. Flume
Working directory
/data/hdfs/data2
#Flume
export FLUME_HOME=/data/hdfs/data2/jianxin/flume/flume
export PATH=$PATH:$FLUME_HOME/bin
hadoop fs -mkdir -p hdfs://mycluster/flume/test/httpDirHDFS/
mkdir -p /home/jianxin/flume/channel/httpDirHDFS/checkpointDir
mkdir -p /home/jianxin/flume/channel/httpDirHDFS/dataDirs
Copy RTC-HttpHandlerCustom.jar into $FLUME_HOME/lib/ beforehand; it supplies the custom handler for the HTTPSource.
nohup flume-ng agent -n http_self_to_hdfs -c conf/ -f /data/hdfs/data2/jianxin/flume/flume/conf/http_self_to_hdfs -Dflume.root.logger=INFO,console >> /data/hdfs/data2/jianxin/flume/flume/logs/log1 2>&1 &
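The agent config passed via -f above is not reproduced in these notes. A hypothetical sketch of conf/http_self_to_hdfs, assuming an HTTP source backed by the handler in RTC-HttpHandlerCustom.jar (the handler class name and port are guesses), a file channel on the directories created above, and an HDFS sink on the HDFS path created above:

```properties
# Sketch only -- handler class and port are placeholders, adjust to the real config.
http_self_to_hdfs.sources  = r1
http_self_to_hdfs.channels = c1
http_self_to_hdfs.sinks    = k1

http_self_to_hdfs.sources.r1.type     = http
http_self_to_hdfs.sources.r1.bind     = 0.0.0.0
http_self_to_hdfs.sources.r1.port     = 50000
# Handler class inside RTC-HttpHandlerCustom.jar -- placeholder name
http_self_to_hdfs.sources.r1.handler  = cn.yjx.rtc.flume.HttpHandlerCustom
http_self_to_hdfs.sources.r1.channels = c1

http_self_to_hdfs.channels.c1.type          = file
http_self_to_hdfs.channels.c1.checkpointDir = /home/jianxin/flume/channel/httpDirHDFS/checkpointDir
http_self_to_hdfs.channels.c1.dataDirs      = /home/jianxin/flume/channel/httpDirHDFS/dataDirs

http_self_to_hdfs.sinks.k1.type          = hdfs
http_self_to_hdfs.sinks.k1.hdfs.path     = hdfs://mycluster/flume/test/httpDirHDFS/
http_self_to_hdfs.sinks.k1.hdfs.fileType = DataStream
http_self_to_hdfs.sinks.k1.channel       = c1
```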
Before starting http_self_to_kafka, copy $KAFKA_HOME/lib/*.jar into $FLUME_HOME/lib/,
and copy RTC-KafkaSink.jar into $FLUME_HOME/lib/ beforehand; it supplies the custom KafkaSink.
nohup flume-ng agent -n http_self_to_kafka -c conf/ -f /data/hdfs/data2/jianxin/flume/flume/conf/http_self_to_kafka -Dflume.root.logger=INFO,console >> /data/hdfs/data2/jianxin/flume/flume/logs/http_self_to_kafka_log 2>&1 &
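The conf/http_self_to_kafka file is likewise not shown. Its source and channel sections would be configured the same way as in the HDFS agent; only the sink differs. A hypothetical sink section, assuming the sink class shipped in RTC-KafkaSink.jar (the class name and property names are guesses):

```properties
# Sketch only -- sink class and property names are placeholders.
http_self_to_kafka.sinks.k1.type       = cn.yjx.rtc.flume.KafkaSink
http_self_to_kafka.sinks.k1.brokerList = bigdata0:9092,bigdata1:9092,bigdata2:9092
http_self_to_kafka.sinks.k1.topic      = flume-kafka-1
http_self_to_kafka.sinks.k1.channel    = c1
```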
2. Kafka
#kafka
export KAFKA_HOME=/opt/kafka/kafka_2.9.2-0.8.1.1
export PATH=$PATH:$KAFKA_HOME/bin
Start the Kafka monitor (on bigdata3)
cd /opt/kafka/monitor_1
./kafka_monitor.sh
Monitoring web UI: http://bigdata3:8888
Start the Kafka servers
mkdir /opt/kafka/log
nohup kafka-server-start.sh $KAFKA_HOME/config/server.properties >> /opt/kafka/log/log1 2>&1 &
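The server.properties passed above is not reproduced in these notes. A sketch of the values this cluster would typically need (broker.id must be unique per host, e.g. 0 through 3 for bigdata0 through bigdata3; the log.dirs path is a placeholder -- note that /opt/kafka/log created above holds the nohup output, not the broker's data logs):

```properties
# Sketch only -- broker.id varies per host, log.dirs is a placeholder.
broker.id=0
port=9092
log.dirs=/opt/kafka/kafka-logs
zookeeper.connect=bigdata0:2181,bigdata1:2181,bigdata2:2181
```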
List all topics
kafka-topics.sh --list --zookeeper bigdata0:2181,bigdata1:2181,bigdata2:2181
Describe all topics (detailed)
kafka-topics.sh --describe --zookeeper bigdata0:2181,bigdata1:2181,bigdata2:2181
Create a topic
kafka-topics.sh --create --zookeeper bigdata0:2181,bigdata1:2181,bigdata2:2181 --replication-factor 3 --partitions 3 --topic flume-kafka-1
Console producer for testing
kafka-console-producer.sh --broker-list bigdata0:9092,bigdata1:9092,bigdata2:9092 --topic flume-kafka-1
Console consumer for testing
kafka-console-consumer.sh --zookeeper bigdata0:2181,bigdata1:2181,bigdata2:2181 --topic flume-kafka-1 --from-beginning
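With the 3 partitions created above, each keyed message lands on one deterministic partition. A plain-Java illustration of roughly how Kafka 0.8's default partitioner picks it, namely abs(key.hashCode()) % numPartitions (a sketch, not the real class -- the actual partitioner also handles keyless messages by assigning them a random partition that sticks for a while):

```java
// Rough sketch of Kafka 0.8's default keyed partitioning; illustration only.
public class PartitionSketch {
    static int partition(String key, int numPartitions) {
        // Mask off the sign bit so the result is always non-negative,
        // then take the remainder to pick a partition in [0, numPartitions).
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // Same key always maps to the same partition for a fixed partition count.
        System.out.println(PartitionSketch.partition("user-42", 3));
    }
}
```

This is why keyed producers get per-key ordering: all messages with one key go to one partition.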
3. Storm
#storm
export STORM_HOME=/opt/storm/storm
export PATH=$PATH:$STORM_HOME/bin
Start the Storm cluster
On the master node (nimbus):
nohup /opt/storm/storm/bin/storm nimbus > /opt/storm/storm/logs/nimbus.log 2>&1 &
On the worker nodes (supervisor):
nohup /opt/storm/storm/bin/storm supervisor > /opt/storm/storm/logs/supervisor.log 2>&1 &
Start the monitoring web UI on the master (process name: core):
nohup /opt/storm/storm/bin/storm ui > /opt/storm/storm/logs/ui.log 2>&1 &
Monitoring web UI: http://bigdata3:9999/
List running topologies / kill a topology:
storm list
storm kill TOPO_NAME
Submit a topology to the Storm cluster
storm jar RTC-WordCountTopo.jar cn.yjx.rtc.storm.wordcount.topology.WordCountTopo
storm jar RTC-KafkaStormTopology.jar cn.yjx.rtc.storm.kafkaStorm.topology.KafkaStormTopology storm2hbase-20150327
When integrating Storm with Kafka, remember to copy kafka_2.9.2-0.8.1.1.jar, scala-library-2.9.2.jar, and metrics-core-2.2.0.jar into $STORM_HOME/lib.
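The WordCountTopo submitted above distributes the classic split/count logic across bolts. A plain-Java sketch of that logic, kept Java-6-compatible to match the jdk1.6.0_45 listed above (illustration only -- the actual classes inside RTC-WordCountTopo.jar are not shown in these notes):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of word-count topology logic: a "split bolt" step that tokenizes
// each sentence, and a "count bolt" step that keeps running totals.
public class WordCountSketch {

    // Split-bolt logic: one sentence in, a stream of words out.
    static String[] split(String sentence) {
        return sentence.trim().split("\\s+");
    }

    // Count-bolt logic: accumulate per-word counts across all tuples.
    static Map<String, Integer> count(List<String> sentences) {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        for (String sentence : sentences) {
            for (String word : split(sentence)) {
                Integer old = counts.get(word);
                counts.put(word, old == null ? 1 : old + 1);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(count(Arrays.asList("hello storm", "hello kafka")));
    }
}
```

In the real topology the counting state is partitioned across bolt instances by a fields grouping on the word, so each word's total lives on exactly one task.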
Other commands
rm -rf /opt/storm/storm/logs/* && rm -rf /opt/storm/storm/storm_local_dir/* && ll
Troubleshooting
1. Protobuf version conflict
java.lang.VerifyError: class org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$AppendRequestProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
Cause: apache-flume-1.4.0 and hadoop-2.2.0 bundle different versions of protobuf-java; Hadoop 2.2.0 requires protobuf-java-2.5.0.jar, so Flume must use that same version.
2. Missing winutils on Windows
ERROR Shell:303 - Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
This appears when running Hadoop client code on Windows without HADOOP_HOME pointing at a directory that contains bin\winutils.exe.