Building a Cluster Environment on Alibaba Cloud

# set the hostname
vim /etc/sysconfig/network
# create the working directories
mkdir -p /usr/application/dev-env
mkdir -p /usr/application/dev-fm
# copy the installation packages to the other nodes
scp jdk-8u111-linux-x64.tar.gz root@node2:/usr/application/dev-env
scp jdk-8u111-linux-x64.tar.gz root@node3:/usr/application/dev-env
scp scala-2.11.2.tgz root@node2:/usr/application/dev-env
scp scala-2.11.2.tgz root@node3:/usr/application/dev-env
scp -v * root@node2:/usr/application/dev-fm
scp -v * root@node3:/usr/application/dev-fm
# install Java and Scala
tar -xzvf jdk-8u111-linux-x64.tar.gz
tar -xzvf scala-2.11.2.tgz
vim /etc/profile
JAVA_HOME=/usr/application/dev-env/jdk1.8.0_111
JRE_HOME=/usr/application/dev-env/jdk1.8.0_111
SCALA_HOME=/usr/application/dev-env/scala-2.11.2
CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin:$SCALA_HOME/bin
export PATH CLASSPATH JAVA_HOME JRE_HOME SCALA_HOME
source /etc/profile
# set up passwordless SSH login
cd ~/
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cd .ssh
cat id_rsa.pub >> authorized_key
# collect the public keys of all nodes into this one file
vim /etc/ssh/sshd_config
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile  /root/.ssh/authorized_key
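After the public keys have been merged, the combined file still has to reach every node before passwordless login works. A minimal sketch, assuming the three hosts are named node1, node2 and node3 as above and a CentOS 6 style init (the distribution and restart steps are not in the original post):

# run from node1: push the merged key file to the other nodes
>scp ~/.ssh/authorized_key root@node2:/root/.ssh/
>scp ~/.ssh/authorized_key root@node3:/root/.ssh/
# reload sshd so the edited sshd_config takes effect, then test
>service sshd restart
>ssh node2 hostname
>ssh node3 hostname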

Setting Up the Flume Environment

Nodes: node1, node2, node3
1. Deploy a collection agent on each of node1, node2 and node3.
2. Deploy a collector on node3 that aggregates the monitoring data from node1, node2 and node3 and temporarily stores it on the local file system.

This Flume setup collects system runtime data such as memory, disk and general system information. Sigar gathers the metrics, and a Flume agent serves as the launching shell, sending the data over a socket to a TCP port on the local machine.

Steps:
1. Write a custom Source that acts as the launching shell for the task. Code: https://github.com/MouseSong/Big-Data/tree/master/bigdata_flume_customer
2. Upload the Sigar .so files to the classpath (repeat on all three nodes).
3. Upload the custom Source jar to flume_home/lib (repeat on all three nodes).
4. Write the agent configuration file sigar-shell.conf that launches Sigar:
# define names of source, channel and sink
a1.sources=r1
a1.channels=c1
a1.sinks=k1
# configure sigar source
a1.sources.r1.type=customer.source.SigarSource
a1.sources.r1.no=node1
a1.sources.r1.hostname=node1
a1.sources.r1.port=4141
# configure channel
a1.channels.c1.type=memory
a1.channels.c1.capacity=1000
a1.channels.c1.transactionCapacity=100
# configure sink
a1.sinks.k1.type=logger
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1
5. Write the data-collecting agent configuration file hostinfo.conf:
# define names of source, channel and sink
a1.sources=r1
a1.channels=c1
a1.sinks=k1
# configure syslogtcp source
a1.sources.r1.type=syslogtcp
a1.sources.r1.host=node1
a1.sources.r1.port=4141
a1.sources.r1.interceptors=i1 i2
a1.sources.r1.interceptors.i1.type=host
a1.sources.r1.interceptors.i1.hostHeader=ip
a1.sources.r1.interceptors.i2.type=timestamp
# configure channel
a1.channels.c1.type=memory
a1.channels.c1.capacity=10000
a1.channels.c1.transactionCapacity=1000
# configure avro sink
a1.sinks.k1.type=avro
a1.sinks.k1.hostname=node3
a1.sinks.k1.port=5555
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1
6. Copy the two configuration files to node2 and node3 and change node1 to the corresponding hostname or IP.
7. Write the collector configuration file (start-up commands for the collector and the agents are sketched after this list):
# define names of source, channel and sink
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# configure avro source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = node3
a1.sources.r1.port = 5555
# configure file_roll sink
a1.sinks.k1.type = file_roll
a1.sinks.k1.sink.directory=/usr/application/dev-fm/flume1.7.0/temp/datas
a1.sinks.k1.sink.rollSize=5096000000
a1.sinks.k1.sink.round=true
a1.sinks.k1.sink.roundValue=12
a1.sinks.k1.sink.roundUnit=hour
a1.sinks.k1.channel=c1
# configure channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 1000
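The post does not show how the agents are started. A minimal sketch, assuming the three files are placed in the conf directory, the collector file is named collector.conf (a name not given in the original), and the Flume install lives in the flume1.7.0 directory used later in this post:

# on node3: start the collector first (it listens on avro port 5555)
>cd /usr/application/dev-fm/flume1.7.0
>nohup bin/flume-ng agent -n a1 -c conf -f conf/collector.conf > collector.log 2>&1 &
# on every node: start the syslogtcp receiver, then the Sigar shell agent
>nohup bin/flume-ng agent -n a1 -c conf -f conf/hostinfo.conf > hostinfo.log 2>&1 &
>nohup bin/flume-ng agent -n a1 -c conf -f conf/sigar-shell.conf > sigar-shell.log 2>&1 &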

Setting Up the Hadoop Environment

1. Download and unpack (omitted).
2. Edit the environment variables:
# java
JAVA_HOME=/usr/application/dev-env/jdk1.8.0_111
JRE_HOME=/usr/application/dev-env/jdk1.8.0_111
# scala
SCALA_HOME=/usr/application/dev-env/scala-2.11.2
# hadoop
HADOOP_HOME=/usr/application/dev-fm/hadoop-2.5.2
YARN_HOME=/usr/application/dev-fm/hadoop-2.5.2
CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin:$SCALA_HOME/bin:$HADOOP_HOME/bin
export PATH CLASSPATH JAVA_HOME JRE_HOME SCALA_HOME
export HADOOP_HOME YARN_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
3. Edit hadoop-env.sh:
export JAVA_HOME=/usr/application/dev-env/jdk1.8.0_111
4. Edit slaves:
node1
node2
node3
5. Edit core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://node1:9000</value>
  </property>
</configuration>
6. Edit hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/application/dev-fm/hadoop-2.5.2/hadoop_filesystem/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/application/dev-fm/hadoop-2.5.2/hadoop_filesystem/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
7. Edit mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>node1:9001</value>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>4</value>
  </property>
</configuration>
8. Copy the configured Hadoop directory to the other two machines:
>scp -r hadoop-2.5.2/ root@node2:/usr/application/dev-fm/
>scp -r hadoop-2.5.2/ root@node3:/usr/application/dev-fm/
9. Format the namenode:
>hadoop namenode -format
10. Start Hadoop:
>./start-all.sh
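A quick way to confirm the cluster came up is to check the Java processes on each node and the HDFS report. A minimal sketch (run from node1; it assumes only the layout configured above):

# node1 should list NameNode/SecondaryNameNode, the slaves DataNode
>jps
# summary of live datanodes and cluster capacity
>hdfs dfsadmin -report
# the root of the file system should be browsable
>hdfs dfs -ls /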

Setting Up the ZooKeeper Cluster

1. Download ZooKeeper: http://zookeeper.apache.org/
2. Unpack the archive:
>tar -xzvf zookeeper-3.4.6.tar.gz
3. Create the ZooKeeper data directory:
>mkdir -p /usr/application/dev-fm/zookeeper-3.4.6/zk_datas
4. Edit the configuration file:
>cd conf/
>cp zoo_sample.cfg zoo.cfg
>vim zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/application/dev-fm/zookeeper-3.4.6/zk_datas
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=outer1:2888:3888
server.2=outer2:2888:3888
server.3=outer3:2888:3888
5. Create the myid file in the ZooKeeper data directory and give it an id value:
>echo "1" >> myid
6. Copy the configured ZooKeeper directory to node2 and node3:
>scp -r zookeeper-3.4.6/ root@node2:/usr/application/dev-fm
>scp -r zookeeper-3.4.6/ root@node3:/usr/application/dev-fm
7. Change the value in myid to 2 and 3 on node2 and node3 respectively.
8. Start the three ZooKeeper nodes (see the sketch below).
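Step 8 gives no commands. A minimal sketch using the scripts shipped with ZooKeeper 3.4.6, run from the zookeeper-3.4.6 directory on every node; one node should report Mode: leader and the other two Mode: follower:

>bin/zkServer.sh start
>bin/zkServer.sh status
# optional: connect with the CLI to confirm the ensemble answers
>bin/zkCli.sh -server outer1:2181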

Setting Up the Kafka Cluster

1. Download the package: http://kafka.apache.org/
2. Unpack it:
>tar -xvzf kafka_2.11-0.9.0.0.tgz
3. Edit the configuration file:
>vim config/server.properties
broker.id=0
host.name=outer1
log.dirs=/usr/application/dev-fm/kafka0.9/datas
zookeeper.connect=outer1:2181,outer2:2181,outer3:2181
4. Copy the Kafka installation to node2 and node3, change broker.id to 1 and 2, and change host.name to outer2 and outer3.
5. Start the Kafka broker on all three nodes:
>nohup bin/kafka-server-start.sh config/server.properties > ./kafka.log 2>&1 &
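A quick way to confirm all three brokers registered is to create the topic the collector will publish to and push a message through it. A minimal sketch with the console tools shipped with Kafka 0.9 (the topic name test matches the Flume Kafka sink configuration in the next section; partition and replication counts are an assumption):

# create the topic, replicated across all three brokers
>bin/kafka-topics.sh --create --zookeeper outer1:2181 --replication-factor 3 --partitions 3 --topic test
>bin/kafka-topics.sh --describe --zookeeper outer1:2181 --topic test
# in one terminal: produce a test message
>bin/kafka-console-producer.sh --broker-list outer1:9092 --topic test
# in another terminal: consume it back
>bin/kafka-console-consumer.sh --zookeeper outer1:2181 --topic test --from-beginning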

Configuring the Flume Kafka Sink

# define names of source, channels and sinks
collector.sources = r1
collector.sinks = k1 k2
collector.channels = c1
# configure avro source
collector.sources.r1.type = avro
collector.sources.r1.channels = c1
collector.sources.r1.bind = outer3
collector.sources.r1.port = 5555
# configure file_roll sink (disabled)
#collector.sinks.k2.type = file_roll
#collector.sinks.k2.sink.directory=/usr/application/dev-fm/flume1.7.0/temp/datas
#collector.sinks.k2.sink.rollSize=5096000000
#collector.sinks.k2.sink.round=true
#collector.sinks.k2.sink.roundValue=12
#collector.sinks.k2.sink.roundUnit=hour
#collector.sinks.k2.channel=c1
# configure kafka sink
collector.sinks.k2.type=org.apache.flume.sink.kafka.KafkaSink
collector.sinks.k2.brokerList=outer1:9092
collector.sinks.k2.partitioner.class=org.apache.flume.plugins.SinglePartition
collector.sinks.k2.partition.key=1
collector.sinks.k2.serializer.class=kafka.serializer.StringEncoder
collector.sinks.k2.request.required.acks=0
collector.sinks.k2.max.message.size=1000000
collector.sinks.k2.kafka.topic=test
collector.sinks.k2.channel=c1
# configure hdfs sink
collector.sinks.k1.type=hdfs
collector.sinks.k1.hdfs.path=hdfs://outer1:9000/flume/hostinfo
collector.sinks.k1.hdfs.filePrefix=hostinfo
collector.sinks.k1.hdfs.minBlockReplicas=1
collector.sinks.k1.hdfs.rollInterval=600
collector.sinks.k1.hdfs.rollSize=0
collector.sinks.k1.hdfs.rollCount=0
collector.sinks.k1.hdfs.idleTimeout=0
collector.sinks.k1.channel=c1
# configure channel
collector.channels.c1.type = memory
collector.channels.c1.capacity = 10000
collector.channels.c1.transactionCapacity = 1000
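With this configuration the collector fans the same events out to Kafka and to HDFS, so an end-to-end check is to watch both destinations while the agents run. A minimal sketch (topic and HDFS path taken from the configuration above):

# monitoring events arriving in Kafka
>bin/kafka-console-consumer.sh --zookeeper outer1:2181 --topic test
# rolled files written by the HDFS sink
>hdfs dfs -ls /flume/hostinfo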