Hadoop 2.6.5 and HBase 1.2.6 standalone setup

1. Hadoop standalone setup
1.1. Download Hadoop from the official site (this guide uses version 2.6.5). Create a hadoop directory under /usr/local/, extract hadoop-2.6.5.tar.gz into it, and create the tmp and hdfs directories:
# mkdir /usr/local/hadoop
# cd /usr/local/hadoop
# tar -zxvf hadoop-2.6.5.tar.gz
# mkdir tmp
# mkdir hdfs
# mkdir hdfs/name hdfs/data


1.2. Add HADOOP_HOME to /etc/profile
# vi /etc/profile

Add the following:
# set hadoop home
HADOOP_HOME=/usr/local/hadoop/hadoop-2.6.5
PATH=$HADOOP_HOME/bin:$PATH
export PATH
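Reload the profile so the new variables take effect, then confirm that the hadoop command is found:
# source /etc/profile
# hadoop version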


1.3. Go to ${HADOOP_HOME}/etc/hadoop
1.3.1. Edit hadoop-env.sh
export JAVA_HOME=/opt/app/jdk1.8.0_144


1.3.2. Edit core-site.xml
<configuration>
   <property>
      <name>fs.default.name</name>
      <value>hdfs://node-01:9000</value>
      <description>HDFS URI: filesystem://namenode-host:port</description>
   </property>
   <property>
      <name>hadoop.tmp.dir</name>
      <value>/usr/local/hadoop/tmp</value>
      <description>Local temporary directory for Hadoop on the NameNode</description>
   </property>
</configuration>
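The URI above refers to the NameNode by the hostname node-01, which must resolve on this machine. Assuming the host's address is 192.168.74.79 (the IP used later in hbase-site.xml), a hosts entry such as the following would do:
# echo "192.168.74.79 node-01" >> /etc/hosts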



1.3.3. Edit hdfs-site.xml
<configuration>
   <property>
      <name>dfs.name.dir</name>
      <value>/usr/local/hadoop/hdfs/name</value>
      <description>Where the NameNode stores HDFS namespace metadata</description>
   </property>
   <property>
      <name>dfs.data.dir</name>
      <value>/usr/local/hadoop/hdfs/data</value>
      <description>Physical storage location of data blocks on the DataNode</description>
   </property>
   <property>
      <name>dfs.replication</name>
      <value>1</value>
      <description>Replication factor; the default is 3 and it should not exceed the number of DataNodes</description>
   </property>
</configuration>
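These directories are populated when the NameNode is formatted, a one-time step that has to happen before HDFS is started for the first time:
# cd /usr/local/hadoop/hadoop-2.6.5
# bin/hdfs namenode -format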


1.3.4. Edit mapred-site.xml
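A fresh 2.6.5 distribution ships only a template for this file; create mapred-site.xml from it first (run from ${HADOOP_HOME}/etc/hadoop):
# cp mapred-site.xml.template mapred-site.xml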
<configuration>
   <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
   </property>
</configuration>


1.3.5. Edit yarn-site.xml

<configuration>
   <!-- Site specific YARN configuration properties -->
   <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
   </property>
   <property>
      <name>yarn.resourcemanager.webapp.address</name>
      <value>${yarn.resourcemanager.hostname}:8088</value>
   </property>
</configuration>

1.4. Start DFS and YARN
# sbin/start-dfs.sh
# sbin/start-yarn.sh
Use the jps command to check whether Hadoop has started:
24112 Jps
3126 DataNode
3272 SecondaryNameNode
2985 NameNode
3449 ResourceManager
3535 NodeManager
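By default the NameNode web UI listens on port 50070 and the ResourceManager UI on port 8088. A small HDFS round trip is another quick sanity check (the path /tmp/test is just an example):
# bin/hdfs dfs -mkdir -p /tmp/test
# bin/hdfs dfs -ls /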

1.5. Stop DFS and YARN
# sbin/stop-dfs.sh
# sbin/stop-yarn.sh

2. HBase standalone setup
2.1. Download HBase from the official site (this guide uses version 1.2.6). Create /usr/local/hbase and extract the archive into it:
# mkdir /usr/local/hbase
# mkdir /usr/local/hbase/hbaseData
# cd /usr/local/hbase
# tar -zxvf hbase-1.2.6-bin.tar.gz

2.2. Add HBASE_HOME to /etc/profile
# vi /etc/profile
Add the following:
# set hbase home
HBASE_HOME=/usr/local/hbase/hbase-1.2.6
PATH=$HBASE_HOME/bin:$PATH
export PATH
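As with Hadoop, reload the profile and confirm the hbase command is on the PATH:
# source /etc/profile
# hbase version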

2.3. Go to ${HBASE_HOME}/conf
2.3.1. Edit hbase-env.sh
export JAVA_HOME=/opt/app/jdk1.8.0_144
# do not use HBase's bundled ZooKeeper
export HBASE_MANAGES_ZK=false
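With HBASE_MANAGES_ZK=false, a ZooKeeper instance must already be running on the host listed in hbase.zookeeper.quorum (localhost in the config below). Assuming a separate ZooKeeper installation under /usr/local/zookeeper (an assumed path; adjust to your install), it would be started with:
# /usr/local/zookeeper/bin/zkServer.sh start     (assumed install path)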

2.3.2. Edit hbase-site.xml
<configuration>
   <property>
      <name>hbase.tmp.dir</name>
      <value>/usr/local/hbase/hbaseData</value>
      <description>HBase temporary directory</description>
   </property>
   <property>
      <name>hbase.rootdir</name>
      <value>hdfs://192.168.74.79:9000/hbase</value>
      <description>HBase root directory on HDFS; must point at the same NameNode as fs.default.name in core-site.xml</description>
   </property>
   <property>
      <name>hbase.zookeeper.quorum</name>
      <value>localhost</value>
      <description>Comma-separated list of ZooKeeper hosts</description>
   </property>
   <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
      <description>Whether to run in distributed mode</description>
   </property>
</configuration>

2.4. Start HBase
# bin/start-hbase.sh
Use jps to check whether HBase started successfully:
8631 HMaster
8749 HRegionServer
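
As a final check, the HBase shell can create and scan a small table (the table name test and column family cf are just example names):
# bin/hbase shell
hbase(main):001:0> create 'test', 'cf'
hbase(main):002:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):003:0> scan 'test'
hbase(main):004:0> exit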