Hadoop, Spark, and HBase Single-Machine Installation


Disable the firewall

If you are installing inside a virtual machine or a Docker container, be sure to disable the firewall first; otherwise external systems will not be able to reach the services.

systemctl status firewalld.service   # check the firewall status
systemctl stop firewalld.service     # stop the firewall
systemctl disable firewalld.service  # keep the firewall from starting at boot

vim /etc/hosts

If you do not have sufficient permissions, switch to the root user.
Add the following host entry consistently on all three machines:
You can rename the servers to master, slave1, and slave2 with the hostname command; this step also prepares for running a cluster later.

192.168.71.242 master
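The post lists only the master entry. For the three-machine layout it describes, /etc/hosts would look roughly like the sketch below; the slave1 and slave2 addresses are placeholders, not values from the original, so substitute your own:

192.168.71.242 master
192.168.71.243 slave1
192.168.71.244 slave2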

Time synchronization

yum install -y ntp        # install the ntp service
ntpdate cn.pool.ntp.org   # sync the clock against a public NTP pool

Ports that must be reachable from outside

50070, 8088, 60010, 7077
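If you prefer to keep firewalld running rather than disabling it entirely, the same ports can be opened individually. A minimal sketch, assuming a firewalld-based CentOS host:

firewall-cmd --permanent --add-port=50070/tcp
firewall-cmd --permanent --add-port=8088/tcp
firewall-cmd --permanent --add-port=60010/tcp
firewall-cmd --permanent --add-port=7077/tcp
firewall-cmd --reload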

Unpack the installation archives

tar -zxvf /usr/jxx/scala-2.12.4.tgz -C /usr/local/
tar -zxvf /usr/jxx/spark-2.2.0-bin-hadoop2.7.tgz -C /usr/local/
tar -zxvf /usr/jxx/hbase-1.3.1-bin.tar.gz -C /usr/local/
tar -zxvf /usr/jxx/hadoop-2.8.2.tar.gz -C /usr/local/

Create data directories

For easier management, create directories on the master for the HDFS NameNode, DataNode, and temporary files:

mkdir -p /data/hdfs/name
mkdir -p /data/hdfs/data
mkdir -p /data/hdfs/tmp

For a cluster setup, copy these directories to the same locations on Slave1 and Slave2 with scp, as sketched below.
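A minimal sketch of that copy, assuming root SSH access to the slave hosts defined in /etc/hosts:

scp -r /data/hdfs root@slave1:/data/
scp -r /data/hdfs root@slave2:/data/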

Set environment variables

Edit /etc/profile with vim and append:

export JAVA_HOME=/usr/local/jdk1.8.0   # skip the JDK lines if they are already set
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$CLASSPATH
export SCALA_HOME=/usr/local/scala-2.12.4
export PATH=$PATH:$SCALA_HOME/bin
export HADOOP_HOME=/usr/local/hadoop-2.8.2
export PATH=$PATH:$HADOOP_HOME/bin
export HBASE_HOME=/usr/local/hbase-1.3.1
export PATH=$PATH:$HBASE_HOME/bin
export SPARK_HOME=/usr/local/spark-2.2.0-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin

Then run:

source /etc/profile
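To confirm the variables took effect, a quick sanity check (the exact version strings depend on your installation):

java -version
scala -version
hadoop version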

Edit the configuration files

vim /usr/local/hadoop-2.8.2/etc/hadoop/hadoop-env.sh
Change the JAVA_HOME line to:

export JAVA_HOME=/usr/local/jdk1.8.0

vim /usr/local/hadoop-2.8.2/etc/hadoop/core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/data/hdfs/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
</configuration>

vim /usr/local/hadoop-2.8.2/etc/hadoop/hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/data/hdfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/data/hdfs/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

vim /usr/local/hadoop-2.8.2/etc/hadoop/yarn-site.xml


<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:18040</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:18030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:18088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:18025</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:18141</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

cp /usr/local/hadoop-2.8.2/etc/hadoop/mapred-site.xml.template /usr/local/hadoop-2.8.2/etc/hadoop/mapred-site.xml
vim /usr/local/hadoop-2.8.2/etc/hadoop/mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

vim /usr/local/hbase-1.3.1/conf/hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <!-- must match the HDFS address configured in core-site.xml -->
    <value>hdfs://master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>

vim /usr/local/hbase-1.3.1/conf/hbase-env.sh

export JAVA_HOME=/usr/local/jdk1.8.0   # skip the JDK lines if they are already set
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$CLASSPATH
export SCALA_HOME=/usr/local/scala-2.12.4
export PATH=$PATH:$SCALA_HOME/bin
export HADOOP_HOME=/usr/local/hadoop-2.8.2
export PATH=$PATH:$HADOOP_HOME/bin
export HBASE_HOME=/usr/local/hbase-1.3.1
export PATH=$PATH:$HBASE_HOME/bin
export SPARK_HOME=/usr/local/spark-2.2.0-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin
export HBASE_MANAGES_ZK=true           # let HBase manage its own ZooKeeper instance

mv /usr/local/spark-2.2.0-bin-hadoop2.7/conf/spark-env.sh.template /usr/local/spark-2.2.0-bin-hadoop2.7/conf/spark-env.sh

mv /usr/local/spark-2.2.0-bin-hadoop2.7/conf/spark-defaults.conf.template /usr/local/spark-2.2.0-bin-hadoop2.7/conf/spark-defaults.conf

mkdir -p /disk/spark

vim /usr/local/spark-2.2.0-bin-hadoop2.7/conf/spark-env.sh

export JAVA_HOME=/usr/local/jdk1.8.0
export SCALA_HOME=/usr/local/scala-2.12.4
export HADOOP_HOME=/usr/local/hadoop-2.8.2
export HBASE_HOME=/usr/local/hbase-1.3.1
export SPARK_HOME=/usr/local/spark-2.2.0-bin-hadoop2.7
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_LOCAL_DIRS=/disk/spark
export SPARK_DAEMON_MEMORY=256m
export SPARK_HISTORY_OPTS="$SPARK_HISTORY_OPTS -Dspark.history.fs.logDirectory=/tmp/spark -Dspark.history.ui.port=18082"
export STANDALONE_SPARK_MASTER_HOST=localhost

vim /usr/local/spark-2.2.0-bin-hadoop2.7/conf/spark-defaults.conf

spark.master=spark://localhost:7077
spark.eventLog.dir=/disk/spark/applicationHistory
spark.eventLog.enabled=true
spark.yarn.historyServer.address=localhost:18082
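Spark refuses to start applications with event logging enabled if the log directory does not exist yet. A small sketch creating the local directories referenced in the configuration above:

mkdir -p /disk/spark/applicationHistory
mkdir -p /tmp/spark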

Initialize the environment

Format the NameNode:
hdfs namenode -format

Start the services

Start HDFS:

sh /usr/local/hadoop-2.8.2/sbin/start-dfs.sh
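A quick check that HDFS came up (a sketch; any simple read or write against the filesystem works):

hdfs dfs -mkdir -p /tmp
hdfs dfs -ls /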

Start HBase:

sh /usr/local/hbase-1.3.1/bin/start-hbase.sh

Start Spark:

sh /usr/local/spark-2.2.0-bin-hadoop2.7/sbin/start-all.sh
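Once everything is up, jps should list the daemon JVMs. The names below are indicative; the exact set depends on which daemons you start:

jps
# Expected, roughly:
#   NameNode, DataNode, SecondaryNameNode   (HDFS)
#   HMaster, HRegionServer, HQuorumPeer     (HBase with its managed ZooKeeper)
#   Master, Worker                          (Spark standalone)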

Start at boot (this does not work very reliably)

su - root -c "sh /usr/local/hadoop-2.8.2/sbin/start-dfs.sh"
su - root -c "sh /usr/local/hbase-1.3.1/bin/start-hbase.sh"
su - root -c "sh /usr/local/spark-2.2.0-bin-hadoop2.7/sbin/start-all.sh"
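An alternative sketch, assuming a systemd-based CentOS 7 host (this is an assumption, not part of the original post): append the start scripts to /etc/rc.d/rc.local and make it executable so they run at boot.

chmod +x /etc/rc.d/rc.local
cat >> /etc/rc.d/rc.local <<'EOF'
sh /usr/local/hadoop-2.8.2/sbin/start-dfs.sh
sh /usr/local/hbase-1.3.1/bin/start-hbase.sh
sh /usr/local/spark-2.2.0-bin-hadoop2.7/sbin/start-all.sh
EOF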

Error when running the HBase shell:

hbase shell
hbase(main):006:0> list
TABLE
ERROR: Can't get master address from ZooKeeper; znode data == null

Here is some help for this command:
List all tables in hbase. Optional regular expression parameter could
be used to filter the output. Examples:

Fix:

Restarting HBase (stop-hbase.sh, then start-hbase.sh) resolves the issue, as shown below.
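The restart, using the paths from this install:

sh /usr/local/hbase-1.3.1/bin/stop-hbase.sh
sh /usr/local/hbase-1.3.1/bin/start-hbase.sh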
