Hadoop Single-Node Cluster Setup


1. Install the JDK

Check the current Java version:

java -version

Update the package index:

sudo apt-get update

Install the JDK:

sudo apt-get install default-jdk

Look up the Java installation path (write this path down; with a different JDK version the directory name used later will need to be adjusted):

update-alternatives --display java
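The output should include a line like "link currently points to /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java"; the JAVA_HOME used later is that path without the trailing /jre/bin/java. As a rough cross-check (a sketch, assuming the JDK was installed under /usr/lib/jvm), you can also list the JVM directories directly:

ls /usr/lib/jvm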

2. Set up passwordless SSH login

Install ssh:

sudo apt-get install ssh

Install rsync:

sudo apt-get install rsync

Generate an SSH key pair:

ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa

List the generated SSH key files:

ll ~/.ssh

Append the generated public key to the authorized_keys file:

cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
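As an optional check (not part of the original steps), confirm that passwordless login works; the first connection may ask you to accept the host key:

ssh localhost
exit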

Download and install Hadoop

wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz

Extract the archive:

sudo tar -zxvf hadoop-2.7.3.tar.gz

Move Hadoop to /usr/local/hadoop:

sudo mv hadoop-2.7.3 /usr/local/hadoop

Check the installation directory:

ll /usr/local/hadoop

3. Set Hadoop environment variables (for most of the steps below, the command opens a file; find the corresponding location in that file and add or modify the lines shown)

Edit ~/.bashrc and add the following lines at the end of the file, below the final fi:

sudo gedit ~/.bashrc

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH

Apply the ~/.bashrc settings:

source ~/.bashrc
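To confirm the new variables are visible in the current shell (a quick sanity check, not part of the original write-up), print HADOOP_HOME and ask the hadoop binary for its version:

echo $HADOOP_HOME
hadoop version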

Edit hadoop-env.sh and set JAVA_HOME:

sudo gedit /usr/local/hadoop/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
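If you prefer not to use gedit, a minimal alternative (a sketch, assuming hadoop-env.sh already contains an export JAVA_HOME line, as the stock 2.7.3 file does) is to rewrite that line with sed:

sudo sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64|' /usr/local/hadoop/etc/hadoop/hadoop-env.sh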

Configure core-site.xml:

sudo gedit /usr/local/hadoop/etc/hadoop/core-site.xml

<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

Edit yarn-site.xml:

sudo gedit /usr/local/hadoop/etc/hadoop/yarn-site.xml
<!-- Site specific YARN configuration properties -->
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

Configure mapred-site.xml

Copy the template file:

sudo cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml

Edit mapred-site.xml:

sudo gedit /usr/local/hadoop/etc/hadoop/mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

Configure hdfs-site.xml:

sudo gedit /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/hadoop_data/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/hadoop_data/hdfs/datanode</value>
  </property>
</configuration>

4. Create and format the HDFS directories

Create the NameNode storage directory:

sudo mkdir -p /usr/local/hadoop/hadoop_data/hdfs/namenode

Create the DataNode storage directory:

sudo mkdir -p /usr/local/hadoop/hadoop_data/hdfs/datanode

Change the owner of the Hadoop directory to jyd (this is the Linux username; substitute your own):

sudo chown jyd:jyd -R /usr/local/hadoop
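If your username is not jyd, a generic form of the same command (a sketch using command substitution) is:

sudo chown -R $(whoami):$(whoami) /usr/local/hadoop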

Format HDFS:

hadoop namenode -format
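hadoop namenode -format still works in Hadoop 2.x but prints a deprecation notice; the equivalent current form of the command is:

hdfs namenode -format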

5. Start Hadoop

Start HDFS:

start-dfs.sh

Start YARN:

start-yarn.sh

Or start HDFS and YARN at the same time (in Hadoop 2.x this script is deprecated in favor of running start-dfs.sh and start-yarn.sh separately):

start-all.sh

Use jps to check the running Java processes:

jps
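If everything started correctly, the listing should contain roughly the following process names (PIDs are omitted here and will differ on your machine; SecondaryNameNode can take a few seconds to appear):

NameNode
DataNode
SecondaryNameNode
ResourceManager
NodeManager
Jps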

6. Open the Hadoop ResourceManager web UI

Web UI address:

http://localhost:8088/

7. NameNode HDFS web UI

HDFS web UI address:

http://localhost:50070/
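If the machine has no browser, a rough command-line check that both UIs are up (assuming curl is installed; -L follows any redirect to the landing page) is to request them and look for an HTTP 200 status:

curl -sL -o /dev/null -w "%{http_code}\n" http://localhost:8088/
curl -sL -o /dev/null -w "%{http_code}\n" http://localhost:50070/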











