Hadoop 1.0 Single-Node Installation on Windows


1: Cygwin installation

        Cygwin 1.7.15 (download link)

        Installation steps omitted (remember to select the ssh/OpenSSH package).


        After installation, add the usr\sbin directory (under the Cygwin install directory) to the PATH environment variable.

2: SSH configuration

         $ ssh-host-config

         *** Query: Should privilege separation be used? (yes/no) no

         *** Query: (Say "no" if it is already installed as a service) (yes/no) yes

         *** Query: Enter the value of CYGWIN for the daemon: [] ntsec

         *** Query: Do you want to use a different name? (yes/no) yes

         *** Query: Enter the new user name: admin

         *** Query: Reenter: admin

         *** Query: Create new privileged user account 'admin'? (yes/no) yes

         *** Query: Please enter the password: <password>

         *** Query: Reenter: <password again>

         Start the SSH service:

        net start sshd


         Configure passwordless login

         $ ssh-keygen    (on Windows 7, run as administrator)

         Enter file in which to save the key (/home/Administrator/.ssh/id_rsa): [press Enter]

         Enter passphrase (empty for no passphrase): [press Enter]

         Enter same passphrase again: [press Enter]

 

         cd /cygdrive/c/cygwin/home/Administrator/.ssh  

        (adjust to your Cygwin install directory, e.g. D:\cygwin\home\Administrator\.ssh)

         cp id_rsa.pub authorized_keys

        Log in via ssh:

         $ ssh localhost

         The authenticity of host 'localhost (127.0.0.1)' can't be established.
         ECDSA key fingerprint is 86:07:88:db:34:94:f8:09:6d:f4:7d:19:48:67:fe:e1.
         Are you sure you want to continue connecting (yes/no)? yes
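If ssh localhost still prompts for a password after this, one common cause (not covered in the original steps) is that sshd's StrictModes check rejects a .ssh directory or authorized_keys file whose permissions are too open. A minimal sketch of the usual fix, assuming the default home-directory layout:

```shell
# Tighten permissions so sshd (with StrictModes, its default) accepts the key.
SSH_DIR="$HOME/.ssh"
mkdir -p "$SSH_DIR"
touch "$SSH_DIR/authorized_keys"
chmod 700 "$SSH_DIR"                  # only the owner may enter the directory
chmod 600 "$SSH_DIR/authorized_keys"  # only the owner may read/write the key list
```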

3: Hadoop configuration and startup (hadoop-1.0.0)

       1. Configuration: edit the following four files under hadoop/conf:

       hadoop-env.sh  core-site.xml  hdfs-site.xml  mapred-site.xml

       ①.hadoop-env.sh
       export JAVA_HOME=/cygdrive/d/Java/jdk1.6.0_10
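One Cygwin-specific pitfall worth noting here: hadoop-env.sh breaks if JAVA_HOME contains a space (e.g. a JDK under "Program Files"). A common workaround is a space-free symlink; the paths below are illustrative examples, not from the original article:

```shell
# Example only: paths are illustrative, not from the original article.
JDK_WIN="/cygdrive/c/Program Files/Java/jdk1.6.0_10"  # a path containing a space
LINK="$HOME/jdk16"                                    # space-free link location
ln -sfn "$JDK_WIN" "$LINK"
export JAVA_HOME="$LINK"
```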

       ②.conf/core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

       ③.conf/hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

       ④.conf/mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>

       2. Startup

       Change to the Hadoop install directory:    cd /cygdrive/d/hadoop/hadoop-1.0.0

       Format the NameNode:    bin/hadoop namenode -format

       Start Hadoop:    bin/start-all.sh
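To confirm the daemons actually started, you can check with jps (shipped with the JDK). The five daemon names below are the standard Hadoop 1.x ones, assumed here rather than taken from the article; a sketch:

```shell
# Check for the five Hadoop 1.x daemons in the jps process list.
EXPECTED="NameNode DataNode SecondaryNameNode JobTracker TaskTracker"
if command -v jps >/dev/null 2>&1; then
  PROCS="$(jps)"
  for d in $EXPECTED; do
    if echo "$PROCS" | grep -qw "$d"; then
      echo "$d: running"
    else
      echo "$d: NOT running"
    fi
  done
else
  echo "jps not found; make sure JAVA_HOME/bin is on PATH"
fi
```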

       Create a directory named test in HDFS:    bin/hadoop fs -mkdir test

       Upload files:    bin/hadoop fs -put *.txt test    (uploads all .txt files in the Hadoop root directory to the test directory)
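Besides the web UI mentioned next, the upload can be checked from the shell. A sketch, assuming you are still in the Hadoop install directory (the guard only makes the snippet degrade gracefully elsewhere):

```shell
HADOOP=bin/hadoop   # relative path from the install directory, as in the steps above
if [ -x "$HADOOP" ]; then
  "$HADOOP" fs -ls test   # should list the uploaded .txt files
else
  echo "bin/hadoop not found; cd to the Hadoop install directory first"
fi
```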

       You can also verify the upload in the NameNode web UI at http://localhost:50070/ via the "Browse the filesystem" link.

      JobTracker web UI: http://localhost:50030/

 

     On Hadoop 1.0, the TaskTracker may fail to start; a fix is described in a separate post: Problem Notes.
