Notes on Common Hadoop Shell Commands

vi: Esc then :wq  save and quit   a  enter insert mode (edit)   ZZ  save and quit   :q!  quit without saving
pwd  print the current working directory
more <file>  page through a file
cd xxx/  enter directory xxx under the current directory
shutdown -h now  shut down immediately
ifconfig  show IP address and network interface info
Ctrl+Alt+F2  switch between the graphical desktop and a virtual console
exit  log out of the current shell (e.g. return from an su session)   su  switch to root
clear  clear the screen
chown -R hadoop:hadoop /opt/data  change owner and group recursively (ownership, not mode)
chmod -R 755 wc2.jar  set file permission bits
src = source
dst = destination
rm -rf /var/log/httpd/access  delete a directory: removes /var/log/httpd/access and all files and subdirectories under it
rm -f /var/log/httpd/access.log  delete a file: forcibly removes /var/log/httpd/access.log
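A quick sketch of how the octal mode maps to permission bits (the directory path is illustrative):
chown -R hadoop:hadoop /opt/data   # hand the whole tree to user and group "hadoop"
chmod -R 755 /opt/data             # 7 = rwx for the owner, 5 = r-x for group and others
ls -ld /opt/data                   # verify: drwxr-xr-x ... hadoop hadoop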


192.168.192.128:50070  NameNode web UI
192.168.192.128:8088   ResourceManager web UI (YARN applications)
192.168.192.128:8042   NodeManager web UI
192.168.192.128:19888  JobHistory Server web UI
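A quick liveness check for the web UIs above (assumes curl is installed and the host is reachable):
curl -sf http://192.168.192.128:50070/ >/dev/null && echo "NameNode UI up"
curl -sf http://192.168.192.128:8088/  >/dev/null && echo "ResourceManager UI up"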




Local Linux directory: /usr/home910/liyuting/
df -hl  show free disk space
Check HDFS safe mode status:
hadoop dfsadmin -safemode get
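Besides get, the safemode switch takes enter, leave, and wait (standard dfsadmin subcommands):
hadoop dfsadmin -safemode enter   # force the NameNode into safe mode
hadoop dfsadmin -safemode leave   # leave safe mode immediately
hadoop dfsadmin -safemode wait    # block until safe mode exits on its own
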
Upload local data into HDFS with -put:
hadoop fs -put /mnt/usbhd1/20140816/2015-09-13-15----2015-07-23-20  /input/20140816
hadoop fs -put /mnt/usb11/userdata/2014-06-27-06----2014-07-02-18  /input/userdata
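The reverse direction uses -get (or -copyToLocal); the local destination here is illustrative:
hadoop fs -get /input/20140816 /mnt/usbhd1/restore/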


Run MapReduce jobs packaged as jars (argument order depends on the job's driver class; typically input path(s) first, output path last):
hadoop jar ip.jar /input/2014-11-08-21-----2014-11-30-23/ /home910/dingxiaoqiang/2014-11-08-21-----2014-11-30-23/


hadoop jar zz.jar /home910/liyuting/input/sort.txt    /home910/liyuting/sortoutput/


hadoop jar zz.jar /opt/data/wc/inputjion/  /opt/data/wc/inputjion/tb_dim_city.dat /opt/data/wc/outputjion/


hadoop jar a.jar /bs/input/ww  /bs/outputjion3/  /bs/data/gd_weather_report.txt
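While a job is running, the job subcommand tracks it (Hadoop 1.x syntax; <job-id> is a placeholder):
hadoop job -list             # running jobs with their job IDs
hadoop job -status <job-id>  # completion percentage and counters for one job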


Create a Hadoop archive (HAR) from the files under the -p parent path:
hadoop archive -archiveName har1.har -p /opt/data/wc/input/ /opt/data/wc/outputjionsemi/


List the archive contents through the har:// URI:
hadoop dfs -ls har:///opt/data/wc/outputjionsemi/har1.har
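To unpack the archive back into ordinary HDFS files, a parallel copy through the har:// URI is the documented route (the destination path here is illustrative):
hadoop distcp har:///opt/data/wc/outputjionsemi/har1.har /opt/data/wc/unarchived/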


fdisk -l  list disk partitions
Linux has 7 runlevels: init [0-6]
0: halt
1: single-user mode
2: multi-user without networking
3: multi-user with networking
4: unused (reserved)
5: graphical (GUI)
6: reboot
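On SysV-init systems the current runlevel can be checked and changed from the shell:
runlevel    # prints previous and current runlevel, e.g. "N 5"
init 3      # switch to multi-user with networking (run as root)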


Hadoop
jps  list running JVM processes (shows the Hadoop daemons)
start-dfs.sh
start-mapred.sh
stop-mapred.sh
stop-dfs.sh
start-all.sh
stop-all.sh
Start order: NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker
Stop order: JobTracker, TaskTracker, NameNode, DataNode, SecondaryNameNode
./hadoop-daemon.sh start namenode   (./ means the current directory on Linux)
./hadoop-daemon.sh start datanode
./hadoop-daemon.sh start secondarynamenode
./hadoop-daemon.sh start jobtracker
./hadoop-daemon.sh start tasktracker
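The same start sequence as a loop, run from the Hadoop bin/ directory (a sketch; assumes the Hadoop 1.x daemon names above):
for d in namenode datanode secondarynamenode jobtracker tasktracker; do
  ./hadoop-daemon.sh start $d   # starts each daemon in the listed order
done
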
Subcommands (from the hadoop usage text):
  namenode -format     format the DFS filesystem
  secondarynamenode    run the DFS secondary namenode
  namenode             run the DFS namenode
  datanode             run a DFS datanode
  dfsadmin             run a DFS admin client
  mradmin              run a Map-Reduce admin client
  fsck                 run a DFS filesystem checking utility
  fs                   run a generic filesystem user client
  balancer             run a cluster balancing utility
  oiv                  apply the offline fsimage viewer to an fsimage
  fetchdt              fetch a delegation token from the NameNode
  jobtracker           run the MapReduce job Tracker node
  pipes                run a Pipes job
  tasktracker          run a MapReduce task Tracker node
  historyserver        run job history servers as a standalone daemon
  job                  manipulate MapReduce jobs
  queue                get information regarding JobQueues
  version              print the version
  jar <jar>            run a jar file
  distcp <srcurl> <desturl> copy file or directories recursively
  distcp2 <srcurl> <desturl> DistCp version 2
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  daemonlog            get/set the log level for each daemon
 or
  CLASSNAME            run the class named CLASSNAME
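For example, fsck and dfsadmin from the list above give a quick cluster health check (flags as in Hadoop 1.x):
hadoop fsck / -files -blocks -locations   # per-file block and replica report
hadoop dfsadmin -report                   # capacity and live/dead datanode summary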


Filesystem shell commands:
hadoop --config /opt/modules/hadoop-1.2.1/conf/ fs
Usage: java FsShell
           [-ls <path>]
           [-lsr <path>]
           [-du <path>]
           [-dus <path>]
           [-count[-q] <path>]
           [-mv <src> <dst>]
           [-cp <src> <dst>]
           [-rm [-skipTrash] <path>]
           [-rmr [-skipTrash] <path>]
           [-expunge]
           [-put <localsrc> ... <dst>]
           [-copyFromLocal <localsrc> ... <dst>]
           [-moveFromLocal <localsrc> ... <dst>]
           [-get [-ignoreCrc] [-crc] <src> <localdst>]
           [-getmerge <src> <localdst> [addnl]]  merge: concatenate the files under a source directory into one local destination file (see the example after this listing)
           [-cat <src>]
           [-text <src>]
           [-copyToLocal [-ignoreCrc] [-crc] <src> <localdst>]
           [-moveToLocal [-crc] <src> <localdst>]
           [-mkdir <path>]
           [-setrep [-R] [-w] <rep> <path/file>]
           [-touchz <path>]
           [-test -[ezd] <path>]
           [-stat [format] <path>]
           [-tail [-f] <file>]
           [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
           [-chown [-R] [OWNER][:[GROUP]] PATH...]
           [-chgrp [-R] GROUP PATH...]
           [-help [cmd]]
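A few of the less obvious ones in use (paths are illustrative):
hadoop fs -getmerge /wc/output /tmp/merged.txt   # concatenate every file under /wc/output into one local file
hadoop fs -setrep -w 2 /wc/input/core-site.xml   # set replication to 2 and wait for it to take effect
hadoop fs -test -e /wc/input && echo exists      # exit status 0 if the path exists
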
hadoop fs -ls /
hadoop fs -lsr /   hadoop fs -lsr /wc/input
hadoop fs -mkdir /wc/
hadoop fs -mkdir /wc/input/
hadoop fs -put /opt/modules/hadoop-1.2.1/conf/*.xml /wc/input/
cd /opt/modules/hadoop-1.2.1/   (directory containing the examples jar)
hadoop jar hadoop-examples-1.2.1.jar wordcount /wc/input/ /wc/output/   (the output directory must not already exist)
hadoop jar wc2.jar /opt/wc/input/ /opt/wc/output3/   (run from the directory containing wc2.jar)
hadoop jar ip.jar /opt/data/wc/inputip/ /opt/data/wc/ipoutput2/


hadoop dfs -rmr /opt/wc/output   remove a directory recursively
hadoop fs -text /wc/output/part-r-00000   view a file, decoding compressed/sequence formats (commonly used)
hadoop fs -cat /wc/output/part-r-00000    view a file


hadoop fs -help   list the HDFS shell commands
touch 01.data   create an empty file


In Java, the literal 3 is an int and 3L is a long; the L suffix marks a long literal (e.g. long n = 3L;).