Mahout installation example


 
 1. Download Mahout from http://mirrors.hust.edu.cn/apache/mahout/0.9/
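   One way to fetch the archive, assuming wget is available (mirror URL as above, filename as used in step 2):

   wget http://mirrors.hust.edu.cn/apache/mahout/0.9/mahout-distribution-0.9.tar.gz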
 
 2. Install on the 130 machine: copy the archive to /home/bigdata/ on machine 130.
  Extract:  tar -zxvf mahout-distribution-0.9.tar.gz
  Rename the extracted directory (the directory, not the tarball):  mv mahout-distribution-0.9 mahout
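   A quick check that the layout is right (the path follows from steps 2 and 3):

   ls /home/bigdata/mahout/bin/mahout   # the launcher script invoked in step 4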
 
 3. Set environment variables
   vi /etc/profile
   export MAHOUT_HOME=/home/bigdata/mahout
   export PATH=$PATH:$MAHOUT_HOME/bin

   source /etc/profile
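   To verify the variables took effect (plain shell commands):

   echo $MAHOUT_HOME    # should print /home/bigdata/mahout
   which mahout         # should resolve to /home/bigdata/mahout/bin/mahout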
   
 4. Run the mahout command with no arguments:
 Warning: $HADOOP_HOME is deprecated.


Running on hadoop, using /home/bigdata/hadoop/bin/hadoop and HADOOP_CONF_DIR=
MAHOUT-JOB: /home/bigdata/mahout/mahout-examples-0.9-job.jar
Warning: $HADOOP_HOME is deprecated.


An example program must be given as the first argument.
Valid program names are:
  arff.vector: : Generate Vectors from an ARFF file or directory
  baumwelch: : Baum-Welch algorithm for unsupervised HMM training
  canopy: : Canopy clustering
  cat: : Print a file or resource as the logistic regression models would see it
  cleansvd: : Cleanup and verification of SVD output
  clusterdump: : Dump cluster output to text
  clusterpp: : Groups Clustering Output In Clusters
  cmdump: : Dump confusion matrix in HTML or text formats
  concatmatrices: : Concatenates 2 matrices of same cardinality into a single matrix
  cvb: : LDA via Collapsed Variation Bayes (0th deriv. approx)
  cvb0_local: : LDA via Collapsed Variation Bayes, in memory locally.
  evaluateFactorization: : compute RMSE and MAE of a rating matrix factorization against probes
  fkmeans: : Fuzzy K-means clustering
  hmmpredict: : Generate random sequence of observations by given HMM
  itemsimilarity: : Compute the item-item-similarities for item-based collaborative filtering
  kmeans: : K-means clustering
  lucene.vector: : Generate Vectors from a Lucene index
  lucene2seq: : Generate Text SequenceFiles from a Lucene index
  matrixdump: : Dump matrix in CSV format
  matrixmult: : Take the product of two matrices
  parallelALS: : ALS-WR factorization of a rating matrix
  qualcluster: : Runs clustering experiments and summarizes results in a CSV
  recommendfactorized: : Compute recommendations using the factorization of a rating matrix
  recommenditembased: : Compute recommendations using item-based collaborative filtering
  regexconverter: : Convert text files on a per line basis based on regular expressions
  resplit: : Splits a set of SequenceFiles into a number of equal splits
  rowid: : Map SequenceFile<Text,VectorWritable> to {SequenceFile<IntWritable,VectorWritable>, SequenceFile<IntWritable,Text>}
  rowsimilarity: : Compute the pairwise similarities of the rows of a matrix
  runAdaptiveLogistic: : Score new production data using a probably trained and validated AdaptivelogisticRegression model
  runlogistic: : Run a logistic regression model against CSV data
  seq2encoded: : Encoded Sparse Vector generation from Text sequence files
  seq2sparse: : Sparse Vector generation from Text sequence files
  seqdirectory: : Generate sequence files (of Text) from a directory
  seqdumper: : Generic Sequence File dumper
  seqmailarchives: : Creates SequenceFile from a directory containing gzipped mail archives
  seqwiki: : Wikipedia xml dump to sequence file
  spectralkmeans: : Spectral k-means clustering
  split: : Split Input data into test and train sets
  splitDataset: : split a rating dataset into training and probe parts
  ssvd: : Stochastic SVD
  streamingkmeans: : Streaming k-means clustering
  svd: : Lanczos Singular Value Decomposition
  testnb: : Test the Vector-based Bayes classifier
  trainAdaptiveLogistic: : Train an AdaptivelogisticRegression model
  trainlogistic: : Train a logistic regression using stochastic gradient descent
  trainnb: : Train the Vector-based Bayes classifier
  transpose: : Take the transpose of a matrix
  validateAdaptiveLogistic: : Validate an AdaptivelogisticRegression model against hold-out data set
  vecdist: : Compute the distances between a set of Vectors (or Cluster or Canopy, they must fit in memory) and a list of Vectors
  vectordump: : Dump vectors from a sequence file to text
  viterbi: : Viterbi decoding of hidden states from given output states sequence
  
  Seeing this program list means the installation succeeded.
  
 5. Download test data:
 http://archive.ics.uci.edu/ml/databases/synthetic_control/synthetic_control.data

 Rename synthetic_control.data to control.data and upload it to /home/bigdata/test/ on machine 130, for example as sketched below.
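 A minimal sketch of the download-and-copy step; the scp user and host are placeholders (assumptions), substitute your own for the 130 machine:

   wget http://archive.ics.uci.edu/ml/databases/synthetic_control/synthetic_control.data
   mv synthetic_control.data control.data
   scp control.data bigdata@host130:/home/bigdata/test/   # user/host are placeholders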
 6. Create the directory in HDFS
 Create the test directory testdata and load the data into it (the directory must be named testdata; the example job reads that path by default):
 hadoop fs -mkdir testdata
 Upload the data to testdata:
  hadoop fs -put /home/bigdata/test/control.data testdata
 Set permissions:
   hadoop fs -chmod -R 755 testdata
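   To confirm the upload (standard HDFS command):

   hadoop fs -ls testdata    # should list control.data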
  
  
  7. Run the k-means example (this takes a few minutes). The job uses default input and output paths: it reads from testdata and writes to output in HDFS.
   hadoop jar /home/bigdata/mahout/mahout-examples-0.9-job.jar org.apache.mahout.clustering.syntheticcontrol.kmeans.Job
   
   
   hadoop fs -lsr output
   If you see the MapReduce job run and results appear under output, the installation works end to end.
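   To inspect the actual clustering results, the clusterdump program listed above can render them as text. A sketch based on Mahout 0.9's clusterdump usage (-i input, -o local output file); the clusters-10-final name is illustrative, so check hadoop fs -ls output for the real clusters-N-final directory of your run:

   mahout clusterdump -i output/clusters-10-final -o clusters.txt   # writes a local text file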
