
Hadoop: Distributed Installation and Configuration

 

 

Base Environment
Three Linux machines are needed. This article uses three VMware virtual machines running Linux AS 5, with IP addresses assigned through VMware's NAT networking as follows:

 

Hostname    IP              Role
hadoop00    192.168.91.10   Master: NameNode, SecondaryNameNode, JobTracker
hadoop01    192.168.91.11   Slave: DataNode, TaskTracker
hadoop02    192.168.91.12   Slave: DataNode, TaskTracker

 

Configure the IP addresses and hostnames on all three machines by adding the following to /etc/hosts:
192.168.91.10   hadoop00
192.168.91.11   hadoop01
192.168.91.12   hadoop02
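An optional sanity check, run on each machine, to confirm that the names resolve (plain ping, nothing Hadoop-specific):
ping -c 1 hadoop00
ping -c 1 hadoop01
ping -c 1 hadoop02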

 

User Setup
Create a dedicated user and group for running Hadoop; here I use hadoop as both the user name and the group name. Create the user and group on all three machines:
groupadd hadoop
useradd -g hadoop -G hadoop hadoop
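Note that useradd does not set a password, and the initial scp of the public key below will need one, so give the hadoop user a password on each machine:
passwd hadoop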

 

Configure Passwordless Key-Based Login
Hadoop needs the nameNode to log in to and access each dataNode without a password, so the hadoop OS user must be set up for key-based passwordless login.

Only the nameNode (hadoop00, 192.168.91.10) needs passwordless login to the dataNodes. Generate a public/private key pair on the nameNode, then send the public key to each dataNode.

On the nameNode:
[hadoop@hadoop00 ~]$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
This runs without prompting; when it finishes, two files are created in ~/.ssh/: id_dsa and id_dsa.pub. They are a matched pair, like a lock and its key. Now append id_dsa.pub to the authorized keys file (no authorized_keys file exists yet):
[hadoop@hadoop00 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Note: the permissions on .ssh and authorized_keys must be tightened, otherwise passwordless login may fail:
[hadoop@hadoop00 ~]$ chmod 700 ~/.ssh
[hadoop@hadoop00 ~]$ chmod 600 ~/.ssh/authorized_keys
Test passwordless login to the local machine:
[hadoop@hadoop00 ~]$ ssh localhost

Copy the public key id_dsa.pub to each dataNode:
[hadoop@hadoop00 ~]$ scp ~/.ssh/id_dsa.pub hadoop@hadoop01:/home/hadoop/
[hadoop@hadoop00 ~]$ scp ~/.ssh/id_dsa.pub hadoop@hadoop02:/home/hadoop/
Log in to each dataNode and append the public key id_dsa.pub to that node's authorized_keys (shown for hadoop01; repeat on hadoop02):

[hadoop@hadoop01 ~]$ mkdir .ssh
[hadoop@hadoop01 ~]$ chmod 700 .ssh
[hadoop@hadoop01 ~]$ cat id_dsa.pub >> .ssh/authorized_keys
[hadoop@hadoop01 ~]$ chmod 600 .ssh/authorized_keys
Test passwordless access from the nameNode to a dataNode:
[hadoop@hadoop00 ~]$ ssh hadoop01
Last login: Thu Sep 22 07:57:07 2011 from hadoop00
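To check all dataNodes in one pass, a small loop works (hostnames as defined above); each should print its name without asking for a password:
[hadoop@hadoop00 ~]$ for h in hadoop01 hadoop02; do ssh $h hostname; done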

 

 

Installation and Environment Variables
Download hadoop-0.21.0:
http://mirror.bjtu.edu.cn/apache/hadoop/common/hadoop-0.21.0/hadoop-0.21.0.tar.gz
Download the JDK: jdk-6u24-linux-i586.bin

After downloading, extract Hadoop directly into the hadoop user's home directory:
[hadoop@hadoop00 ~]$ cd /home/hadoop
[hadoop@hadoop00 ~]$ tar -xzvf hadoop-0.21.0.tar.gz
Once configuration is complete, this directory will be copied as-is to each dataNode.

For the JDK, the installation steps are omitted here; configure the environment variables as follows (identical on the master and each slave):
vi ~/.bash_profile
Append at the end of the file:
# java env
export JAVA_HOME=/usr/java/jdk1.6.0_24
export JRE_HOME=$JAVA_HOME/jre
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
# hadoop env
export HADOOP_HOME=/home/hadoop/hadoop-0.21.0
export PATH=$HADOOP_HOME/bin:$PATH
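Reload the profile and confirm both tools are on the PATH before going further:
[hadoop@hadoop00 ~]$ source ~/.bash_profile
[hadoop@hadoop00 ~]$ java -version
[hadoop@hadoop00 ~]$ hadoop version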

 

Configure Hadoop
Configure Hadoop on the nameNode first.

1. Configure the Hadoop environment shell file: hadoop-0.21.0/conf/hadoop-env.sh
# The java implementation to use.  Required.
export JAVA_HOME=/usr/java/jdk1.6.0_24

2. Configure hadoop-0.21.0/conf/core-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoopdata</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop00:9000</value>
  </property>
  <property>
    <name>dfs.hosts.exclude</name>
    <value>excludes</value>
  </property>
</configuration>

 

3. Configure hadoop-0.21.0/conf/hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 
<!-- Put site-specific property overrides in this file. -->
<configuration>
     <property>
          <name>dfs.name.dir</name>
          <value>/home/hadoop/hadoopname</value>
     </property>
 
     <property>
          <name>dfs.data.dir</name>
          <value>/home/hadoop/hadoopdata</value>
     </property>
 
     <property>
          <name>dfs.replication</name>
          <value>1</value>
     </property>
</configuration>
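The directories named in core-site.xml and hdfs-site.xml are created automatically (by the format step and by the dataNodes on startup), but pre-creating them avoids permission surprises. An optional step, shown per node:
[hadoop@hadoop00 ~]$ mkdir -p /home/hadoop/hadoopname /home/hadoop/hadoopdata
[hadoop@hadoop01 ~]$ mkdir -p /home/hadoop/hadoopdata
[hadoop@hadoop02 ~]$ mkdir -p /home/hadoop/hadoopdata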

 

4. Configure hadoop-0.21.0/conf/mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hadoop00:9001</value>
  </property>
</configuration>
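5. Check the node list files: hadoop-0.21.0/conf/masters and conf/slaves
The start/stop scripts read conf/slaves for the dataNode/taskTracker hosts and conf/masters for the SecondaryNameNode host. The startup output below implies they contain the following; add these lines if they are missing.
conf/masters:
hadoop00
conf/slaves:
hadoop01
hadoop02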

 

Copy the configured Hadoop from the nameNode to the same directory on each dataNode:
[hadoop@hadoop00 ~]$ zip -r hadoop-0.21.0.zip hadoop-0.21.0
[hadoop@hadoop00 ~]$ scp hadoop-0.21.0.zip hadoop@hadoop01:/home/hadoop
[hadoop@hadoop00 ~]$ scp hadoop-0.21.0.zip hadoop@hadoop02:/home/hadoop
Log in to each of the two dataNodes and unzip hadoop-0.21.0.zip:
[hadoop@hadoop01 ~]$ unzip hadoop-0.21.0.zip
[hadoop@hadoop02 ~]$ unzip hadoop-0.21.0.zip
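If the execute bits on the scripts do not survive the zip/unzip round trip (this can depend on the zip and unzip versions in use), restore them on each dataNode:
[hadoop@hadoop01 ~]$ chmod +x hadoop-0.21.0/bin/*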

Starting and Stopping Hadoop
Hadoop is started with a single command on the nameNode; the nameNode then connects to all the dataNodes and starts and stops them automatically.
1. Start
[hadoop@hadoop00 ~]$ ~/hadoop-0.21.0/bin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-mapred.sh
starting namenode, logging to /home/hadoop/hadoop-0.21.0/bin/../logs/hadoop-hadoop-namenode-hadoop00.out
192.168.91.11: starting datanode, logging to /home/hadoop/hadoop-0.21.0/bin/../logs/hadoop-hadoop-datanode-hadoop01.out
192.168.91.12: starting datanode, logging to /home/hadoop/hadoop-0.21.0/bin/../logs/hadoop-hadoop-datanode-hadoop02.out
192.168.91.10: starting secondarynamenode, logging to /home/hadoop/hadoop-0.21.0/bin/../logs/hadoop-hadoop-secondarynamenode-hadoop00.out
starting jobtracker, logging to /home/hadoop/hadoop-0.21.0/bin/../logs/hadoop-hadoop-jobtracker-hadoop00.out
192.168.91.12: starting tasktracker, logging to /home/hadoop/hadoop-0.21.0/bin/../logs/hadoop-hadoop-tasktracker-hadoop02.out
192.168.91.11: starting tasktracker, logging to /home/hadoop/hadoop-0.21.0/bin/../logs/hadoop-hadoop-tasktracker-hadoop01.out
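A quick way to confirm which daemons came up is the JDK's jps tool, run on each node:
[hadoop@hadoop00 ~]$ jps
On hadoop00 this should list NameNode, SecondaryNameNode, and JobTracker; on hadoop01 and hadoop02, DataNode and TaskTracker (expected names follow from the roles in the table at the top).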

2. Stop
[hadoop@hadoop00 ~]$ ~/hadoop-0.21.0/bin/stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-mapred.sh
stopping namenode
192.168.91.12: stopping datanode
192.168.91.11: stopping datanode
192.168.91.10: stopping secondarynamenode
stopping jobtracker
192.168.91.11: stopping tasktracker
192.168.91.12: stopping tasktracker

Initial HDFS Setup
1. Format the HDFS filesystem
[hadoop@hadoop00 ~]$ hadoop namenode -format
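Format only once, before the first start; reformatting a cluster whose dataNodes already hold blocks leads to namespace ID mismatches. After the format completes, the dfs.name.dir configured earlier should be populated:
[hadoop@hadoop00 ~]$ ls /home/hadoop/hadoopname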

2. Browse HDFS
[hadoop@hadoop00 ~]$ hadoop fs -ls /
11/09/24 07:49:55 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/09/24 07:49:56 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
Found 4 items
drwxr-xr-x   - hadoop supergroup          0 2011-09-22 08:05 /home
drwxr-xr-x   - hadoop supergroup          0 2011-09-22 11:29 /jobtracker
drwxr-xr-x   - hadoop supergroup          0 2011-09-22 11:23 /user

3. Browse Hadoop through the web UI
Cluster status: http://192.168.91.10:50070/dfshealth.jsp

Job status: http://192.168.91.10:50030/jobtracker.jsp


Running the Hadoop Example: wordcount
The wordcount program simply counts how many times each word occurs in the input files and writes the result to the specified output directory. It is one of the official examples, shipped in the hadoop-0.21.0 installation directory as hadoop-mapred-examples-0.21.0.jar.

Create the program's input directory and file locally, then upload them to HDFS (the job creates the output directory itself):
[hadoop@hadoop00 ~]$ mkdir input
[hadoop@hadoop00 ~]$ cat a a a a a b b b c c c c c c c c c 1 1 1 > input/file
[hadoop@hadoop00 ~]$ hadoop fs -mkdir /wordcount
[hadoop@hadoop00 ~]$ hadoop fs -put input /wordcount
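An optional check that the upload landed where the job expects it:
[hadoop@hadoop00 ~]$ hadoop fs -ls /wordcount/input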

[hadoop@hadoop00 ~]$ hadoop jar hadoop-0.21.0/hadoop-mapred-examples-0.21.0.jar wordcount /wordcount/input /wordcount/output
11/09/24 08:11:25 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/09/24 08:11:26 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
11/09/24 08:11:26 WARN mapreduce.JobSubmitter: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/09/24 08:11:26 INFO input.FileInputFormat: Total input paths to process : 2
11/09/24 08:11:26 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
11/09/24 08:11:26 INFO mapreduce.JobSubmitter: number of splits:2
11/09/24 08:11:27 INFO mapreduce.JobSubmitter: adding the following namenodes' delegation tokens:null
11/09/24 08:11:27 INFO mapreduce.Job: Running job: job_201109240745_0002
11/09/24 08:11:28 INFO mapreduce.Job:  map 0% reduce 0%
11/09/24 08:11:44 INFO mapreduce.Job:  map 50% reduce 0%
11/09/24 08:11:50 INFO mapreduce.Job:  map 100% reduce 0%
11/09/24 08:11:57 INFO mapreduce.Job:  map 100% reduce 100%
11/09/24 08:11:59 INFO mapreduce.Job: Job complete: job_201109240745_0002
11/09/24 08:11:59 INFO mapreduce.Job: Counters: 34
……

[hadoop@hadoop00 ~]$ hadoop fs -cat /wordcount/output/part-r-00000
11/09/24 08:18:09 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/09/24 08:18:09 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
1       3
a       5
b       3
c       9
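The same counts can be reproduced locally with standard shell tools, a handy cross-check against the job output:
[hadoop@hadoop00 ~]$ tr ' ' '\n' < input/file | sort | uniq -c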
