This article walks through compiling Hadoop-2.4.1 on CentOS 6.8, with particular attention to building the hadoop source code. Along the way it also covers recompiling Hadoop 2.6.0 on 64-bit CentOS 5.6, building 64-bit hadoop-2.7.1 on CentOS 6.3, compiling and installing hadoop-2.2.0 on CentOS 6.5, and compiling hadoop on 64-bit CentOS in general.
Contents:
- Compiling Hadoop-2.4.1 on CentOS 6.8 (building the hadoop source)
- Recompiling Hadoop 2.6.0 on 64-bit CentOS 5.6
- Building 64-bit hadoop-2.7.1 on CentOS 6.3
- Compiling and installing hadoop-2.2.0 on CentOS 6.5
- Compiling hadoop on 64-bit CentOS
Compiling Hadoop-2.4.1 on CentOS 6.8 (building the hadoop source)
I previously wrote a post on how to compile hadoop-2.2.0. Today I compiled hadoop-2.4.1; the steps are essentially the same as for hadoop-2.2.0. The earlier post is at:
http://blog.csdn.net/u012453843/article/details/52903324
Here I will only cover the problems I ran into.
1. Make sure the server has network access: the build downloads a lot of artifacts and cannot succeed offline.
2. During the build I hit the error below. Searching online showed that the files under hadoop-2.4.1-src/hadoop-hdfs-project/hadoop-hdfs-httpfs/downloads had not been fully downloaded; apache-tomcat-6.0.36.tar.gz was missing. Download it, place it in that directory, and rebuild.
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project hadoop-hdfs-httpfs: An Ant BuildException has occured: Can't get http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.36/bin/apache-tomcat-6.0.36.tar.gz to /usr/local/hadoop/hadoop-2.4.1-src/hadoop-hdfs-project/hadoop-hdfs-httpfs/downloads/apache-tomcat-6.0.36.tar.gz
[ERROR] around Ant part ...<get dest="downloads/apache-tomcat-6.0.36.tar.gz" skipexisting="true" verbose="true" src="http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.36/bin/apache-tomcat-6.0.36.tar.gz"/>... @ 5:182 in /usr/local/hadoop/hadoop-2.4.1-src/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :hadoop-hdfs-httpfs
[root@test hadoop-2.4.1-src]# ^C
[root@test hadoop-2.4.1-src]#
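One workable fix is sketched below (the download URL is the one from the error message, and the resume flag follows Maven's own hint; rerunning the full build command also works):
cd /usr/local/hadoop/hadoop-2.4.1-src/hadoop-hdfs-project/hadoop-hdfs-httpfs/downloads
wget http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.36/bin/apache-tomcat-6.0.36.tar.gz
cd /usr/local/hadoop/hadoop-2.4.1-src
mvn package -Pdist,native -DskipTests -Dtar -rf :hadoop-hdfs-httpfs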
With the tomcat tarball in place, the build eventually finished successfully after a while:
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................ SUCCESS [5.031s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [3.855s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [7.075s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.569s]
[INFO] Apache Hadoop Project dist POM .................... SUCCESS [3.354s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [9.588s]
[INFO] Apache Hadoop MiniKDC ............................. SUCCESS [8.882s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [9.300s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [5.570s]
[INFO] Apache Hadoop Common .............................. SUCCESS [3:25.938s]
[INFO] Apache Hadoop NFS ................................. SUCCESS [18.331s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [0.082s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [6:12.442s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [40.572s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [22.507s]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [10.303s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.143s]
[INFO] hadoop-yarn ....................................... SUCCESS [0.134s]
[INFO] hadoop-yarn-api ................................... SUCCESS [3:06.873s]
[INFO] hadoop-yarn-common ................................ SUCCESS [1:13.959s]
[INFO] hadoop-yarn-server ................................ SUCCESS [0.167s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [26.176s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [31.386s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [8.314s]
[INFO] hadoop-yarn-server-applicationhistoryservice ...... SUCCESS [15.003s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [34.895s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [0.996s]
[INFO] hadoop-yarn-client ................................ SUCCESS [14.592s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [0.129s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [7.437s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [5.101s]
[INFO] hadoop-yarn-site .................................. SUCCESS [0.125s]
[INFO] hadoop-yarn-project ............................... SUCCESS [9.303s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [0.138s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [53.614s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [50.496s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [7.640s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [23.926s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [23.072s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [2:02.021s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [6.489s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [22.299s]
[INFO] hadoop-mapreduce .................................. SUCCESS [10.712s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [30.326s]
[INFO] Apache Hadoop distributed copy .................... SUCCESS [58.770s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [6.264s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [16.981s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [11.848s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [6.313s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [6.905s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [13.115s]
[INFO] Apache Hadoop OpenStack support ................... SUCCESS [12.148s]
[INFO] Apache Hadoop Client .............................. SUCCESS [11.822s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [0.269s]
[INFO] Apache Hadoop Scheduler Load Simulator ............ SUCCESS [36.518s]
[INFO] Apache Hadoop Tools dist .......................... SUCCESS [20.957s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [0.067s]
[INFO] Apache Hadoop distribution ........................ SUCCESS [56.226s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 29:14.596s
[INFO] Finished at: Tue Nov 15 00:22:34 PST 2016
[INFO] Final Memory: 77M/237M
[INFO] ------------------------------------------------------------------------
[root@test hadoop-2.4.1-src]# ls
BUILDING.txt hadoop-client hadoop-hdfs-project hadoop-minicluster hadoop-tools NOTICE.txt
dev-support hadoop-common-project hadoop-mapreduce-project hadoop-project hadoop-yarn-project pom.xml
hadoop-assemblies hadoop-dist hadoop-maven-plugins hadoop-project-dist LICENSE.txt README.txt
[root@test hadoop-2.4.1-src]# cd hadoop-dist/
[root@test hadoop-dist]# ls
pom.xml target
[root@test hadoop-dist]# cd target/
[root@test target]# ls
antrun dist-tar-stitching.sh hadoop-2.4.1.tar.gz hadoop-dist-2.4.1-javadoc.jar maven-archiver
dist-layout-stitching.sh hadoop-2.4.1 hadoop-dist-2.4.1.jar javadoc-bundle-options test-dir [root@test target]#
Recompiling Hadoop 2.6.0 on 64-bit CentOS 5.6
Install gcc, gcc-c++, JDK 1.7, and cmake via yum.
Download and unpack (tar xvpfz) the following packages:
apache-ant-1.9.5-bin.tar.gz
apache-maven-3.3.3-bin.tar.gz
findbugs-2.0.2.tar.gz
protobuf-2.5.0.tar.gz
hadoop-2.6.0-src.tar.gz
Configure /etc/profile:
export ANT_HOME=/home/hadoop/apache-ant-1.9.5
export FINDBUGS_HOME=/home/hadoop/findbugs-2.0.2
export MAVEN_HOME=/home/hadoop/apache-maven-3.3.3
export JAVA_HOME=/usr/java/jdk1.7.0_75
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$JAVA_HOME/jre/lib/ext/mysql5.jar
export HADOOP_PREFIX=/home/hadoop/hadoop-2.6.0
export PATH=$PATH:$JAVA_HOME/bin:$MAVEN_HOME/bin:$HADOOP_PREFIX/bin:$ANT_HOME/bin:$FINDBUGS_HOME/bin
Run source /etc/profile.
Build and install protobuf:
./configure
make; make install
ldconfig -v
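Before moving on, it is worth verifying protobuf (assuming /usr/local/bin is on PATH):
protoc --version
# expected output: libprotoc 2.5.0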
5. Enter the hadoop source directory and build with mvn:
mvn clean package -DskipTests -Pdist,native -Dtar
Some people add -Pdist,native,doc to also build the javadoc; skip it if the build is too slow.
If the build breaks midway (an error or a dropped connection), simply rerunning it can keep failing; delete the tree with rm -fr hadoop-2.6.0-src, re-extract it with tar xvpfz, and build again.
6. Copy the compiled native libraries over the ones in the distribution unpacked from hadoop-2.6.0.tar.gz so they are the ones actually used. Concretely:
cp -r /home/hadoop/hadoop-2.6.0-src/hadoop-dist/target/hadoop-2.6.0/lib/native/* /home/hadoop/hadoop-2.6.0/lib/native/
The lib directory on slave-node servers does not need to be copied.
Start DFS with start-dfs.sh and check it at http://w.x.y.z:50070; start YARN with start-yarn.sh and check it at http://w.x.y.z:8088 (the pages may take a moment to come up).
7. Check the cluster with hdfs dfsadmin -report and verify that the 'Unable to load native-hadoop library for your platform' WARNING no longer appears; normally it is gone.
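Hadoop also ships a native-library probe that makes this check explicit (a sketch, assuming the hadoop binary is on PATH):
hadoop checknative -a
# hadoop and zlib should report true with a library path; optional codecs
# (snappy, lz4, bzip2, openssl) may report false if their devel packages
# were absent at build time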
Building 64-bit hadoop-2.7.1 on CentOS 6.3
yum install svn
yum install autoconf automake libtool cmake
yum install ncurses-devel
yum install openssl-devel
yum install gcc*
Install jdk1.7.0_80 and configure its environment variables.
Install maven:
wget http://mirrors.noc.im/apache/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz
and set its environment variables.
8. Install ant
Download and unpack apache-ant-1.9.6-bin.tar.gz:
mv ~/apache-ant-1.9.6 /usr/local
Add the environment variables:
ANT_HOME=/usr/local/apache-ant-1.9.6
PATH=$PATH:$ANT_HOME/bin
Verify the installation:
ant -version
9. Install protobuf-2.5.0.tar.gz:
tar zxvf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
./configure
make
make check
make install
ls /usr/local/bin
10. Enter the hadoop-2.7.1-src directory and run:
mvn clean package -Pdist,native -DskipTests -Dtar
or
mvn package -Pdist,native -DskipTests -Dtar
Wait for the build to finish (about an hour).
The built tarball ends up at ./hadoop-dist/target/hadoop-2.7.1.tar.gz.
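To confirm the result really is 64-bit, inspect the native library with file (paths assume the default build layout):
cd hadoop-dist/target
tar zxf hadoop-2.7.1.tar.gz
file hadoop-2.7.1/lib/native/libhadoop.so.1.0.0
# expected: ELF 64-bit LSB shared object, x86-64 ...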
Compiling and installing hadoop-2.2.0 on CentOS 6.5
I have been tinkering with Hadoop these past few days, starting with installation, and ran into quite a few problems along the way. I am summarizing the whole process here; there is already plenty of material online, but I think the important steps are worth recording so that problems are easier to find and fix later, and so that this may serve as a reference for others.
My system is 64-bit CentOS 6.5, and I compiled and installed hadoop-2.2.0 configured as a single node. Apache Hadoop offers two hadoop-2.2.0 downloads: 1) the binary release, hadoop-2.2.0.tar.gz, and 2) the source release, hadoop-2.2.0-src.tar.gz. The binary release only needs unpacking and configuring; the source release must be compiled first and then configured.
My first installation used the binary release, but since my system is 64-bit, the bundled native-hadoop library was unusable (it is built for 32-bit systems, not 64-bit ones), and I kept seeing: "WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable" (see problem 3.2 below). So I recompiled from source.
1. Compiling the hadoop-2.2.0 source code
1.1 Preparing the build environment
While compiling I followed a detailed post on building and installing hadoop 2.2.0 on CentOS. The software and packages to install fall into two groups:
- Installed via yum: java, gcc, gcc-c++, make, lzo-devel, zlib-devel, autoconf, automake, libtool, ncurses-devel, openssl-devel.
- Installed manually: Maven, Protocol Buffers.
Most of the yum packages may already be preinstalled on CentOS 6.5. Check with yum info package first; install with yum -y install package, and update with yum -y update package; a combined install command is sketched below.
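For example, the whole list can be pulled in with one line (a sketch; java here means the OpenJDK devel package, adjust to taste):
yum -y install java-1.7.0-openjdk-devel gcc gcc-c++ make lzo-devel zlib-devel autoconf automake libtool ncurses-devel openssl-devel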
For the manually installed software, download the packages first and then install them. The exact versions I used are protobuf-2.5.0.tar.gz (http://download.csdn.net/detail/erli11/7408809 — a mirror, since the official site may be blocked) and apache-maven-3.0.5-bin.zip (mirror.bit.edu.cn/apache/maven/maven-3/3.0.5/binaries/apache-maven-3.0.5-bin.zip). protobuf must be built from source; maven is a binary release and only needs environment variables. Note: do not use Maven 3.1.1 — it has compatibility problems with Maven 3.0.x and fails to download plugins, producing a maven "ServiceUnavailable" error. Using the oschina maven mirror is recommended, since some foreign sites may be blocked. Installing both is covered in the post mentioned above.
After installing everything listed above, configure the environment variables so the system can find the corresponding commands. This is what I added to /root/.bashrc:
export JAVA_HOME="/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71.x86_64"
export CLASSPATH=.:${JAVA_HOME}/lib/:${JAVA_HOME}/jre/lib/
export PATH=${JAVA_HOME}/bin:${JAVA_HOME}/jre/bin:$PATH
export MAVEN_HOME="/usr/local/src/apache-maven-3.0.5"
export PATH=$PATH:$MAVEN_HOME/bin
export PROTOBUF_HOME="/usr/local/protobuf"
export PATH=$PATH:$PROTOBUF_HOME/bin
After editing /root/.bashrc, load it with source /root/.bashrc. Then verify that java and maven are installed correctly; output like the following means they are:
[root@lls ~]# java -version
java version "1.7.0_71"
OpenJDK Runtime Environment (rhel-2.5.3.1.el6-x86_64 u71-b14)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)
[root@lls ~]# mvn -version
Apache Maven 3.0.5 (r01de14724cdef164cd33c7c8c2fe155faf9602da; 2013-02-19 21:51:28+0800)
Maven home: /usr/local/src/apache-maven-3.0.5
Java version: 1.7.0_71, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71.x86_64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "2.6.32-431.29.2.el6.x86_64", arch: "amd64", family: "unix"
1.2 Compiling hadoop
Download the hadoop-2.2.0 source (http://apache.fastbull.org/hadoop/common/hadoop-2.2.0/hadoop-2.2.0-src.tar.gz; the official site no longer offers this release, but it can still be downloaded there). The source unpacked from the hadoop-2.2.0 source tarball has a bug that must be patched before it will compile; see https://issues.apache.org/jira/browse/HADOOP-10110.
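For reference, the patch attached to HADOOP-10110 boils down to adding a missing test dependency to hadoop-common-project/hadoop-auth/pom.xml (my paraphrase of the JIRA; check the issue itself for the authoritative diff). Inside that file's <dependencies> section, add:
<dependency>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>jetty-util</artifactId>
  <scope>test</scope>
</dependency>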
With everything in place, start the build:
cd /home/xxx/softwares/hadoop/hadoop-2.2.0-src
mvn package -Pdist,native -DskipTests -Dtar
The build takes a while; a successful run ends like this (maven using the default mirrors):
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................ SUCCESS [2.109s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [1.828s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [5.266s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.228s]
[INFO] Apache Hadoop Project dist POM .................... SUCCESS [2.184s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [3.562s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [3.128s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [2.444s]
[INFO] Apache Hadoop Common .............................. SUCCESS [1:17.748s]
[INFO] Apache Hadoop NFS ................................. SUCCESS [16.455s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [0.056s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [2:18.736s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [18.687s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [23.553s]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [3.453s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.046s]
[INFO] hadoop-yarn ....................................... SUCCESS [48.652s]
[INFO] hadoop-yarn-api ................................... SUCCESS [44.591s]
[INFO] hadoop-yarn-common ................................ SUCCESS [30.677s]
[INFO] hadoop-yarn-server ................................ SUCCESS [0.096s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [9.340s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [16.656s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [3.115s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [13.133s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [0.614s]
[INFO] hadoop-yarn-client ................................ SUCCESS [4.646s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [0.100s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [2.815s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [0.096s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [23.624s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [2.056s]
[INFO] hadoop-yarn-site .................................. SUCCESS [0.099s]
[INFO] hadoop-yarn-project ............................... SUCCESS [11.009s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [20.053s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [3.310s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [9.819s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [4.843s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [6.115s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [1.682s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [6.336s]
[INFO] hadoop-mapreduce .................................. SUCCESS [3.946s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [4.788s]
[INFO] Apache Hadoop distributed copy .................... SUCCESS [8.510s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [2.061s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [7.269s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [4.815s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [3.659s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [3.132s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [9.350s]
[INFO] Apache Hadoop Tools dist .......................... SUCCESS [1.850s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [0.023s]
[INFO] Apache Hadoop distribution ........................ SUCCESS [19.184s]
[INFO] Apache Hadoop Client .............................. SUCCESS [6.730s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [0.192s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 10:40.193s
[INFO] Finished at: Fri Nov 21 14:43:06 CST 2014
[INFO] Final Memory: 131M/471M
[INFO] ------------------------------------------------------------------------
2. Installing hadoop on a single node
The following installs hadoop-2.2.0 in single-node mode on 64-bit CentOS 6.5.
2.1 Creating the user group and user
[root@lls Desktop]# groupadd hadoopgroup
[root@lls Desktop]# useradd hadoopuser
[root@lls Desktop]# passwd hadoopuser
Changing password for user hadoopuser.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@lls Desktop]# usermod -g hadoopgroup hadoopuser
2.2 Installing and configuring SSH
Hadoop manages its nodes over SSH, so SSH must be configured even in single-node mode; otherwise you get a "connection refused on port 22" error. Make sure SSH is installed first; if not, install it with yum install openssh-server.
Generate an SSH key for the hadoop user so it can log in to the hadoop node without a password.
Note: this step is performed after switching to hadoopuser.
[hadoopuser@lls ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoopuser/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoopuser/.ssh/id_rsa.
Your public key has been saved in /home/hadoopuser/.ssh/id_rsa.pub.
The key fingerprint is:
0b:6e:2f:89:a5:42:42:40:b2:69:fc:3f:4c:84:33:eb hadoopuser@lls.pc
The key's randomart image is:
(randomart image omitted; the ASCII art did not survive formatting)
[hadoopuser@lls ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
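To confirm passwordless login works, a quick check (a sketch; on some systems ~/.ssh/authorized_keys must also be restricted to mode 600 first):
[hadoopuser@lls ~]$ chmod 600 ~/.ssh/authorized_keys
[hadoopuser@lls ~]$ ssh localhost
The login should succeed without a password prompt; type exit to return.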
2.3 Setting permissions on the installed files
I install hadoop under /usr/local: copy the compiled hadoop-2.2.0 tree there and change its owner:
cp -R /home/xxx/softwares/hadoop/hadoop-2.2.0-src/hadoop-dist/target/hadoop-2.2.0 /usr/local/
cd /usr/local/
mv hadoop-2.2.0/ hadoop
chown -R hadoopuser:hadoopgroup hadoop/
2.4 Creating the HDFS directories
cd /usr/local/hadoop/
mkdir -p data/namenode
mkdir -p data/datanode
mkdir -p data/secondarydatanode
2.5 Configuring hadoop-env.sh
In /usr/local/hadoop/etc/hadoop/hadoop-env.sh, add:
export JAVA_HOME="/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71.x86_64"
export HADOOP_HOME="/usr/local/hadoop"
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/native"
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
Note: comment out or delete the following two original lines, or the native-hadoop library will not load:
export JAVA_HOME=${JAVA_HOME}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}
2.6 Configuring core-site.xml
In /usr/local/hadoop/etc/hadoop/core-site.xml, add (inside the <configuration> tags):
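The original snippet did not survive formatting; a typical single-node value (fs.defaultFS pointing at a namenode on localhost:9000 is an assumption — adjust host and port to your setup) is:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>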
2.7 Configuring hdfs-site.xml
In /usr/local/hadoop/etc/hadoop/hdfs-site.xml, add (inside the <configuration> tags):
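This snippet is likewise missing from the original; a sketch that matches the directories created in 2.4 (replication 1 being the usual single-node choice) is:
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/usr/local/hadoop/data/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/usr/local/hadoop/data/datanode</value>
</property>
<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <value>file:/usr/local/hadoop/data/secondarydatanode</value>
</property>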
2.8 Configuring yarn-site.xml
In /usr/local/hadoop/etc/hadoop/yarn-site.xml, add (inside the <configuration> tags):
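Again the original snippet is missing; for hadoop-2.2.0 the minimal single-node settings are usually these two (an assumption based on the standard single-cluster docs):
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>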
2.9 Configuring mapred-site.xml
Create mapred-site.xml, then in /usr/local/hadoop/etc/hadoop/mapred-site.xml add (inside the <configuration> tags):
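The distribution ships only a template for this file, so a common approach (a sketch of the usual setup) is to create it from the template and add the one property that routes MapReduce onto YARN:
cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
Then, inside <configuration>:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>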
2.10 Adding hadoop to the executable path
Add the hadoop executable paths to /home/hadoopuser/.bashrc:
echo "export PATH=$PATH:/usr/local/hadoop/bin:/usr/local/hadoop/sbin" >> /home/hadoopuser/.bashrc
source ~/.bashrc
2.11 Formatting HDFS
[hadoopuser@lls hadoop]$ hdfs namenode -format
14/11/22 13:00:18 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = lls.pc/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.2.0
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:... (several hundred bundled jars omitted) ...:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = Unknown -r Unknown; compiled by 'root' on 2014-11-21T06:32Z
STARTUP_MSG:   java = 1.7.0_71
************************************************************/
14/11/22 13:00:18 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Formatting using clusterid: CID-bf1d252c-2710-45e6-af26-344debf86840
14/11/22 13:00:19 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/11/22 13:00:19 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/11/22 13:00:19 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/11/22 13:00:19 INFO util.GSet: Computing capacity for map BlocksMap
14/11/22 13:00:19 INFO util.GSet: VM type = 64-bit
14/11/22 13:00:19 INFO util.GSet: 2.0% max memory = 889 MB
14/11/22 13:00:19 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/11/22 13:00:19 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/11/22 13:00:19 INFO blockmanagement.BlockManager: defaultReplication = 1
14/11/22 13:00:19 INFO blockmanagement.BlockManager: maxReplication = 512
14/11/22 13:00:19 INFO blockmanagement.BlockManager: minReplication = 1
14/11/22 13:00:19 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
14/11/22 13:00:19 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
14/11/22 13:00:19 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/11/22 13:00:19 INFO blockmanagement.BlockManager: encryptDataTransfer = false
14/11/22 13:00:19 INFO namenode.FSNamesystem: fsOwner = hadoopuser (auth:SIMPLE)
14/11/22 13:00:19 INFO namenode.FSNamesystem: supergroup = supergroup
14/11/22 13:00:19 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/11/22 13:00:19 INFO namenode.FSNamesystem: HA Enabled: false
14/11/22 13:00:19 INFO namenode.FSNamesystem: Append Enabled: true
14/11/22 13:00:19 INFO util.GSet: Computing capacity for map INodeMap
14/11/22 13:00:19 INFO util.GSet: VM type = 64-bit
14/11/22 13:00:19 INFO util.GSet: 1.0% max memory = 889 MB
14/11/22 13:00:19 INFO util.GSet: capacity = 2^20 = 1048576 entries
14/11/22 13:00:19 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/11/22 13:00:19 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/11/22 13:00:19 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/11/22 13:00:19 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
14/11/22 13:00:19 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/11/22 13:00:19 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/11/22 13:00:19 INFO util.GSet: Computing capacity for map Namenode Retry Cache
14/11/22 13:00:19 INFO util.GSet: VM type = 64-bit
14/11/22 13:00:19 INFO util.GSet: 0.029999999329447746% max memory = 889 MB
14/11/22 13:00:19 INFO util.GSet: capacity = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /usr/local/hadoop/data/namenode ? (Y or N) y
14/11/22 13:00:39 INFO common.Storage: Storage directory /usr/local/hadoop/data/namenode has been successfully formatted.
14/11/22 13:00:39 INFO namenode.FSImage: Saving image file /usr/local/hadoop/data/namenode/current/fsimage.ckpt_0000000000000000000 using no compression
14/11/22 13:00:39 INFO namenode.FSImage: Image file /usr/local/hadoop/data/namenode/current/fsimage.ckpt_0000000000000000000 of size 202 bytes saved in 0 seconds.
14/11/22 13:00:39 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/11/22 13:00:39 INFO util.ExitUtil: Exiting with status 0
14/11/22 13:00:39 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at lls.pc/127.0.0.1
************************************************************/
2.12 Starting hadoop
[root@lls hadoop]# su hadoopuser
[hadoopuser@lls hadoop]$ start-dfs.sh && start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoopuser-namenode-lls.pc.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoopuser-datanode-lls.pc.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoopuser-secondarynamenode-lls.pc.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoopuser-resourcemanager-lls.pc.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoopuser-nodemanager-lls.pc.out
[root@lls hadoop]#
2.13 Checking hadoop status
Check the hadoop daemons:
[hadoopuser@lls data]$ jps
13466 Jps
18277 ResourceManager
17952 DataNode
18126 SecondaryNameNode
18394 NodeManager
17817 NameNode
The number in the left column is the PID of each java process (assigned dynamically when hadoop starts). DataNode, NameNode, NodeManager, SecondaryNameNode, and ResourceManager are the hadoop daemons.
HDFS has several built-in web services through which you can watch it run from a browser. More detailed hadoop status is available at:
- Cluster status: http://localhost:8088/cluster
- HDFS status: http://localhost:50070/dfshealth.jsp
- SecondaryNameNode status: http://localhost:50090/status.jsp
2.14 Testing hadoop
The following runs the pi example that ships with hadoop:
[hadoopuser@lls data]$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 10 100
Number of Maps = 10
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
14/11/22 13:15:23 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/11/22 13:15:24 INFO input.FileInputFormat: Total input paths to process : 10
14/11/22 13:15:24 INFO mapreduce.JobSubmitter: number of splits:10
14/11/22 13:15:24 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/11/22 13:15:24 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/11/22 13:15:24 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
14/11/22 13:15:24 INFO Configuration.deprecation: mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/11/22 13:15:24 INFO Configuration.deprecation: mapreduce.outputformat.class is deprecated. Instead, use mapreduce.job.outputformat.class
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/11/22 13:15:24 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1416632942167_0003
14/11/22 13:15:24 INFO impl.YarnClientImpl: Submitted application application_1416632942167_0003 to ResourceManager at /0.0.0.0:8032
14/11/22 13:15:24 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1416632942167_0003/
14/11/22 13:15:24 INFO mapreduce.Job: Running job: job_1416632942167_0003
14/11/22 13:15:31 INFO mapreduce.Job: Job job_1416632942167_0003 running in uber mode : false
14/11/22 13:15:31 INFO mapreduce.Job: map 0% reduce 0%
14/11/22 13:15:54 INFO mapreduce.Job: map 10% reduce 0%
14/11/22 13:15:55 INFO mapreduce.Job: map 50% reduce 0%
14/11/22 13:15:56 INFO mapreduce.Job: map 60% reduce 0%
14/11/22 13:16:17 INFO mapreduce.Job: map 90% reduce 0%
14/11/22 13:16:18 INFO mapreduce.Job: map 100% reduce 0%
14/11/22 13:16:19 INFO mapreduce.Job: map 100% reduce 100%
14/11/22 13:16:19 INFO mapreduce.Job: Job job_1416632942167_0003 completed successfully
14/11/22 13:16:19 INFO mapreduce.Job: Counters: 43
    File System Counters
        FILE: Number of bytes read=226
        FILE: Number of bytes written=879518
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=2700
        HDFS: Number of bytes written=215
        HDFS: Number of read operations=43
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=3
    Job Counters
        Launched map tasks=10
        Launched reduce tasks=1
        Data-local map tasks=10
        Total time spent by all maps in occupied slots (ms)=215911
        Total time spent by all reduces in occupied slots (ms)=20866
    Map-Reduce Framework
        Map input records=10
        Map output records=20
        Map output bytes=180
        Map output materialized bytes=280
        Input split bytes=1520
        Combine input records=0
        Combine output records=0
        Reduce input groups=2
        Reduce shuffle bytes=280
        Reduce input records=20
        Reduce output records=0
        Spilled Records=40
        Shuffled Maps=10
        Failed Shuffles=0
        Merged Map outputs=10
        GC time elapsed (ms)=3216
        CPU time spent (ms)=6420
        Physical memory (bytes) snapshot=2573750272
        Virtual memory (bytes) snapshot=10637529088
        Total committed heap usage (bytes)=2063073280
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=1180
    File Output Format Counters
        Bytes Written=97
Job Finished in 55.969 seconds
Estimated value of Pi is 3.14800000000000000000
Output like the above means the whole installation succeeded.
3. Problem summary
3.1 Hostname mapping error
STARTUP_MSG: host = java.net.UnknownHostException: lls.pc: lls.pc
Solution: see http://www.linuxidc.com/Linux/2012-03/55663.htm
3.2 Unable to load the native-hadoop library
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Solution: the native-hadoop library bundled with the binary release from the official site is built for 32-bit systems and does not work on 64-bit ones; on a 64-bit system you must compile it yourself. Once compiled, replace the original library with the 64-bit one (replace the whole native folder), then configure the environment variables carefully; see my configuration above or the links below.
Reference: http://stackoverflow.com/questions/19943766/hadoop-unable-to-load-native-hadoop-library-for-your-platform-error-on-centos
References:
[1] http://www.ercoppa.org/Linux-Compile-Hadoop-220-fix-Unable-to-load-native-hadoop-library.htm
[2] http://www.ercoppa.org/Linux-Install-Hadoop-220-on-Ubuntu-Linux-1304-Single-Node-Cluster.htm
[3] http://blog.csdn.net/w13770269691/article/details/16883663
[4] http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
[5] http://tecadmin.net/steps-to-install-hadoop-on-centosrhel-6/
[6] http://alanxelsys.com/2014/02/01/hadoop-2-2-single-node-installation-on-centos-6-5/
[7] http://blog.csdn.net/zwj0403/article/details/16855555
[8] http://www.oschina.net/question/1177468_193584
[9] http://maven.oschina.net/help.html
[10] http://stackoverflow.com/questions/19943766/hadoop-unable-to-load-native-hadoop-library-for-your-platform-error-on-centos
[11] https://issues.apache.org/jira/browse/HADOOP-10110
[12] http://www.linuxidc.com/Linux/2012-03/55663.htm
Compiling hadoop on 64-bit CentOS
If your CentOS installation is 64-bit, the binaries from the hadoop site are 32-bit, and running them directly produces messages like:
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
....
Java: ssh: Could not resolve hostname Java: Name or service not known
HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Name or service not known
64-Bit: ssh: Could not resolve hostname 64-Bit: Name or service not known
This is fixed by compiling from source. I found a well-written post on it and verified that it works:
http://blog.csdn.net/w13770269691/article/details/16883663
When building some of the software, running ./configure may fail with:
configure: error: C++ preprocessor "/lib/cpp" fails sanity check
See `config.log' for more details
Fix: this happens when the C++ compiler packages are missing. Log in as root and run:
# yum install glibc-headers
# yum install gcc-c++
That wraps up compiling Hadoop-2.4.1 on CentOS 6.8 and building the hadoop source. Thanks for reading; for more on recompiling Hadoop 2.6.0 on 64-bit CentOS 5.6, building 64-bit hadoop-2.7.1 on CentOS 6.3, compiling and installing hadoop-2.2.0 on CentOS 6.5, or compiling hadoop on 64-bit CentOS, search this site.