Compiling Hadoop 2.6
Ⅰ Recompiling Hadoop reports the following error - what does it mean?
Error output from the first compile attempt is shown below.
The main change was adding the following to conf/settings.xml under the Maven installation directory:
<mirror>
<id>nexus-osc</id>
<mirrorOf>*</mirrorOf>
<name>Nexusosc</name>
<url>http://maven.oschina.net/content/groups/public/</url>
</mirror>
<profile>
<id>jdk-1.7</id>
<activation>
<jdk>1.7</jdk>
</activation>
<repositories>
<repository>
<id>nexus</id>
<name>local private nexus</name>
<url>http://maven.oschina.net/content/groups/public/</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>nexus</id>
<name>local private nexus</name>
<url>http://maven.oschina.net/content/groups/public/</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
</pluginRepositories>
</profile>
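Before re-running the build, it can help to confirm that the mirror actually serves artifacts. This is a hypothetical check of my own, not part of the original steps; a 502 response here would match the Bad Gateway failure in the log below:
# Hypothetical check of the oschina mirror: expect "HTTP/1.1 200 OK" if it is healthy;
# "502 Bad Gateway" matches the Maven error shown in the log that follows.
curl -I http://maven.oschina.net/content/groups/public/org/apache/maven/shared/maven-shared-utils/0.3/maven-shared-utils-0.3.pom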
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................. FAILURE [06:34 min]
[INFO] Apache Hadoop Project POM .......................... SKIPPED
[INFO] Apache Hadoop Annotations .......................... SKIPPED
[INFO] Apache Hadoop Assemblies ........................... SKIPPED
[INFO] Apache Hadoop Project Dist POM ..................... SKIPPED
[INFO] Apache Hadoop Maven Plugins ........................ SKIPPED
[INFO] Apache Hadoop MiniKDC .............................. SKIPPED
[INFO] Apache Hadoop Auth ................................. SKIPPED
[INFO] Apache Hadoop Auth Examples ........................ SKIPPED
[INFO] Apache Hadoop Common ............................... SKIPPED
[INFO] Apache Hadoop NFS .................................. SKIPPED
[INFO] Apache Hadoop Common Project ....................... SKIPPED
[INFO] Apache Hadoop HDFS ................................. SKIPPED
[INFO] Apache Hadoop HttpFS ............................... SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal .............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS Project ......................... SKIPPED
[INFO] hadoop-yarn ........................................ SKIPPED
[INFO] hadoop-yarn-api .................................... SKIPPED
[INFO] hadoop-yarn-common ................................. SKIPPED
[INFO] hadoop-yarn-server ................................. SKIPPED
[INFO] hadoop-yarn-server-common .......................... SKIPPED
[INFO] hadoop-yarn-server-nodemanager ..................... SKIPPED
[INFO] hadoop-yarn-server-web-proxy ....................... SKIPPED
[INFO] hadoop-yarn-server-applicationhistoryservice ....... SKIPPED
[INFO] hadoop-yarn-server-resourcemanager ................. SKIPPED
[INFO] hadoop-yarn-server-tests ........................... SKIPPED
[INFO] hadoop-yarn-client ................................. SKIPPED
[INFO] hadoop-yarn-applications ........................... SKIPPED
[INFO] hadoop-yarn-applications-distributedshell .......... SKIPPED
[INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SKIPPED
[INFO] hadoop-yarn-site ................................... SKIPPED
[INFO] hadoop-yarn-project ................................ SKIPPED
[INFO] hadoop-mapreduce-client ............................ SKIPPED
[INFO] hadoop-mapreduce-client-core ....................... SKIPPED
[INFO] hadoop-mapreduce-client-common ..................... SKIPPED
[INFO] hadoop-mapreduce-client-shuffle .................... SKIPPED
[INFO] hadoop-mapreduce-client-app ........................ SKIPPED
[INFO] hadoop-mapreduce-client-hs ......................... SKIPPED
[INFO] hadoop-mapreduce-client-jobclient .................. SKIPPED
[INFO] hadoop-mapreduce-client-hs-plugins ................. SKIPPED
[INFO] Apache Hadoop MapReduce Examples ................... SKIPPED
[INFO] hadoop-mapreduce ................................... SKIPPED
[INFO] Apache Hadoop MapReduce Streaming .................. SKIPPED
[INFO] Apache Hadoop Distributed Copy ..................... SKIPPED
[INFO] Apache Hadoop Archives ............................. SKIPPED
[INFO] Apache Hadoop Rumen ................................ SKIPPED
[INFO] Apache Hadoop Gridmix .............................. SKIPPED
[INFO] Apache Hadoop Data Join ............................ SKIPPED
[INFO] Apache Hadoop Extras ............................... SKIPPED
[INFO] Apache Hadoop Pipes ................................ SKIPPED
[INFO] Apache Hadoop OpenStack support .................... SKIPPED
[INFO] Apache Hadoop Client ............................... SKIPPED
[INFO] Apache Hadoop Mini-Cluster ......................... SKIPPED
[INFO] Apache Hadoop Scheduler Load Simulator ............. SKIPPED
[INFO] Apache Hadoop Tools Dist ........................... SKIPPED
[INFO] Apache Hadoop Tools ................................ SKIPPED
[INFO] Apache Hadoop Distribution ......................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 10:10 min
[INFO] Finished at: 2015-01-12T00:52:27-08:00
[INFO] Final Memory: 29M/70M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-site-plugin:3.3:attach-descriptor (attach-descriptor) on project hadoop-main: Execution attach-descriptor of goal org.apache.maven.plugins:maven-site-plugin:3.3:attach-descriptor failed: Plugin org.apache.maven.plugins:maven-site-plugin:3.3 or one of its dependencies could not be resolved: Failed to collect dependencies at org.apache.maven.plugins:maven-site-plugin:jar:3.3 -> org.apache.maven.reporting:maven-reporting-exec:jar:1.1 -> org.apache.maven.shared:maven-shared-utils:jar:0.3: Failed to read artifact descriptor for org.apache.maven.shared:maven-shared-utils:jar:0.3: Could not transfer artifact org.apache.maven.shared:maven-shared-utils:pom:0.3 from/to nexus-osc (http://maven.oschina.net/content/groups/public/): Failed to transfer file:http://maven.oschina.net/content/groups/public/org/apache/maven/shared/maven-shared-utils/0.3/maven-shared-utils-0.3.pom. Return code is: 502, ReasonPhrase: Bad Gateway. -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
[hadoop@localhost hadoop-2.5.0-src]$
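In other words, the build itself did not fail on code: the nexus-osc mirror answered with 502 Bad Gateway, so Maven could not download maven-shared-utils-0.3.pom. A minimal sketch of one way around it, assuming the mirror stays broken: comment out (or delete) the <mirror> entry in conf/settings.xml so Maven falls back to Maven Central, then re-run the usual Hadoop release build from the source root:
# A sketch, not a guaranteed fix; the profile/repository settings can stay, only the broken mirror is bypassed.
cd hadoop-2.5.0-src
mvn clean package -Pdist,native -DskipTests -Dtar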
Ⅱ What are the steps to install Hadoop?
Hadoop 2.0 has had stable releases for some time and adds many features, such as HDFS HA and YARN. The latest hadoop-2.4.1 also adds YARN HA.
Note: the hadoop-2.4.1 package provided by Apache was compiled on a 32-bit operating system. Because Hadoop depends on some C++ native libraries,
installing hadoop-2.4.1 on a 64-bit OS requires recompiling it on a 64-bit OS.
(For a first install a 32-bit system is recommended; I have also uploaded a pre-built 64-bit version to the group share, and you can compile it yourself if you are interested.)
I won't go into the preparatory work in detail; it was all covered in class:
1. Change the Linux hostname
2. Configure the IP address
3. Configure the hostname-to-IP mapping
###### Note ###### If your company rents servers or uses cloud hosts (such as Huawei Cloud or Aliyun hosts),
/etc/hosts must map hostnames to the internal (private) IP addresses.
4. Disable the firewall
5. Set up passwordless SSH login
6. Install the JDK, configure environment variables, etc.
Cluster plan:
Hostname  IP  Installed software  Running processes
HA181 192.168.1.181 jdk、hadoop NameNode、DFSZKFailoverController(zkfc)
HA182 192.168.1.182 jdk、hadoop NameNode、DFSZKFailoverController(zkfc)
HA183 192.168.1.183 jdk、hadoop ResourceManager
HA184 192.168.1.184 jdk、hadoop ResourceManager
HA185 192.168.1.185 jdk、hadoop、zookeeper DataNode、NodeManager、JournalNode、QuorumPeerMain
HA186 192.168.1.186 jdk、hadoop、zookeeper DataNode、NodeManager、JournalNode、QuorumPeerMain
HA187 192.168.1.187 jdk、hadoop、zookeeper DataNode、NodeManager、JournalNode、QuorumPeerMain
Notes:
1. In hadoop 2.0 HDFS HA usually consists of two NameNodes, one in the active state and one in the standby state. The active NameNode serves client requests, while the standby NameNode does not; it only synchronizes the active NameNode's state so that it can take over quickly if the active one fails.
Hadoop 2.0 officially provides two HDFS HA solutions, one based on NFS and one based on QJM; here we use the simpler QJM. In this scheme the active and standby NameNodes synchronize metadata through a group of JournalNodes, and an edit counts as written once it has been successfully written to a majority of the JournalNodes. An odd number of JournalNodes is usually configured.
A ZooKeeper cluster is also configured here for ZKFC (DFSZKFailoverController) failover: when the active NameNode goes down, the standby NameNode is automatically switched to active.
2. hadoop-2.2.0 still had the problem that there was only one ResourceManager, a single point of failure. hadoop-2.4.1 solves this: there are two ResourceManagers, one active and one standby, with the state coordinated through ZooKeeper.
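As a quick illustration of the active/standby behaviour described above, once the cluster below is running you can query each NameNode's HA state (a sketch; nn1 and nn2 are the NameNode IDs configured in hdfs-site.xml later in this guide):
hdfs haadmin -getServiceState nn1    # expected: active (or standby after a failover)
hdfs haadmin -getServiceState nn2    # expected: standby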
Installation steps:
1. Install and configure the ZooKeeper cluster (on HA185)
1.1 Unpack
tar -zxvf zookeeper-3.4.5.tar.gz -C /app/
1.2 Edit the configuration
cd /app/zookeeper-3.4.5/conf/
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
Change: dataDir=/app/zookeeper-3.4.5/tmp
Append at the end:
server.1=HA185:2888:3888
server.2=HA186:2888:3888
server.3=HA187:2888:3888
Save and exit.
Then create the tmp directory:
mkdir /app/zookeeper-3.4.5/tmp
Then create an empty file:
touch /app/zookeeper-3.4.5/tmp/myid
Finally, write the ID into that file:
echo 1 > /app/zookeeper-3.4.5/tmp/myid
1.3 Copy the configured zookeeper to the other nodes (first create the /app directory on HA186 and HA187 if it does not already exist: mkdir /app)
scp -r /app/zookeeper-3.4.5/ HA186:/app/
scp -r /app/zookeeper-3.4.5/ HA187:/app/
Note: change the content of /app/zookeeper-3.4.5/tmp/myid on HA186 and HA187 accordingly:
HA186:
echo 2 > /app/zookeeper-3.4.5/tmp/myid
HA187:
echo 3 > /app/zookeeper-3.4.5/tmp/myid
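Optional sanity check (my own addition, not part of the original steps): before starting ZooKeeper, confirm each node reports its own id:
cat /app/zookeeper-3.4.5/tmp/myid    # expect 1 on HA185, 2 on HA186, 3 on HA187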
2. Install and configure the Hadoop cluster (perform on HA181)
2.1 Unpack
tar -zxvf hadoop-2.4.1.tar.gz -C /app/
2.2 Configure HDFS (in hadoop 2.0 all configuration files live under $HADOOP_HOME/etc/hadoop)
# Add hadoop to the environment variables
vim /etc/profile
export JAVA_HOME=/app/jdk1.7.0_79
export HADOOP_HOME=/app/hadoop-2.4.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
# All hadoop 2.0 configuration files are under $HADOOP_HOME/etc/hadoop
cd /app/hadoop-2.4.1/etc/hadoop
2.2.1 Edit hadoop-env.sh
export JAVA_HOME=/app/jdk1.7.0_79
2.2.2 Edit core-site.xml
<configuration>
<!-- Specify ns1 as the HDFS nameservice -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://ns1/</value>
</property>
<!-- Hadoop temporary directory -->
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop-2.4.1/tmp</value>
</property>
<!-- ZooKeeper quorum addresses -->
<property>
<name>ha.zookeeper.quorum</name>
<value>HA185:2181,HA186:2181,HA187:2181</value>
</property>
</configuration>
2.2.3 Edit hdfs-site.xml
<configuration>
<!-- HDFS nameservice ns1; must match core-site.xml -->
<property>
<name>dfs.nameservices</name>
<value>ns1</value>
</property>
<!-- ns1 has two NameNodes, nn1 and nn2 -->
<property>
<name>dfs.ha.namenodes.ns1</name>
<value>nn1,nn2</value>
</property>
<!-- RPC address of nn1 -->
<property>
<name>dfs.namenode.rpc-address.ns1.nn1</name>
<value>HA181:9000</value>
</property>
<!-- HTTP address of nn1 -->
<property>
<name>dfs.namenode.http-address.ns1.nn1</name>
<value>HA181:50070</value>
</property>
<!-- RPC address of nn2 -->
<property>
<name>dfs.namenode.rpc-address.ns1.nn2</name>
<value>HA182:9000</value>
</property>
<!-- HTTP address of nn2 -->
<property>
<name>dfs.namenode.http-address.ns1.nn2</name>
<value>HA182:50070</value>
</property>
<!-- Where the NameNode metadata (edits) is stored on the JournalNodes -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://HA185:8485;HA186:8485;HA187:8485/ns1</value>
</property>
<!-- Where each JournalNode stores its data on local disk -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/app/hadoop-2.4.1/journaldata</value>
</property>
<!-- Enable automatic NameNode failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- Failover proxy provider used by clients -->
<property>
<name>dfs.client.failover.proxy.provider.ns1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing methods; multiple methods are separated by newlines, one per line -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
shell(/bin/true)
</value>
</property>
<!-- sshfence requires passwordless SSH -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_rsa</value>
</property>
<!-- sshfence connect timeout (ms) -->
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
</configuration>
2.2.4 Edit mapred-site.xml
<configuration>
<!-- Run MapReduce on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
2.2.5 Edit yarn-site.xml
<configuration>
<!-- Enable ResourceManager HA -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- ResourceManager cluster id -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>yrc</value>
</property>
<!-- ResourceManager ids -->
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<!-- Hostnames of the two ResourceManagers -->
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>HA183</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>HA184</value>
</property>
<!-- ZooKeeper cluster addresses -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>HA185:2181,HA186:2181,HA187:2181</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
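Once YARN is running (step 2.10 below), the RM HA configuration above can be verified with a quick sketch like this; rm1 and rm2 are the ids defined in yarn.resourcemanager.ha.rm-ids:
yarn rmadmin -getServiceState rm1    # expected: active
yarn rmadmin -getServiceState rm2    # expected: standby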
2.2.6 Edit slaves (the slaves file lists the worker nodes; HDFS is started on HA181 and YARN on HA183,
so the slaves file on HA181 determines where the DataNodes run and the slaves file on HA183 determines where the NodeManagers run)
HA185
HA186
HA187
2.2.7 Configure passwordless SSH login
# First configure passwordless login from HA181 to HA182, HA183, HA184, HA185, HA186, and HA187
# Generate a key pair on HA181
ssh-keygen -t rsa
# Copy the public key to all nodes, including this one
ssh-copy-id HA181
ssh-copy-id HA182
ssh-copy-id HA183
ssh-copy-id HA184
ssh-copy-id HA185
ssh-copy-id HA186
ssh-copy-id HA187
# Configure passwordless login from HA183 to HA184, HA185, HA186, and HA187
# Generate a key pair on HA183
ssh-keygen -t rsa
# Copy the public key to the other nodes
ssh-copy-id HA184
ssh-copy-id HA185
ssh-copy-id HA186
ssh-copy-id HA187
# Note: passwordless SSH must be configured between the two NameNodes; don't forget login from HA182 to HA181
Generate a key pair on HA182:
ssh-keygen -t rsa
ssh-copy-id HA181
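A quick check (hypothetical, not in the original steps): each of these should print the remote hostname without asking for a password if ssh-copy-id worked:
ssh HA182 hostname
ssh HA185 hostname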
2.4 Copy the configured hadoop to the other nodes
scp -r /app/hadoop-2.4.1/ HA182:/app/
scp -r /app/hadoop-2.4.1/ HA183:/app/
scp -r /app/hadoop-2.4.1/ HA184:/app/
scp -r /app/hadoop-2.4.1/ HA185:/app/
scp -r /app/hadoop-2.4.1/ HA186:/app/
scp -r /app/hadoop-2.4.1/ HA187:/app/
### Note: follow the steps below strictly
2.5 Start the ZooKeeper cluster (start zk on HA185, HA186, and HA187 respectively)
cd /app/zookeeper-3.4.5/bin/
./zkServer.sh start
# Check the status: one leader and two followers
./zkServer.sh status
2.6 Start the JournalNodes (run on HA185, HA186, and HA187 respectively)
cd /app/hadoop-2.4.1
sbin/hadoop-daemon.sh start journalnode
# Verify with jps: HA185, HA186, and HA187 should each show an additional JournalNode process
2.7 Format ZKFC (run on HA181 only): hdfs zkfc -formatZK
2.8 Format HDFS
# Run on HA181:
hdfs namenode -format
# Formatting generates files under the directory set by hadoop.tmp.dir in core-site.xml, here /app/hadoop-2.4.1/tmp; then copy /app/hadoop-2.4.1/tmp to /app/hadoop-2.4.1/ on HA182.
scp -r tmp/ HA182:/app/hadoop-2.4.1/
## Alternatively (and recommended): hdfs namenode -bootstrapStandby
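A sketch of the -bootstrapStandby alternative, under the assumption that the freshly formatted NameNode must be running before the standby can copy its metadata from it:
# on HA181
sbin/hadoop-daemon.sh start namenode
# on HA182
hdfs namenode -bootstrapStandby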
2.9 Start HDFS (run on HA181)
sbin/start-dfs.sh
2.10 Start YARN (##### Note #####: run start-yarn.sh on HA183. The NameNode and ResourceManager are kept on separate machines for performance reasons, since both consume a lot of resources; because they are separated, they have to be started on their respective machines.)
sbin/start-yarn.sh
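A rough way to confirm everything came up, based on the cluster plan at the top of this answer (an assumption of mine, not part of the original steps): run jps on each machine and compare with the planned processes:
jps
# HA181 / HA182: NameNode, DFSZKFailoverController
# HA183 / HA184: ResourceManager
# HA185 - HA187: DataNode, NodeManager, JournalNode, QuorumPeerMain
# If ResourceManager is missing on HA184, the standby RM may need to be started there manually:
# sbin/yarn-daemon.sh start resourcemanager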
At this point hadoop-2.4.1 is fully configured and you can access it in a browser:
http://192.168.1.181:50070
NameNode 'HA181:9000' (active)
http://192.168.1.182:50070
NameNode 'HA182:9000' (standby)
Verify HDFS HA
First upload a file to HDFS:
hadoop fs -put /etc/profile /profile
hadoop fs -ls /
Then kill the active NameNode:
kill -9 <pid of NN>
Visit in a browser: http://192.168.1.182:50070
NameNode 'HA182:9000' (active)
Now the NameNode on HA182 has become active.
Run the command again:
hadoop fs -ls /
-rw-r--r--   3 root supergroup       1926 2014-02-06 15:36 /profile
The file uploaded earlier is still there!!!
Manually start the NameNode that was killed:
sbin/hadoop-daemon.sh start namenode
Visit in a browser: http://192.168.1.181:50070
NameNode 'HA181:9000' (standby)
Verify YARN:
Run the WordCount demo shipped with hadoop:
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar wordcount /profile /out
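To inspect the job result, a small sketch (assuming the /out output path used above; part-r-00000 is the usual reducer output file name):
hadoop fs -ls /out
hadoop fs -cat /out/part-r-00000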
OK, all done!!!
Some commands for checking the cluster's working state:
bin/hdfs dfsadmin -report                  view the status of each HDFS node
bin/hdfs haadmin -getServiceState nn1      get the HA state of a NameNode
sbin/hadoop-daemon.sh start namenode       start a single NameNode process
./hadoop-daemon.sh start zkfc              start a single zkfc process
Ⅲ "javac: file not found" when compiling for Hadoop - help!!!!!
There are a few possibilities:
1. The classpath on Linux is set up incorrectly; "." has not been added to it.
2. The classpath you pass to javac is also wrong; compiling WordCount needs more than this one jar. As I recall there are other related jars, and it will only compile normally once they are all on your classpath (see the sketch after this list).
3. Compiling this way is clearly not ideal; it is simpler and more direct to compile on Windows and then copy the result to Linux to run.
Give it a try.
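For example, a minimal sketch of compiling WordCount on Linux, assuming WordCount.java is in the current directory; `hadoop classpath` prints all the Hadoop jars, which covers the extra jars mentioned in point 2:
mkdir -p wordcount_classes
javac -classpath .:$(hadoop classpath) -d wordcount_classes WordCount.java
jar -cvf wordcount.jar -C wordcount_classes/ .
hadoop jar wordcount.jar WordCount <input path> <output path>    # class name depends on your package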
Ⅳ Why compile the Hadoop source code?
Building Hadoop yourself makes it easy to inspect how a particular function is implemented; without building it you can only browse the source by hand. More importantly, once you have built Hadoop you can modify some of its internal mechanisms to suit your own needs (one benefit of Hadoop being open source).