
Spark Compilation

Published: 2022-01-08 03:27:30

‘壹’ How to fix Spark Shell failing to start because of the Scala compiler

Spark Shell fails to start because of the Scala compiler

After installing Spark with SBT, the bundled examples run fine, but launching spark-shell fails with an error:

D:\Scala\spark\bin\spark-shell.cmd
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/D:/Scala/spark/assembly/target/scala-2.10/spark-assembly-0.9.0-incubating-hadoop1.0.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/D:/Scala/spark/tools/target/scala-2.10/spark-tools-assembly-0.9.0-incubating.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
14/04/03 20:40:43 INFO HttpServer: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/04/03 20:40:43 INFO HttpServer: Starting HTTP Server

Failed to initialize compiler: object scala.runtime in compiler mirror not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programatically, settings.usejavacp.value = true.
14/04/03 20:40:44 WARN SparkILoop$SparkILoopInterpreter: Warning: compiler accessed before init set up.  Assuming no postInit code.

Failed to initialize compiler: object scala.runtime in compiler mirror not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programatically, settings.usejavacp.value = true.
Failed to initialize compiler: object scala.runtime in compiler mirror not found.
        at scala.Predef$.assert(Predef.scala:179)
        at org.apache.spark.repl.SparkIMain.initializeSynchronous(SparkIMain.scala:197)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:919)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:876)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:876)
        at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:876)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:968)
        at org.apache.spark.repl.Main$.main(Main.scala:31)
        at org.apache.spark.repl.Main.main(Main.scala)

Googling turned up nothing conclusive. The only related hit was a question in the SBT FAQ: http://www.scala-sbt.org/release/docs/faq#how-do-i-use-the-scala-interpreter-in-my-code, which explains how to change the setting from within your own code, so it clearly did not apply here.

Kept digging. The error message only exists in Scala 2.8 and later; the cause is that a proposal about the compiler/interpreter classpath was accepted: Default compiler/interpreter classpath in a managed environment.

Continuing the search, one article caught my attention: Object Scala Found. It finally described a workaround:



However, a working command can be recovered, like so:
$ jrunscript -Djava.class.path=scala-library.jar -Dscala.usejavacp=true -classpath scala-compiler.jar -l scala



So I modified \bin\spark-class2.cmd:

rem Set JAVA_OPTS to be able to load native libraries and to set heap size
set JAVA_OPTS=%OUR_JAVA_OPTS% -Djava.library.path=%SPARK_LIBRARY_PATH% -Dscala.usejavacp=true -Xms%SPARK_MEM% -Xmx%SPARK_MEM%
rem Attention: when changing the way the JAVA_OPTS are assembled, the change must be reflected in ExecutorRunner.scala!

The only change is the newly added -Dscala.usejavacp=true parameter (marked in red in the original post). Run \bin\spark-shell.cmd again:

D:>D:\Scala\spark\bin\spark-shell.cmd
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/D:/Scala/spark/assembly/target/scala-2.10/spark-assembly-0.9.0-incubating-hadoop1.0.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/D:/Scala/spark/tools/target/scala-2.10/spark-tools-assembly-0.9.0-incubating.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
14/04/03 22:18:41 INFO HttpServer: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/04/03 22:18:41 INFO HttpServer: Starting HTTP Server
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 0.9.0
      /_/

Using Scala version 2.10.3 (Java HotSpot(TM) Client VM, Java 1.6.0_10)
Type in expressions to have them evaluated.
Type :help for more information.
14/04/03 22:19:12 INFO Slf4jLogger: Slf4jLogger started
14/04/03 22:19:13 INFO Remoting: Starting remoting
14/04/03 22:19:16 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@Choco-PC:5960]
14/04/03 22:19:16 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@Choco-PC:5960]
14/04/03 22:19:16 INFO SparkEnv: Registering BlockManagerMaster
14/04/03 22:19:17 INFO DiskBlockManager: Created local directory at C:\Users\Choco\AppData\Local\Temp\spark-local-20140403221917-7172
14/04/03 22:19:17 INFO MemoryStore: MemoryStore started with capacity 304.8 MB.
14/04/03 22:19:18 INFO ConnectionManager: Bound socket to port 5963 with id = ConnectionManagerId(Choco-PC,5963)
14/04/03 22:19:18 INFO BlockManagerMaster: Trying to register BlockManager
14/04/03 22:19:18 INFO BlockManagerMasterActor$BlockManagerInfo: Registering block manager Choco-PC:5963 with 304.8 MB RAM
14/04/03 22:19:18 INFO BlockManagerMaster: Registered BlockManager
14/04/03 22:19:18 INFO HttpServer: Starting HTTP Server
14/04/03 22:19:18 INFO HttpBroadcast: Broadcast server started at http://192.168.1.100:5964
14/04/03 22:19:18 INFO SparkEnv: Registering MapOutputTracker
14/04/03 22:19:18 INFO HttpFileServer: HTTP File server directory is C:\Users\Choco\AppData\Local\Temp\spark-e122cfe9-2d62-4a47-920c-96b54e4658f6
14/04/03 22:19:18 INFO HttpServer: Starting HTTP Server
14/04/03 22:19:22 INFO SparkUI: Started Spark Web UI at http://Choco-PC:4040
14/04/03 22:19:22 INFO Executor: Using REPL class URI: http://192.168.1.100:5947
Created spark context..
Spark context available as sc.

scala> :quit
Stopping spark context.
14/04/03 23:05:21 INFO MapOutputTrackerMasterActor: MapOutputTrackerActor stopped!
14/04/03 23:05:21 INFO ConnectionManager: Selector thread was interrupted!
14/04/03 23:05:21 INFO ConnectionManager: ConnectionManager stopped
14/04/03 23:05:21 INFO MemoryStore: MemoryStore cleared
14/04/03 23:05:21 INFO BlockManager: BlockManager stopped
14/04/03 23:05:21 INFO BlockManagerMasterActor: Stopping BlockManagerMaster
14/04/03 23:05:21 INFO BlockManagerMaster: BlockManagerMaster stopped
14/04/03 23:05:21 INFO SparkContext: Successfully stopped SparkContext
14/04/03 23:05:21 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
14/04/03 23:05:21 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.

Good. Opening http://Choco-PC:4040 in a browser now shows Spark's status, environment, executors, and so on.

This fix may only apply to my particular setup. If you still have problems, look for more material on the topic.

Along the way I also hit a file-not-found error, which turned out to be a wrong JAVA_HOME. If you run into problems, turn on command echo in the scripts and trace the cause from there.
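A minimal sketch of that last tip (assuming the stock Windows launcher scripts shipped with this Spark build): flip the echo directive at the top of the script so every expanded command is printed before it runs.

rem in bin\spark-shell.cmd / bin\spark-class2.cmd, change the first line from "@echo off" to:
@echo on
rem then re-run and inspect the expanded JAVA_HOME, classpath and JAVA_OPTS values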

‘贰’ Newbie question: why does installing Spark require compiling from source?

Because different HDFS versions are not protocol-compatible with each other. If you want your Spark build to read data from your HDFS, you have to compile Spark against the matching Hadoop version, which is selected at build time via the hadoop.version property; by default Spark is compiled against Hadoop 1.0.4. The available build methods are Maven, sbt (slower), and Spark's bundled build script (which ultimately calls Maven anyway).
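For example, a minimal sketch of a Maven build against a specific Hadoop version (the version number and the yarn profile below are placeholders to adapt to your cluster):

# build Spark against a specific Hadoop/HDFS version; 2.4.0 and -Pyarn are examples only
mvn -Dhadoop.version=2.4.0 -Pyarn -DskipTests clean package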

‘叁’ Compile error when running the Spark WordCount program in Eclipse

The class cannot be found, which means the jar that contains it is missing from your build path; search for the right jar by the class name.
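A minimal sketch of that search on the command line (the directory and class name below are placeholders):

# list which local jar actually contains the missing class
for f in /path/to/libs/*.jar; do
  unzip -l "$f" 2>/dev/null | grep -q 'SomeMissingClass.class' && echo "$f"
done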

‘肆’ How to compile spark-streaming-flume

Storm does real-time processing, while Spark and Hadoop are batch-oriented, so they complement each other. Since Hadoop 2.0, Hadoop runs on the new YARN framework, where MapReduce is just one of the supported computation models; Spark can also run on YARN, so the two will keep converging.
Spark also has Spark Streaming, which covers the same real-time stream processing ground as Storm. You can work your way through Hadoop -> Spark -> Spark Streaming; whether to learn Storm as well is up to you. Given that Spark Streaming offers the same functionality, learning Spark Streaming is usually the better choice.
If my answer did not help, feel free to ask a follow-up question.
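To actually build the spark-streaming-flume artifact, a minimal sketch (assuming a Spark 1.x source tree, where the module lives under external/flume) is to build just that module and whatever it depends on:

# -pl selects the flume module, -am also builds the modules it depends on
mvn -pl external/flume -am -DskipTests package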

‘伍’ Does Spark standalone mode still need to be compiled?

Spark has three cluster deployment modes:

1. Standalone mode: Spark ships with its own complete resource management.

2. A Spark cluster running on top of Hadoop (YARN).

3. A Spark cluster running on top of Mesos.

I tried setting up the first (standalone) kind of cluster; the installation steps are recorded below:

Environment: Ubuntu 12.04 on two machines. As with Hadoop, the approach is to get one machine working, then pack up the files and copy them to the other machines. Assume we deploy on machine A, A is the master, and B is the slave.

Create a user named spark on both A and B:

sudo useradd spark
From now on, the Spark directory on every machine in the cluster is /home/spark/spark (the first spark is the user name, the second is the Spark directory name).

Make sure A can log in to the spark user on B without a password, configured through ssh.

This part is configured on the master machine (A).

0 First make sure A can ssh to both localhost and B without a password, as follows:

0.1 On machine A, run:

ssh-keygen -t rsa
cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
ssh localhost
Now A can log in to localhost without a password.

0.2 On machine B, run:

ps -e|grep ssh
If the output contains:

695 ? 00:00:00 sshd

1754 ? 00:00:00 ssh-agent

If sshd is not listed, run the following on B:

sudo apt-get install openssh-server
This installs the ssh server on B (Ubuntu may ship only the client/agent by default).

0.3 On B, run:

ssh-keygen -t rsa
scp spark@A:~/.ssh/authorized_keys ~/.ssh
The first command just ensures that B has a .ssh directory.

The second copies A's public key over to B, so that A can access B without a password.

0.4 On A, run gedit ~/.ssh/config and add:

user spark
This makes A log in to B as the spark user by default. Strictly speaking this step is unnecessary: both A and B are operated under the spark user anyway, so when machine A's spark user runs ssh B it already logs in as spark.
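A minimal sketch of what ~/.ssh/config on A could look like if you do add it (the host alias and key path are examples):

# ~/.ssh/config on A
Host B
    User spark
    IdentityFile ~/.ssh/id_rsa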

1 Make sure every machine has Java. One quick (if heavyweight) way:

sudo apt-get install eclipse
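If you would rather not pull in all of Eclipse just to get a JDK, a minimal alternative sketch (package name assumed for Ubuntu 12.04) is:

sudo apt-get install openjdk-7-jdk
java -version        # confirm a JDK is on the PATH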
2 Maven is needed to build the Spark source. Download Maven from the Apache site; any recent version will do.

The easy way:

sudo apt-get install maven
The manual way:

wget http://mirrors.cnnic.cn/apache/maven/maven-3/3.2.2/binaries/apache-maven-3.2.2-bin.tar.gz
tar -zxvf apache-maven-3.2.2-bin.tar.gz
mv apache-maven-3.2.2 maven        # rename the extracted directory (not the tarball)
sudo mv maven /usr/local
Then run gedit /etc/profile and append the following at the end:

# set maven environment
export M2_HOME=/usr/local/maven
export MAVEN_OPTS="-Xms256m -Xmx512m"
export PATH=$M2_HOME/bin:$PATH
Verify that Maven is installed correctly:

source /etc/profile
mvn -v
You should see something like: Apache Maven 3.2.2 (2014-06-17T21:51:42+08:00)

3 Download Spark from the Apache site. Be careful not to grab a prebuilt package with hadoop or similar in its name; get the source package instead, e.g. spark-1.0.0.tgz.

tar -zxvf spark-1.0.0.tgz
mv spark-1.0.0 spark
cd spark
sh make-distribution.sh
The last step compiles the Spark source. It can take a while depending on your network and machine; mine took 19 minutes. The original post illustrated a successful build with a screenshot (taken from the web), which is not reproduced here.
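If the build dies with OutOfMemoryError or PermGen errors, a commonly suggested sketch (the memory sizes are examples) is to raise the Maven heap before running the script, since make-distribution.sh drives Maven under the hood:

export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M"
sh make-distribution.sh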

4 Configure Spark

4.1 Run gedit ./conf/spark-env.sh and append the following to spark-env.sh:

export SPARK_MASTER_IP=A
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_MEMORY=1g
export MASTER=spark://${SPARK_MASTER_IP}:${SPARK_MASTER_PORT}
Note that SPARK_MASTER_IP is best set to the master machine's actual IP address or resolvable hostname; here I assume the master's hostname is A.

SPARK_WORKER_INSTANCES is the number of worker instances launched on each slave machine (not the number of slaves); with B as the only slave, 1 is what we want here.

4.2 Run gedit ./conf/slaves and add B's hostname. Since B's hostname is assumed to be simply B, just append a line containing B. The file originally contains a localhost line: keep it if you want the master to double as a worker, otherwise delete it.

5 Verify that master machine A can start Spark on its own.
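A minimal sketch of step 5 (assuming a Spark 1.0-style layout where the daemon scripts live under sbin/):

cd /home/spark/spark
./sbin/start-master.sh          # start the master daemon on A
# the master web UI should now be reachable at http://A:8080
./bin/spark-shell               # with MASTER set in spark-env.sh, the shell should connect to spark://A:7077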

‘陆’ Compiling Spark with Maven fails with an error, does anyone know why?

The standard output path for a Maven build is mavenProject/target/classes. Right-click the project, then Build Path -> Configure Build Path -> Java Build Path -> Source, change the default output folder to mavenProject/target/classes, and tick Project -> Build Automatically in the menu bar.
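If the Eclipse project metadata and the POM have drifted apart, a minimal sketch (using the maven-eclipse-plugin goals) is to regenerate the project files from the POM and then refresh the project in Eclipse:

mvn eclipse:clean eclipse:eclipse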

‘柒’ How do I recompile Spark after a failed build?

### Spark download

http://spark.apache.org/downloads.html

### Prerequisites
# Install a JDK, mvn and scala, and set JAVA_HOME, MVN_HOME and SCALA_HOME accordingly
### Building Spark (see http://spark.apache.org/docs/latest/building-with-maven.html)
# 1 Set the Maven environment variables
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
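A minimal sketch of the recompile itself (the module name in the last command is only an example):

# 2 Full rebuild after fixing the cause of the failure
mvn -DskipTests clean package
# or resume from the module that failed instead of rebuilding everything
mvn -DskipTests package -rf :spark-sql_2.10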

‘捌’ Spark 1.3 build fails, how can I fix it?

Set the scalastyle-related options in pom.xml to false:
<plugin>
  <groupId>org.scalastyle</groupId>
  <artifactId>scalastyle-maven-plugin</artifactId>
  <version>0.4.0</version>
  <configuration>
    <verbose>false</verbose>
    <failOnViolation>false</failOnViolation>
    <includeTestSourceDirectory>false</includeTestSourceDirectory>
    <failOnWarning>false</failOnWarning>
    <!-- leave the rest of the plugin's configuration unchanged -->
  </configuration>
</plugin>

‘玖’ How to compile and run a Spark application on CDH

Taking SBT as an example: suppose my application was originally built against vanilla Spark 1.6.0:
libraryDependencies ++= {
  val sparkV = "1.6.0"
  Seq(
    "org.apache.spark" %% "spark-core" % sparkV withSources() withJavadoc(),
    "org.apache.spark" %% "spark-catalyst" % sparkV withSources() withJavadoc(),
    "org.apache.spark" %% "spark-sql" % sparkV withSources() withJavadoc(),
    "org.apache.spark" %% "spark-hive" % sparkV withSources() withJavadoc(),
    "org.apache.spark" %% "spark-repl" % sparkV withSources() withJavadoc()
  )
}
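To build against CDH instead, a minimal sketch (the CDH version string below is only an example to match to your cluster) is to point sparkV at the CDH artifact version and add Cloudera's Maven repository:

resolvers += "cloudera" at "https://repository.cloudera.com/artifactory/cloudera-repos/"

libraryDependencies ++= {
  val sparkV = "1.6.0-cdh5.7.0"   // example CDH artifact version, adjust to your cluster
  Seq(
    "org.apache.spark" %% "spark-core" % sparkV % "provided"   // provided: the cluster supplies Spark at runtime
  )
}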

‘拾’ How to use a compiled Spark on Linux

On Windows, installing software is easy: double-click setup or install and follow the wizard step by step. On Linux it is not that simple, and it gets even more involved when all you have found is uncompiled source code. Here is a brief introduction to how...
