Tag Archives: scala

IDEA Run Scala Error: Exception in thread “main” java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V

I. Description

Using IDEA + Scala + Spark, the program being run is as follows:

package cn.idcast.hello

import org.apache.spark.rdd.RDD
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

/**
 * Author itcast
 * Desc Demo Spark Starter Case-WordCount
 */
object WordCount_bak {
  def main(args: Array[String]): Unit = {
    //TODO 1. env/prepare sc: create the SparkConf/SparkContext execution environment
    val conf: SparkConf = new SparkConf().setAppName("wc").setMaster("local[*]")
    val sc: SparkContext = new SparkContext(conf)
    sc.setLogLevel("WARN")

    //TODO 2.source/read data
    //RDD (Resilient Distributed Dataset): simply understood as a distributed collection; it is as easy to use as an ordinary collection!
    //RDD[String]: each element is one line of the file
    val lines: RDD[String] = sc.textFile("data/input/words.txt")

    //TODO 3. transformation/data processing
    //split each line into words: RDD[one word per element]
    val words: RDD[String] = lines.flatMap(_.split(" "))
    //map each word to 1: RDD[(word, 1)]
    val wordAndOnes: RDD[(String, Int)] = words.map((_,1))
    //group + aggregate: groupBy + mapValues(_.map(_._2).reduce(_+_)) is done by Spark in one step with reduceByKey (see the local-collection sketch after this code)
    val result: RDD[(String, Int)] = wordAndOnes.reduceByKey(_+_)

    //TODO 4.sink/output
    //direct output
    result.foreach(println)
    //collect as a local collection and then output
    println(result.collect().toBuffer)
    //output to the specified path (can be a file/folder)
    result.repartition(1).saveAsTextFile("data/output/result")
    result.repartition(2).saveAsTextFile("data/output/result2")
    result.saveAsTextFile("data/output/result3")

    // For easy viewing of the Web-UI you can let the program sleep for a while
    Thread.sleep(1000 * 60)

    //TODO 5. Close the resource
    sc.stop()
  }
}
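As a side note on the comment in step 3: grouping by the word and then summing the 1s is exactly what reduceByKey does per key in a single step. A minimal plain-Scala sketch of that equivalence on an ordinary local collection (no Spark involved; the object name is illustrative):

object GroupVsReduceByKey {
  def main(args: Array[String]): Unit = {
    // the same shape of data as wordAndOnes above
    val wordAndOnes = List(("hello", 1), ("spark", 1), ("hello", 1))

    // two-step version: group by the word, then sum the 1s inside each group
    val grouped: Map[String, Int] =
      wordAndOnes.groupBy(_._1).mapValues(_.map(_._2).reduce(_ + _)).toMap

    // one-pass version: fold the pairs into a running map,
    // which is essentially what reduceByKey does per key in Spark
    val reduced: Map[String, Int] =
      wordAndOnes.foldLeft(Map.empty[String, Int].withDefaultValue(0)) {
        case (acc, (word, n)) => acc.updated(word, acc(word) + n)
      }

    // both contain hello -> 2 and spark -> 1
    println(grouped)
    println(reduced)
  }
}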

(forgot the screenshot) Running it reports the error: Exception in thread “main” java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V

Posts on the Internet say it is a jar package conflict, but that did not solve the problem.


II. Solution

Root cause of the problem: the Scala version installed on Windows is inconsistent with the Scala version that Spark ships with, as shown in the figure:

This is Spark’s bundled Scala version, 2.12.10.

I had installed 2.12.11 on Windows (forgot the screenshot), and later replaced it with 2.12.10 (reinstalled):

After that, the program runs successfully without error.
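A quick way to double-check this kind of mismatch before reinstalling anything is to print the Scala version the program is actually running on and compare it with the version in spark-shell’s startup banner (“Using Scala version …”). A minimal sketch (the object name is illustrative):

object VersionCheck {
  def main(args: Array[String]): Unit = {
    // Scala runtime actually on the classpath, e.g. "version 2.12.10"
    println("Scala runtime: " + scala.util.Properties.versionString)
    // version of the spark-core dependency on the classpath
    println("Spark version: " + org.apache.spark.SPARK_VERSION)
  }
}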

 

Spark shell cannot start normally due to the Scala compiler



Recently I began learning Spark. When I ran the spark-shell command on Windows according to the official instructions, to enter the Scala version of the Spark shell, the following problem appeared:

Failed to initialize compiler: object scala.runtime in compiler mirror not found.

** Note that as of 2.8 scala does not assume use of the java classpath.

** For the old behavior pass -usejavacp to scala, or if using a Settings

** object programatically, settings.usejavacp.value = true.

The reason for this problem is that Scala 2.8 and above no longer uses the Java classpath by default. The obvious fix is therefore to add the option that enables the Java classpath to the startup configuration file.
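For spark-shell the option goes into the startup script, as described below. For completeness, the “Settings object” route that the error message mentions looks roughly like the following sketch when embedding the Scala interpreter yourself (a hypothetical example, assuming Scala 2.12 with scala-compiler on the classpath; the IMain constructor differs slightly between Scala versions):

import scala.tools.nsc.Settings
import scala.tools.nsc.interpreter.IMain

object EmbeddedReplSketch {
  def main(args: Array[String]): Unit = {
    val settings = new Settings()
    // the flag the broken spark-shell is effectively missing
    settings.usejavacp.value = true
    val repl = new IMain(settings)
    repl.interpret("""println("hello from the embedded interpreter")""")
    repl.close()
  }
}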

After searching on Google, I finally found a complete and workable solution.

Modify the contents of the spark-class2.cmd file (that is, add the highlighted part shown below).

After saving, run the bin/spark-shell command in CMD to enter the Scala version of the shell.

The modified lines are:

rem Set JAVA_OPTS to be able to load native libraries and to set heap size
set JAVA_OPTS=%OUR_JAVA_OPTS% -Djava.library.path=%SPARK_LIBRARY_PATH% -Xms%SPARK_MEM% -Xmx%SPARK_MEM% -Dscala.usejavacp=true

It is worth mentioning that there are several class configuration files in the bin directory of the Spark 1.1.0 distribution package, namely spark-class, spark-class.cmd, and spark-class2.cmd. The contents of these three files are different. I tried applying the same fix in the other two files, but neither worked. As for why the change has to go into spark-class2.cmd, and what these three files do or how they relate to one another, further study is needed.

Note: the solutions mentioned above are from

http://my.oschina.net/sulliy/blog/217212

[Solved] java.lang.NoSuchMethodError: scala.Product.$init$(Lscala/Product;)V: set the corresponding Scala version

The following error message appears when running a Spark test program:

Exception in thread “main” java.lang.NoSuchMethodError: scala.Product.$init$(Lscala/Product;)V

at org.apache.spark.SparkConf$DeprecatedConfig.<init>(SparkConf.scala:809)
at org.apache.spark.SparkConf$.<init>(SparkConf.scala:642)
at org.apache.spark.SparkConf$.<clinit>(SparkConf.scala)
at org.apache.spark.SparkConf.set(SparkConf.scala:94)
at org.apache.spark.SparkConf.set(SparkConf.scala:83)
at org.apache.spark.SparkConf.setAppName(SparkConf.scala:120)
at com.spark.HiveContextTest.main(HiveContextTest.java:9)

Cause: the version of the Scala runtime introduced into the project is not the same as the default Scala SDK version configured in IDEA. Setting the project’s Scala version to the corresponding one resolves the error.
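In other words, the project’s Scala version must match the Scala suffix of the Spark artifacts it depends on, and the Scala SDK selected in IDEA’s Project Structure should be the same version. A minimal build.sbt sketch of such an alignment (the version numbers are illustrative, assuming a Spark build for Scala 2.12):

// build.sbt (sketch): scalaVersion must match the _2.12 suffix that %% adds
// to the Spark artifact names; mixing, say, a 2.11 Spark build with a 2.12
// project is exactly what produces NoSuchMethodError: scala.Product.$init$
scalaVersion := "2.12.10"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "3.0.1",
  "org.apache.spark" %% "spark-sql"  % "3.0.1"
)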