Spark job failed due to java.io.NotSerializableException: org.apache.spark.SparkContext

I am facing the above exception when I try to apply a method (computeDwt) to an input of type RDD[(Int, ArrayBuffer[(Int, Double)])]. I am even using extends Serialization to serialize the objects in Spark. Here is the code snippet.
input: series: RDD[(Int, ArrayBuffer[(Int, Double)])]
DWTsample is a class that extends Serialization and has the computeDwt function.
sc: SparkContext

val kk: RDD[(Int, List[Double])] = series.map(t => (t._1, new DWTsample().computeDwt(sc, t._2)))
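For completeness, here is a minimal, self-contained sketch of the same pattern (the class and object names are simplified, and the computeDwt body is only a placeholder, not the real transform):

import scala.collection.mutable.ArrayBuffer
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

// Simplified stand-in for the real class; the transform body is a placeholder.
class DWTsample extends Serializable {
  def computeDwt(sc: SparkContext, data: ArrayBuffer[(Int, Double)]): List[Double] =
    data.map(_._2).toList
}

object Repro {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("dwt-repro").setMaster("local[*]"))

    val series: RDD[(Int, ArrayBuffer[(Int, Double)])] =
      sc.parallelize(Seq((1, ArrayBuffer((0, 1.0), (1, 2.0)))))

    // sc is referenced inside the function passed to map, which is what
    // forces Spark to serialize the SparkContext along with the task closure.
    val kk: RDD[(Int, List[Double])] =
      series.map(t => (t._1, new DWTsample().computeDwt(sc, t._2)))

    kk.collect()  // triggers job execution
    sc.stop()
  }
}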
Error:
org.apache.spark.SparkException: Job failed: java.io.NotSerializableException: org.apache.spark.SparkContext
org.apache.spark.SparkException: Job failed: java.io.NotSerializableException: org.apache.spark.SparkContext
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:760)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:758)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:60)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:758)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:556)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:503)
at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:361)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$run(DAGScheduler.scala:441)
at org.apache.spark.scheduler.DAGScheduler$$anon$1.run(DAGScheduler.scala:149)
Could someone suggest what the problem might be and what should be done to resolve it?
Probably a dupe of: https://stackoverflow.com/questions/21071152/aparch-spark-notserializableexception-org-apache-hadoop-io-text
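For reference, the linked question points at the same class of problem: something that is not serializable (here, the SparkContext referenced via sc) gets pulled into the closure that map ships to the executors, and SparkContext is not serializable by design. Below is a minimal sketch of one way around it, assuming computeDwt can be rewritten to operate only on the per-record data rather than on the SparkContext (the transform body is just a placeholder):

import scala.collection.mutable.ArrayBuffer
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

// computeDwt no longer takes a SparkContext, so the map closure only needs
// to serialize a DWTsample instance, which is Serializable.
class DWTsample extends Serializable {
  def computeDwt(data: ArrayBuffer[(Int, Double)]): List[Double] =
    data.map(_._2).toList  // placeholder for the real DWT computation
}

object DwtJob {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("dwt").setMaster("local[*]"))

    val series: RDD[(Int, ArrayBuffer[(Int, Double)])] =
      sc.parallelize(Seq((1, ArrayBuffer((0, 1.0), (1, 2.0)))))

    // No reference to sc inside the closure, so the task serializes cleanly.
    val kk: RDD[(Int, List[Double])] =
      series.map { case (id, data) => (id, new DWTsample().computeDwt(data)) }

    kk.collect().foreach(println)
    sc.stop()
  }
}

If computeDwt genuinely needs a SparkContext (for example, to create further RDDs), it cannot run inside map at all, because the SparkContext is only usable on the driver.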