From the Javadocs of NoSuchMethodError:
Thrown if an application tries to call a specified method of a class (either static or instance), and that class no longer has a definition of that method.
Normally, this error is caught by the compiler; this error can only occur at run time if the definition of a class has incompatibly changed.
Based on the details of the question, my guess is that you were able to compile the project because you had the correct dependency at compile time, but at runtime another version was used.
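A common way to end up in that situation with Spark (an assumption about your setup, since the question does not say) is a "provided" dependency: you compile against one Spark version locally, but the cluster supplies its own jars at runtime, possibly an older version in which ClusteredDistribution has a different signature. A hypothetical build.sbt line:

```
// build.sbt (hypothetical versions): the application is compiled against
// spark-sql 3.3.0, but the dependency is "provided", so at runtime the
// cluster supplies its own Spark jars -- possibly a different version in
// which ClusteredDistribution's constructor/apply signature differs.
libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.3.0" % "provided"
```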
Compare the two versions of ClusteredDistribution in Spark. In one version it is defined as:

case class ClusteredDistribution(
    clustering: Seq[Expression],
    requiredNumPartitions: Option[Int] = None) extends Distribution {
  require(
    clustering != Nil,
    "The clustering expressions of a ClusteredDistribution should not be Nil. " +
      "An AllTuples should be used to represent a distribution that only has " +
      "a single partition.")
while in a later version a new parameter, requireAllClusterKeys, was added:

case class ClusteredDistribution(
    clustering: Seq[Expression],
    requireAllClusterKeys: Boolean = SQLConf.get.getConf(
      SQLConf.REQUIRE_ALL_CLUSTER_KEYS_FOR_DISTRIBUTION),
    requiredNumPartitions: Option[Int] = None) extends Distribution {
  require(
    clustering != Nil,
    "The clustering expressions of a ClusteredDistribution should not be Nil. " +
      "An AllTuples should be used to represent a distribution that only has " +
      "a single partition.")
As you can see, the constructor of each case class is different; they do not have the same signature. So if you compile the project against one version, and at runtime another version with a different signature is present, you can end up with this type of error.
You want to understand the error message that says:
java.lang.NoSuchMethodError: 'boolean org.apache.spark.sql.catalyst.plans.physical.ClusteredDistribution$.apply$default$2()'
Let's try to reproduce the same error message. To do that, we can create a project called compile with two files.

Main.scala
object Main extends App {
  new ClusteredDistribution(Seq("Hello"))
}
ClusteredDistribution.scala (similar to the class at compile time)
class ClusteredDistribution(
    clustering: Seq[String],
    requireAllClusterKeys: Boolean = true,
    requiredNumPartitions: Option[Int] = None
)
Then compile the project using scalac:
scalac *.scala
Then create another project called runtime, using only the *.class files generated from the compile project:
cp compile/*.class runtime/
Inside the runtime project, create a new ClusteredDistribution.scala with the following code:
class ClusteredDistribution(
    clustering: Seq[String],
    requiredNumPartitions: Option[Int] = None
)
Compile just this class:
scalac *.scala
and then run the Main class with:
scala Main
There you are: a similar error message:
java.lang.NoSuchMethodError: 'boolean ClusteredDistribution$.$lessinit$greater$default$2()'
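The odd-looking $lessinit$greater in that name is just the JVM-safe encoding of <init> (the constructor): < and > are not legal in JVM method names, so the Scala compiler rewrites them as $less and $greater. A small sketch (the class name Widget here is made up) showing that a default constructor parameter produces exactly such a method on the companion object:

```scala
// Sketch: a default value for the second constructor parameter is compiled
// to a method named <init>$default$2 on the companion object. In bytecode
// '<' becomes $less and '>' becomes $greater, giving
// $lessinit$greater$default$2 -- exactly the shape seen in the error.
class Widget(name: String, count: Int = 7)
object Widget

object MangledNameDemo extends App {
  val defaultMethods = Widget.getClass.getMethods
    .map(_.getName)
    .filter(_.contains("default"))
    .toList
  println(defaultMethods) // List($lessinit$greater$default$2)

  // Invoking it reflectively returns the default value of `count`
  val m = Widget.getClass.getMethod("$lessinit$greater$default$2")
  println(m.invoke(Widget)) // 7
}
```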
From How can I see in what [Java/Scala?] code does Scala compiler rewrites original Scala-code:

use "scalac -print" to compile it, you will get the following Scala code

If we compile the files we created in the previous steps with that flag, this is what we get:
- from the compile project (<init>$default$2(): Boolean):
[[syntax trees at end of cleanup]] // ClusteredDistribution.scala
package <empty> {
  class ClusteredDistribution extends Object {
    def <init>(clustering: Seq, requireAllClusterKeys: Boolean, requiredNumPartitions: Option): ClusteredDistribution = {
      ClusteredDistribution.super.<init>();
      ()
    }
  };
  <synthetic> object ClusteredDistribution extends Object {
    <synthetic> def <init>$default$2(): Boolean = true;
    <synthetic> def <init>$default$3(): Option = scala.None;
    def <init>(): ClusteredDistribution.type = {
      ClusteredDistribution.super.<init>();
      ()
    }
  }
}
- from the runtime project (<init>$default$2(): Option):
[[syntax trees at end of cleanup]] // ClusteredDistribution.scala
package <empty> {
  class ClusteredDistribution extends Object {
    def <init>(clustering: Seq, requiredNumPartitions: Option): ClusteredDistribution = {
      ClusteredDistribution.super.<init>();
      ()
    }
  };
  <synthetic> object ClusteredDistribution extends Object {
    <synthetic> def <init>$default$2(): Option = scala.None;
    def <init>(): ClusteredDistribution.type = {
      ClusteredDistribution.super.<init>();
      ()
    }
  }
}
From there, we can observe that the number in $default$2 corresponds to the position of the parameter that has a default value. You can also see that there is a $default$3 in the output from the compile project, which belongs to the third parameter of the class.
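You can observe the same convention from user code without scalac -print. This is a sketch with a made-up case class mirroring ClusteredDistribution's shape: the $default$N methods on the companion return the default value of the N-th parameter of apply and can be looked up reflectively:

```scala
// Made-up case class: defaults for the 2nd and 3rd parameters of `apply`
// become the synthetic companion methods apply$default$2 and apply$default$3.
case class Box(
    items: Seq[String],
    requireAll: Boolean = true,
    numPartitions: Option[Int] = None)

object DefaultsDemo extends App {
  val companion = Box // the synthetic companion object, Box$ in bytecode
  // apply$default$N returns the default value of the N-th parameter of apply
  val d2 = companion.getClass.getMethod("apply$default$2").invoke(companion)
  val d3 = companion.getClass.getMethod("apply$default$3").invoke(companion)
  println(d2) // true
  println(d3) // None
}
```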
You can find this handled in the wild: in http4s 0.23.23, to keep binary compatibility with http4s 0.23.12, they did the following in the Multipart companion object:
object Multipart {
  @deprecated("Retaining for binary-compatibility", "0.23.12")
  def `<init>$default$2`: String = apply$default$2

  @deprecated("Retaining for binary-compatibility", "0.23.12")
  def apply$default$2: String = Boundary.unsafeCreate().value

  @deprecated(
    "Creating a boundary is an effect. Use Multiparts.multipart to generate an F[Multipart[F]], or call the two-parameter apply with your own boundary.",
    "0.23.12",
  )
  def apply[F[_]](parts: Vector[Part[F]]) = new Multipart(parts)
}
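The same trick can be sketched for any companion object that must keep serving bytecode compiled against an old signature (all names below are made up; this is not something Spark did): re-declare the old default-value methods under their backquoted source names, and the compiler emits them with the mangled bytecode names that old callers look up.

```scala
// Sketch: suppose an older Container had a second constructor parameter
// with a default (flag: Boolean = true) that was later removed. Bytecode
// built against the old version still calls
// Container$.$lessinit$greater$default$2() and Container$.apply$default$2().
// Re-adding those methods under their backquoted source names keeps the
// old callers linking.
class Container(val items: Seq[String])

object Container {
  @deprecated("Retained for binary compatibility", "2.0")
  def `<init>$default$2`: Boolean = true

  @deprecated("Retained for binary compatibility", "2.0")
  def apply$default$2: Boolean = true

  def apply(items: Seq[String]): Container = new Container(items)
}
```

The backquotes let you write an identifier containing < and >, which the compiler then encodes to the $lessinit$greater form in the class file.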