
In the example code below, I am trying to create case class objects with their default values using runtime Scala reflection (which is required for my use case).

First Approach

  1. Define default values for case class fields
  2. Create objects at runtime

Second Approach

  1. Create a case class object in the companion object
  2. Fetch that object using reflection

At first glance, the second approach seemed better because the object is created only once, but upon profiling the two approaches, the second doesn't seem to add much value. Sampling does confirm that only one such object is created throughout the runtime of the application, yet it looks as though objects are still being allocated on every call when reflection is used (correct me if I am wrong).

[Profiler screenshot: newDefault]

[Profiler screenshot: newDefault2]

object TestDefault extends App {

  case class XYZ(str: String = "Shivam")
  object XYZ { private val default: XYZ = XYZ() }
  case class ABC(int: Int = 99)
  object ABC { private val default: ABC = ABC() }

  def newDefault[A](implicit t: reflect.ClassTag[A]): A = {
    import reflect.runtime.{universe => ru}
    import reflect.runtime.{currentMirror => cm}

    // Reflect the companion object of A and locate its `apply` method.
    val clazz  = cm.classSymbol(t.runtimeClass)
    val mod    = clazz.companion.asModule
    val im     = cm.reflect(cm.reflectModule(mod).instance)
    val ts     = im.symbol.typeSignature
    val mApply = ts.member(ru.TermName("apply")).asMethod
    // For each `apply` parameter, invoke the compiler-generated default
    // accessor `apply$default$N` (N is 1-based) to obtain the default value.
    val syms   = mApply.paramLists.flatten
    val args   = syms.zipWithIndex.map {
      case (_, i) =>
        val mDef = ts.member(ru.TermName(s"apply$$default$$${i + 1}")).asMethod
        im.reflectMethod(mDef)()
    }
    // Call `apply` with the collected defaults.
    im.reflectMethod(mApply)(args: _*).asInstanceOf[A]
  }
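
  // Not part of the original question: a small illustrative helper (a sketch,
  // assuming the compiler-synthesized default accessors are named
  // "apply$default$N"). It lists those accessors so you can see exactly what
  // newDefault looks up above.
  def printDefaultAccessors[A](implicit t: reflect.ClassTag[A]): Unit = {
    import reflect.runtime.{currentMirror => cm}
    val mod = cm.classSymbol(t.runtimeClass).companion.asModule
    val im  = cm.reflect(cm.reflectModule(mod).instance)
    im.symbol.typeSignature.members
      .filter(_.name.toString.startsWith("apply$default"))
      .foreach(println) // e.g. prints: method apply$default$1
  }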

  for (i <- 0 to 1000000000)
    newDefault[XYZ]

//  println(s"newDefault XYZ = ${newDefault[XYZ]}")
//  println(s"newDefault ABC = ${newDefault[ABC]}")

  def newDefault2[A](implicit t: reflect.ClassTag[A]): A = {
    import reflect.runtime.{currentMirror => cm}

    // Reflect the companion object of A.
    val clazz = cm.classSymbol(t.runtimeClass)
    val mod   = clazz.companion.asModule
    val im    = cm.reflect(cm.reflectModule(mod).instance)
    val ts    = im.symbol.typeSignature

    // Find the pre-built `default` member declared in the companion and fetch it.
    val defaultMember = ts.members.filter(_.isMethod).filter(d => d.name.toString == "default").head.asMethod

    val result = im.reflectMethod(defaultMember).apply()
    result.asInstanceOf[A]
  }

  for (i <- 0 to 1000000000)
    newDefault2[XYZ]
}

Is there any way to reduce the memory footprint? Is there a better approach to achieve the same result?

P.S. If you are trying to run this app, comment out one of the following blocks at a time:

  for (i <- 0 to 1000000000)
    newDefault[XYZ]

  for (i <- 0 to 1000000000)
    newDefault2[XYZ]

EDIT

As per @Levi Ramsey's suggestion, I did try memoization, but it only seems to make a small difference!

  import java.util.concurrent.ConcurrentHashMap
  import scala.reflect.runtime.universe

  // Cache the reflected default instance, keyed by the companion's type signature.
  val cache = new ConcurrentHashMap[universe.Type, XYZ]()

  def newDefault2[A](implicit t: reflect.ClassTag[A]): A = {
    import reflect.runtime.{currentMirror => cm}

    val clazz = cm.classSymbol(t.runtimeClass)
    val mod   = clazz.companion.asModule
    val im    = cm.reflect(cm.reflectModule(mod).instance)
    val ts    = im.symbol.typeSignature

    // Note: ConcurrentHashMap#contains checks values, not keys, so containsKey is needed here.
    if (!cache.containsKey(ts)) {
      val default = ts.members.filter(_.isMethod).filter(d => d.name.toString == "default").head.asMethod
      cache.put(ts, im.reflectMethod(default).apply().asInstanceOf[XYZ])
    }

    cache.get(ts).asInstanceOf[A]
  }

  for (i <- 0 to 1000000000)
    newDefault2[XYZ]

[Profiler screenshot: memoization]
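
For comparison, here is a sketch of the fuller memoization that the comment below points toward: cache the finished default instance keyed by the runtime class, so that on a cache hit no reflection runs at all. The name `newDefault3` and the keying by `t.runtimeClass` are my own assumptions, not part of the original code.

  import java.util.concurrent.ConcurrentHashMap

  // Sketch: store the fully built default per runtime class, so the companion
  // lookup, module reflection and member search happen at most once per type.
  val defaultCache = new ConcurrentHashMap[Class[_], Any]()

  def newDefault3[A](implicit t: reflect.ClassTag[A]): A = {
    import reflect.runtime.{currentMirror => cm}

    val key = t.runtimeClass
    if (!defaultCache.containsKey(key)) {
      val mod = cm.classSymbol(key).companion.asModule
      val im  = cm.reflect(cm.reflectModule(mod).instance)
      val ts  = im.symbol.typeSignature
      val default = ts.members.filter(_.isMethod).filter(_.name.toString == "default").head.asMethod
      defaultCache.put(key, im.reflectMethod(default)())
    }
    defaultCache.get(key).asInstanceOf[A]
  }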

  • Most likely things like `ts.members` are creating a new collection every time; each call to `filter` would then also be allocating an intermediate collection. You may want to memoize `newDefault2`. – Levi Ramsey Sep 19 '22 at 15:19
  • 1
  • Are you aware that on the left chart, i.e. the CPU utilization, the blue line (the one that is at zero all the time) represents the garbage collection, in other words, that the garbage collection has no influence on the performance at all here? – Holger Sep 19 '22 at 16:20
  • @Holger This seems misleading! The blue line that represents GC activity is 0% in both cases, yet in the first case GC Activity is 500.1% and in the second case it is 0%. Could you please explain that? Also, on the right side of the graph there clearly are crests and troughs. Don't they represent GC as well? – iamsmkr Sep 19 '22 at 18:29
  • @iamsmkr better, those show up as _negative_ 500%? :) – Eugene Sep 19 '22 at 18:58
  • @LeviRamsey Please see the updated answer. It does seem to make a difference, a small one however! – iamsmkr Sep 19 '22 at 21:18
  • 2
  • @iamsmkr that value refers to the current value, not the progress shown in the graph. You can see that the graph goes down at the end; apparently you stopped the application right at this moment, which led to the obviously not-to-be-taken-seriously value of “-500%”. The graph on the right hand side shows the *memory usage* and yes, there *is* activity, but as indicated, the time needed for cleaning up that amount of memory is so small that the *CPU activity* is rounded to “0.0%” throughout the minutes shown here. You are trying to optimize something entirely irrelevant to the overall performance. – Holger Sep 20 '22 at 06:55
