I started adding some implicit conversions to my code base. I didn't really research how this is usually done in Scala or look at many examples, so I implemented them as traits. For example, this snippet lets you test the schema of a Spark DataFrame:
trait DataFrameImplicits {
  implicit class DataFrameTest(df: DataFrame) {
    def testInputFields(requiredCols: Map[String, DataType]): Unit = {
      requiredCols.foreach { case (colName: String, colType: DataType) =>
        if (!df.schema.exists(_.name == colName) ||
            df.schema(colName).dataType.simpleString != colType.simpleString)
          throw exceptWithLog(DFINPUT_TEST_BADCOLS,
            s"Input DataFrame to Preprocess.process does not contain column $colName of type ${colType.simpleString}")
      }
    }
  }
}
This is then used by mixing the trait into the consumer:
object MyFunctionality extends DataFrameImplicits {
  def myfunc(df: DataFrame): DataFrame = {
    df.testInputFields( ??? )
    df.transform( ??? )
  }
}
Looking at more Scala code recently, though, I see that the "standard" way to make implicits available is to define them in an object and import them at the use site:

import com.package.implicits._

or something like that.
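For comparison, here is a minimal, Spark-free sketch of the two styles side by side (all names here are hypothetical, not from my actual code base):

```scala
// Object style: implicits live in an object and are imported where needed.
object StringImplicits {
  implicit class RichString(val s: String) extends AnyVal {
    def shout: String = s.toUpperCase + "!"
  }
}

// Trait style: implicits are mixed into each consumer via inheritance.
// (Inside a trait the implicit class cannot extend AnyVal, since value
// classes must be top-level or members of an object.)
trait StringImplicitsMixin {
  implicit class RichStringM(s: String) {
    def shout: String = s.toUpperCase + "!"
  }
}

object ViaMixin extends StringImplicitsMixin {
  def demo: String = "hello".shout
}

object ViaImport {
  import StringImplicits._ // opt in explicitly at the use site
  def demo: String = "hello".shout
}
```

Both give you the same extension method; the import style just makes the dependency visible at each use site instead of baking it into the inheritance hierarchy.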
Is there any reason to convert my code to work that way? Is there any reason not to include implicit conversions in Scala traits?