
Hi, I'm trying to log to a Kafka topic from a bunch of executors in Apache Spark, using Log4j and the Kafka log4j appender. I can log from the executors with a basic FileAppender, but not to Kafka.

Here's the custom log4j.properties I made for this:

log4j.rootLogger=INFO, console, KAFKA, file

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n


log4j.appender.KAFKA=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.KAFKA.topic=test2
log4j.appender.KAFKA.name=localhost
log4j.appender.KAFKA.host=localhost
log4j.appender.KAFKA.port=9092
log4j.appender.KAFKA.brokerList=localhost:9092
log4j.appender.KAFKA.compressionType=none
log4j.appender.KAFKA.requiredNumAcks=0
log4j.appender.KAFKA.syncSend=true
log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
log4j.appender.KAFKA.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L %% - %m%n



log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=log4j-application.log
log4j.appender.file.MaxFileSize=5MB
log4j.appender.file.MaxBackupIndex=10
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
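
As a sanity check that isn't Spark-specific, the same properties file can be exercised in a plain local JVM first. Below is a minimal sketch (assuming the broker at localhost:9092 is reachable and the test2 topic exists); if the line shows up in kafka-console-consumer, the appender config itself is fine and the problem is in how Spark picks the file up:

import org.apache.log4j.{Logger, PropertyConfigurator}

object AppenderSmokeTest {
  def main(args: Array[String]): Unit = {
    // Load the exact properties file the Spark job will use
    PropertyConfigurator.configure("spark_test/mylogger.props")
    val log = Logger.getLogger("myLogger")
    // syncSend=true sends on the current thread, so a broker connection
    // problem is more likely to surface here than to fail silently
    log.warn("smoke test: hello Kafka")
  }
}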

Here's my code so far. I tried to define the logger inside the mapper so that each executor gets its own copy, but I don't know why nothing is reaching Kafka:

import org.apache.log4j._
import org.apache.spark._
import org.apache.spark.rdd.RDD
import java.io._
import org.apache.kafka.log4jappender.KafkaLog4jAppender

class Mapper(n: Int) extends Serializable {
  @transient lazy val suplogger: Logger = Logger.getLogger("myLogger")

  def doSomeMappingOnDataSetAndLogIt(rdd: RDD[Int]): RDD[String] =
    rdd.map { i =>
      val sparkConf: SparkConf = new org.apache.spark.SparkConf()
      suplogger.setLevel(Level.ALL)
      suplogger.warn(sparkConf.toDebugString)
      // /proc/self resolves to the PID of the current executor JVM
      val pid = Integer.parseInt(new File("/proc/self").getCanonicalFile().getName())
      suplogger.warn("--------------------")
      suplogger.warn("mapping: " + i)
      val supIterator = new scala.collection.JavaConversions.JEnumerationWrapper(suplogger.getAllAppenders())
      suplogger.warn("List is " + supIterator.toList)
      suplogger.warn("Num of list is: " + supIterator.size)

      //(i + n).toString
      "executor pid = " + pid + " debug string: " + sparkConf.toDebugString.size
    }
}

object Mapper {
  def apply(n: Int): Mapper = new Mapper(n)
}

object HelloWorld {
  def main(args: Array[String]): Unit = {
    println("sup")
    println("yo")
    val log = LogManager.getRootLogger
    log.setLevel(Level.WARN)
    val nameIterator = new scala.collection.JavaConversions.JEnumerationWrapper(log.getAllAppenders())
    println(nameIterator.toList)

    val conf = new SparkConf().setAppName("demo-app")
    val sc = new SparkContext(conf)
    log.warn(conf.toDebugString)
    val pid = Integer.parseInt(new File("/proc/self").getCanonicalFile().getName())
    log.warn("--------------------")
    log.warn("IP: " + java.net.InetAddress.getLocalHost() + " PId: " + pid)

    log.warn("Hello demo")

    val data = sc.parallelize(1 to 100, 10)

    val mapper = Mapper(1)

    val other = mapper.doSomeMappingOnDataSetAndLogIt(data)

    other.collect()

    log.warn("I am done")
  }

}

Here is some sample output from the log file:

2017-01-25 06:29:15 WARN  myLogger:19 - spark.driver.port=54335
2017-01-25 06:29:15 WARN  myLogger:21 - --------------------
2017-01-25 06:29:15 WARN  myLogger:23 - mapping: 1
2017-01-25 06:29:15 WARN  myLogger:25 - List is List()
2017-01-25 06:29:15 WARN  myLogger:26 - Num of list is: 0
2017-01-25 06:29:15 WARN  myLogger:19 - spark.driver.port=54335
2017-01-25 06:29:15 WARN  myLogger:21 - --------------------
2017-01-25 06:29:15 WARN  myLogger:23 - mapping: 2
2017-01-25 06:29:15 WARN  myLogger:25 - List is List()
2017-01-25 06:29:15 WARN  myLogger:26 - Num of list is: 0
2017-01-25 06:29:15 WARN  myLogger:19 - spark.driver.port=54335
2017-01-25 06:29:15 WARN  myLogger:21 - --------------------
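
A note on the "List is List()" lines: in log4j 1.x, getAllAppenders only returns appenders attached directly to the logger it is called on. The appenders from the properties file hang off the root logger, so an empty list on myLogger does not by itself prove the config was never loaded. Here is a minimal sketch of a more telling check, as lines that could go inside the map above (reusing suplogger and the JEnumerationWrapper pattern from my code):

import org.apache.log4j.{Appender, Logger}

// Appenders configured via log4j.rootLogger live on the root logger,
// so inspect that one rather than the child logger "myLogger"
val rootAppenders = new scala.collection.JavaConversions.JEnumerationWrapper(
  Logger.getRootLogger.getAllAppenders())
val names = rootAppenders.map(_.asInstanceOf[Appender].getName).toList
suplogger.warn("root appenders: " + names)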

Thanks for your help! If there's anything you need from me, please let me know.

Here's a copy of the spark-submit command:

spark-submit \
    --deploy-mode client \
    --files spark_test/mylogger.props \
    --packages "com.databricks:spark-csv_2.10:1.4.0,org.apache.kafka:kafka-log4j-appender:0.10.1.1" \
    --num-executors 4 \
    --executor-cores 1 \
    --driver-java-options "-Dlog4j.configuration=file:///home/mapr/spark_test/mylogger.props" \
    --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=file:///home/mapr/spark_test/mylogger.props" \
    --class "HelloWorld" helloworld.jar

2 Answers


I figured out what the issue was: I wasn't deploying in cluster mode, only in client mode. Truth be told, I don't know why it worked once I submitted in cluster mode.

I was using a MapR Sandbox VM: https://www.mapr.com/products/mapr-sandbox-hadoop

If anybody can help explain why client/cluster made a difference here, I'd be really grateful!


Problem

Your problem was that you did not properly pass the spark_test/mylogger.props file to the executors.

Configuration for deploy-mode client

Even in client mode you still need to ship the file to the executors with --files. The driver reads it from its local path, while each executor reads the copy that --files drops into its container's working directory (hence the bare mylogger.props in the executor options).

spark-submit \
    --deploy-mode client \
    --driver-java-options "-Dlog4j.configuration=file:/home/mapr/spark_test/mylogger.props" \
    --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=file:mylogger.props" \
    --files /home/mapr/spark_test/mylogger.props \
    ...

Configuration for deploy-mode cluster

spark-submit \
    --deploy-mode cluster \
    --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=file:mylogger.props" \
    --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=file:mylogger.props" \
    --files /home/mapr/spark_test/mylogger.props \
    ...
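
To confirm the file actually reached the executors, a quick check from inside a task helps. This is just a debugging sketch (data is the RDD from the question): it reports what -Dlog4j.configuration resolved to and whether mylogger.props landed in the working directory, which is where --files places it on YARN:

val report = data.map { _ =>
  val cfg = System.getProperty("log4j.configuration")
  val cwd = new java.io.File(".").getCanonicalPath
  val found = new java.io.File(cwd, "mylogger.props").exists
  s"log4j.configuration=$cfg, cwd=$cwd, mylogger.props present=$found"
}.distinct().collect()   // one line per distinct executor state
report.foreach(println)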

Need more options?

Check my full post about configuring Spark logging:
https://stackoverflow.com/a/55596389/1549135

And more detailed info about Spark + Kafka appender:
https://stackoverflow.com/a/58883911/1549135
