I am getting garbled characters (mojibake) with Debezium:
"doulist_name": "2013 豆瓣电影��碑榜】"
There is Chinese text in a MySQL database, and I use Debezium to send the data to Kafka. The Chinese characters come out garbled when I consume the messages. How can I solve this? Is there any configuration I could use?
When I produce Chinese text through Flume and the Kafka producer, it works fine.
Part of my connector config:
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
connector.class=io.debezium.connector.mysql.MySqlConnector
database.server.id=18405
database.server.name=mysqlfullfillment
database.whitelist=test
database.history.kafka.bootstrap.servers=192.168.0.100:9092
database.history.kafka.topic=dbhistory.fullfillment-local
include.schema.changes=true
transforms=unwrap
transforms.unwrap.type=io.debezium.transforms.UnwrapFromEnvelope
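None of these converter settings select a text encoding explicitly. The JSON converters themselves write UTF-8, but any step that turns bytes into a String without naming a charset falls back to the JVM default, so it may be worth confirming what the Kafka Connect worker's JVM defaults to. A minimal sketch of that check (run under the same environment, e.g. the same LANG/locale, as the Connect worker):

// Minimal sketch: print the encoding defaults of the JVM it runs in.
// Running it under the same environment as the Kafka Connect worker
// shows whether that process defaults to UTF-8.
object CharsetCheck {
  def main(args: Array[String]): Unit = {
    println("file.encoding   = " + System.getProperty("file.encoding"))
    println("default charset = " + java.nio.charset.Charset.defaultCharset())
  }
}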
MySQL character set: utf8 (MySQL config screenshot)
Versions: Debezium v0.7.5, Kafka v1.1.1
Update:
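A server-level character set of utf8 does not guarantee that the doulist table or its doulist_name column actually uses utf8; a table or column can still carry its own, older charset. A rough JDBC sketch to confirm the table definition; the host, port, and credentials are placeholders, and it assumes the MySQL JDBC driver is on the classpath:

import java.sql.DriverManager

// Rough sketch: print the CREATE TABLE statement so the table/column
// character sets can be inspected directly. Host, port, user and password
// below are placeholders, not values from the actual setup.
object TableCharsetCheck {
  def main(args: Array[String]): Unit = {
    val conn = DriverManager.getConnection(
      "jdbc:mysql://192.168.0.100:3306/test?useUnicode=true&characterEncoding=UTF-8",
      "user", "password")
    try {
      val rs = conn.createStatement().executeQuery("SHOW CREATE TABLE doulist")
      while (rs.next()) println(rs.getString(2)) // column 2 holds the CREATE TABLE DDL
    } finally conn.close()
  }
}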
When I test with the console consumer:
./kafka-console-consumer.sh --zookeeper 192.168.0.100:2181 --topic mysqlfullfillment.test.doulist
I get the garbled output
"doulist_name": "2013 豆瓣电影��碑榜】"
In my Spark code I get the same garbled characters:
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferBrokers

object KafkaWordCount {

  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder()
      .master("local")
      .appName("KafkaWordCount")
      .config("spark.streaming.stopGracefullyOnShutdown", "true")
      .getOrCreate()
    simpleTestCode(spark)
  }

  def simpleTestCode(spark: SparkSession): Unit = {
    // Consumer settings: StringDeserializer decodes the message bytes as UTF-8.
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "KafkaWordCountgroup",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (true: java.lang.Boolean)
    )
    val topics = Array("mysqlfullfillment.test.doulist")

    val ssc = new StreamingContext(spark.sparkContext, Seconds(2))
    ssc.checkpoint("/home/feng/software/code/bigdata/spark-warehouse")

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferBrokers,
      Subscribe[String, String](topics, kafkaParams)
    )

    // Collect each batch to the driver and print the (key, value) pairs.
    stream.map(record => (record.key, record.value)).foreachRDD(
      r => r.collect().foreach(t => print("message:" + t)))

    ssc.start()
    ssc.awaitTermination()
  }
}
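One detail on the output side: StringDeserializer decodes the message bytes as UTF-8, but print writes through System.out using the platform default encoding, so a non-UTF-8 locale on the machine running the driver can garble text that is stored correctly. A small variant of the foreachRDD call above that takes the console encoding out of the equation:

import java.io.PrintStream

stream.map(record => (record.key, record.value)).foreachRDD { r =>
  // Build the UTF-8 PrintStream inside the closure; it wraps System.out,
  // so output goes to the same console but is encoded explicitly as UTF-8.
  val utf8Out = new PrintStream(System.out, true, "UTF-8")
  r.collect().foreach(t => utf8Out.println("message:" + t))
}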