
I am attempting to insert records into a MySQL table. The table has id and name as columns.

I am doing the following in a PySpark shell.

name = 'tester_1'
id = '103'  
import pandas as pd
l = [id,name]

df = pd.DataFrame([l])

df.write.format('jdbc').options(
      url='jdbc:mysql://localhost/database_name',
      driver='com.mysql.jdbc.Driver',
      dbtable='DestinationTableName',
      user='your_user_name',
      password='your_password').mode('append').save()

I am getting the following AttributeError:

AttributeError: 'DataFrame' object has no attribute 'write'

What am I doing wrong? What is the correct method to insert records into a MySQL table from PySpark?
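A quick way to see what is going on is to check which attributes a pandas DataFrame actually exposes (a minimal sketch; `.write` belongs to Spark DataFrames, while pandas offers `.to_sql` for database inserts):

```python
import pandas as pd

df = pd.DataFrame([['103', 'tester_1']], columns=['id', 'name'])

# pandas DataFrames have no .write attribute; that API belongs to Spark DataFrames
print(hasattr(df, 'write'))   # False
print(hasattr(df, 'to_sql'))  # True
```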


2 Answers


Use a Spark DataFrame instead of a pandas DataFrame, as `.write` is available only on Spark DataFrames.

So the final code could be:

data = [('103', 'tester_1')]  # one tuple per row, so toDF can assign column names

df = sc.parallelize(data).toDF(['id', 'name'])

df.write.format('jdbc').options(
      url='jdbc:mysql://localhost/database_name',
      driver='com.mysql.jdbc.Driver',
      dbtable='DestinationTableName',
      user='your_user_name',
      password='your_password').mode('append').save()
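Alternatively, if the data really lives in a small pandas DataFrame, pandas' own `to_sql` can insert the rows without Spark at all. This is a sketch, shown against an in-memory SQLite database for illustration; for MySQL you would pass a SQLAlchemy engine built from a URL such as `mysql+pymysql://your_user_name:your_password@localhost/database_name` (placeholders mirroring the question's):

```python
import sqlite3
import pandas as pd

# pandas DataFrames expose .to_sql instead of .write
df = pd.DataFrame([['103', 'tester_1']], columns=['id', 'name'])

# In-memory SQLite stands in for MySQL here; a SQLAlchemy engine works the same way
conn = sqlite3.connect(':memory:')
df.to_sql('DestinationTableName', conn, index=False, if_exists='append')

rows = conn.execute('SELECT id, name FROM DestinationTableName').fetchall()
print(rows)  # [('103', 'tester_1')]
```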
    I am getting this below error `java.lang.RuntimeException: org.apache.spark.sql.execution.datasources.jdbc.DefaultSource does not allow create table as select.`. Is there any other alternative to this. – User12345 Dec 22 '17 at 18:31
  • 1
  • Does that table exist in the datasource? Try playing with `mode` also. – mrsrinivas Dec 23 '17 at 04:09
  • I am getting this below error java.lang.RuntimeException: org.apache.spark.sql.execution.datasources.jdbc.DefaultSource does not allow create table as select.. Is there any other alternative to this. – Karn_way Jan 06 '20 at 06:41
  • @Karn_way: does the table exist in the target, or are you creating one? – mrsrinivas Jan 06 '20 at 07:31
  • Yes, the table does exist. I am using a CDH image with MySQL. I think it's been fixed in Spark 2.0, whereas my image has 1.6. – Karn_way Jan 06 '20 at 09:52
  • please check this https://stackoverflow.com/questions/45716155/how-to-write-to-jdbc-source-with-sparkr-1-6-0 – Karn_way Jan 06 '20 at 09:52

Just to add to @mrsrinivas' answer.

Make sure the jar location of the MySQL connector is available in your Spark session. This code helps:

spark = SparkSession\
    .builder\
    .config("spark.jars", "/Users/coder/Downloads/mysql-connector-java-8.0.22.jar")\
    .master("local[*]")\
    .appName("pivot and unpivot")\
    .getOrCreate()

Otherwise it will throw an error because the JDBC driver cannot be found.
