
I have an Apache Beam task that reads from a MySQL source using JDBC, and it's supposed to write the data as-is to a BigQuery table. No transformation is performed at this point; that will come later. For the moment I just want the database output to be written directly into BigQuery.

This is the main method trying to perform this operation:

    public static void main(String[] args) {
        Options options = PipelineOptionsFactory.fromArgs(args).withValidation().as(Options.class);

        Pipeline p = Pipeline.create(options);

        // Build the table schema for the output table.
        List<TableFieldSchema> fields = new ArrayList<>();
        fields.add(new TableFieldSchema().setName("phone").setType("STRING"));
        fields.add(new TableFieldSchema().setName("url").setType("STRING"));
        TableSchema schema = new TableSchema().setFields(fields);

        p.apply(JdbcIO.<KV<String, String>>read()
            .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create(
                "com.mysql.jdbc.Driver", "jdbc:mysql://host:3306/db_name")
                .withUsername("user")
                .withPassword("pass"))
            .withQuery("SELECT phone_number, identity_profile_image FROM scraper_caller_identities LIMIT 100")
            .withRowMapper(new JdbcIO.RowMapper<KV<String, String>>() {
                public KV<String, String> mapRow(ResultSet resultSet) throws Exception {
                    return KV.of(resultSet.getString(1), resultSet.getString(2));
                }
            })
            .apply(BigQueryIO.Write
                .to(options.getOutput())
                .withSchema(schema)
                .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
                .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)));

        p.run();
    }

But when I build the template using Maven, I get the following compilation error:

    Test.java:[184,6] cannot find symbol
      symbol:   method apply(com.google.cloud.dataflow.sdk.io.BigQueryIO.Write.Bound)
      location: class org.apache.beam.sdk.io.jdbc.JdbcIO.Read<com.google.cloud.dataflow.sdk.values.KV<java.lang.String,java.lang.String>>

It seems that I'm not passing BigQueryIO.Write the expected data collection and that's what I am struggling with at the moment.

How can I make the data coming from MySQL meet BigQuery's expectations in this case?

MC.

1 Answer


I think you need to provide a PCollection<TableRow> to BigQueryIO.Write instead of the PCollection<KV<String, String>> that your RowMapper is outputting.

Also, make sure to use the column names as keys when building each TableRow. Note that your KVs currently pair the phone value with the url value (e.g. {"555-555-1234": "http://www.url.com"}), rather than pairing column names with values (e.g. {"phone": "555-555-1234", "url": "http://www.url.com"}). See the sketch below.
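
Here is a minimal sketch of the conversion, assuming Beam 0.5.0-style APIs throughout (the "ConvertToTableRow" step name and the DoFn are illustrative, not anything required by the SDK). Two other things worth noting from your error: the message says apply cannot be found on JdbcIO.Read, which suggests the JdbcIO read() chain isn't closed before .apply(BigQueryIO.Write ...) is chained, so the sketch closes that parenthesis first; and the mention of com.google.cloud.dataflow.sdk.io.BigQueryIO alongside org.apache.beam.sdk.io.jdbc.JdbcIO suggests Dataflow 1.x and Beam classes are being mixed, so double-check that all imports come from org.apache.beam.

    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.KV;

    // Inside main(), after building `schema` as in your question:
    p.apply(JdbcIO.<KV<String, String>>read()
        .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create(
            "com.mysql.jdbc.Driver", "jdbc:mysql://host:3306/db_name")
            .withUsername("user")
            .withPassword("pass"))
        .withQuery("SELECT phone_number, identity_profile_image FROM scraper_caller_identities LIMIT 100")
        .withRowMapper(new JdbcIO.RowMapper<KV<String, String>>() {
            public KV<String, String> mapRow(ResultSet resultSet) throws Exception {
                return KV.of(resultSet.getString(1), resultSet.getString(2));
            }
        }))  // <-- read() is closed here, so the next apply is on the PCollection
     // Convert each KV into a TableRow keyed by the schema's column names.
     .apply("ConvertToTableRow", ParDo.of(new DoFn<KV<String, String>, TableRow>() {
         @ProcessElement
         public void processElement(ProcessContext c) {
             c.output(new TableRow()
                 .set("phone", c.element().getKey())
                 .set("url", c.element().getValue()));
         }
     }))
     .apply(BigQueryIO.Write
         .to(options.getOutput())
         .withSchema(schema)
         .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
         .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE));

Alternatively, since JdbcIO.read() is generic, you could use JdbcIO.<TableRow>read() with a RowMapper<TableRow> that builds the TableRow directly, which avoids the intermediate KV step entirely.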

See the example here: https://beam.apache.org/documentation/sdks/javadoc/0.5.0/

Would you please give this a try and let me know if it works for you? Hope this helps.

Alex Amato