
I have a GCS bucket that receives a file every minute. I created a streaming Dataflow pipeline using the Apache Beam Python SDK, along with Pub/Sub topics for the input GCS bucket and the output GCS bucket. The pipeline is streaming, yet my output is not getting stored in the output bucket. This is my code:

    from __future__ import absolute_import

    import os
    import logging
    import argparse
    from google.cloud import language
    from google.cloud.language import enums
    from google.cloud.language import types
    from datetime import datetime
    import apache_beam as beam 
    from apache_beam.options.pipeline_options import PipelineOptions
    from apache_beam.options.pipeline_options import SetupOptions
    from apache_beam.options.pipeline_options import GoogleCloudOptions
    from apache_beam.options.pipeline_options import StandardOptions
    from apache_beam.io.textio import ReadFromText, WriteToText

    #dataflow_options = ['--project=****','--job_name=*****','--temp_location=gs://*****','--setup_file=./setup.py']
    #dataflow_options.append('--staging_location=gs://*****')
    #dataflow_options.append('--requirements_file ./requirements.txt')
    #options=PipelineOptions(dataflow_options)
    #gcloud_options=options.view_as(GoogleCloudOptions)


    # Dataflow runner
    #options.view_as(StandardOptions).runner = 'DataflowRunner'
    #options.view_as(SetupOptions).save_main_session = True

    def run(argv=None):
        """Build and run the pipeline."""
        parser = argparse.ArgumentParser()
        parser.add_argument(
            '--output_topic', required=True,
            help=('Output PubSub topic of the form '
                '"projects/***********".'))
        group = parser.add_mutually_exclusive_group(required=True)
        group.add_argument(
            '--input_topic',
            help=('Input PubSub topic of the form '
                '"projects/************".'))
        group.add_argument(
            '--input_subscription',
            help=('Input PubSub subscription of the form '
                '"projects/***********."'))
        known_args, pipeline_args = parser.parse_known_args(argv)

      # We use the save_main_session option because one or more DoFn's in this
      # workflow rely on global context (e.g., a module imported at module level).
        pipeline_options = PipelineOptions(pipeline_args)
        pipeline_options.view_as(SetupOptions).save_main_session = True
        pipeline_options.view_as(StandardOptions).streaming = True
        p = beam.Pipeline(options=pipeline_options)


        # Read from PubSub into a PCollection.
        if known_args.input_subscription:
            messages = (p
                        | beam.io.ReadFromPubSub(
                            subscription=known_args.input_subscription)
                        .with_output_types(bytes))
        else:
            messages = (p
                        | beam.io.ReadFromPubSub(topic=known_args.input_topic)
                        .with_output_types(bytes))

        lines = messages | 'decode' >> beam.Map(lambda x: x.decode('utf-8'))

        class Split(beam.DoFn):
            def process(self,element):
                element = element.rstrip("\n").encode('utf-8')
                text = element.split(',') 
                result = []
                for i in range(len(text)):
                    dat = text[i]
                    #print(dat)
                    client = language.LanguageServiceClient()
                    document = types.Document(content=dat,type=enums.Document.Type.PLAIN_TEXT)
                    sent_analysis = client.analyze_sentiment(document=document)
                    sentiment = sent_analysis.document_sentiment
                    data = [
                    (dat,sentiment.score)
                    ] 
                    result.append(data)
                return result

        class WriteToCSV(beam.DoFn):
            def process(self, element):
                return [
                    "{},{}".format(
                        element[0][0],
                        element[0][1]
                    )
                ]

        Transform = (lines
                    | 'split' >> beam.ParDo(Split())
                    | beam.io.WriteToPubSub(known_args.output_topic)
        )
        result = p.run()
        result.wait_until_finish()

    if __name__ == '__main__':
      logging.getLogger().setLevel(logging.INFO)
      run()

What am I doing wrong? Can someone please explain it to me?

1 Answer


`WriteToPubSub` writes data to a Pub/Sub topic, not to a GCS bucket. What you want is probably `WriteToText`, or a `DoFn` that writes your data to the bucket using `apache_beam.io.filesystems`.

An extra note: it doesn't look like your `WriteToCSV` transform is used anywhere.

Pablo
  • Thanks for the feedback, but what I was thinking is: I created a topic for the bucket where I get the incoming files. So when I use ReadFromPubSub, what exactly does it do? Is the output the filename in the bucket? If so, can I take that Pub/Sub output and pass "gs://bucketname/<output of pubsub>" as the input? Or does ReadFromPubSub directly stream the new files one by one, so I don't need to give any input filename? Please help, sir. –  Mar 08 '19 at 05:45
  • 1) I've used `apache_beam.io.WriteToText` to write streaming data (from ReadFromPubSub) to GCS, but the streamed messages just stay in a temp folder (within the destination bucket location). Only after I drain the pipeline do I see a number of shards with the actual data appearing in the desired destination. Are there any known issues? 2) I'd also like to clarify: is it only a windowed stream that gets written to GCS? What is the expected behavior if I write each published message (non-windowed) to GCS? Does each message create one file? – Vibhor Jain Oct 08 '19 at 20:16