
In an HTTP-triggered Azure Function, I want to read a .csv file from Blob Storage, append new data to it, and save the result back to Blob Storage in .csv format. Please help me out.

    scriptpath=os.path.abspath(__file__)
    scriptdir=os.path.dirname(scriptpath)

    train=os.path.join(scriptdir,'train.csv')
    train_df=pd.read_csv(train)
    train_df=train_df.append(test_df)

    
    train_df.to_csv(scriptdir+'tt.csv')
    
    block_blob_service.create_blob_from_path(
        'files',
        'mytest.csv',
        scriptdir+'tt.csv',
        content_settings=ContentSettings(content_type='application/CSV')
    )

My problem is that after appending the data I have to save it back to blob storage as a csv file, but the error below occurs. The HTTP-triggered function doesn't seem to have permission to save the csv file locally. The error shows:

Exception: PermissionError: [Errno 13] Permission denied: 'C:\\Users\\Shiva\\Desktop\\project\\loumus\\Imagetrigger'
Stack:   File "C:\Program Files\Microsoft\Azure Functions Core Tools\workers\python\3.7/WINDOWS/X64\azure_functions_worker\dispatcher.py", line 357, in _handle__invocation_request
    self.__run_sync_func, invocation_id, fi.func, args)
  File "C:\Users\Shiva\AppData\Local\Programs\Python\Python37\lib\concurrent\futures\thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "C:\Program Files\Microsoft\Azure Functions Core Tools\workers\python\3.7/WINDOWS/X64\azure_functions_worker\dispatcher.py", line 542, in __run_sync_func
    return func(**params)
  File "C:\Users\Shiva\Desktop\project\loumus\Imagetrigger\__init__.py", line 276, in main
    mm.to_csv(scriptdir,'tt.csv')
  File "C:\Users\Shiva\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\core\generic.py", line 3403, in to_csv
    storage_options=storage_options,
  File "C:\Users\Shiva\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\io\formats\format.py", line 1083, in to_csv
    csv_formatter.save()
  File "C:\Users\Shiva\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\io\formats\csvs.py", line 234, in save
    storage_options=self.storage_options,
  File "C:\Users\Shiva\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\io\common.py", line 647, in get_handle
    newline="",

1 Answer


There are several issues.

  1. Azure Functions run inside a managed runtime environment; you do not have the same level of access to local storage/disk as you would when running on your laptop. That's not to say you have no local disk at all. STW and RTM.
  2. Use the tempfile module to create a temp directory. This creates it in the area the underlying OS designates as temp storage (see the first sketch after this list).
  3. There is no specific reason to write to local storage and then upload to ADLS. You could:
    • Write the csv to an in-memory buffer (e.g. StringIO), then use the SDK to upload that string to ADLS (see the second sketch after this list).
    • Install the appropriate drivers (not sure whether you're using pyspark, pandas, or something else) and write the DataFrame to ADLS directly. E.g. see one example.
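
For point 2, a minimal sketch using Python's standard tempfile module; it reuses train_df, block_blob_service and ContentSettings from your code above, so treat those names as assumptions:

    import os
    import tempfile

    # Write to the OS-designated temp area instead of the function app's own directory.
    tmp_dir = tempfile.gettempdir()
    out_path = os.path.join(tmp_dir, 'tt.csv')
    train_df.to_csv(out_path, index=False)

    block_blob_service.create_blob_from_path(
        'files',
        'mytest.csv',
        out_path,
        content_settings=ContentSettings(content_type='text/csv')
    )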
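
For point 3 (in-memory), a rough sketch assuming the v12 azure-storage-blob SDK (BlobServiceClient); the AzureWebJobsStorage app setting, container and blob names are stand-ins for your own, and test_df is the DataFrame from your question:

    import io
    import os
    import pandas as pd
    from azure.storage.blob import BlobServiceClient, ContentSettings

    # Connection string comes from an app setting; swap in whichever setting holds yours.
    service = BlobServiceClient.from_connection_string(os.environ["AzureWebJobsStorage"])
    blob = service.get_blob_client(container="files", blob="mytest.csv")

    # Download the existing csv straight into a DataFrame -- no local disk involved.
    existing = pd.read_csv(io.BytesIO(blob.download_blob().readall()))

    # DataFrame.append is deprecated in recent pandas; concat does the same job.
    combined = pd.concat([existing, test_df], ignore_index=True)

    # Serialize to bytes in memory and overwrite the blob.
    csv_bytes = combined.to_csv(index=False).encode("utf-8")
    blob.upload_blob(
        csv_bytes,
        overwrite=True,
        content_settings=ContentSettings(content_type="text/csv"),
    )

This keeps the whole round trip in memory, so the PermissionError on the function's directory never comes up.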