
I have a simple text file which contains a list of names, one name per row. Now I need to programmatically append a new name to this file based on a user's input. For the input itself I use Databricks widgets - this is working just fine and I have the new name stored in a string object. Now I need to append this name to my file.

The file is mounted in the Databricks File System (DBFS) under /mnt/blob/myNames.txt

when trying to read the file like this:

f = open("/mnt/blob/myNames.txt", "r")
print(f)

it returns an error "No such file or directory"

So I tried to wrap the new name into a DataFrame and append it to the existing file, but this did not work either, as dataframe.write.save is designed to write into folders.

What would be the simplest Python code that I could use to append this new name to my file?

Gerhard Brueckl

2 Answers


You can write and read files on DBFS with dbutils. Use the dbutils.fs.help() command in Databricks to access the help menu for DBFS.

You can write your name to the file with the following command (note that dbutils.fs.put writes the whole file and needs overwrite=True to replace an existing one):

dbutils.fs.put("/mnt/blob/myNames.txt", new_name, overwrite=True)

You are getting the "No such file or directory" error because the DBFS path is not being found. Prefix the path with /dbfs to access DBFS through local file APIs (Spark and dbutils use the dbfs:/ scheme instead). This is how you should have read the file:

f = open("/dbfs/mnt/blob/myNames.txt", "r")
Elsis
  • can you kindly let me know how to append text to an already existing text file? I had used `'a'`/`'a+'` but it is overwriting the file. PFB my code: `file = open("/dbfs/mnt/adls/QA/Log/test.txt", 'a+')` `file.write('Python is awesome ')` – Maharajaparaman Mar 19 '20 at 05:55
  • this won't work once you start using clusters: `PicklingError: Could not serialize object: Exception: You cannot use dbutils within a spark job` – 123 Jun 17 '22 at 05:37

You can open the file in append mode using 'a':

with open("/dbfs/mnt/sample.txt", "a") as f:
  f.write("append values")

Now you can view the contents using read mode 'r':

with open("/dbfs/mnt/sample.txt", "r") as f_read:
  for line in f_read:
    print(line)


user__42
USB
    Note: append mode `'a'` will not work on Azure Blob mounts as per this document: https://learn.microsoft.com/en-us/azure/databricks/kb/dbfs/errno95-operation-not-supported – Daniel Mar 08 '23 at 23:39