
I am writing a BLOB to a file on the file system using the following code:

create or replace PROCEDURE SAVE_FILE_TO_DISK (image_id NUMBER, dir IN VARCHAR2 , file_name IN VARCHAR2) IS

image_blob        BLOB;
img_blob_size     INTEGER;
destination_file  utl_file.file_type;
memory_buffer     RAW(32767);
chunk_size        BINARY_INTEGER := 32767; --this will be overwritten after each call to dbms_lob.read
max_buffer        BINARY_INTEGER := 32767; --need this to return chunk_size to max after each dbms_lob.read call
blob_position     INTEGER := 1;

BEGIN
  SELECT image INTO image_blob FROM image WHERE img_id = image_id;
  destination_file := utl_file.fopen(dir, file_name, 'wb', chunk_size);
  --get the blob size
  img_blob_size := dbms_lob.getlength(image_blob);

  WHILE blob_position <= img_blob_size LOOP
    --check if we can read max possible bytes otherwise set chunk_size to remaining number of bytes
    IF blob_position + chunk_size - 1 > img_blob_size THEN
      chunk_size := img_blob_size - blob_position + 1;
    END IF;

    --read chunk_size bytes from blob to memory buffer
    dbms_lob.read(image_blob, chunk_size, blob_position, memory_buffer);
    --transfer from memory buffer to the file
    --little bit worried about the fact that memory_buffer might contain data from previous dbms_lob.read
    utl_file.put_raw(destination_file, memory_buffer, true);

    --advance blob_position by actual number of read bytes
    blob_position := blob_position + chunk_size;
    --reset chunk_size to max possible
    chunk_size := max_buffer;      
  END LOOP;

  --close the file once the whole blob has been written
  utl_file.fclose(destination_file);
EXCEPTION
  WHEN OTHERS THEN
    --release the file handle on failure before re-raising
    IF utl_file.is_open(destination_file) THEN
      utl_file.fclose(destination_file);
    END IF;
    RAISE;
END SAVE_FILE_TO_DISK;
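
A call to the procedure would look something like this (the directory object name, file name, and img_id below are placeholders, not values from any real schema):

BEGIN
  --'IMG_DIR' must be an Oracle directory object the session can write to;
  --42 stands in for a real img_id from the image table
  SAVE_FILE_TO_DISK(42, 'IMG_DIR', 'photo.jpg');
END;
/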

Imagine we have a BLOB of 40000 bytes: we are going to have two iterations of the WHILE loop. During the second iteration, chunk_size will be set to 7233 (40000 - 32767). My worry is that during the second iteration:

--read chunk_size bytes from blob to memory buffer
dbms_lob.read(image_blob, chunk_size, blob_position, memory_buffer);

is going to overwrite only the first 7233 bytes of memory_buffer, leaving the rest of the buffer holding data from the first iteration, and that then:

utl_file.put_raw(destination_file, memory_buffer, true);

is going to write the entire content of memory_buffer into the file (the remaining 7233 bytes of the BLOB plus 25534 stale bytes left over from the first iteration of the WHILE loop).

Am I correct here? Should I create a new memory_buffer on every iteration, with its size equal to the exact number of bytes I am going to write into the file?
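
One way to probe the buffer behaviour without any file-system access is a self-contained test along these lines (a sketch; it assumes nothing beyond DBMS_OUTPUT being enabled in the client):

DECLARE
  test_blob      BLOB;
  memory_buffer  RAW(100);
  chunk_size     BINARY_INTEGER;
  blob_position  INTEGER := 1;
BEGIN
  dbms_lob.createtemporary(test_blob, TRUE);
  --build a 10-byte test blob
  dbms_lob.writeappend(test_blob, 10, utl_raw.cast_to_raw('0123456789'));

  --first read: ask for 7 bytes, 7 are available
  chunk_size := 7;
  dbms_lob.read(test_blob, chunk_size, blob_position, memory_buffer);
  dbms_output.put_line('read ' || chunk_size || ' bytes, buffer length = '
                       || utl_raw.length(memory_buffer));
  blob_position := blob_position + chunk_size;

  --second read: ask for 7 bytes, only 3 remain
  chunk_size := 7;
  dbms_lob.read(test_blob, chunk_size, blob_position, memory_buffer);
  dbms_output.put_line('read ' || chunk_size || ' bytes, buffer length = '
                       || utl_raw.length(memory_buffer));

  dbms_lob.freetemporary(test_blob);
END;
/

If the second read reports a buffer length of 3 rather than 7, then dbms_lob.read replaces the OUT buffer wholesale, utl_file.put_raw would write only those 3 bytes, and no per-iteration re-allocation of memory_buffer would be needed.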

Rustam Issabekov
  • Rather than making an "appeal to authority", why not just build two versions? Run them both and analyze the results. Choose the one that generates the desired result, and if both do, choose the one you like better. – Belayer Feb 21 '21 at 18:18
  • I am in a situation where I can't install Oracle on my local laptop (my work laptop has a limited set of available software), and I am still in the process of getting access to the directory on the DB server where the files are being created. – Rustam Issabekov Feb 21 '21 at 22:26
