
My problem is that the saved file can only be opened with the Google PDF viewer, and even there parts of it are unclear; Adobe Acrobat won't open it at all because the file is damaged. There is clearly a problem with the file I create, but I can't see what it is.

And now the context: I use DBFlow to handle SQLite synchronization with the server. No error occurs at the point of saving, and if I save the file straight to the Downloads directory it displays fine. The blobs are stored in a separate table that holds only the id and the blob, and they are retrieved in chunks like this:

        DatabaseDefinition database = FlowManager.getDatabase(Database.NAME);
        AndroidDatabase android = (AndroidDatabase) database.getWritableDatabase();
        Cursor rawQuery = android.rawQuery("select length(blob) from table where id=" + String.valueOf(id), null);
        int blobLength = 0;
        if (rawQuery != null) {
            while (rawQuery.moveToNext()) {
                blobLength = rawQuery.getInt(0);
            }
            rawQuery.close();
        }

        ByteArrayOutputStream outputStream = new ByteArrayOutputStream( );

        final int CHUNK_SIZE = 1024 * 4;

        loop:
        for(int i = 0; i< blobLength/CHUNK_SIZE + 1; i++) {
            int ceiling =  (i+1)*CHUNK_SIZE > blobLength ? blobLength : (i+1) *CHUNK_SIZE;
            Cursor readBlobChunk =  android.rawQuery("select substr(blob, " +  i*CHUNK_SIZE + 1 + "," + ceiling + ") from table where id =" + String.valueOf(id), null);

            try {
                if (readBlobChunk != null) {
                    readBlobChunk.moveToFirst();
                    outputStream.write(readBlobChunk.getBlob(0));
                }
            } catch(ArrayIndexOutOfBoundsException e ) {
                Log.e(TAG, "blob chunk read exception", e);
                break loop;
            } catch (IOException e) {
                Log.e(TAG, "blob chunk io exception", e);
            }
            readBlobChunk.close();
        }
        byte[] picture = outputStream.toByteArray();
        try {
            outputStream.flush();
            outputStream.close();
        } catch (IOException e) {
            Log.e(TAG, " io exception", e);
        }

        //save file from bytearray to download directory
        FileOutputStream out = null;
        try {
            out = new FileOutputStream(advResource,false);
            out.write(picture);
            out.flush();
            out.close();
        } catch (FileNotFoundException e) {
            Log.e(TAG, "exception", e);
        }  catch (IOException e) {
            Log.e(TAG, "exception", e);
        }

I noticed that, depending on the chunk size, the file can look better or worse.

  • why do you want to store such huge data in your sqlite db? What's wrong with storing just a path/URI pointing to where the real file is placed? – pskink Aug 01 '17 at 14:28
  • Have to agree with @pskink -- storing such large files in a database is a very bad practice. I would go even further and say that any time you are storing a blob, you should take a long look at your design and make sure that it actually is the best course of action. – Eric Bachhuber Aug 01 '17 at 14:30
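
For reference, here is a minimal sketch of what the comments suggest: write the PDF bytes to a file and store only its path in the table. The method name savePdf, the subdirectory name, and the use of the app-private files directory are assumptions for illustration, not part of the question:

    // Sketch: keep the PDF on disk and store only pdfFile.getAbsolutePath()
    // in the database row instead of the blob itself.
    File savePdf(Context context, long id, byte[] pdfBytes) throws IOException {
        File dir = new File(context.getFilesDir(), "pdfs");   // hypothetical subdirectory
        if (!dir.exists()) {
            dir.mkdirs();
        }
        File pdfFile = new File(dir, id + ".pdf");
        FileOutputStream out = new FileOutputStream(pdfFile, false);
        out.write(pdfBytes);
        out.flush();
        out.close();
        return pdfFile;
    }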

1 Answer


You want to read CHUNK_SIZE bytes at a time, but the arguments built for substr() do not do that. Because the left operand is already a String, i*CHUNK_SIZE + 1 concatenates the digit 1 instead of adding it, so the offset is wrong from the second chunk on, and the length given to substr() increases with each chunk instead of staying at CHUNK_SIZE:

        int ceiling =  (i+1)*CHUNK_SIZE > blobLength ? blobLength : (i+1) *CHUNK_SIZE;
        Cursor readBlobChunk =  android.rawQuery("select substr(blob, " +  i*CHUNK_SIZE + 1 + "," + ceiling + ") from table where id =" + String.valueOf(id), null);

The logic becomes simpler if you track the remaining bytes:

    int offset = 1;                 // substr() in SQLite is 1-based
    int remaining = blobLength;
    while (remaining > 0) {
        int chunk_size = remaining > CHUNK_SIZE ? CHUNK_SIZE : remaining;
        query("SELECT substr(blob, " + offset + ", " + chunk_size + ") ...");
        ...
        offset += chunk_size;
        remaining -= chunk_size;
    }
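
Applied to the loop in the question, a minimal sketch could look like the following. It reuses the placeholder names from the question (table, blob, id) and the same two-argument rawQuery call used above, here with the id passed as a selection argument instead of being concatenated into the SQL; the method name readBlob is made up for the example:

    // Reads the whole blob for the given row id in CHUNK_SIZE pieces.
    byte[] readBlob(AndroidDatabase android, long id) throws IOException {
        // Total size of the stored blob, so we know when to stop.
        int blobLength = 0;
        Cursor lengthCursor = android.rawQuery(
                "select length(blob) from table where id = ?",
                new String[]{String.valueOf(id)});
        if (lengthCursor != null) {
            if (lengthCursor.moveToFirst()) {
                blobLength = lengthCursor.getInt(0);
            }
            lengthCursor.close();
        }

        final int CHUNK_SIZE = 1024 * 4;
        ByteArrayOutputStream outputStream = new ByteArrayOutputStream(blobLength);

        int offset = 1;                       // substr() is 1-based
        int remaining = blobLength;
        while (remaining > 0) {
            int chunkSize = remaining > CHUNK_SIZE ? CHUNK_SIZE : remaining;
            Cursor chunkCursor = android.rawQuery(
                    "select substr(blob, " + offset + ", " + chunkSize + ") from table where id = ?",
                    new String[]{String.valueOf(id)});
            if (chunkCursor != null) {
                if (chunkCursor.moveToFirst()) {
                    outputStream.write(chunkCursor.getBlob(0));
                }
                chunkCursor.close();
            }
            offset += chunkSize;
            remaining -= chunkSize;
        }
        return outputStream.toByteArray();
    }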