7

We have a table in Oracle with a BLOB column that needs to be filled with a small amount of arbitrary byte data--we will never put in more than 4000 bytes of data.

I am working with an existing C++ OCI-based infrastructure that makes it extremely difficult to use bind variables in certain contexts, so I need to populate this BLOB column using only a simple query. (We are working to modernize it, but that's not an option today.)

We had some luck with a query like this:

UPDATE MyTable
   SET blobData = HEXTORAW('0EC1D7FA6B411DA5814...lots of hex data...0EC1D7FA6B411DA5814')
 WHERE ID = 123;

At first, this was working great. However, recently we encountered a case where we need to put in more than 2000 bytes of data. At that point we hit an Oracle error, ORA-01704: string literal too long, because the string being passed to HEXTORAW was over 4000 characters. I tried splitting the string into smaller literals and concatenating them with ||, but that didn't dodge the error.
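For reference, the split-and-concatenate attempt looked roughly like this (a sketch, not the exact statement); it still trips a length limit, because the concatenated VARCHAR2 fed to HEXTORAW is itself capped at 4000 bytes in a SQL statement:

UPDATE MyTable
   SET blobData = HEXTORAW('0EC1D7FA6B411DA5814...first chunk of hex...' ||
                           '...second chunk of hex...' ||
                           '...third chunk of hex...')
 WHERE ID = 123;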

So, I need a way to update this column and fill it with more than 2000 bytes' worth of data using a simple query. Is it possible?

(I know if I had bind variables at my disposal it would be trivial--and in fact other apps which interact with this table use that exact technique--but unfortunately I am not in a position to refactor the DB guts here. Just need to get data into the table.)

EDIT:

One promising approach that didn't work was concatenating RAWs:

UTL_RAW.CONCAT(HEXTORAW('...'), HEXTORAW('...'), HEXTORAW('...'))

This dodges the string-length limit, but Oracle apparently has a matching internal 2000-byte limit on the length of a RAW in SQL, so I can't populate the BLOB from a single RAW. Maybe there is a function that concatenates multiple RAWs into a BLOB.

StilesCrisis

5 Answers

6

To update a BLOB longer than 16383 bytes, something like this may be used (each line has an even number of hex digits, up to 32766 digits per line):

DECLARE
  buf BLOB; 
BEGIN
  dbms_lob.createtemporary(buf, FALSE);
  dbms_lob.append(buf, HEXTORAW('0EC1D7FA6B411DA58149'));
  --...lots of hex data...
  dbms_lob.append(buf, HEXTORAW('0EC1D7FA6B411DA58149'));
  UPDATE MyTable
     SET blobData = buf
   WHERE ID = 123;
END;

Now the limit is only the size of the statement, which might be imposed by the operating environment (e.g. SQL*Plus, Pro*C, VB, JDBC...). For very big statements, PL/SQL may also fail with an "out of Diana nodes" error.
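As the comments below established, each appended chunk needs an even number of hex digits: HEXTORAW treats an odd-length string as if it had a leading zero, so odd-length chunks inject a stray padding nibble per append. A quick way to see this behavior (a sketch, not part of the update itself):

-- odd-length hex input gets an implicit leading zero
SELECT RAWTOHEX(HEXTORAW('ABC')) FROM dual;  -- returns 0ABC, not ABC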

ThomasH
mik
  • Well, at some point if your PL/SQL gets too long, it will tell you "out of Diana nodes" and mysteriously fail. But that won't happen until you've put in thousands and thousands of lines of code. – StilesCrisis Jan 19 '16 at 15:33
  • I tried using this. If you don't have to break it into chunks, it works wonderfully. If you do have to break it up, though, for each append statement you do, it puts an extra leading 0 on each hex string when it loads it in the table. This is seen when you go to retrieve those bytes back out, if you convert those bytes to hex and compare it to what you put in. Resulted in me getting some corrupt files/images that wouldn't display, because the combined string of hex had these extra zeroes in it. – vapcguy Dec 09 '16 at 21:12
  • So it took me all day to figure this one out, and I didn't try it with anything bigger than 32K yet (I used a chunk size of 9999, to simulate a 10000 limit, and used a file slightly bigger than 10K), but how I avoided those leading zeroes was to add another variable, `cBuf CLOB`. I appended all my hex to that, as a string, in chunks, instead of to the `BLOB buf` variable. Then, at the end, I appended to `buf` using `dbms_lob.append(buf, hextoraw(cBuf));` – vapcguy Dec 10 '16 at 16:20
  • @vapcguy maybe that's because you are using uneven length hex strings? – mik Dec 13 '16 at 17:21
  • @mik, Strong possibility. Maybe I'll test and get back to you. But I would give that as a caveat, if so. – vapcguy Dec 13 '16 at 19:06
  • @mik, It was driving me mad thinking about that as the reason, so I tested and sure enough it doesn't add in the extra `0` hex byte if the chunk is an even size! Your code works fine when using an even number. Great deduction. Just need to make sure everyone knows this little piece of (non-)trivia! – vapcguy Dec 13 '16 at 20:33
  • @vapcguy, thanks for pointing this out, I have corrected my example (which used odd number of digits) and limits. – mik Dec 15 '16 at 14:06
5

Apparently you can exceed these limits if you use PL/SQL. It doesn't work if you call HEXTORAW within the UPDATE statement directly, though; it needs to be done in a separate statement, like this:

DECLARE
  buf RAW(4000); 
BEGIN
  buf := HEXTORAW('C2B97041074...lots of hex...0CC00CD00');
  UPDATE MyTable
     SET blobData = buf
   WHERE ID = 462;
END;

For the life of me I'll never understand some of Oracle's limitations. It's like everything is its own little special case.

StilesCrisis
  • However, the above solution has a limit of 32767 characters in a PL/SQL string literal, and another one of 32767 bytes in the PL/SQL RAW datatype (which is hit later, because two characters represent one RAW byte). – mik Jan 19 '16 at 11:13
1

This is based on mik's answer, but I found a hole in it: when you call HEXTORAW in each append line and append more than one chunk of hex, an additional 0 hex character gets introduced at the beginning of each chunk. You see this when you pull the hex back out of the database and compare it to what you thought you put in. If the hex was an image and you bind those image bytes to an Image.Source, the zero is ignored when there is only one appended chunk, but with multiple chunks the extra byte per chunk corrupts your data and the image won't display. I imagine the same happens for regular files and other data you want to upload.

Instead, I appended all my hex to a CLOB, which keeps it as a string of hex and has the same 4 GB limit that a BLOB column does. Only this uncorrupted string then gets written to the BLOB as RAW when the hex string exceeds the 32767 character/byte limit:

DECLARE
  buf BLOB; 
  cBuf CLOB;
BEGIN
  dbms_lob.createtemporary(buf, FALSE);
  dbms_lob.createtemporary(cBuf, FALSE);
  dbms_lob.append(cBuf, '0EC1D7FA6B411DA5814');
  --...lots of hex data...
  dbms_lob.append(cBuf, '0EC1D7FA6B411DA5814');
  -- now we append the CLOB of hex to the BLOB as RAW
  dbms_lob.append(buf, HEXTORAW(cBuf));
  UPDATE MyTable
     SET blobData = buf
     WHERE ID = 123;
END;

My scenario: I was using SQLite as essentially a backup database, but I still needed a way to keep Oracle (my main database) in sync with uploaded documents once a connection to it could be re-established.

As a more complete answer showing how to build this SQL programmatically, here is how I did it in my app. The C# code puts the bytes of a file into hex, builds the above SQL in a string variable, and writes it to a file; a service later uses that file to update Oracle when the connection returns. This is how I broke the hex into chunks and fed it into the SQL string and file (and, later, Oracle):

// This is all staged so someone can see how you might go from file
// to bytes to hex
string filePath = txtFilePath.Text; // example of getting file path after
    // OpenFileDialog places ofd.FileName in a textbox called txtFilePath
byte[] byteArray = File.ReadAllBytes(filePath);
string hexString = getHexFromBytes(byteArray); // Google: bytes to hex

// Here is the meat...
if (hexString.Length > 0)
{
    string sqlForOracle = "DECLARE buf BLOB; " +
        "cBuf CLOB; " +
        "BEGIN " +
            "dbms_lob.createtemporary(buf, FALSE); " +
            "dbms_lob.createtemporary(cBuf, FALSE); " +
            "dbms_lob.open(buf, dbms_lob.lob_readwrite); ";

    int chunkSize = 32766;
    if (hexString.Length > chunkSize)
    {
        sqlForOracle += "dbms_lob.open(cBuf, dbms_lob.lob_readwrite); ";

        int startIdx = 0;
        decimal hexChunks = decimal.Divide(hexString.Length, chunkSize);
        for (int i = 0; i < hexChunks; i++)
        {
            int remainingHex = hexString.Length - (i * chunkSize);
            if (remainingHex > chunkSize)
                sqlForOracle += "dbms_lob.append(cBuf, '" + hexString.Substring(startIdx, chunkSize + "'); ";
            else
                sqlForOracle += "dbms_lob.append(cBuf, '" + hexString.Substring(startIdx, remainingHex) + "'); ";

            startIdx = startIdx + chunkSize;
        }

        sqlForOracle += "dbms_lob.close(cBuf); ";

        // Now we append the CLOB to the BLOB
        sqlForOracle += "dbms_lob.append(buf, HEXTORAW(cBuf)); ";
    }
    else  // write it straight to BLOB as we are below our chunk limit
        sqlForOracle += "dbms_lob.append(buf, HEXTORAW('" + hexString + "')); ";

    sqlForOracle += "dbms_lob.close(buf); ";
    sqlForOracle += "UPDATE MyTable SET blobDate = buf WHERE ID = 123; END;";
}

sqlForOracle is later written to a file using FileStream and StreamWriter, and the service sees if the file exists, reads it in, and updates Oracle with it.

UPDATES

Mik's answer is actually fine as-is if you use an even number of hex digits per chunk, so mine unnecessarily introduces an extra step unless you need odd-sized chunks. A larger file (it would have to rival your RAM, though) would also take an unnecessary performance hit, since the data gets written to memory twice (CLOB, then BLOB) before conversion, so take heed. But I did want to show in the C# how the chunks get broken up and how the SQL actually gets written programmatically. If you want to use only buf, simply replace all the cBuf variables with buf, keep just one dbms_lob.createtemporary() statement, and obviously only one set of .open() and .close() calls.

About those open/close calls: I also read an "AskTom" thread on Oracle.com saying that calling dbms_lob.open() and .close() around your LOB appends is optional, but beneficial to performance once the number of appends exceeds roughly 2000 (or 2000 * 32766 = 65.532 MB of hex), where it otherwise takes almost double the time (178.19%) to complete and just gets worse from there. Of course, whether this matters depends on the file sizes you're dealing with. I added them in, above.
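Putting those two notes together, a BLOB-only version (a sketch only, assuming even-length hex chunks; same table and ID as the examples above) would look roughly like this:

DECLARE
  buf BLOB;
BEGIN
  dbms_lob.createtemporary(buf, FALSE);
  dbms_lob.open(buf, dbms_lob.lob_readwrite);
  dbms_lob.append(buf, HEXTORAW('0EC1D7FA6B411DA58149'));  -- even-length chunk
  --...lots of hex data...
  dbms_lob.append(buf, HEXTORAW('0EC1D7FA6B411DA58149'));
  dbms_lob.close(buf);
  UPDATE MyTable
     SET blobData = buf
   WHERE ID = 123;
END;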

vapcguy
  • As far as I know you cannot just convert CLOB to BLOB with HEXTORAW. It will work only for strings shorter than 32768 digits (by implicitly converting CLOB to VARCHAR2 first). – mik Dec 15 '16 at 14:24
  • So you'd have to read in the lines like was done to the CLOB, only to somehow again read them in chunks into the BLOB? Oh, well. Glad you let me know, and to look into the chunk size and make it an even number. I have it running without CLOBs right now, so I guess that's the safer bet. – vapcguy Dec 16 '16 at 22:45
0

Here is my solution for concatenating multiple RAWs into a single BLOB, using a helper table type and a stored function:

create or replace type raws as table of raw(2000);

create or replace function concat_raws(parts in raws) return blob
is
    temp blob;
begin
    if parts is null or parts.count = 0 then
       return null;
    end if;
    dbms_lob.createtemporary(temp, false, dbms_lob.CALL);
    for i in parts.first .. parts.last
    loop
        dbms_lob.append(temp, to_blob(parts(i)));
    end loop;
    return temp;
end;

-- usage example:
select concat_raws(raws(hextoraw('CAFE'), hextoraw('BABE'))) from dual;

This approach is also handy for automated SQL generation, as shown in my other answer to Inline BLOB / BINARY data types in SQL / JDBC.

See also multiple-BLOB concatenation in How to concatenate BLOB fields (Oracle)?

Vadzim
0

Another alternative since Oracle 12c is to use ALTER SYSTEM SET max_string_size=extended SCOPE=SPFILE; as documented in https://docs.oracle.com/database/121/REFRN/GUID-D424D23B-0933-425F-BC69-9C0E6724693C.htm#REFRN10321.

This extends the maximum SQL size of RAW (from 2000 bytes) and VARCHAR2 (from 4000 bytes) to 32767 bytes.

Note that this requires SYS permissions and a DB restart, and involves some gotchas: Oracle 12c extended to support varchar2 > 4000 bytes doesn't work for user who is not sysdba.

Vadzim