
I have been working on this issue for two weeks without any result. Does anyone know how to handle blobs or bytea with libpq without losing the format or any data? The exported file size is 0 B, and I cannot figure out the steps I must follow to upload a file to a PostgreSQL database from C and retrieve it again with the correct format and properties. Any help would be great. I have tried nearly every example and tutorial on the net, including the PostgreSQL documents and manuals, with no luck. I am close to quitting programming and becoming a farmer (not joking xD). Thank you in advance.

<CODE EDITED, maintaining the transaction open, 24-5-21>

After these code modifications, the file I retrieve is 59 bytes larger than the file uploaded as a large object. I feel I am getting closer, but I am reconsidering whether to use large objects at all.

#include "libpq/libpq-fs.h"
#include "libpq-fe.h"

int main(int argc, char* argv[])
{
    //*************************** IMPORT TEST **********
    manager.conn = manager.ConnectDB();  // my manager, working fine    

    Oid blob;
    char picName[] = "powerup.png";
    PGresult* res;

    res = PQexec(manager.conn, "begin");
    PQclear(res);

    blob = lo_import(manager.conn, "powerup.png");
    cout << endl << "import returned oid " << blob;

    //res = PQexec(manager.conn, "end");
    //PQclear(res);

    string sentenceB = "INSERT INTO testblob(filename, fileoid) VALUES('powerup.png', '" + std::to_string(blob) + "')";

    manager.GenericQuery(manager.conn, sentenceB); //same as PQexec + result evaluation, works ok

    //*************************** EXPORT TEST **********

    OidManager oidm;

    oidm.exportFile(manager.conn, blob, picName);  // Ill show the function content at the end

    res = PQexec(manager.conn, "end"); //SAME TRANSACTION TO AVOID LOSING THE OID, CHANGES AFTER TRANSACTION...
    PQclear(res);

    manager.CloseConn(manager.conn);   // my manager, works fine

    return true;
}

    // oidm.exportFile() FUNCTION DETAIL
    // code from: 34.5. Example Program, Chapter 34. Large Objects
    // https://www.postgresql.org/docs/10/lo-examplesect.html

void OidManager::exportFile(PGconn* conn, Oid lobjId, char* filename)
{
    int         lobj_fd;
    char        buf[BUFSIZE];   /* BUFSIZE is 1024 in the original example */
    int         nbytes,
        tmp;
    int         fd;

    /*
     * open the large object
     */
    lobj_fd = lo_open(conn, lobjId, INV_READ);
    if (lobj_fd < 0)
        fprintf(stderr, "cannot open large object %u", lobjId);

    /*
     * open the file to be written to
     */
    fd = _open(filename, O_CREAT | O_WRONLY | O_TRUNC | O_BINARY, 0666); /* O_BINARY avoids CR/LF translation on Windows */
    if (fd < 0)
    {                           /* error */
        fprintf(stderr, "cannot open unix file\"%s\"",
            filename);
    }

    /*
     * read in from the inversion file and write to the Unix file
     */
    while ((nbytes = lo_read(conn, lobj_fd, buf, BUFSIZE)) > 0)
    {
        tmp = _write(fd, buf, nbytes);
        if (tmp < nbytes)
        {
            fprintf(stderr, "error while writing \"%s\"",
                filename);
        }
    }

    lo_close(conn, lobj_fd);
    _close(fd);

    return;
}

Kornelius
  • You can find the solution following the recommendation made by @Laurenz Albe, avoiding BLOBs and doing the job with a bytea column type, at this link: https://stackoverflow.com/questions/67673603/postgresql-save-and-pick-files-using-bytea-binary-data-with-c-libpq Thanks. – Kornelius May 27 '21 at 15:37

1 Answer


Like the documentation says:

The descriptor is only valid for the duration of the current transaction.

So you must call lo_open and lo_read in the same transaction.
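As a minimal sketch of what "same transaction" means in practice (the connection string and file names here are placeholders, not from the question), the import and export can be wrapped in one BEGIN/COMMIT. Using lo_export instead of a manual lo_read loop also sidesteps any text-mode file handling:

```c
/* Minimal sketch: import and re-export a large object inside ONE
 * transaction. "dbname=test" and the file names are placeholders. */
#include <stdio.h>
#include "libpq-fe.h"

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");  /* placeholder conninfo */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* large-object descriptors are only valid inside a transaction */
    PGresult *res = PQexec(conn, "BEGIN");
    PQclear(res);

    Oid blob = lo_import(conn, "powerup.png");
    if (blob == InvalidOid)
        fprintf(stderr, "lo_import failed: %s", PQerrorMessage(conn));

    /* lo_export writes the object to a file byte-for-byte, so there is
     * no manual read loop and no newline translation involved */
    if (lo_export(conn, blob, "powerup_copy.png") != 1)
        fprintf(stderr, "lo_export failed: %s", PQerrorMessage(conn));

    res = PQexec(conn, "COMMIT");
    PQclear(res);
    PQfinish(conn);
    return 0;
}
```

If both calls succeed, powerup_copy.png should be identical in size and content to powerup.png.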

Do not use large objects. They are slow, complicated to use and give you all kinds of serious trouble (for example, if you have many of them). Use bytea, then your code will become much simpler.
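To illustrate the bytea route, here is a hedged sketch of inserting a file's raw bytes as a binary query parameter with PQexecParams; the table and column names are assumptions (a matching table would be CREATE TABLE testblob(filename text, data bytea)):

```c
/* Sketch: store a file in a bytea column using a binary parameter,
 * so no escaping or quoting of the bytes is needed. Table/column
 * names and the conninfo string are assumptions for this example. */
#include <stdio.h>
#include <stdlib.h>
#include "libpq-fe.h"

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");   /* placeholder conninfo */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* read the whole file into memory, in binary mode */
    FILE *f = fopen("powerup.png", "rb");
    if (f == NULL) {
        fprintf(stderr, "cannot open powerup.png\n");
        return 1;
    }
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    fseek(f, 0, SEEK_SET);
    char *buf = malloc(len);
    fread(buf, 1, len, f);
    fclose(f);

    const char *values[2]  = { "powerup.png", buf };
    int         lengths[2] = { 0, (int) len };  /* length ignored for text params */
    int         formats[2] = { 0, 1 };          /* 0 = text, 1 = binary */

    PGresult *res = PQexecParams(conn,
        "INSERT INTO testblob(filename, data) VALUES($1, $2)",
        2, NULL, values, lengths, formats, 0);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
    PQclear(res);

    free(buf);
    PQfinish(conn);
    return 0;
}
```

Reading the file back works the same way in reverse: call PQexecParams with resultFormat = 1 so the bytea column comes back as raw bytes, then use PQgetvalue and PQgetlength to recover the exact data and size.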

Laurenz Albe
  • Thank you very much for the answer; I remember I have read some of your posts before :) I quickly tried closing the transaction after the exportFile(). This time the size of the exported file is 59 bytes larger ._.' and it cannot be recognized as an image. I have to leave the office, but I will check this first thing tomorrow. – Kornelius May 20 '21 at 15:14
  • If I succeed in retrieving the file with the correct binary data, I will have to investigate how to find my OIDs, if the saved IDs are not the same ones I store in the 'testblob' table after ending the transaction. I worked a lot before, trying to do this with a table with a bytea column holding raw bytes, without success. I will prepare a piece of code tomorrow showing what I was trying to do with the bytea issue. I hope to understand what I am doing wrong before I get fired xDDD – Kornelius May 20 '21 at 15:22
  • I will study all your links before posting my bytea attempts. Thanks again @Laurenz Albe ;) – Kornelius May 20 '21 at 15:22