
My goal


Create a React web app that works offline via Service Workers and can query an SQLite file without any backend. I'm running a Linux OS.

The problem


I'm struggling with SQLite usage in a React web application. I'm using create-react-app and trying to query an SQLite database.

Starting with sqlite3

I can't get sqlite3 installed as an npm package: it requires a dependency called aws-sdk, and after installing that, the build outputs the following:

/node_modules/sqlite3/node_modules/node-pre-gyp/lib/util/compile.js
Module not found: Can't resolve 'npm' in '/home/user/Documents/snv3/node_modules/sqlite3/node_modules/node-pre-gyp/lib/util'

./node_modules/sqlite3/node_modules/node-pre-gyp/lib/util/nw-pre-gyp/index.html 1:0
Module parse failed: Unexpected token (1:0)
You may need an appropriate loader to handle this file type.
> <!doctype html>
| <html>
| <head>

Then... I moved on, searching for another SQLite library: better-sqlite3.

Better-Sqlite3

When better-sqlite3 is imported, this is the output on my browser screen:

TypeError: Class is undefined
exports.wrap
node_modules/babel-loader/lib/index.js??ref--6-oneOf-2!/home/user/Documents/projectFolder/node_modules/better-sqlite3/lib/util.js:14

  11 | };
  12 | 
  13 | exports.wrap = function (Class, methodName, wrapper) {
> 14 |   var originalMethod = Class.prototype[methodName];
  15 | 
  16 |   if (typeof originalMethod !== 'function') {
  17 |     throw new TypeError("Missing method ".concat(methodName));

I have no clue what that is supposed to mean. There is nothing about it in the library's troubleshooting section, so it's impossible to debug. (In hindsight, both sqlite3 and better-sqlite3 are native Node.js addons, so webpack can't bundle them for the browser in the first place.)

Kripken's Sql.js

Let's try something lightweight and built in JavaScript.

Oh no, my memory heap just exploded trying to compile it:

Starting the development server...


<--- Last few GCs --->
tart of marking 2696 ms) (average mu = 0.196, current mu = 0.078) alloca[19981:0x317f660]    30702 ms: Mark-sweep 1387.3 (1425.7) -> 1382.8 (1424.7) MB, 2779.6 / 0.0 ms  (+ 0.1 msin 91 steps since start of marking, biggest step 0.0 ms, walltime since start of marking 2806 ms) (average mu = 0.104, current mu = 0.010) allocati[19981:0x317f660]    33503 ms: Mark-sweep 1386.7 (1424.7) -> 1385.2 (1425.7) MB, 2780.3 / 0.0 ms  (average mu = 0.056, current mu = 0.007) allocation failure scavenge might not succeed


<--- JS stacktrace --->

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x30bcb58041bd]
Security context: 0x346742d1e789 <JSObject>
    1: get [0x146f266b8549] [/home/user/Documents/projectFolder/node_modules/@babel/traverse/lib/path/index.js:~99] [pc=0x30bcb6347fd5](this=0x2c6f95aa7759 <JSFunction NodePath (sfi =0x3a532511d061)>,/* anonymous */=0xb1cce8a8af9 <Object map = 0x3450d4054639>)
    2: /* anonymous */(aka /* anonymous */) [0xb1cce8a8899] [/home/user/Documents/projectFolder/node_modul...

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0x8b8210 node::Abort() [/usr/local/bin/node]
 2: 0x8b825c  [/usr/local/bin/node]
 3: 0xac1d1e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
 4: 0xac1f38 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
 5: 0xeb11f2  [/usr/local/bin/node]
 6: 0xebd14a v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/usr/local/bin/node]
 7: 0xebdab4 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/local/bin/node]
 8: 0xec03e5 v8::internal::Heap::AllocateRawWithRetry(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/usr/local/bin/node]
 9: 0xe888c4 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [/usr/local/bin/node]
10: 0x112a2ae v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [/usr/local/bin/node]
11: 0x30bcb58041bd
Aborted (core dumped)
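
For the record, the heap seems to blow up because webpack/Babel try to transpile sql.js's enormous Emscripten-generated engine file. A possible workaround, which I haven't verified in this project, is to let sql.js fetch its engine at runtime instead of bundling it. Here is a minimal sketch, assuming sql.js 1.x with sql-wasm.wasm copied into public/ and a hypothetical database file mydb.sqlite served from there too:

import initSqlJs from 'sql.js';

// Sketch only -- '/mydb.sqlite' is a hypothetical prebuilt SQLite file in public/.
async function openDatabase() {
    // locateFile tells sql.js where to fetch sql-wasm.wasm at runtime,
    // so the huge engine file never passes through Babel/webpack.
    const SQL = await initSqlJs({ locateFile: file => `/${file}` });

    const response = await fetch('/mydb.sqlite');
    const buffer = await response.arrayBuffer();

    // Open the prebuilt SQLite file entirely in the browser.
    return new SQL.Database(new Uint8Array(buffer));
}

// db.exec returns one { columns, values } object per statement:
// openDatabase().then(db => console.log(db.exec('SELECT * FROM some_table LIMIT 5')));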

Okay, forget sqlite libraries, let's try PouchDB!

Current Solution


PouchDB

So, after all this mess, I gave up on SQLite libraries and tried something offline-first friendly: PouchDB. I thought it would be possible to load an SQLite file into Pouch.

So I've asked this question: What is the correct way to load prebuilt SQlite databases with or without PouchDB in a React App

Nope. It's a NoSQL database, so that is not a possibility...

Then I had to convert my whole SQLite database to .json files, all manually...

Next, let's create the database and import the data from the .json files.

Edit 1:

My SQLite file has close to 5k records in each table. So every .json file that I converted from an SQLite table holds about 5k objects (or docs, as you prefer). And there are 19 of those .json files, each one representing a table.

AAAND... I need to switch between the .json files (tables) to run the queries, but after the file conversion there is no _rev attribute, so every time I load the records using bulkDocs, conflicts happen between the tables because the docs lack the _rev attribute.

This is the code of my db service:

import PouchDB from 'pouchdb';
import PDBFind from 'pouchdb-find';
import PDBSilverLining from 'pouchdb-silverlining';
import TableEnumerator from '../utils/tableEnumerator';

import {
    Example_Table_Name
} from '../../db'

PouchDB.plugin(PDBFind);
PouchDB.plugin(PDBSilverLining);

class DbService {
    constructor() {
        this._db = new PouchDB('database');
    }

    static async dbLoad (table) {
        const dbInstance = new DbService()._db

        try {
            // bulk-insert every doc of the requested table in one call
            await dbInstance.bulkDocs(TableEnumerator(table))
        } catch(e) {
            console.error(e)
        }
    }

    static query(query) {
        const dbInstance = new DbService()._db

        return dbInstance.sql(query)
    }
}

export default DbService;
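
TableEnumerator isn't shown here; based on the description further down (a switch on the table name that returns that table's docs), it is presumably something along these lines (the import paths are hypothetical):

// Hypothetical sketch of TableEnumerator: a switch on the table name
// that returns the corresponding array of docs from a converted .json file.
import Table1 from '../../db/table1.json';
import Table2 from '../../db/table2.json';
// ...one import per converted table (19 in total)

export default function TableEnumerator(table) {
    switch (table) {
        case 'Table1': return Table1;
        case 'Table2': return Table2;
        default: throw new Error(`Unknown table: ${table}`);
    }
}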

Docs examples:

Table 1:

{ "_id": "table1_010101012",
    "Procedure": "XXXXX",
    "field4": "",
    "field5": "2B",
    *insert here some more irrelevant data* }

Table 2:

{ "_id": "table2_0555444777",
    "Procedure": "YYYYYYYYYYY",
    "field4": "",
    "field5": "2B",
    *insert here some more irrelevant data* }

On the first call (dbLoad(Table1)), all docs are loaded and the _rev attributes are created for the first table.

When I make another call (dbLoad(Table2)) using Table2.json, the docs conflict: the new file lacks the _rev attribute, and when Pouch creates this attribute, the revisions turn out the same!
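
One way around this kind of conflict, sketched here but not battle-tested (docs stands for the array produced from one converted .json file), is to look up the current _rev of any docs that already exist and attach it before calling bulkDocs, turning the insert into an upsert:

// Sketch: fetch existing revisions so bulkDocs performs updates instead
// of producing 409 conflicts for docs that are already stored.
async function upsertTable(db, docs) {
    const existing = await db.allDocs({ keys: docs.map(d => d._id) });

    const revById = {};
    existing.rows.forEach(row => {
        if (!row.error) revById[row.id] = row.value.rev;
    });

    // Attach the current _rev where one exists; new docs stay untouched.
    const withRevs = docs.map(d =>
        revById[d._id] ? { ...d, _rev: revById[d._id] } : d
    );

    return db.bulkDocs(withRevs);
}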

Edit 2:

I've tried to modify my code to this:

import PDBLoad from 'pouchdb-load';

PouchDB.plugin(PDBLoad);

static async dbLoad (table) {
        const db = new PouchDB(table);

        try {
            await db.get('_local/preloaded');
        } catch (err) {
            if (err.name !== 'not_found') {
                console.error(err);
                return;
            }
            // we got a 404, so the local document doesn't exist -- let's preload!
            try {
                await db.load('table_name.json');
                // create the local document to note that we've preloaded
                await db.put({_id: '_local/preloaded'});
            } catch (e) {
                console.error(e);
            }
        }

        return db.allDocs({include_docs: true});
    }

The .json file was in the same directory as the load function, but it doesn't load the data.
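
In hindsight, pouchdb-load is probably failing for two reasons: db.load() fetches the file over HTTP, so with create-react-app the file has to live in public/ (not next to the source file), and it expects the dump string format produced by pouchdb-dump-cli rather than a plain JSON array. A sketch of how that flow is supposed to look, assuming the dump was generated with pouchdb-dump http://localhost:5984/table1 > public/table1.txt (I haven't verified this end to end):

import PouchDB from 'pouchdb';
import PDBLoad from 'pouchdb-load';

PouchDB.plugin(PDBLoad);

async function preload(table) {
    const db = new PouchDB(table);
    // The URL resolves against the dev server origin, so the dump file
    // must be served statically: public/table1.txt -> '/table1.txt'.
    await db.load(`/${table}.txt`);
    return db;
}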

So I'm sticking with the bulkDocs version.

The questions

I'm sorry for the very long post. After all that context, here come the questions:

  • Is there any possibility of setting up SQLite libraries in a React web app environment?
  • And without any backend?
  • What is a recommended way of switching between my .json tables?
  • Should I delete the previous one stored and load the next?
  • Or load every table manually and then dump them using pouchdb-dump-cli?

I hope everything is well explained; I'll be happy to clarify anything you might need.

Thank you for your time!

The possible answer


Okay, so I was able to solve my problem with code like this.

import PouchDB from 'pouchdb';
import PDBFind from 'pouchdb-find';
import PDBSilverLining from 'pouchdb-silverlining';
import TableEnumerator from '../utils/tableEnumerator';

PouchDB.plugin(PDBFind);
PouchDB.plugin(PDBSilverLining);

class DbService {
    static async dbLoad (table) {
        const db = new PouchDB(table);

        try {
            await db.bulkDocs(TableEnumerator(table))
        } catch(e) {
            console.error(e)
        }

        return db
    }

    static async query(query, dbReference) {
        return dbReference.sql(query)
    }
}

export default DbService;

TableEnumerator switches on the table name and returns that table's docs for insertion into IndexedDB. For every .json file I'm creating a new database, and for the queries, as I'm using React, I'm saving the DB reference in the component's state.

I think there is a lack of optimization here, as every time I switch between tables I'm calling the function that uses bulkDocs. Maybe I should check whether the tables are already inserted in IndexedDB.
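
One way to add that check, reusing the _local-document trick from Edit 2 above (a sketch, not battle-tested): flag each per-table database after its first load, so switching back to a table skips the 5k-doc bulkDocs call:

// Sketch: skip bulkDocs when the table's database was already populated.
static async dbLoad (table) {
    const db = new PouchDB(table);

    try {
        // _local docs are never replicated -- a handy "already loaded" flag
        await db.get('_local/preloaded');
    } catch (err) {
        if (err.name !== 'not_found') throw err;
        await db.bulkDocs(TableEnumerator(table));
        await db.put({ _id: '_local/preloaded' });
    }

    return db;
}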

  • You make two mutually contradictory statements: "Then I was finally able to import the .json data to Pouch! What a relief!", but then you state "Next, let's create the database and import the data from the .json files." Your code examples seem to be trying to achieve the latter, but I cannot understand why you need that code at all if you were "... finally able to import the .json data to Pouch!" – Martin Bramwell Mar 01 '19 at 16:29
  • I've edited the question Martin, sorry for the unclear parts. – Giovanni Klein Campigoto Mar 01 '19 at 17:02
  • Ok, but then I do not understand why you created code to import the data from the JSON files. As I mentioned in the other question, there is an example of how quickly to initialize PouchDB from JSON files in part 2 of the blog post [Prebuilt databases with PouchDB](https://pouchdb.com/2016/04/28/prebuilt-databases-with-pouchdb.html). Prior to your recent update, you seemed to have been saying that you used that successfully. Did you? – Martin Bramwell Mar 01 '19 at 17:08
  • Oh yeah, I forgot to mention that when you try to load the files using pouchdb-load, it runs JSON.parse, which expects your JSON to be a string; otherwise it fails, despite the JSON being valid. So, not viable? – Giovanni Klein Campigoto Mar 01 '19 at 17:19
  • That is the error that JSON.parse throws: SyntaxError: "JSON.parse: unexpected character at line 1 column 2 of the JSON data", it took me a while to realize that it was an error for the JSON type. You can check that here: https://developer.mozilla.org/pt-BR/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse – Giovanni Klein Campigoto Mar 01 '19 at 17:28
  • sorry, now it is ambiguous whether you have solved your earlier problem with `pouchdb-load`, or whether you know what it is but can't solve it. Also, would solving the `pouchdb-load` issue solve the entire question, or do you still want to do your own bulkDocs? – Martin Bramwell Mar 01 '19 at 17:33
  • I had to create class DbService because pouchdb-load can't parse a plain .json file, it needs to be a **STRING**, otherwise it will not work. That's why I'm using bulkDocs, it's the only way that worked to import the files to the IndexedDB storage. The problem is, if I issue a command to import another file, they conflict and I need to change between files so the user can be able to query the other tables which are stored in those files. – Giovanni Klein Campigoto Mar 01 '19 at 17:40
  • But, in the example, `db.load('turtles.txt')` is expected to pull the file from the origin server. Surely, 'SyntaxError: "JSON.parse: unexpected character at line 1 column 2 of the JSON data"', would mean that 'turtles.txt' is not found. – Martin Bramwell Mar 01 '19 at 17:48
  • Yeah so I've modified my code to do _db.load('table_name.json'), and it is in the same folder, but It fires the JSON.parse error, so I don't know what to do.. – Giovanni Klein Campigoto Mar 01 '19 at 18:14
  • Could you see what happens if you change the name to something like `_db.load('causeError.json'`, just as a way of checking that it is actually loading the file. Also, what happens if you validate the file at [JSON Pretty Print](https://jsonformatter.org/json-pretty-print)? – Martin Bramwell Mar 01 '19 at 19:07
  • The JSON is valid, but there seems to be no way to load the files, as load uses JSON.parse and its content needs to be a string – Giovanni Klein Campigoto Mar 03 '19 at 12:50
  • So, in effect `db.load()` as described in part 2 "Loading from a dump file" of the article [Prebuilt databases with PouchDB](https://pouchdb.com/2016/04/28/prebuilt-databases-with-pouchdb.html) is fundamentally broken and will no longer work? – Martin Bramwell Mar 03 '19 at 19:30
  • I'm not sure about that, it's not really working for my problem, because it can't load a plain .json file, only a string representation of that. – Giovanni Klein Campigoto Mar 04 '19 at 15:08
  • I have a suspicion that the root cause of your difficulty all along has been double-quotes within varchar columns. It'd be worth verifying that, rather than assuming that `pouchdb-load`, "...can't load a plain .json file..." – Martin Bramwell Mar 04 '19 at 17:59

1 Answer


For future readers, this is how you can solve the problem.

  1. If you have an SQLite file, head over to my previously asked question: What is the correct way to load prebuilt SQlite databases with or without PouchDB in a React App
  2. Find the answer to that question and run the sqlite3 commands accordingly, outputting the .json files.

  3. When issuing the SELECT command, if you have any columns with special characters in their names, like blank spaces or dots, use the square brackets operator [ ] around the column name:

    sqlite3 example ".output rslt.json" "SELECT '{ \"id\": \"someTable' || SUBSTR(\"000000000\" || id, LENGTH(\"000000000\" || id) - 8, 9) || '\", \"anattr\": \"' || anattr || '\", \"anothe.rattr\": \"' || [anoth.erattr] || '\" },' FROM some_table;";

  4. A problem that occurred to me: some table rows had double quotes in their values, like this: "Attr": "Lorem ipsum "dolor" ". This causes a parsing error. To fix it, use the escape character \ before the quotes: "Lorem ipsum \"dolor\" ". That should fix the double quotes.

    You can easily use a regex to match these occurrences: \"dolor\" will find double quotes with your desired word within. For a quick fix, replace all the occurrences; just check which files you are modifying before doing it! (Alternatively, see the sketch below for a way to sidestep the escaping problem entirely.)
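
If your sqlite3 binary was built with the json1 extension (most modern builds are), the json_object() function escapes embedded quotes for you, so the manual regex fix-ups may not be needed at all. A sketch using the same hypothetical table and columns as above:

    sqlite3 example ".output rslt.json" "SELECT json_object('_id', 'someTable_' || id, 'anattr', anattr, 'anothe.rattr', [anoth.erattr]) || ',' FROM some_table;"

json_object produces valid JSON for each row, including proper escaping of quotes inside text values.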

As no one came forward with anything about this, I'm answering my own question with my last update:

Okay, so I was able to solve my problem with the code shown above under "The possible answer": the DbService class that creates a new PouchDB database per .json file, loads it with bulkDocs, and keeps the DB reference in the component's state.

This creates a DATABASE for each .json file! Remember, this is not optimal! I will be working on an optimized approach for this.

Thanks and thanks Martin for your help!

  • You can ensure your SQLite data migration query consistently exports pure, clean JSON by means of SQLite's string function `REPLACE()`. So instead of `'\", \"anothe.rattr\": \"' || [anoth.erattr] || '\"` you'd have `'\", \"anothe.rattr\": \"' || REPLACE([anoth.erattr], '"', '\"' || '\"`. (Disclaimer: Not tested! You might have to escape some of the quotes, and possibly even escape the escapes.) – Martin Bramwell Mar 04 '19 at 18:07