
I have a task that had been working fine as a Java SE application for about a year: extracting blobs and text metadata from SQLite databases and populating a big RDBMS.

When I moved this task to Wildfly (I tried 10.0 and, yesterday, 10.1 as well), a strange thing occurred. Very often Wildfly just dies completely with this lone message:

java: src/main/java/org/sqlite/core/NativeDB.c:521: 
Java_org_sqlite_core_NativeDB_column_1blob: Assertion `jBlob' failed.
/opt/wildfly-10.1.0.Final/bin/standalone.sh: line 307: 36275 Aborted                 "java" -D"[Standalone]" -server -Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true "-Dorg.jboss.boot.log.file=/opt/wildfly-10.1.0.Final/standalone/log/server.log" "-Dlogging.configuration=file:/opt/wildfly-10.1.0.Final/standalone/configuration/logging.properties" -jar "/opt/wildfly-10.1.0.Final/jboss-modules.jar" -mp "/opt/wildfly-10.1.0.Final/modules" org.jboss.as.standalone -Djboss.home.dir="/opt/wildfly-10.1.0.Final" -Djboss.server.base.dir="/opt/wildfly-10.1.0.Final/standalone"

All other EJBs, servlets and so on stop working too.

My singleton EJB, which processes the SQLite files, is wrapped in try-catch, but that doesn't help, so it is a very frustrating problem.

The OS is CentOS 7 with the latest stock JRE. The SQLite JDBC driver is 3.8.11.2, the latest version.

I tried increasing the heap to 1024 and 2048 megabytes; it didn't help.

How can I investigate and overcome this problem? Moving back to Java SE is not desirable.

I get the blobs with this code (wrapped in try-catch, with a null check afterwards):

System.out.print("getting data... ");
byte[] rawData = rs.getBytes("data");
System.out.println("ok");

As far as I can tell from debugging, Wildfly does NOT die on getBytes(), because I can see "getting data... ok" in the console. After that I work only with the byte array, yet Wildfly still dies with Assertion `jBlob' failed.

Oleg Gritsak

1 Answer


I found this unfortunate piece of code in NativeDB.c:

length = sqlite3_column_bytes(toref(stmt), col);
jBlob = (*env)->NewByteArray(env, length);
assert(jBlob); // out-of-memory
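The assert fires when `NewByteArray` returns NULL, i.e. when the JVM cannot allocate the byte array for the blob. Had the allocation happened on the Java side, the failure would surface as an ordinary, catchable `OutOfMemoryError` instead of an abort. A minimal sketch of that contrast (the `OomDemo` class is hypothetical, not part of the driver or the question's code):

```java
// Hypothetical demo: a Java-heap allocation failure surfaces as a catchable
// OutOfMemoryError. The native driver, by contrast, calls assert(jBlob) when
// JNI's NewByteArray fails, which aborts the whole JVM before any Java
// exception can be thrown -- so a try/catch in the EJB never runs.
public class OomDemo {
    static boolean caught() {
        try {
            // Requesting the maximum array size exceeds the VM limit and
            // fails on the Java side, where it is an ordinary Error.
            byte[] tooBig = new byte[Integer.MAX_VALUE];
            return tooBig.length < 0; // unreachable
        } catch (OutOfMemoryError e) {
            return true; // catchable, unlike a native abort()
        }
    }

    public static void main(String[] args) {
        System.out.println(caught()
                ? "OutOfMemoryError was caught"
                : "no error");
    }
}
```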

Therefore you can try giving the JVM even more memory in the hope of avoiding the failed allocation, or perhaps try a different SQLite JDBC driver.

Given the use of assert throughout this code, I don't think anyone should let it anywhere near a server-based solution. The default behaviour of a failed C assertion is to abort the current process.
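Until the driver is fixed, one generic mitigation is to run the fragile native work in a short-lived child JVM, so a native abort() kills only the child rather than the application server. A minimal sketch (the `IsolatedRunner` class and the example command line are hypothetical, not part of the question's code):

```java
import java.io.IOException;
import java.util.concurrent.TimeUnit;

// Hypothetical helper: run a fragile task in a child process so that a
// native abort() (like the failed assert in NativeDB.c) kills only the
// child JVM, not the server's JVM.
public class IsolatedRunner {
    public static int run(long timeoutSeconds, String... command)
            throws IOException, InterruptedException {
        Process child = new ProcessBuilder(command)
                .inheritIO()                 // forward child output to our console
                .start();
        if (!child.waitFor(timeoutSeconds, TimeUnit.SECONDS)) {
            child.destroyForcibly();         // hung child: kill it
            return -1;
        }
        return child.exitValue();            // nonzero (e.g. 134 = SIGABRT) on a crash
    }

    public static void main(String[] args) throws Exception {
        // In the real setup this would launch something like:
        //   java -cp app.jar com.example.SqliteExtractor input.db
        int exit = run(60, "java", "-version");
        System.out.println("child exited with " + exit);
    }
}
```

The server-side EJB then only inspects the child's exit code and moves on to the next file, instead of sharing the fate of the native code.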

Steve C
  • Wow! Good spot! – cassiomolin Sep 27 '16 at 07:07
  • I doubt that increasing the JVM heap can somehow fix untidy C code. But thank you for the answer; now it is clearer what's going on! By the way, I could work around the problem by using CentOS 6. So it probably doesn't matter whether it is Java SE or EE; rather it's some bug in the latest Java, which differs slightly: 1.8.0.101 vs 1.8.0.102 – Oleg Gritsak Sep 27 '16 at 10:31
  • Maybe have a look at [xerial sqlite jdbc](https://bitbucket.org/xerial/sqlite-jdbc) driver as an alternative. It throws exceptions rather than falling over with an assertion failure – Steve C Sep 27 '16 at 14:53
  • I already use xerial (and didn't find any alternative). Interestingly, a new version was released yesterday, and it seems to solve the problem on CentOS 7. – Oleg Gritsak Sep 29 '16 at 02:46