
Background

We have two services written in Java: one handles database operations on files (CRUD against the database), the other handles long-running processing of those records (complicated background tasks). Put simply, they are a producer and a consumer.

Supposed behavior is as follows:

Service 1 (uses the code below):

  1. Store file into DB

  2. If the file is of type 'C' put it into message queue for further processing

Service 2:

  1. Receive the message from message queue

  2. Load the file from the database (by ID)

  3. Perform further processing

The code of Service 1 is as follows (I changed some names for corporate reasons):

    private void persist() throws Exception {
        try (SqlSession sqlSession = sessionFactory.openSession()) {
            FileType fileType = FileType.fromFileName(filename);
            FileEntity dto = new FileEntity(filename, currentTime(), null, user.getName(), count, data);

            oracleFileStore.create(sqlSession, dto);
            auditLog.logFileUploaded(user, filename, count);
            sqlSession.commit();

            if (fileType == FileType.C) {
                mqClient.submit(new Record(dto.getId(), dto.getName(), user));
                auditLog.logCFileDetected(user, filename);
            }
        }
    }

Additional info

ActiveMQ 5.15 is used for the message queue

The database is Oracle 12c

Database access is handled by MyBatis 3.4.1

Problem

From time to time it happens that Service 2 receives the message from the MQ, tries to read the file from the database and, surprisingly, the file is not there. The incident is pretty rare, but it happens. When we check the database afterwards, the file is there. It almost looks like the background processing of the file started before the file was put into the database.

Questions

Is it possible that the MQ call could be faster than the database commit? I create the file in the DB, call commit, and only after that do I put the message into the MQ. The MQ message even contains the ID, which is generated by the database itself (a sequence).

Does the connection need to be closed to be sure the commit was performed? I always thought that once I commit, the data is in the database, regardless of whether my transaction has ended or not.

Can the problem be MyBatis? I've read about some problems regarding MyBatis transactions/sessions, but none of them seem similar to my problem.

Update

I can provide some additional code, although please understand that I cannot share everything for corporate reasons. If you don't see anything obvious in this, that's fine. Unfortunately, I cannot go much deeper into the analysis than this.

Also, I basically wanted to confirm whether my understanding of SQL and MyBatis is correct, so I can mark such a response as correct as well.

SessionFactory.java (excerpt)

    private SqlSessionFactory createLegacySessionFactory(DataSource dataSource) throws Exception
    {
        Configuration configuration = prepareConfiguration(dataSource);
        return new SqlSessionFactoryBuilder().build(configuration);
    }

    //javax.sql.DataSource
    private Configuration prepareConfiguration(DataSource dataSource)
    {
        //classes from package org.apache.ibatis
        TransactionFactory transactionFactory = new JdbcTransactionFactory();
        Environment environment = new Environment("development", transactionFactory, dataSource);
        Configuration configuration = new Configuration(environment);

        addSettings(configuration);
        addTypeAliases(configuration);
        addTypeHandlers(configuration);

        configuration.addMapper(PermissionMapper.class);

        addMapperXMLs(configuration); //just add all the XML mappers
        return configuration;
    }

    public SqlSession openSession()
    {
        //Initialization of factory is above
        return new ForceCommitSqlSession(factory.openSession());
    }

ForceCommitSqlSession.java (excerpt)

    /**
     * ForceCommitSqlSession is a wrapper around the MyBatis {@link SqlSession}.
     * <p>
     * Its purpose is to force commit/rollback during standard commit/rollback operations. The default implementation
     * (according to the javadoc) does not commit/rollback if there were no changes to the database - this can lead to
     * problems when operations are executed outside the MyBatis session (e.g. via {@link #getConnection()}).
     */
    public class ForceCommitSqlSession implements SqlSession
    {
        private final SqlSession session;

        /**
         * Force the commit all the time (despite the "generic contract")
         */
        @Override
        public void commit()
        {
            session.commit(true);
        }

        /**
         * Force the rollback all the time (despite the "generic contract")
         */
        @Override
        public void rollback()
        {
            session.rollback(true);
        }

        @Override
        public int insert(String statement)
        {
            return session.insert(statement);
        }

        ....
    }

OracleFileStore.java (excerpt)

    public int create(SqlSession session, FileEntity fileEntity) throws Exception
    {
        //the mybatis xml is simple insert SQL query
        return session.insert(STATEMENT_CREATE, fileEntity);
    }
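Since the MQ message carries `dto.getId()`, the insert statement must populate the generated key on the entity before the message is built. A hypothetical sketch of such a mapper for an Oracle sequence (the statement id, table, column, and sequence names are invented; the actual mapper was not shared):

```xml
<!-- Hypothetical sketch only; FILE_SEQ, FILES and the column names
     are made-up placeholders, not the question's real mapper. -->
<insert id="createFile" parameterType="FileEntity">
    <selectKey keyProperty="id" resultType="long" order="BEFORE">
        SELECT FILE_SEQ.NEXTVAL FROM DUAL
    </selectKey>
    INSERT INTO FILES (ID, NAME, CREATED_AT, USER_NAME, RECORD_COUNT, DATA)
    VALUES (#{id}, #{name}, #{createdAt}, #{userName}, #{count}, #{data})
</insert>
```

With `order="BEFORE"`, MyBatis fetches the sequence value and sets it on the entity before running the insert, so `dto.getId()` is valid by the time the MQ message is constructed.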
  • Have you checked if `sqlSession.commit` does a commit to database? How's the sessionFactory configured? Specifically what `transactionManager` type are you using? Please, show `oracleFileStore.create` implementation. – Roman-Stop RU aggression in UA Oct 31 '18 at 14:05
  • I added more code although I'm not sure how much deeper I can go with the analysis – Miroslav Oct 31 '18 at 15:08

1 Answer


Is it possible that MQ call could be faster than the database commit?

If the database commit is done, the changes are in the database, and the creation of the task in the queue happens after that. The main thing to verify is that the commit really happens synchronously when you invoke commit on the session. From the configuration you provided so far it seems OK, unless there is some mangling with the Connection itself. I can imagine, for example, that there is some wrapper over the native Connection. I would check in a debugger that the commit call actually reaches Connection.commit on the implementation from the Oracle JDBC driver. It is even better to check the logs on the DB side.
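A cheaper alternative to a debugger session is a tracing wrapper around the pooled connection. Below is a minimal sketch using a dynamic proxy; the class name, the plain `List` log, and the stand-in connection are all made up for illustration - in the real service you would wrap the Connection handed out by the c3p0 pool and use a proper logger:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.ArrayList;
import java.util.List;

public class CommitTracer {

    /**
     * Wraps a Connection so that commit/rollback/close calls are recorded
     * before being delegated to the underlying (driver) connection.
     */
    public static Connection trace(Connection target, List<String> log) {
        InvocationHandler handler = (proxy, method, args) -> {
            String name = method.getName();
            if (name.equals("commit") || name.equals("rollback") || name.equals("close")) {
                log.add(name); // in a real service, log with a timestamp instead
            }
            return method.invoke(target, args);
        };
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] {Connection.class},
                handler);
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for a pooled connection; every call is a no-op here.
        Connection fake = (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] {Connection.class},
                (proxy, method, margs) -> null);

        List<String> log = new ArrayList<>();
        Connection traced = trace(fake, log);
        traced.commit();
        traced.close();
        System.out.println(log); // prints [commit, close]
    }
}
```

If the log shows `commit` at the expected point in `persist()`, you have confirmed the call reaches the driver; if not, something in between is swallowing or deferring it.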

Does the connection needs to be closed to be sure the commit was performed? I always thought when I commit then it's in the database regardless if my transaction ended or not.

You are correct. There is no need to close a connection that obeys the JDBC specification (a native JDBC connection does); once Connection.commit returns, the transaction is durable. Of course, someone can always create a wrapper that does not obey the Connection API and does some magic (like delaying the commit until the connection is closed).
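To make that "magic wrapper" concern concrete, here is a contrived sketch (not taken from the question's code base, and deliberately not a real `java.sql.Connection`) of a wrapper that defers the actual commit until close. With such a wrapper, the MQ message could be consumed in the window after `commit()` returns but before the row is durable:

```java
/**
 * Contrived illustration of a wrapper that violates the JDBC contract by
 * deferring the real commit until close(). All names are invented.
 */
public class DeferredCommitDemo {

    static class MisbehavingConnection {
        private boolean commitRequested = false;
        private boolean actuallyCommitted = false;

        void commit() {
            commitRequested = true; // contract violation: nothing hits the DB yet
        }

        void close() {
            if (commitRequested) {
                actuallyCommitted = true; // the real commit happens only here
            }
        }

        boolean isCommitted() {
            return actuallyCommitted;
        }
    }

    public static void main(String[] args) {
        MisbehavingConnection conn = new MisbehavingConnection();
        conn.commit();
        // A producer submitting the MQ message here would race the database:
        System.out.println("after commit(): committed = " + conn.isCommitted()); // false
        conn.close();
        System.out.println("after close():  committed = " + conn.isCommitted()); // true
    }
}
```

This is exactly the failure mode that would match the observed symptom, which is why it is worth ruling out wrappers between MyBatis and the Oracle driver.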

Can the problem be Mybatis? I've read some problems regarding Mybatis transactions/sessions but it doesn't seem similar to my problem

I would say it is unlikely. You are using JdbcTransactionFactory, which commits directly through the JDBC connection. You need to trace what happens on commit to be sure.

Have you checked that the problem is not on the reader side? For example, it may use a long-running transaction with the SERIALIZABLE isolation level; in that case it would not be able to see changes committed after its snapshot was taken.

In Postgres, if replication is used and replicas serve read queries, a reader may see outdated data even if the commit successfully completed on the master. I'm not that familiar with Oracle, but it seems that if replication is used you may see the same issue:

A table snapshot is a transaction-consistent reflection of its master data as that data existed at a specific point in time. To keep a snapshot's data relatively current with the data of its master, Oracle must periodically refresh the snapshot

I would check the setup of the DB to know if this is the case. If replication is used, you need to change your approach accordingly.

  • Thank you for great answer, this is what I was searching for. These ideas move me forward in my research. Afaik our connection is standard Connection with c3p0 pool but I will rather check it, it's a bit dark part of the code. The idea of replication is actually possible. Our DB provider is notorious for doing some weird shenanigans with the database so I wouldn't be surprised if this would be true. I will get some info and write here an update – Miroslav Oct 31 '18 at 16:12