
Problem: We're querying a database table with a lot of entries. Overall application performance is okay, but I think it could be better if our bottleneck, querying one particular table, could be done in parallel.

For that we are using dynamic finders like:

Object.findAllByObjectCollectionAndParentIsNullAndDeletedByUserIsNull(objectCollection, [sort: 'attr1.number', fetch: [objectType: 'eager']])

I'm thinking about this solution: dividing the query into two steps.

  1. The first step loads only the IDs, in sorted order.
  2. The second step (which could run in parallel threads) loads the objects themselves by ID and inserts them into the result set.
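Outside of Grails, the two-step idea can be sketched in plain Java. This is a minimal, stand-alone sketch: `querySortedIds` and `loadById` are hypothetical stand-ins for the real GORM calls, and anonymous classes are used instead of lambdas so it would also compile on JDK 1.7. Each parallel load writes into the array slot matching its position in the sorted ID list, so the sort order survives the parallel step.

```java
import java.util.*;
import java.util.concurrent.*;

public class TwoPhaseLoad {

    // Hypothetical stand-ins for the real database: a query that returns
    // the IDs already sorted, and a by-ID lookup against the slow table.
    static List<Long> querySortedIds() {
        return Arrays.asList(3L, 1L, 2L);
    }

    static String loadById(long id) {
        return "object-" + id;
    }

    // Step 2: load the objects in parallel; each result goes into the slot
    // that matches its position in the sorted ID list, preserving order.
    static List<String> loadAll(List<Long> ids, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            final String[] result = new String[ids.size()];
            List<Future<?>> futures = new ArrayList<Future<?>>();
            for (int i = 0; i < ids.size(); i++) {
                final int slot = i;
                final long id = ids.get(i);
                futures.add(pool.submit(new Runnable() {
                    public void run() { result[slot] = loadById(id); }
                }));
            }
            for (Future<?> f : futures)
                f.get();                     // wait; rethrows any load failure
            return Arrays.asList(result);
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

With the stand-in data above, `loadAll(querySortedIds(), 4)` returns the objects in the same order the ID query produced them: `[object-3, object-1, object-2]`.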

I have already googled for hints about this, but found nothing.

  1. Is it possible that this makes no sense?
  2. Or is there already an adequate solution in standard Grails or its extensions?
  3. If it makes sense and there is no existing solution: can someone give me a hint on implementing it? Note that we need the IDs in the sorted order explained in the example.

We're using Grails 2.3.11 and Hibernate 3.6.10.13 on JDK 1.7.

Mirko
  • Have you logged the SQL of your findAll query and tried to analyze what it does? Are there indices on parent and deletedByUser? Otherwise there is only one way to find out about things like this: try it out and benchmark it. When it comes to performance, there is often no silver bullet for all cases. – cfrick Apr 29 '15 at 13:22
  • Yes, of course. The objects we're loading hold very small pieces of data, like a cell of an Excel sheet, so when the page is rendered we need lots of them. – Mirko Apr 30 '15 at 05:21

1 Answer


I found a solution!

// withPool comes from the GPars plugin (groovyx.gpars.GParsPool)

// Step 1: load only the IDs, already sorted
// (the original query used an undeclared alias "oc"; the join is written
// out here via the association path)
List<Long> objectIds = Attribute.executeQuery(
        "select o.id from Object o, ObjectType ot " +
        "where o.objectCollection = :oc and o.parent is null and o.deletedByUser is null " +
        "and o.objectCollection.typeUsage = ot order by ot.nr asc", [oc: objectCollection])

// Step 2: load the objects in parallel, each into its sorted position
Object[] tmp = new Object[objectIds.size()]
withPool(10) {
    objectIds.eachWithIndexParallel { Long objectId, int i ->
        tmp[i] = Object.findById(objectId, [cache: true])
    }
}

// re-attach the detached instances to the current Hibernate session
for (Object o : tmp)
    o.merge()

It's roughly three times slower!

I think the reason is that

  1. we have to create the threads and database connections, too;
  2. after loading, the objects need to be merged back into our thread's Hibernate session.

But in the end we can't use it, because lazily loaded fields can't be fetched on demand later. So this could only be a solution for read-only objects.

Mirko