
I have a Business Objects Excel report which has 2 worksheets each with multiple columns.
For each worksheet, BO does a SELECT from a different pipelined table function in the Oracle DB (and each of these functions calls a stored procedure).

The columns in each worksheet are completely different from each other; however, both rely on the initial population of a temporary table (with the same data in both cases). The starting point for the queries of each stored procedure is this temporary table.

For performance reasons, I would like to fill this temporary table only once and have it used by both functions / SPs. However, as the functions are called in parallel, how can one of them fill the temporary table so that both it and the other function can use the inserted data? There is no guarantee that one function will be called before the other, or that the insert will be completed by the time the other function needs the data.

By the way, I'm trying here to replace what in my eyes is a horrible solution, whereby both worksheets get their data from a single function call, and both datasets are returned as a single dataset using a UNION and a flag on each row to say which dataset it belongs to. The flag is literally the only common field between the 2 datasets, and for each row the fields of the other dataset are returned as NULLs.
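
Roughly, the combined dataset built inside that single function looks like this (table and column names here are made up for illustration, not the real ones):

```sql
-- Illustrative shape of the existing combined result: one flag column,
-- each branch padding the other dataset's columns with NULLs
SELECT 'DS1' AS dataset_flag,
       t.ds1_col_a, t.ds1_col_b,               -- dataset 1 columns
       NULL AS ds2_col_x, NULL AS ds2_col_y    -- dataset 2 columns returned as NULLs
FROM   temp_report_table t
UNION ALL
SELECT 'DS2' AS dataset_flag,
       NULL, NULL,                             -- dataset 1 columns returned as NULLs
       t.ds2_col_x, t.ds2_col_y                -- dataset 2 columns
FROM   temp_report_table t;
```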

Surely there must be a better way of doing this?

  • IMO you need to create some kind of locking mechanism. For example, when a function or stored procedure needs to insert data into or read from the temporary table, it should first acquire a lock on that table, then perform the required operation, and finally release the lock (see the sketch after these comments). You can read about the syntax here: https://docs.oracle.com/javadb/10.6.2.1/ref/rrefsqlj40506.html – Bartosz Olchowik Aug 02 '23 at 06:53
  • @BartoszOlchowik So from what I can see, I would use an exclusive lock which would prevent reads on the locked table. If my 2nd SP is attempting to read this table, but it is locked, will it simply wait until the lock is released? Or will there be an error? – David McKinney Aug 02 '23 at 10:27
  • There will be a lock, and it will simply wait until it's released; however, it depends on the client you are using. If your client has a timeout defined, then that timeout will be hit. Bear in mind that this solution is dangerous and may lead to deadlocks (you would need to prevent that somehow). I'm also not sure I've understood you, because at one point you write that the data should be shared between those calls, and at another you want to add a flag that would separate the data (what for?). It's either bad design, or I haven't understood you correctly. I would advise you to study Global Temporary Tables. – Bartosz Olchowik Aug 02 '23 at 11:33
  • @BartoszOlchowik To answer your last question, no, the flag to separate the data is in the existing solution that I'm trying to replace. It sends back both datasets in a single dataset with a flag to indicate for each row which dataset it belongs to. I want to get rid of this and return two separate datasets. Point taken about timeouts and potential deadlocks. Indeed I was wondering if it wasn't potentially better to just leave it as it is, as implementing some solution involving manual table locks seems pretty risky to me. Sounds like it would work if done correctly, but could go wrong. – David McKinney Aug 02 '23 at 12:08
  • To add one more possible solution: you might want to create a materialized view based on that table for each FN/SP. What's more, materialized views become `Invalid Objects` when they are out of date, so you would know exactly whether they had changed or not. Then you could create `instead of` triggers on these materialized views to handle DML operations on the original table. I don't really know where you see a problem in the current solution, or what you are trying to speed up. I'm just trying to be helpful and offer solutions that I would take into account; however, your current solution doesn't sound bad. – Bartosz Olchowik Aug 02 '23 at 13:09
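
For illustration, here is a minimal sketch of the locking pattern described in these comments, with invented table, column, and procedure names. Note that in Oracle the table lock is released by ending the transaction, and a plain SELECT never waits on a table lock, so the reading procedure would need to request its own lock (e.g. LOCK TABLE ... IN SHARE MODE) before querying if it is actually to wait for the insert to finish:

```sql
-- Sketch only: names are placeholders, not from the actual report
CREATE OR REPLACE PROCEDURE fill_temp_table (p_param IN VARCHAR2) IS
BEGIN
  -- 1. Acquire an exclusive lock; other sessions issuing LOCK TABLE on it will queue here
  LOCK TABLE tmp_report_data IN EXCLUSIVE MODE;

  -- 2. Perform the required operation while holding the lock
  INSERT INTO tmp_report_data (col1, col2)
  SELECT s.col1, s.col2
  FROM   source_data s
  WHERE  s.filter_col = p_param;

  -- 3. "Release" the lock: in Oracle this happens when the transaction ends
  COMMIT;
END fill_temp_table;
/
```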

1 Answer


I am making a few assumptions here:

  • You are using BO Live Office
  • You are not using an Oracle GLOBAL TEMPORARY TABLE, but just a regular table.
  • Your queries are passing parameters to the stored proc, which are used to populate the temp table; the parameters are not used in pipelining the temp table results back to BO.

Incidentally, how are you avoiding a collision if two people run the queries at the same time?

If I'm correct in the above, then I'd suggest a VBA solution. I haven't worked with LO a great deal, and not at all with its VBA libraries, but I think this should be possible:

Set up a total of four queries:

  • The first one uses your existing SP universe and passes the parameters to the stored proc. The stored proc generates a unique key, populates the temp table using the passed parameters, and includes the unique key on each row. The stored proc returns the key via a pipelined table, and this is retrieved in the VBA script (see the sketch after this list).
  • The next two queries correlate to your existing two functions. VBA passes the unique key as a parameter, and this key is used in querying the temp table. If the functions don't do any PL/SQL magic, then this step could be replaced with two "regular" BO universe queries, just using the unique key as a prompt.
  • The fourth query does nothing but call another SP to delete the rows in the temp table associated with the unique key.
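
A rough sketch of how that first stored proc might look (every object name here is a placeholder, and the insert is wrapped in an autonomous transaction so that it is legal when the function is invoked from a SELECT):

```sql
-- Sketch only: generates a run key, fills the temp table, and pipes the key back to VBA
CREATE OR REPLACE TYPE t_key_tab AS TABLE OF NUMBER;
/

CREATE OR REPLACE FUNCTION fill_and_get_key (
  p_param1 IN VARCHAR2,
  p_param2 IN DATE
) RETURN t_key_tab PIPELINED IS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- allows the INSERT although the function is called from a query
  l_key NUMBER;
BEGIN
  SELECT report_run_seq.NEXTVAL INTO l_key FROM dual;   -- unique key for this run

  INSERT INTO tmp_report_data (run_key, col1, col2)
  SELECT l_key, s.col1, s.col2
  FROM   source_data s
  WHERE  s.filter_col = p_param1
  AND    s.as_of_date <= p_param2;
  COMMIT;

  PIPE ROW (l_key);   -- the single value the VBA script reads and passes to the other queries
  RETURN;
END fill_and_get_key;
/
```

The cleanup SP behind the fourth query would then just be a DELETE FROM tmp_report_data WHERE run_key = :p_key followed by a COMMIT.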

With that said, your existing solution doesn't sound that bad. It duplicates the query, but it avoids the need to synchronize creation and deletion of the data, which is inherently going to involve some complexity.

Bartosz mentioned using a lock. I think you would need a lot of additional logic to make that work. Each of the two procedures would need to know whether it is the one creating the temp table, or if it should wait because the other one is doing it. Once the table is populated and the lock released, you'd then need some way for the two procedures to determine which is the last one to be done with the data, so the temp table can be purged.
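
If you did want to attempt it, a named user lock via DBMS_LOCK could cover the "who does the fill" half of that coordination. This is only a sketch with invented names; it requires EXECUTE on DBMS_LOCK, assumes the fill itself is legal in the calling context (e.g. an autonomous transaction), and it still leaves the purge side unanswered:

```sql
-- Sketch only: both pipelined functions would call this before reading the temp table
CREATE OR REPLACE PROCEDURE ensure_temp_filled (p_param IN VARCHAR2) IS
  c_lock_id CONSTANT INTEGER := 1000001;   -- arbitrary user lock id, agreed by both callers
  l_status  INTEGER;
  l_rows    INTEGER;
BEGIN
  -- Serialise the check-and-fill: whichever session gets the lock first does the insert
  l_status := DBMS_LOCK.REQUEST(id                => c_lock_id,
                                lockmode          => DBMS_LOCK.X_MODE,
                                timeout           => 60,
                                release_on_commit => FALSE);
  IF l_status NOT IN (0, 4) THEN   -- 0 = acquired, 4 = this session already holds it
    RAISE_APPLICATION_ERROR(-20001, 'Could not acquire report fill lock, status ' || l_status);
  END IF;

  SELECT COUNT(*) INTO l_rows FROM tmp_report_data;
  IF l_rows = 0 THEN
    fill_temp_table(p_param);   -- hypothetical procedure that does the actual INSERT
  END IF;

  l_status := DBMS_LOCK.RELEASE(id => c_lock_id);
END ensure_temp_filled;
/
```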

Joe