My case is the following: an external party delivers a huge SQL file with thousands of queries, including updates, inserts, statements with subqueries, and so on. The queries are assembled in third-party software (I think Excel, but I might be mistaken).
As a result, many of these queries tend to fail, while the whole batch needs to succeed without a single error. I'm using the mysql client's source file.sql command to execute the queries, and its tee log.txt command to log the output to a text file.
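Concretely, the session looks roughly like this (the file and log names are just examples):

```
mysql> tee log.txt
Logging to file 'log.txt'
mysql> source file.sql
```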
However, this has proven to be insufficient, as this is the output I'm getting:
```
Query OK, 1 row affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 1 row affected (0.00 sec)
ERROR 1242 (21000): Subquery returns more than 1 row
Query OK, 0 rows affected (0.00 sec)
Query OK, 1 row affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
```
It's kind of like looking for a needle in a haystack: I'd have to count all the preceding log messages to figure out exactly which query failed.
Is there any way to get a log file that includes the queries themselves? Or is there another way to do this efficiently?
- Executing in small batches is not an option. (It would take ages.)
- Executing them all on the commandline isn't either. (The messages fly by so fast it's impossible to read or capture them all.)
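To clarify the second point, by executing on the command line I mean something like this (the user and database names are placeholders):

```
# output scrolls past in the terminal faster than it can be read,
# and a single error is buried among thousands of "Query OK" lines
mysql -u someuser -p somedb < file.sql
```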