
Especially when querying a remote KDB instance, a typo in your query that causes it to take hours instead of seconds is a pain -- since the server is remote, I can't even access it to kill the KDB process.

So I have been wondering: is there some way to specify a maximum execution time?

Edit:

I'm looking for a way to do this from the client side on a per-query basis. Some queries might legitimately take hours, but others shouldn't take more than 10 secs. A hard limit on the server side wouldn't help this.

mchen
  • If your KDB instance is hanging, you can just get the PID for that process and send it a UNIX signal to interrupt it: `>ps -aef | grep ` `>kill -SIGINT ` – WooiKent Lee Apr 11 '14 at 15:02

2 Answers

\T 5

or starting kdb with

q -T 5

would set a maximum timeout of 5 seconds on user queries.

You can send a SIGINT (http://en.wikipedia.org/wiki/Unix_signal), which has the same effect as pressing Ctrl-C and will attempt to stop the command being processed. Note that not all q code supports interrupts.

One way of doing this is to run two q processes, one of which is used just to send Linux interrupt signals.
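A minimal sketch of that second "watchdog" process, assuming both processes run on the same host and the server's PID is known (e.g. captured from `.z.i` when the server started; the PID below is hypothetical):

  / watchdog q process: interrupt the busy server
  serverPid:12345                          / hypothetical PID of the stuck q server
  system "kill -INT ",string serverPid     / same effect as pressing Ctrl-C on the server

Because the signal arrives from outside, this works even when the server is too busy to service a new console or IPC request.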

Ryan Hamilton
  • Thanks, but is there any way to do this from the client side on a per-query basis? Some queries might legitimately take hours, but others shouldn't take more than 10 secs. A hard limit on the server side isn't going to help this. – mchen Apr 11 '14 at 11:29
  • In response to your statement "drop the connection to cancel a query", after dropping the connection I cannot reconnect, presumably because the server is still busy with the previous query. – mchen Apr 11 '14 at 12:21

Try forwarding the query via a q process of your own: set a timeout locally with \T, then issue the synchronous call through handle 0 so that the local limit applies to it. A query that exceeds the timeout is aborted with 'stop:

  q)\T 3
  q)h:hopen `::3000
  q)0(h;({system "sleep 2";x+1};5))
  6
  q)0(h;({system "sleep 4";x+1};5))
  'stop
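The same idea can be wrapped in a small helper so the timeout is chosen per query (a sketch; `qto` and its parameter names are hypothetical, and `system "T ..."` is the programmatic form of `\T`):

  / run query qry synchronously on handle h, limited to t seconds
  / routing the call through handle 0 makes the local \T limit apply to it
  qto:{[h;t;qry] system "T ",string t; 0 (h;qry)}

For example, `qto[h;10;({x+1};5)]` would allow this query 10 seconds before it is interrupted with 'stop.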