
I am trying to force kill (not just close) a q session once my query is done, to save resources on my machine.

This currently works using:

conn.sendAsync("exit 0")

The problem is that if I run another query right after (reopening the connection and running a new query), it might fail because the previous session may still be in the process of being killed, since the call is asynchronous.

Therefore, I am trying to do the same thing with a synchronous query, but when trying:

conn.sendSync("exit 0")

I get:

ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
python-BaseException

Can I specify a timeout such that the q session will be killed automatically after, say, 10 seconds instead, or is there another way to force kill the q session?

My code looks like this:

from qpython import qconnection as qc

# open a connection to the q process listening on port 12345
conn = qc.QConnection(host='localhost', port=12345, timeout=10000)
conn.open()
res = None
try:
    # run the query synchronously, converting temporal types to numpy
    res = conn.sendSync(query, numpy_temporals=True)
except Exception as e:
    print(f'Error running {query}: {e}')
conn.sendSync("exit 0")  # this is the line that raises ConnectionResetError
conn.close()
JejeBelfort

2 Answers

I'd suggest we take a step back and re-evaluate whether it's really the right thing to kill the KDB process after your Python program runs a query. If the program isn't responsible for bringing up the KDB process, it most likely should not bring the process down.

Given that your rationale is saving resources, I expect the process keeps a lot of data in memory and therefore takes time to start up. That is another reason you shouldn't kill it if you need to use it a second time.

Darren Sun
  • I agree with this, if you're so tight on resources that you need to continuously kill and restart the kdb instance then you've bigger problems. Get more RAM, it's cheaper than the amount of effort you're putting in to avoid it. – terrylynch Jun 08 '22 at 13:14

You shouldn't be killing a kdb process you intend to query again. Some suggestions on points in your question:

once my query is done to save resources -> you can manually trigger garbage collection with .Q.gc[] to free up memory, or alternatively (and perhaps better) enable immediate garbage collection with -g 1 on start. Note that if you create large global variables in your query, this memory will not be freed up / returned.

https://code.kx.com/q/ref/dotq/#qgc-garbage-collect

https://code.kx.com/q/basics/syscmds/#g-garbage-collection-mode
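As a minimal sketch of the first suggestion, assuming the conn object from the question is still open after the query has run, the collection could be triggered from Python like this:

# ask the remote q process to run garbage collection instead of killing it;
# .Q.gc[] returns the number of bytes handed back to the OS
freed = conn.sendSync('.Q.gc[]')
print(f'q returned {freed} bytes to the OS')
# alternatively, start the process with immediate garbage collection: q -p 12345 -g 1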

killed automatically after say 10 seconds -> if your intention here is to not allow client queries (such as those from your Python process) to run for more than 10 seconds, you can set a query timeout with -T 10 on start, or while the process is running with \T 10 / system "T 10"

https://code.kx.com/q/basics/cmdline/#-t-timeout
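As a rough sketch of the second suggestion, again assuming the conn object from the question, the timeout can also be set over the existing handle:

# cap client query execution at 10 seconds on the running process
# (equivalent to starting q with: q -p 12345 -T 10)
conn.sendSync('system"T 10"')
# any client query that exceeds the limit is aborted server-side with a 'stop error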

Matt Moore
  • Thank you, just validated your answer. Regarding the second part of the question, I was asking if there is a command to start a q session and stop it after, say, 4 hours if nothing has been run or sent on this session. Something to avoid having stale q sessions. Is there such a command? – JejeBelfort Jun 24 '22 at 03:52
  • Nah, not really. You could probably implement something by modifying `.z.pg` and `.z.ts`, but I wouldn't recommend it given you couldn't start the process that way. If you are on Linux, why not have a crontab to start/stop your q session every day, say starting at 9am and stopping at 6pm. – Matt Moore Jun 24 '22 at 09:25
  • Going by `[WinError 10054]`, you may be on Windows. If your q session runs on a Windows machine you could use the Windows Task Scheduler. – Matt Moore Jun 24 '22 at 09:27
  • I am actually running it on Linux, so yes I will think about the crontab. Thank you sir – JejeBelfort Jun 27 '22 at 01:29
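
For reference, the crontab suggested in the comments above might look roughly like this; the q binary path, port, log file and pkill pattern are all assumptions to adapt to your setup:

# start a q process on port 12345 at 09:00 on weekdays
0 9 * * 1-5 nohup /path/to/q -p 12345 </dev/null >>/tmp/q12345.log 2>&1 &
# stop it again at 18:00 (assumes no other process on the box matches this pattern)
0 18 * * 1-5 pkill -f 'q -p 12345'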