I see a lot of info regarding serializing tables in kdb, but is there a suggested best practice for getting functions to persist on a kdb server? At present, I am loading a number of .q files in my startup q.q on my local machine, and I have duplicated those .q files on the server for when it reboots.
As I edit, add, and change functions, I do so on my local dev machine across a number of .q files, all referencing the same context. I then push them one by one to the server using code similar to the below. This works well for now, but it means pushing the functions over the handle, manually copying each .q file to the server, and then manually editing the q.q file there.
\p YYYY                                      / placeholder port
h:hopen `:XXX.XXX.XX.XX:YYYY;                / handle to the server (placeholder host:port)
funcs: raze read0[`$"./funcs/funcsAAA.q"];   / read each file as one long string
funcs,: raze read0[`$"./funcs/funcsBBB.q"];  / raze drops line breaks, so each definition must sit on one line
funcs,: raze read0[`$"./funcs/funcsCCC.q"];
h funcs;                                     / evaluate the definitions on the server
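To cut down on the repetition, I've also considered folding the read-and-send into a small helper (the name push is mine, purely for illustration):

/ read a .q file as a single string and send it over the handle
push:{[h;file] h raze read0 `$file};
push[h] each ("./funcs/funcsAAA.q";"./funcs/funcsBBB.q";"./funcs/funcsCCC.q");

This sends each file separately, which matches how I push them one by one today.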
I'd like to serialize them on the server (and, conversely, restore them when the system reboots). I've dabbled with this on my local machine, and it seems to work when I put these lines in my startup q.q:
`.AAA set get `:/q/AAAfuncs
`.BBB set get `:/q/BBBfuncs
`.CCC set get `:/q/CCCfuncs
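For completeness, the save side I have in mind is just the mirror image, run on the server after a push (paths and context names are the same placeholders as above):

/ snapshot each function context to disk so q.q can restore it on reboot
`:/q/AAAfuncs set value `.AAA;
`:/q/BBBfuncs set value `.BBB;
`:/q/CCCfuncs set value `.CCC;

Sending these over the handle from my dev box, e.g. h "`:/q/AAAfuncs set value `.AAA", would let me snapshot without logging on to the server.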
My questions are:
- Is there a more elegant solution to serialize and call the functions on the server?
- Is there a clever way to edit the q.q on the server to add the `.AAA set get `:/q/AAAfuncs lines? (A sketch of what I mean follows this list.)
- Am I thinking about this correctly? I recognize this could be dangerous in a prod environment.
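By "edit the q.q on the server" I mean something like appending the restore lines to it remotely; a rough sketch (assuming q.q lives at /q/q.q, which is just a guess at the path):

/ append the restore lines to the server's q.q (run on the server, or send via the handle)
newlines:("`.AAA set get `:/q/AAAfuncs";"`.BBB set get `:/q/BBBfuncs";"`.CCC set get `:/q/CCCfuncs");
`:/q/q.q 0: (read0 `:/q/q.q),newlines;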
References: KDB Workspace Organization