I know about nbconvert and use it to generate static HTML or ipynb files with the result output included. However, I want to be able to generate a notebook that stays attached to a kernel I already have running, so that I can do further data exploration after all of the template cells have been run. Is there a way to do that?
-
I understand why you might want to do this, but it seems fragile to me. If you can persist the result data generated by the notebook, then you can load it into a new kernel and explore there. This would allow more than one exploratory session over a generated dataset, which a persistent kernel connection wouldn't allow. – cco Nov 09 '16 at 01:21
-
That's an interesting suggestion. The thing I don't like is that it forces me to account for all the variables that I want to save from the first run and write code to save and load them. If I repeatedly do this pattern, I don't want to do that - I just want the equivalent of opening a particular notebook in a particular kernel and clicking 'run all'. Presumably doing that is just a sequence of HTTP calls to the server, so my question is: is there a programmatic way to do the same thing? – unsorted Nov 15 '16 at 20:22
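(It is indeed a sequence of HTTP calls. A minimal sketch of the kernel-management part of the notebook server's REST API, assuming a local server on port 8888 and its auth token; actually running code afterwards goes over the /api/kernels/<id>/channels websocket:)

import requests

BASE = 'http://localhost:8888'                      # assumed server address
HEADERS = {'Authorization': 'token <your-token>'}   # token from `jupyter notebook list`

# list the kernels the server currently has running
print(requests.get(BASE + '/api/kernels', headers=HEADERS).json())

# ask the server to start a new kernel for us
kernel = requests.post(BASE + '/api/kernels', headers=HEADERS,
                       json={'name': 'python3'}).json()
print(kernel['id'])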
-
If you want to preserve the state of your exploration, then the `%save` and `%load` magic functions might be what you want. I was thinking that the reason you wanted to attach to a running kernel was that the state was expensive to compute. `%save` lets you save the operations you've done, in the order you did them. – cco Nov 16 '16 at 01:43
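(A minimal sketch of that %save/%load workflow; the session name and line range are placeholders:)

%save mysession 1-30   # writes input lines 1-30 of the current session to mysession.py

# later, in a fresh kernel:
%load mysession.py     # pastes the saved code into a cell, ready to re-run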
3 Answers
Apparently you can do this through the Python API. I haven't tried it myself, but for anyone looking for a solution, this PR has an example in the comments:
from nbconvert.preprocessors.execute import executenb
from nbformat import read as nbread
from jupyter_client.manager import start_new_kernel

# load the notebook and pick the kernel it was written for
nb = nbread('parsee.ipynb', as_version=4)
kernel_name = nb.metadata.get('kernelspec', {}).get('name', 'python')

# start a kernel (and a client for it) that outlives the execution
km, kc = start_new_kernel(kernel_name=kernel_name)

# run all cells against that kernel; passing km= means the
# externally started kernel stays alive after execution
executenb(nb, km=km)

kc.execute_interactive('a')  # a is a variable defined in parsee.ipynb with 'a = 1'
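If you want a fully interactive prompt attached to that same kernel afterwards, a minimal sketch (not part of the PR example; it relies on the kernel manager's connection file and the standard jupyter console --existing flag):

# the kernel manager knows where its connection file lives
print(km.connection_file)  # e.g. .../kernel-<id>.json

# then, from a shell, attach a live console to that same kernel:
#   jupyter console --existing <path printed above>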

I'm not quite sure about your purpose, but my general solutions are these.
To execute the notebook from the command line and watch the execution as it happens:
jupyter nbconvert --debug --allow-errors --stdout --execute test.ipynb
This runs through all cells in debug mode and continues even when an exception happens, but I can't see the results until the end of the execution.
To write the results to an HTML file instead, and then open that file to see them (I find this more convenient):
jupyter nbconvert --execute --allow-errors --stdout test.ipynb >> result.html 2>&1
If you open result.html, all the errors and results will be shown on the page.
I would like to learn other answers/solutions from you all. Thank you.

-
Right, both of those methods will generate static output. But my goal is to have the notebook 'live' and attached to an existing kernel with all of the variables still defined, for further data exploration. – unsorted Nov 15 '16 at 20:14
If I understood correctly, you wish to open a Python console and connect a Jupyter notebook to that kernel instance?
Perhaps a solution would be to edit the Jupyter scripts themselves and run the server in a separate thread/background task, implementing some sort of connection between the threads so you could keep working in the Jupyter console? Currently that's impossible because the main thread is running the server.
This would require some work and I don't have a solution as-is, but I will look into it and maybe edit this answer if I can make it work.
But it seems that the easiest solution is simply to add another cell to the notebook and do whatever you wish to do there. Is there a reason for not doing that?
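For the connect-to-an-existing-kernel part, a minimal sketch with jupyter_client (version 5 or later; it assumes a kernel is already running, so that a kernel-*.json connection file exists):

from jupyter_client import find_connection_file
from jupyter_client.blocking import BlockingKernelClient

# locate the most recently started kernel's connection file
cf = find_connection_file()

kc = BlockingKernelClient(connection_file=cf)
kc.load_connection_file()
kc.start_channels()

# run code inside that kernel and stream the output back here
kc.execute_interactive('print("attached to the running kernel")')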

-
No, I don't think that's the question. To clarify a bit: I have a Jupyter notebook server running, and a notebook that hasn't been executed yet. I want a way to programmatically tell the server to execute the notebook and keep it running, attached to that server. – unsorted Nov 15 '16 at 20:24
-
I still think that's doable if you meddle with the Jupyter scripts themselves. – Piotr Kamoda Nov 21 '16 at 09:05
-
What I meant is that you probably can execute the notebook programmatically on opening; then it should work. You just have to manage the notebook. @unsorted, what's your end goal? It seems like you are trying to communicate between notebooks? – Piotr Kamoda Nov 21 '16 at 16:09