There are a lot of pieces here, so I don't expect anyone to answer this without seeing every config. But maybe people can tell me where to look for diagnostics, or how the major pieces fit together, so I can work out what I'm doing wrong.
I have a Tencent CVM instance running Ubuntu Server.
I also have a domain name pointing to the IP address of that server.
I have an nginx service in front of it, passing requests for example.com/parse through to the Parse Server process listening on port 1337.
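To be concrete, the nginx piece is just a reverse proxy in front of that port. This is a sketch rather than my exact file, and the 443/SSL listener is an assumption based on the https:// SERVER_URL below:

```
# Sketch of the reverse-proxy block: nginx answers for example.com
# and forwards /parse to the local Parse Server process on 1337.
server {
    listen 443 ssl;
    server_name example.com;

    # ssl_certificate / ssl_certificate_key lines omitted

    location /parse {
        proxy_pass http://localhost:1337/parse;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```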
I have MongoDB running inside a Docker container, listening on port 27017.
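The container was started with a plain docker run, roughly like this (a sketch; the container name is arbitrary, and note that no volume is attached, which turns out to matter later):

```
# MongoDB in Docker: publish 27017 to the host so the Parse Server
# process on the host can reach it at localhost:27017.
# No named volume here, so the data only lives in the container's own layer.
docker run -d --name mongo -p 27017:27017 mongo
```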
Inside index.js, I have the databaseURI set to 'mongodb://localhost:27017/dev' and the SERVER_URL set to 'https://example.com/parse'.
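The relevant part of index.js is just the ParseServer constructor options, along these lines (a sketch following the parse-server-example layout; the appId and masterKey values are placeholders, not my real ones):

```
// Sketch of the ParseServer options in index.js.
var ParseServer = require('parse-server').ParseServer;

var api = new ParseServer({
  databaseURI: 'mongodb://localhost:27017/dev',
  appId: 'myAppId',          // placeholder
  masterKey: 'myMasterKey',  // placeholder
  serverURL: 'https://example.com/parse'
});
```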
When it's time to deploy the Parse Server instance, I open a screen session inside my current SSH session, run npm start, detach the screen, and then kill the SSH session by closing the terminal.
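Concretely, the deploy steps are nothing more than this (the session name is arbitrary):

```
# Start a named screen session and launch Parse Server inside it.
screen -S parse
npm start
# ... Ctrl-a d to detach ...
# then close the terminal, ending the SSH session
```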
Finally, I run Parse Dashboard on my local machine with serverURL 'https://example.com/parse'.
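That's the standalone dashboard running locally and pointed at the remote server, something like this (the appId, masterKey, and app name are placeholders):

```
# Run the standalone Parse Dashboard locally, pointed at the remote server.
parse-dashboard --dev --appId myAppId --masterKey myMasterKey \
  --serverURL "https://example.com/parse" --appName MyApp
```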
And everything works great. I add items to the database via the test page that comes with the Parse Server repo. I add items via Cloud Code calls from Python. I add and delete classes and objects via the dashboard. The behavior is exactly what I'd expect.
And it continues that way for anywhere from 12 to 72 hours.
But after a few days of normal operation, I'll open Parse Dashboard and find that everything is gone. I can start adding things again and it all works, but nothing persists for more than about 72 hours.
There's a lot I don't understand about the anatomy of this, so I figured that maybe using screen and then detaching and closing the terminal was killing some process, and that was my mistake. But when I run top, I can see everything: npm start is running, mongo is running. When I run docker ps to check the container, it's still there and looks fine. The nginx service is still running.
Can anyone suggest a way for me to start diagnosing the problem? I don't think it's any of the configs, because if it were, it wouldn't work fine for so long. I suspect the way I'm deploying it is causing something to reset, or causing some process that's supposed to stay running to die.
Edit: For posterity, I'll summarize the solution below in case you've come here struggling with the same issue. @Joe pointed me to db.setProfilingLevel(); level 2 with the slowms=0 option gives maximum verbosity, and those logs are written to the file indicated in mongodb.conf. Docker containers don't persist data on their own, so you should use a named volume: create it with docker volume create <volume_name>, and attach it when you create the container by adding a -v flag like -v <volume_name>:<path inside the container>. And finally, I was only running MongoDB in a container because that's the workflow I saw in tutorials; it was solving a problem I didn't have, and it was simpler to run MongoDB as a service without a container.
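For concreteness, the volume setup is just this (the names are placeholders; /data/db is the data directory used by the official mongo image):

```
# Create a named volume and mount it at MongoDB's data directory,
# so the data survives the container being removed or recreated.
docker volume create mongo_data
docker run -d --name mongo -p 27017:27017 -v mongo_data:/data/db mongo
```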