
There are a lot of pieces here, so I don't expect anyone to be able to answer this without seeing every configuration. But maybe people can tell me how to look for diagnostics, or how the major pieces fit together, so that I can understand what I'm doing wrong.

I have a Tencent CVM instance running Ubuntu Server.

I also have a domain name pointing to the IP address of that server.

I start an nginx service that takes requests for https://example.com/parse and passes them through to port 1337.
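
It's the usual reverse-proxy arrangement; a sketch of the kind of server block I mean (the file path, server_name, and certificate paths here are placeholders, not my exact config):

```nginx
# /etc/nginx/sites-available/parse -- placeholder path
server {
    listen 443 ssl;
    server_name example.com;

    # certificate paths are placeholders
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # forward https://example.com/parse to the Parse Server process on port 1337
    location /parse {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:1337/parse;
    }
}
```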

I have MongoDB running inside a Docker container, listening on port 27017.
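
The container is started in the usual way, publishing the port to the host (image tag and container name are just examples):

```bash
# publish MongoDB's port so Parse Server on the host can reach it at localhost:27017
docker run -d --name mongo -p 27017:27017 mongo:5
```

(As the edit at the bottom explains, this turned out to be the step that mattered, because nothing durable is attached to the container here.)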

Inside index.js, I have databaseURI set to 'mongodb://localhost:27017/dev' and SERVER_URL set to 'https://example.com/parse'.
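
index.js is essentially the stock parse-server-example file; a trimmed sketch of the relevant part (appId and masterKey are placeholders, not the real values):

```javascript
// trimmed sketch of index.js -- appId/masterKey values are placeholders
const express = require('express');
const { ParseServer } = require('parse-server');

const api = new ParseServer({
  databaseURI: process.env.DATABASE_URI || 'mongodb://localhost:27017/dev',
  appId: process.env.APP_ID || 'myAppId',
  masterKey: process.env.MASTER_KEY || 'myMasterKey',
  serverURL: process.env.SERVER_URL || 'https://example.com/parse',
});

const app = express();
app.use('/parse', api);   // mounted at /parse, matching the nginx location above
app.listen(1337, () => console.log('parse-server running on port 1337'));
```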

When it's time to deploy the Parse Server instance, I open screen inside my current SSH session, run npm start, detach the screen, and then kill the SSH session by closing the terminal.
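
The deployment step is literally just this (the session name is arbitrary):

```bash
screen -S parse        # open a named screen session over SSH
npm start              # start Parse Server inside it
# Ctrl-A then D detaches; the session (and npm) keeps running
# after the terminal is closed. Reattach later with:
screen -r parse
```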

Finally, I run Parse Dashboard on my local machine with serverURL set to 'https://example.com/parse'.
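
I launch it from the command line roughly like this (appId, masterKey, and appName are placeholders for the real values):

```bash
npm install -g parse-dashboard

parse-dashboard \
  --serverURL "https://example.com/parse" \
  --appId myAppId \
  --masterKey myMasterKey \
  --appName MyApp
```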

And everything works great. I add items to the database via the test page that comes with the Parse Server repo. I add items via Cloud Code calls from Python. I add and delete classes and objects via the dashboard. The behavior is exactly what I expect.

And it continues that way for anywhere between 12 and 72 hours.

But after a few days of normal operation, I'll open Parse Dashboard and find that everything is gone. I can start adding things again and everything works correctly, but nothing persists for more than about 72 hours.

There's a lot I don't understand about the anatomy of this, so I figured maybe using screen and then detaching and closing the terminal was causing some process to get killed, and that was my mistake. But when I run top, I can see everything: npm start is running, mongo is running. When I run docker ps to check the container, it's still there and doing fine. The nginx service is still running.

Can anyone suggest a way for me to start diagnosing the problem? I don't think it's any of the configs, because if that were the problem, it wouldn't work fine for so long. I suspect the way I'm deploying it is causing something to reset, or causing some process that's supposed to be running continuously to die.

Edit: For posterity, I'll summarize the discussion below as a solution in case you've come here struggling with the same issue. @Joe pointed me to db.setProfilingLevel(), level 2 with slowms=0 for maximum verbosity; those operations are written to the log file indicated in mongod.conf. Docker doesn't persist a container's data anywhere durable by default (it lives in the container, not on the host), so you should use a named volume: create it with docker volume create <volume_name>, and attach it when you create the container with the -v flag, e.g. -v <volume_name>:/data/db (the data directory of the official MongoDB image). And finally, I was running MongoDB in a container because that's the workflow I saw in tutorials, but it was solving a problem I didn't have, and it was simpler to start MongoDB as a service without a container.
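
Concretely, the commands involved look like this (container name, volume name, and image tag are examples; I'm assuming the mongosh shell and the official image's /data/db data directory):

```bash
# create a named volume and attach it to the MongoDB data directory,
# so the data survives the container being removed or recreated
docker volume create mongo_data
docker run -d --name mongo -p 27017:27017 -v mongo_data:/data/db mongo:5

# maximum-verbosity profiling as suggested in the comments:
# level 2 profiles every operation, and slowms=0 also writes them all
# to mongod's log
mongosh "mongodb://localhost:27017/dev" --eval 'db.setProfilingLevel(2, { slowms: 0 })'
```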

D. Kupra
  • This sounds like someone discovered your database server was not protected by a username/password and helpfully deleted the data before it could be compromised. Was there a ransom message left in a different database? – Joe Nov 20 '21 at 07:16
  • No. And I'm still in the environment-building phase. So the only things that ever get onto that database are a test user, a few test pushes from the Parse Server repo's test page, and then some various test strings and files from a mobile app. Nothing valuable, sensitive, or organized. – D. Kupra Nov 20 '21 at 07:21
  • But as long as we're on the subject, is it *not* protected by a username and password? I need my username and password to initiate the ssh connection. And from there I open the ports I mentioned for their specific purposes. Is that bad practice? – D. Kupra Nov 20 '21 at 07:23
  • You may want to enable profiling with a really small (maybe even 0) slowms setting. The logs will be huge, but at least then you can check the log to see what happened. – Joe Nov 20 '21 at 07:29
  • Ok, I took your advice and set the profiler at level 2, which the docs say means collect everything. But according to the docs, the logs are stored in a collection called system.profile. When this resetting episode happens again 72 hours from now, I expect that collection to be gone along with the rest of the collections. – D. Kupra Nov 20 '21 at 08:03
  • My point was to set the slowms to 0 so they would be written to the _log_ – Joe Nov 20 '21 at 17:56
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/239421/discussion-between-d-kupra-and-joe). – D. Kupra Nov 21 '21 at 03:59

0 Answers