
I am working on setting up CI for a product. I am using an in-house GitLab server, with GitLab CI managing the tests and deployment of the project.

Deploying the project is not as simple as syncing the files. It needs to be built, checked for inconsistencies, and rolled back if the update does not succeed.
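
To make that concrete, a deploy step along those lines might look something like the sketch below. The release layout, the `make build`/`make check` commands, and the symlink-switching scheme are all assumptions for illustration, not the actual setup.

```python
import os
import subprocess

# Hypothetical layout: each release is built into its own directory and a
# "current" symlink is flipped to point at it, so a failed update can be
# rolled back by pointing the symlink at the previous release again.
RELEASES_DIR = "/srv/app/releases"
CURRENT_LINK = "/srv/app/current"


def _switch(target: str) -> None:
    """Point the 'current' symlink at target, replacing it atomically."""
    tmp = CURRENT_LINK + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(target, tmp)
    os.replace(tmp, CURRENT_LINK)


def deploy(new_release: str) -> bool:
    new_path = os.path.join(RELEASES_DIR, new_release)
    previous = os.path.realpath(CURRENT_LINK) if os.path.exists(CURRENT_LINK) else None

    try:
        # Build and sanity-check the new release before it goes live.
        subprocess.run(["make", "build"], cwd=new_path, check=True)
        subprocess.run(["make", "check"], cwd=new_path, check=True)
        # Switch "current" over to the new release.
        _switch(new_path)
        return True
    except subprocess.CalledProcessError:
        # Roll back by re-pointing at the previous release, if there was one.
        if previous:
            _switch(previous)
        return False
```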

My worry is that the GitLab runner instance, which runs the tests and manages the integration of new code, would have access to the production instance.

My first idea was for the runner to simply send an HTTP POST request to the production server, asking it to update itself and then report back whether it succeeded or failed. That way, the runner has no access to the file system on the production server, and the production server can decide whether or not to respond to the request. The problem with this approach is that the POST request will most likely time out waiting for the response, since the build is not instantaneous.
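
On the production side, that flow might look roughly like this; Flask, the `/update` endpoint, and the `deploy.sh` script are assumptions for illustration only.

```python
import subprocess

from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/update", methods=["POST"])
def update():
    # The production server decides for itself whether to act on the request
    # (e.g. based on a shared secret or the source of the request).
    # "deploy.sh" stands in for whatever build/check/rollback script it runs.
    result = subprocess.run(["/srv/app/deploy.sh"])

    # The runner's POST request sits open for the whole build, which is
    # exactly where the timeout problem described above comes in.
    if result.returncode == 0:
        return jsonify({"status": "success"}), 200
    return jsonify({"status": "failed"}), 500
```

A common way around the timeout is to acknowledge the request immediately and report the result asynchronously (for example by POSTing back to a callback URL once the build finishes), but that adds its own plumbing on both sides.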

My second approach would be to give the GitLab CI runner password-less SSH login to the production server as a restricted user, whose only purpose is to build and run the project, so that the runner could simply use a Fabric script to update the server and have failures reported directly. This would be easy to implement and would work well with the GitLab CI system.
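
A minimal sketch of that idea, assuming Fabric 2.x; the host name, the `deploy` user, and the remote update script are placeholders, not the real setup.

```python
from fabric import Connection, task

# Placeholders for the real production setup.
PROD_HOST = "prod.example.com"
DEPLOY_USER = "deploy"  # restricted user whose only job is to build and run the project


@task
def deploy(ctx):
    conn = Connection(host=PROD_HOST, user=DEPLOY_USER)

    # Run the (hypothetical) update script on the production server.
    # warn=True returns a Result instead of raising, so the failure can be
    # reported back to the GitLab CI job explicitly.
    result = conn.run("/srv/app/deploy.sh", warn=True)

    if result.failed:
        raise SystemExit(f"Deploy failed:\n{result.stderr}")
    print("Deploy succeeded")
```

The CI job would then just call something like `fab deploy`; a non-zero exit marks the job as failed, which is how the failure gets reported back to GitLab CI.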

Update: I tried the Fabric approach, but was halted when the runner did not seem to want to log in. So what are your ideas? Some Docker magic?

