
Suppose I want to move my current acceptance-test CI environment to Docker, so I can benefit from performance improvements and also quickly set up multiple clones for slow acceptance tests.

I would have a lot of services. The easy ones would be Postgres, MongoDB, Redis and such, which are updated rarely.

However, what would I do if my own product consists of many services as well? With 10-20 services that all need to work together for tests, is it even feasible to handle this with Docker, i.e., how can CI efficiently control so many containers automatically AND clone them to run acceptance tests in parallel?

Also, how would I automatically update the containers easily for the CI? Would the CI simply need to rebuild every container at the start of every run with the HEAD of every service branch? Or would the CI run git pull and some update/migrate command on every service?
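For example, I imagine the rebuild approach would look roughly like this (service names are made up):

    # Rebuild-at-HEAD approach, run at the start of every CI run:
    for svc in service-a service-b service-c; do    # ...up to 10-20 services
        git -C "$svc" pull origin master            # fetch the branch HEAD
        docker build -t "myproduct/$svc:ci" "$svc"  # rebuild the service image
    done

But doing that for every service on every run seems expensive, hence the question.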

With VMs it's easy to control these services, but I would like to be convinced that Docker is as good or better for this as well.

user1047833

3 Answers


Drone is a Docker-based open-source CI tool, also available as an online service: https://drone.io

Generally it runs the build and tests in Docker containers, and removes all containers after the build. You just need to provide a file named .drone.yml, with configuration similar to .travis.yml, to configure your build.

It will manage your services, like databases and caches, as linked containers.
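For example, a minimal .drone.yml could be written like this (the build image and commands are placeholders, and the exact keys may differ between Drone versions):

    # Sketch of a .drone.yml, written here from the shell; the keys
    # (image, script, services) follow the early Drone format.
    cat > .drone.yml <<'EOF'
    image: ruby2.0
    script:
      - bundle install
      - bundle exec rake acceptance
    services:
      - postgres
      - redis
    EOF

The entries under services become linked containers that your build can talk to.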

For your build environment, you can use existing Docker images as templates for your dependencies.

So far, it supports github.com and GitLab. For your own CI system, you can use the Drone CLI alone or its web interface.

shawnzhu
  • Thank you. While I'm grateful that you introduced me to this interesting technology, it doesn't really answer the question. – user1047833 May 22 '14 at 07:39

I'm in the same position as you and have recently gotten this all working to my liking.

First of all, while Docker is generally intended to run a single process per container, for testing I've found it works better for the container to run all the services it needs. There is some duplication in going this route, but you don't have to worry about shared services like Mongo or PostgreSQL. This can be accomplished with something like Supervisor: http://docs.docker.com/articles/using_supervisord/

The idea is to configure supervisor to start all necessary services inside the container, so they are completely isolated from other containers. In my environment, I have Mongo, Xvfb, Chrome and Firefox all running in a single container. So really, you are still running a single process (supervisor), but it starts many others.
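As a sketch, the supervisor config might look like this (program paths and flags are placeholders for my setup, adjust to yours):

    # Hypothetical supervisord config for the test container,
    # written here from the shell.
    cat > /etc/supervisor/conf.d/test-services.conf <<'EOF'
    [supervisord]
    nodaemon=true        ; keep supervisord in the foreground

    [program:mongod]
    command=/usr/bin/mongod --smallfiles

    [program:xvfb]
    command=/usr/bin/Xvfb :99 -screen 0 1280x1024x24
    EOF

The container's command then just runs supervisord in the foreground, and it brings up everything else.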

As for adding repositories to your container, I just have the host machine check out the code, and when I run docker I use the -v flag to mount the repo into the container. This way you don't need to rebuild the container each time. I build containers nightly with the latest code, so all necessary gems are already present for a faster 'gem install' at testing time.
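In practice that looks something like this (paths and image name are mine, treat them as placeholders):

    # Host checks out the code; the container mounts it, so no
    # image rebuild is needed per test run.
    git -C /ci/myapp pull
    docker run --rm -v /ci/myapp:/myapp myproduct/test-base:nightly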

Lastly, I have a script as the entrypoint of the container that lets me pass in which tests I want to run.
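A sketch of such an entrypoint, assuming a Ruby/RSpec suite like mine (names are placeholders):

    #!/bin/bash
    # Hypothetical entrypoint: bring up the supervised services,
    # then run whatever tests were passed as arguments.
    set -e
    supervisord -c /etc/supervisor/supervisord.conf &
    sleep 5                          # crude wait for services to come up
    cd /myapp
    exec bundle exec rspec "$@"      # e.g. spec/acceptance/login_spec.rb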

Jenkins then just runs the docker commands and passes in the tests to run. These can be run in parallel, sequentially, or any other way you like. I'm currently looking into having these tests run on slave Jenkins instances in an auto-scaling group in AWS.
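Running groups of tests in parallel then reduces to starting several containers at once, e.g.:

    # Hypothetical: one container per test group, all started at once.
    for group in spec/acceptance/group_*; do
        docker run --rm -v /ci/myapp:/myapp \
            myproduct/test-base:nightly "$group" &
    done
    wait    # block until every container has finished

Collecting per-group exit codes takes a little more bookkeeping, but this is the core of it.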

Hope that helps.

pharsicle

I recommend using the Jenkins Docker plugin. Though it is new, it is starting to expose the power of Docker inside Jenkins, and the configuration is well documented there. (Let me know if you have problems.)

The strategy I plan to use:

  • Create different app images to serve the different services like Postgres, MongoDB, Redis and such. Since they are rarely updated, they will be configured globally as "cloud" templates in advance, and each template will have a label to indicate its service (a sketch of building these images follows below).
  • In each Jenkins job, the required image is selected as the slave node (using that label as the name).

When the job is triggered, it will automatically start the Docker container as a slave within seconds.
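Preparing those images could be as simple as the following sketch (names are placeholders); the label-to-image mapping itself is then configured in the plugin's cloud settings, not on the command line:

    # Hypothetical: build and push one slave image per service; the
    # Jenkins Docker plugin cloud config maps a label to each image.
    for svc in postgres mongodb redis; do
        docker build -t "ci-slaves/$svc" "images/$svc"
        docker push "ci-slaves/$svc"
    done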

It should work for you.

BTW: At the time I answered (May 2014), the plugin was not yet mature, but it is the right direction.

Larry Cai
  • Thanks, I will take a look. It might be exactly what I am looking for, if it allows me to easily create a lot of slave nodes in a single job and later destroy them. I probably want to create Postgres, Mongo, etc. for each build as well, so I can run the tests in parallel. – user1047833 May 25 '14 at 14:12