
I have a series of Specs2 test files, each in the following form:

class NetworkToolsIT extends PlaySpecification {
  sequential

  val databaseName = "xxxx"
  val addConf = Map(...)
  val application = FakeApplication(additionalConfiguration = addConf)
  val server = TestServer(port = 8888, application)

  step(server.start())
  step(resetDB(databaseName))

  "My test" should {
    "test 1" in {
      ...
    }

    "test 2" in {
      ...
    }
  }

  step(server.stop())
}

My problem is that if I run each Specs2 file separately, they all succeed; however, when I run them all together (with `sbt test`), they fail.

This issue is related to the fact that I'm using singleton objects as DAOs (as explained in this post).

I would like to know if there is a way to explicitly destroy a FakeApplication once the test is finished, so that different test files are executed as if they were run separately.

I tried adding this to my project configuration, but it didn't work.

parallelExecution in Test := false

I also tried adding this at the end of each test file:

step(play.api.Play.stop())

but it didn't work either.

P.S. I'm using Play! 2.3.7

tano

1 Answer


You have three options:

Use the built-in WithServer scope:

class NetworkToolsIT extends PlaySpecification {

    sequential

    val serverPort = 8888
    val databaseName = "xxxx"
    val addConf = Map(...)

    step(resetDB(databaseName))

    "My test" should {
        "test 1" in new WithServer(app = FakeApplication(additionalConfiguration = addConf), port = serverPort) {
            ...
        }

        "test 2" in new WithServer(app = FakeApplication(additionalConfiguration = addConf), port = serverPort) {
            ...
        }
    }
}

So Play will handle server start/stop automatically.

Use the specs2 BeforeAfterAll trait.

This is recommended if you want just one TestServer for the whole suite:

class NetworkToolsIT extends PlaySpecification with BeforeAfterAll {

    val serverPort = 8888
    val databaseName = "xxxx"
    val addConf = Map(...)

    val application = FakeApplication(additionalConfiguration = addConf)
    val server = TestServer(port = serverPort, application)

    override def beforeAll = {
      resetDB(databaseName)
      server.start()
    }

    override def afterAll = {
      server.stop()
    }

    "My test" should {
        "test 1" in {
            ...
        }

        "test 2" in {
            ...
        }
    }
}

I'm assuming here that you don't have to reset the database for each spec, so it can be done just once for the suite. Also, you won't need `sequential` here, since `beforeAll`/`afterAll` already run before and after all the examples.

Use an ephemeral port for your TestServer:

This is based on this comment made by James Roper:

When you use a port number of 0, that says to the socket API "choose a free ephemeral port". This is the normal case when you make a client connection - you specify the remote port, but you say that the local port is 0, and then the OS assigns a free port (which for TCP is defined by any port that isn't already being used to talk to that remote ip/port) to use for that connection. But, you can also use it when binding a server socket. After you've bound the server socket, you can then ask the socket API what port it's bound to.
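The mechanism described above can be seen with a plain `java.net.ServerSocket`, independent of Play (a minimal sketch for illustration only):

```scala
import java.net.ServerSocket

// Binding to port 0 asks the OS to pick a free ephemeral port.
val socket = new ServerSocket(0)
// After binding, the socket API reports which port was actually assigned.
val assigned = socket.getLocalPort
println(s"OS assigned port $assigned")
socket.close()
```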

So, you can change all of your tests to instantiate TestServer using port 0, like this:

class NetworkToolsIT extends PlaySpecification with BeforeAfterAll {

    val serverPort = 0
    val databaseName = "xxxx"
    val addConf = Map(...)

    val application = FakeApplication(additionalConfiguration = addConf)
    val server = TestServer(port = serverPort, application)

    override def beforeAll = {
      resetDB(databaseName)
      server.start()
    }

    override def afterAll = {
      server.stop()
    }

    "My test" should {
        "test 1" in {
            val assignedPort = server.port
            ...
        }

        "test 2" in {
            val assignedPort = server.port
            ...
        }
    }
}

This way, all your tests can run in parallel without port conflicts.
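As a side note, another route sometimes used to isolate singleton state is forking each suite into its own JVM via sbt's `testGrouping`. This is an untested sketch for this project (sbt 0.13 syntax, matching the `parallelExecution in Test` setting from the question); it is slower, but guarantees a fresh classloader per suite:

```scala
// build.sbt: run every test suite in its own forked JVM so that
// singleton DAO objects cannot leak state between suites.
fork in Test := true

testGrouping in Test := (definedTests in Test).value.map { test =>
  Tests.Group(test.name, Seq(test), Tests.SubProcess(ForkOptions()))
}
```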

marcospereira
  • I'm sure the first solution doesn't solve my issue since it is the way I initially implemented my tests, and I used to have the same problem. I abandoned that approach in order to reduce the test execution time. – tano Feb 17 '16 at 14:59
  • `BeforeAfterAll` inserts one `Step` before all the examples and one `Step` after all of them, that is exactly what I explicitly did with `step(server.start())` and `step(server.stop())` – tano Feb 17 '16 at 14:59
  • What exactly is happening? Which error message are you getting? – marcospereira Feb 17 '16 at 15:31
  • I've added another option and also made some small corrections to the previous examples. See if this helps you. – marcospereira Feb 17 '16 at 15:39