
Michael Feathers, in Working Effectively with Legacy Code (pages 13-14), writes:

A unit test that takes 1/10th of a second to run is a slow unit test... If [unit tests] don't run fast, they aren't unit tests.

I can understand why 1/10th of a second is too slow if one has 30,000 tests, as the suite would take close to an hour to run. However, does this mean 1/11th of a second is any better? No, not really (it's only about 5 minutes faster). So a hard-and-fast rule probably isn't perfect.
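That arithmetic is easy to check with a quick calculation:

```python
# Total suite time for 30,000 tests at two per-test durations.
tests = 30_000

tenth = tests * (1 / 10) / 60     # minutes at 1/10 s per test
eleventh = tests * (1 / 11) / 60  # minutes at 1/11 s per test

print(f"1/10 s per test: {tenth:.1f} min")     # 50.0 min
print(f"1/11 s per test: {eleventh:.1f} min")  # 45.5 min
print(f"saved: {tenth - eleventh:.1f} min")    # 4.5 min
```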

Thus, when considering how slow is too slow for a unit test, perhaps I should rephrase the question: how long is too long for a developer to wait for the unit test suite to complete?

To give an example of test speeds, take a look at these MSTest unit test durations (all times in seconds):

0.2637638
0.0589954
0.0272193
0.0209824
0.0199389
0.0088322
0.0033815
0.0028137
0.0027601
0.0008775
0.0008171
0.0007351
0.0007147
0.0005898
0.0004937
0.0004624
0.00045
0.0004397
0.0004385
0.0004376
0.0003329

The average across all 21 of these unit tests comes to 0.019785 seconds. Note that the slowest test is slow because it uses Microsoft Moles to mock/isolate the file system.

So with this example, if my unit test suite grows to 10,000 tests, it could take over 3 minutes to run.
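As a sanity check, the average and the 10,000-test extrapolation can be reproduced from the timings above:

```python
# Reproduce the average of the 21 MSTest timings and extrapolate
# to a 10,000-test suite (assuming the same mix of test speeds).
timings = [
    0.2637638, 0.0589954, 0.0272193, 0.0209824, 0.0199389,
    0.0088322, 0.0033815, 0.0028137, 0.0027601, 0.0008775,
    0.0008171, 0.0007351, 0.0007147, 0.0005898, 0.0004937,
    0.0004624, 0.0004500, 0.0004397, 0.0004385, 0.0004376,
    0.0003329,
]

average = sum(timings) / len(timings)
print(f"average: {average:.6f} s")            # 0.019785 s

minutes = average * 10_000 / 60               # minutes for 10,000 tests
print(f"10,000 tests: ~{minutes:.1f} min")    # ~3.3 min
```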

Matt

6 Answers


I've seen one such project where the number of unit tests made testing everything take too long. "Too long" meaning that you basically didn't do it as part of your normal development routine.

However, what they had done was to categorize the unit tests into two groups: critical tests, and "everything else".

Critical tests took just a few seconds to run, and tested only the most critical parts of the system, where "critical" here meant "if something is wrong here, everything is going to be wrong".

Tests that made the entire run take too long were relegated to the "everything else" section, and were only run on the build server.

Whenever someone committed code to the source control repository, the critical tests would run first, and then a "full run" was scheduled a few minutes into the future. If nobody checked in code during that interval, the full suite was run. Granted, it didn't take 30 minutes, more like 8-10.

This was done using TeamCity, so even if one build agent was busy with the full unit test suite, the other build agents could still pick up normal commits and run the critical unit tests as often as needed.
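A minimal sketch of that critical/"everything else" split, using only Python's stdlib unittest (the original project used TeamCity and its own framework; the `critical` decorator and test bodies here are hypothetical):

```python
# Tag a subset of tests as "critical" and build a suite containing
# only those; the build server would run the full TestCase instead.
import unittest

def critical(test_func):
    # Mark a test as "if something is wrong here, everything is wrong".
    test_func.critical = True
    return test_func

class CoreTests(unittest.TestCase):
    @critical
    def test_core_invariant(self):
        self.assertEqual(sorted([3, 1, 2]), [1, 2, 3])

    def test_slow_edge_case(self):
        # Relegated to "everything else"; runs only on the build server.
        self.assertEqual(sum(range(1000)), 499500)

def critical_suite():
    # Collect only the tests tagged @critical.
    suite = unittest.TestSuite()
    for name in unittest.TestLoader().getTestCaseNames(CoreTests):
        if getattr(getattr(CoreTests, name), "critical", False):
            suite.addTest(CoreTests(name))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner().run(critical_suite())  # fast local run
```

Real frameworks have this built in, e.g. MSTest's [TestCategory] attribute or pytest's `-m` marker selection, so you rarely need to roll your own.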

Lasse V. Karlsen
  • +1 for how you defined "too long." "Too-long" is often defined for UI commands as about 3 seconds - the amount of time the user is willing to wait before they either give up or become severely annoyed. In this case we have to remember that we are still users, and your definition is just such a reminder. – devgeezer Nov 11 '11 at 15:38
  • I like your idea of using critical as a dimension for deciding how to split the tests. However, critical is orthogonal to speed. In our case, we have tests that are both "if something is wrong here, everything is going to be wrong" and slow because they require an SQL database running. (Yes, we must have SQL execute to determine correctness. Mocking it would involve re-writing an RDBMS.) We use sqlite3, but need PostgreSQL and MySQL support too in our Python 3 project. Still figuring this out... – Matthew Cornell Jun 06 '13 at 14:35

I've only ever worked on projects where the test suite took at least ten minutes to run. The bigger ones, it was more like hours. And we sucked it up and waited, because they were pretty much guaranteed to find at least one problem in anything you threw at them. The projects were that big and hairy.

I wanna know what these projects are that can be tested comprehensively in seconds.

(The secret to getting things done when your project's unit tests take hours is to have four or five things you're working on at the same time. You throw one set of patches at the test suite and you task-switch, and by the time you're done with the thing you switched to, maybe your results have come back.)

zwol
  • It sounds like you don't understand the difference between UNIT testing and INTEGRATION testing. By definition a unit test should test one unit (neat how the name indicates its purpose) of code. Using an MVC example, if you have a Controller test that instantiates an object (or more) in your business layer which in turns instantiates one or more objects in your repository layer, you don't have a UNIT test. You have an integration test. While instantiating a mock object is not "free", good mock frameworks are still way faster than hitting your repository. – Andrew Steitz Nov 07 '12 at 15:07
  • If you have a test suite that truly consists of UNIT tests only and it still takes hours to run, then your enterprise architects have failed because they have allowed you to create a seriously monolithic system that should instead be broken down into parts which expose functionality as services. – Andrew Steitz Nov 07 '12 at 15:10
  • I guess you've never worked on, for instance, a web browser. Validating the CSS parser in Firefox, *all by itself* -- no related functionality, just "parse this and check that the resultant data structure is what it should be" -- involves order of 100,000 tests which, in total, take about five minutes to run. That is how big and complicated the CSS grammar is. – zwol Nov 07 '12 at 15:32
  • Yeah, you got me on that. Thing is, not many people out there have. Most work on LOB (line of business) apps and my comments are more than appropriate for that. – Andrew Steitz Nov 07 '12 at 19:17
  • I guess there's a two-cultures thing here -- I've never worked (professionally) on anything that _wasn't_ that complicated. – zwol Nov 07 '12 at 20:42
  • You are exactly correct! Initially I thought **two-cultures** might be a little too extreme but I admit I cannot think of a less emphatic term that would still convey the same degree of distinction. So, for the folks out there who work on super-cool, über-complicated stuff like web browsers or css parsers, take your cues from Zack. For us mere mortals who work on LOB apps, take a second look at your tests and figure out if you have UNIT tests or INTEGRATION tests. – Andrew Steitz Nov 08 '12 at 15:29
  • Even if you break down a large project into modules, unless you're deleting some unit tests at the same time, the tests are still going to take the same amount of time to run. Even if all your tests are unit tests, if you have a lot of fiddly behaviour, you will have a lot of tests to test that. And if you're using mocks heavily, breaking down into modules to add some abstraction isn't going to help your tests (it will probably help understanding the code though, so do it anyway.) – Hakanai Nov 19 '12 at 00:35

I've got unit tests that take a few seconds to execute. I've got a method which does very complicated computing, billions and billions of operations. There are a few known good values that we use as the basis for unit testing when we refactor this tricky and uber-fast method (which we must optimize the crap out of because, as I said, it performs billions and billions of computations).

Rules don't adapt to every domain / problem space.

We can't "divide" this method into smaller methods that we could unit test: it is a tiny but very complicated method (making use of insanely huge precomputed tables that can't be re-created fast enough on the fly etc.).

We have unit tests for that method. They are unit tests. They take seconds to execute. It is a Good Thing [TM].

Now of course I don't dispute that people use unit testing libraries like JUnit for things that aren't unit testing: for example, we also use JUnit to test complex multi-threaded scenarios. Those aren't "unit tests", but you bet JUnit still rules the day :)
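The "known good values" approach described above looks roughly like this (a sketch; `compute()` is a hypothetical stand-in for the real heavy method, and the golden values are just sums of squares):

```python
# Sketch: pin a heavy numeric routine to golden values recorded
# from a trusted run, so each optimization pass can be verified.
import unittest

def compute(n):
    # Hypothetical stand-in for the real billions-of-operations method.
    return sum(i * i for i in range(n))

class KnownGoodValueTests(unittest.TestCase):
    # Recorded once from a trusted run; the refactored/optimized
    # code must keep reproducing them exactly.
    GOLDEN = {10: 285, 100: 328350, 1000: 332833500}

    def test_known_good_values(self):
        for n, expected in self.GOLDEN.items():
            self.assertEqual(compute(n), expected)
```

The point is that the test stays a unit test (one method, no I/O, no collaborators); it is merely slow because the unit itself does an enormous amount of work.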

SyntaxT3rr0r

EDIT See my comment to another answer (Link). Please note that there was a lot of back and forth about Unit Testing so before you decide to upvote or downvote this answer, please read all the comments on that answer.

Next, use a tool like Mighty-Moose (Mighty-Moose was abandoned, but there are other tools) that only runs the tests affected by your code change (instead of your entire test library) every time you check in a file.
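The idea behind such tools can be sketched crudely: map each changed source file to its tests and run only those. (Real tools track actual dependencies; the naming convention and paths below are purely hypothetical.)

```python
# Crude "run only affected tests" sketch: assume test files follow
# a tests/test_<module>.py naming convention (hypothetical).
import os

def affected_tests(changed_files):
    tests = []
    for path in changed_files:
        base = os.path.splitext(os.path.basename(path))[0]
        tests.append(f"tests/test_{base}.py")
    return tests

print(affected_tests(["src/parser.py", "src/billing.py"]))
# ['tests/test_parser.py', 'tests/test_billing.py']
```

A dedicated tool does far better than filename matching, since a change in one module can break tests elsewhere, but even this naive selection beats rerunning the whole suite on every save.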

Andrew Steitz
    "see my comment to Zack's answer" makes this answer very poor. I am reading it now and I have no idea where is "Zack's answer" – ianmandarini Oct 22 '19 at 23:58
  • Good point. Zack changed his user name to zwol. The answer was posted "answered Sep 29 '10 at 19:00" and my comment was added on "Nov 7 '12 at 15:07" – Andrew Steitz Nov 13 '19 at 19:00

How long is too long for a developer to wait for the unit test suite to complete? It really depends on how long the devs are happy to wait for feedback on their change. I'd say if you start talking minutes then it's too slow, and you should probably break up the test suite into individual test projects and run them separately.

trendl

So what's your question? :-) I agree, the true metric here is how long developers have to wait for a complete run of the unit tests. Too long and they'll start cutting corners before committing code. I'd like to see a complete commit build take less than a minute or two, but that's not always possible. At my work, a commit build used to take 8 minutes and people just started only running small parts of it before committing - so we bought more powerful machines :-)

dty