
Suppose I'm working on a new feature that reuses some pieces of code from an existing codebase. I'm test-driving my design, so I have isolated tests with stubbed/mocked collaborators for the parts of my feature. Now I'd like to test whether they play together nicely.

Should I write one huge test with that whole bunch of real dependencies wired together (except for a few, like external systems)? In other words, should I write an integration test for the whole story, or split it into several smaller pieces, testing, say, 3-4 objects playing together and covering just part of the story? In the second case I'd finally write a test for the whole feature from end to end. But how many objects' collaboration should I exercise in one test case?

If the latter is the case, I need to prepare the setup (wire dependencies, stub some of them), the test data, and the expected conditions for every test. Moving up (grouping more and more modules at a higher level), I still need to "duplicate" this preparation step in some way. Isn't this "duplication" bad?
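
To make this concrete, here is a rough sketch of what I mean by one of those smaller "units integration" tests (the OrderProcessor, PricingRules and PaymentGateway names are made up for illustration, JUnit 5 assumed): a few real collaborators wired together, only the external service stubbed, and the wiring pulled into a helper so it isn't repeated verbatim in every test:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Hypothetical collaborators, just to show the shape of the test.
    interface PaymentGateway { boolean charge(String account, int cents); }

    class PricingRules {                        // real unit
        int priceFor(int quantity) { return quantity * 250; }
    }

    class OrderProcessor {                      // real unit under integration
        private final PricingRules pricing;
        private final PaymentGateway gateway;   // external system, stubbed in tests
        OrderProcessor(PricingRules pricing, PaymentGateway gateway) {
            this.pricing = pricing;
            this.gateway = gateway;
        }
        int process(String account, int quantity) {
            int total = pricing.priceFor(quantity);
            gateway.charge(account, total);
            return total;
        }
    }

    class OrderProcessingIntegrationTest {
        // Shared wiring helper: the only place that knows how to assemble this cluster.
        private OrderProcessor realClusterWithStubbedGateway() {
            PaymentGateway stubGateway = (account, cents) -> true;  // stub the boundary only
            return new OrderProcessor(new PricingRules(), stubGateway);
        }

        @Test
        void chargesThePricedTotal() {
            OrderProcessor processor = realClusterWithStubbedGateway();
            assertEquals(500, processor.process("acct-1", 2));
        }
    }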

I'm talking about "test levels" like below:

---------------------------------------------------------------
| -------------------------------------
| | ------  ------                    |
| | |unit|  |unit|  units integration |
| | ------  ------                    |
| -------------------------------------     integration of some
|                                           already integrated
| -------------------------------------     units, etc.
| | ------  ------                    |
| | |unit|  |unit|  units integration |
| | ------  ------                    |
| -------------------------------------
---------------------------------------------------------------

Also as "classicals" (not "mockers") TDD practitioners say, I should use as many real implementation as possible. But then testing object having 3 levels of dependencies and having DB or external system at the end means I still have to stub/mock something. So should I mock only this heavy/external service at the end?

The trigger for asking this question is that keeping all my tests maintained is getting harder and harder, and I think I failed somewhere. Every medium-sized change in the code results in a bunch of failing tests. I'd like to find out what I did wrong.

Thanks in advance for all hints and answers.


1 Answer


Reflect on why your tests are so fragile. (Some of my thoughts, though directed at end-to-end tests.) I'd need more information on your situation to propose a remedy.

The tests should fail if functionality is broken. If you refactor your code, i.e. change its structure via behavior-preserving transformations, then your tests should not break.
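
For example (a hypothetical Discounter, not taken from your code), a test pinned to observable behavior survives a behavior-preserving refactoring, whereas a test that asserts how the answer is computed internally does not:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Hypothetical example: a calculation whose internals we expect to refactor freely.
    class Discounter {
        int discountedPrice(int listPrice, boolean loyalCustomer) {
            // Internal structure may change (strategy object, lookup table, ...).
            return loyalCustomer ? listPrice * 90 / 100 : listPrice;
        }
    }

    class DiscounterTest {
        // Pinned to observable behavior: survives any behavior-preserving refactoring.
        @Test
        void loyalCustomersGetTenPercentOff() {
            assertEquals(90, new Discounter().discountedPrice(100, true));
        }

        // A brittle alternative would verify *how* the result is computed, e.g. that a
        // particular internal helper was called; such a test breaks whenever the
        // structure changes, even though the behavior is preserved.
    }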

My current method is:

  • have lots of tiny, focused, ultra-fast unit tests (microtests, if you isolate each class from its dependencies). These should constitute the bulk of your tests.
  • integration tests: refer to the ports and adapters (hexagonal) architecture. Your app interfaces with external subsystems, e.g. the database or the web. Clearly define the port (interface) between your app and the subsystem. Next, write integration tests that verify that any implementation plugging into the port conforms to your contract (e.g. you would test that MySqlDataRepository can actually persist information by testing against a real DB). By doing this, you verify that the MySqlDataRepository works. Now your unit tests no longer need to be slow: you can use a mock database without losing any confidence. (A sketch follows this list.)
  • finally, you need a few end-to-end tests that verify that all the pieces are wired up correctly. As you said, these would be the slowest and the most painful to maintain. However, they have value; you can minimize the maintenance hassle by choosing the right tests to run end-to-end.
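
To sketch the port/contract idea (a minimal JUnit 5 example; the DataRepository port and the in-memory adapter are invented for illustration):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;
    import org.junit.jupiter.api.Test;

    // The port: the only thing the rest of the app knows about persistence.
    interface DataRepository {
        void save(String id, String payload);
        Optional<String> findById(String id);
    }

    // Contract test: every adapter that plugs into the port must pass these.
    abstract class DataRepositoryContractTest {
        protected abstract DataRepository createRepository();

        @Test
        void persistsAndRetrievesById() {
            DataRepository repo = createRepository();
            repo.save("42", "hello");
            assertEquals(Optional.of("hello"), repo.findById("42"));
        }
    }

    // Fast in-memory adapter that the bulk of the unit tests can use.
    class InMemoryDataRepository implements DataRepository {
        private final Map<String, String> rows = new HashMap<>();
        public void save(String id, String payload) { rows.put(id, payload); }
        public Optional<String> findById(String id) { return Optional.ofNullable(rows.get(id)); }
    }

    class InMemoryDataRepositoryTest extends DataRepositoryContractTest {
        protected DataRepository createRepository() { return new InMemoryDataRepository(); }
    }

    // A MySqlDataRepositoryTest would extend the same contract test and return an
    // adapter wired to a real database -- that is the slow integration test.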

