
I have a side project that I love to code, and I spend time on it when I can, since I'm still finishing my university studies. When I started it I barely knew about good programming practices, TDD, and other such things; I coded it just for fun.

Several iterations, refactors, improvements, and a lot of accumulated knowledge later, I started writing what unit and integration tests I could before implementing new functionality. However, I still don't have enough time to write all the tests needed to reach an acceptable code coverage... although the software works well.

So when I have time to spend on this project, I want to implement new functionality (this time, yes, writing the unit tests in parallel), not write a backlog of tests that, I have to say, are very boring, and many of them hard to write because of mocking and such...

Should I keep adding functionality, or should I finish all the tests first?

Because of this, I decided that the software should stay in beta until a reasonable code coverage is reached. At the moment it's at version 0.9-beta.

If I add new functionality, should I follow semantic versioning and keep the beta tag? For example, the next iterations would be 0.10-beta, 0.11-beta, and so on until the tests are done, when it would finally move to non-beta versions.

If you want to check out my project, here is the link: octaviospain.github.io/Musicott

transgressoft
  • Forget about code coverage. High code coverage usually turns into a target (instead of just a low-value metric), and when that happens the quality of the tests suffers: the programmer starts writing unnatural, convoluted test cases just to run the uncovered code. Concentrate on writing tests that describe the functionality instead. Run the test suite with code coverage from time to time. If the tests are clear and cover the expected functionality, then the code coverage tells you that the uncovered code is actually not needed. Delete it instead of writing tests to get it covered. – axiac Jan 31 '17 at 13:57
  • Well, 8% is a low code coverage indeed. Write more tests, but don't use line-by-line code coverage as a guide for what to test. Writing the tests even after the tested code is written will help you understand the application flow better. – axiac Jan 31 '17 at 13:59
  • @axiac thank you for your answer. I wanted to have a higher code coverage to make the project _look better_, like a lot of open source projects on GitHub, with the CI _passing_ in green, high green coverage, and so on. I understand that turning it into a target is not the proper way to test my code, of course. – transgressoft Jan 31 '17 at 15:15
  • Don't get me wrong, I'm not against code coverage; just don't aim for 100%. Higher code coverage also increases your confidence when you need to implement changes or refactor your code. The ideal situation is when you write the tests before writing the code; however, this requires a very good understanding of how the code is expected to behave from the outside. Anyway, write more tests for the existing code before implementing new features. Writing tests for existing code helps you discover small bugs and find ways to improve or reorganize the code. It happens to me all the time. – axiac Jan 31 '17 at 15:23

2 Answers


Writing tests for existing code is not test-driven development. For a side project like this, I would only do it if you are worried that your code might not work correctly. If you do want to test your existing code, what I would recommend is writing acceptance tests.

Acceptance tests are tests that cover a user story, meaning a series of actions a user would want to perform, and they check whether the behavior of your whole system meets the requirements. Since acceptance tests are mostly end-to-end tests, you don't need to mock much of your system. Having these acceptance tests would give you confidence that your system reacts properly to common user input.
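
To make that concrete, here is a minimal sketch of what such a test could look like with JUnit 4. `MusicLibrary`, `importFolder`, and `containsTrackNamed` are invented stand-ins for whatever entry point the real application exposes; the point is that the test reads like the user story it covers:

```java
import static org.junit.Assert.assertTrue;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

import org.junit.Test;

// Acceptance test for the user story:
// "As a user, I import a folder of audio files and see them in my library."
public class ImportTracksAcceptanceTest {

    @Test
    public void userImportsAFolderAndSeesTheTracksInTheLibrary() throws Exception {
        Path folder = Files.createTempDirectory("tracks");
        Files.createFile(folder.resolve("sample-song.mp3"));

        MusicLibrary library = new MusicLibrary();
        library.importFolder(folder);

        assertTrue(library.containsTrackNamed("sample-song.mp3"));
    }
}

// Toy stand-in for the real application facade; in the actual project the
// test would drive the real import code instead of this implementation.
class MusicLibrary {
    private final List<String> trackNames = new ArrayList<>();

    void importFolder(Path folder) throws IOException {
        try (Stream<Path> files = Files.list(folder)) {
            files.forEach(p -> trackNames.add(p.getFileName().toString()));
        }
    }

    boolean containsTrackNamed(String name) {
        return trackNames.contains(name);
    }
}
```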

After that you can focus on adding new functionality using the TDD cycle. I would recommend using acceptance tests there as well: start by writing an acceptance test that covers a whole feature or user story, then repeat the Red-Green-Refactor cycle until that acceptance test passes. By then you know the feature is working correctly and you can start working on the next one.
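
As a hypothetical illustration of one turn of that cycle (all names below are invented, not taken from the project): the unit test comes first and fails, the simplest implementation makes it pass, and then you refactor under its protection:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PlaylistNamerTest {

    // RED: this test is written first and fails because PlaylistNamer
    // doesn't exist yet (or returns the wrong result).
    @Test
    public void trimsWhitespaceAndFallsBackToADefaultName() {
        PlaylistNamer namer = new PlaylistNamer();
        assertEquals("My Mix", namer.normalize("  My Mix  "));
        assertEquals("Untitled", namer.normalize("   "));
    }
}

// GREEN: the simplest implementation that makes the test pass.
// REFACTOR: with the test as a safety net, tidy names and remove duplication.
class PlaylistNamer {
    String normalize(String rawName) {
        String trimmed = rawName.trim();
        return trimmed.isEmpty() ? "Untitled" : trimmed;
    }
}
```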

If you want to know more about acceptance testing, I recommend reading 'Growing Object-Oriented Software, Guided by Tests'. It gets a little boring at times, as the authors repeat themselves, but it's worth the read.

Pox
  • Thanks for your answer. I didn't think of acceptance tests directly, but of system tests instead, since it is an application in which the user input is entered through the GUI. I think the result is the same, as you say: write tests that cover a use case/user story in order to verify the correct functionality. I didn't mean that I was trying to do TDD by writing tests for existing code; I meant that I didn't do TDD at the proper moment and wanted to write the tests after a good amount of code was already written. – transgressoft Jan 31 '17 at 15:26

In my opinion, you shouldn't try to write all the tests in one go, because that would be too time-consuming and horrendous a task to accomplish. Besides, writing tests for code that is already written doesn't really qualify as TDD (IMHO), since the tests aren't driving the design of your code and aren't affecting its quality (unless you refactor). Just make sure to write tests for any further code you write. That said, whenever you start working on a particular feature, do ensure that you write some high-level integration/regression tests first, which will ensure that you don't break anything too critical.
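
As a rough sketch of such a high-level regression test, again with invented names standing in for the project's real save/load entry points, one critical behaviour (the library surviving a save/load round trip) is pinned down before nearby code gets changed:

```java
import static org.junit.Assert.assertEquals;

import java.io.File;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.List;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

// Regression test: saving and reloading the library must round-trip cleanly.
public class LibraryPersistenceRegressionTest {

    @Rule
    public TemporaryFolder tmp = new TemporaryFolder();

    @Test
    public void savedLibraryLoadsBackUnchanged() throws Exception {
        File file = tmp.newFile("library.txt");

        LibraryStore store = new LibraryStore();
        store.add("Track A");
        store.add("Track B");
        store.saveTo(file);

        LibraryStore reloaded = LibraryStore.loadFrom(file);
        assertEquals(2, reloaded.trackCount());
    }
}

// Toy stand-in for the real persistence code; a real test would call
// the application's actual save/load entry points instead.
class LibraryStore {
    private final List<String> tracks = new ArrayList<>();

    void add(String track) { tracks.add(track); }

    int trackCount() { return tracks.size(); }

    void saveTo(File file) throws Exception {
        Files.write(file.toPath(), tracks); // one track name per line
    }

    static LibraryStore loadFrom(File file) throws Exception {
        LibraryStore store = new LibraryStore();
        store.tracks.addAll(Files.readAllLines(file.toPath()));
        return store;
    }
}
```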

All of this is based on the assumption that your code is written decently enough to be testable. If that's not the case, then you have to bite the bullet and refactor the code related to the feature first, before you start working on it.
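
One common way to "bite the bullet" is to pull a hard-wired dependency out behind a small interface, so the class can be exercised without the real file system, GUI, or a heavyweight mocking setup. A minimal sketch, with all names invented for illustration:

```java
import java.util.Arrays;
import java.util.List;

// Before this refactoring, TrackImporter would construct its own
// file-system scanner internally, so no test could run it without
// touching the real disk.

// After: the dependency is an interface injected through the constructor,
// so a test can substitute a cheap in-memory fake instead of a mock.
interface TrackSource {
    List<String> listTracks();
}

class TrackImporter {
    private final TrackSource source;

    TrackImporter(TrackSource source) { // injected, hence replaceable in tests
        this.source = source;
    }

    int importAll() {
        // Placeholder for the real import logic.
        return source.listTracks().size();
    }
}

class TrackImporterDemo {
    public static void main(String[] args) {
        // A lambda works as a fake because TrackSource has a single method.
        TrackImporter importer = new TrackImporter(
                () -> Arrays.asList("a.mp3", "b.mp3"));
        System.out.println(importer.importAll()); // prints 2
    }
}
```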

hspandher