0

So, it seems I have arrived at an impasse. I have been asked to write unit tests for a legacy .NET code base; however, I frequently come across code that uses the `using` statement. This is a problem because I am unable to mock the objects.

using (var package1 = new ExcelPackage(sourceHierarchyFile))
{
    ExcelWorksheet worksheet = package1.Workbook.Worksheets[0];
    var result = _industryBAL.SaveSourceHierarchy(industryId, userEnterpriseId, worksheet);
}

I am using the FakeItEasy mocking framework, but it has no provision for this type of code. Wrapper methods would be one option. I really need your help, since not covering these methods is hurting our code coverage percentage.

  • 1
    I'm sorry, I fear I don't quite understand. Are you asking how to fake the `package1` variable? If so, the problem isn't exactly the `using` (although it might provide a small complication) so much that the code above is instantiating the object itself. To use FakeItEasy to fake the `package1`, you'll need to provide the object to your production code. Perhaps using constructor or property injection, or by providing it as a parameter to the method that you want to test. – Blair Conrad Mar 15 '23 at 14:05

1 Answer

0

I suggest a couple of options. Option 3 is what I recommend.

  • Option 1: Ideally, you would refactor the code for Dependency Injection, like @Blair Conrad was saying. Assuming you are using the EPPlus library, the ExcelPackage class is sealed and only inherits from System.Object and implements IDisposable, so extracting something like an IExcelPackage interface would not be easy.
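    Since ExcelPackage is sealed, the usual workaround for Option 1 is not to extract an interface from it but to hide it behind a hand-rolled wrapper you own. A minimal sketch, assuming EPPlus is in play; the `IExcelPackageWrapper` interface and its member names are illustrative, not part of EPPlus or the original code base:

    ```csharp
    using System;
    using System.IO;
    using OfficeOpenXml; // EPPlus

    // Hypothetical wrapper interface -- this name and its members are
    // assumptions for illustration, not part of EPPlus.
    public interface IExcelPackageWrapper : IDisposable
    {
        ExcelWorksheet GetWorksheet(int index);
    }

    // Thin adapter over the sealed EPPlus ExcelPackage class.
    public sealed class ExcelPackageWrapper : IExcelPackageWrapper
    {
        private readonly ExcelPackage _package;

        public ExcelPackageWrapper(FileInfo file)
        {
            _package = new ExcelPackage(file);
        }

        public ExcelWorksheet GetWorksheet(int index)
            => _package.Workbook.Worksheets[index];

        public void Dispose() => _package.Dispose();
    }
    ```

    The consuming class would then depend on `IExcelPackageWrapper` (injected via constructor, or produced by an injected factory), which FakeItEasy can fake with `A.Fake<IExcelPackageWrapper>()`. Note that `GetWorksheet` still returns the sealed `ExcelWorksheet`, so code that reads cell values would ideally also go behind the wrapper.
    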

  • Option 2: Apply a Simple Factory pattern: move the construction into a factory method, and inside that method add logic so that, when running in a debug/test configuration, it returns a new ExcelPackage built from a known test file.

    using (var package1 = ExcelPackageFactory.Create(sourceHierarchyFile))
    {
        ExcelWorksheet worksheet = package1.Workbook.Worksheets[0];
        var result = _industryBAL.SaveSourceHierarchy(industryId, userEnterpriseId, worksheet);
    }
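    The factory behind that call might look something like the sketch below. This is only one way to wire it up; `ExcelPackageFactory`, the `TestFileOverride` hook, and the use of a `DEBUG` conditional are all assumptions for illustration:

    ```csharp
    using System.IO;
    using OfficeOpenXml; // EPPlus

    // Illustrative Simple Factory -- the class name and the override
    // mechanism are assumptions, not from the original code base.
    public static class ExcelPackageFactory
    {
        // Test hook: when set (e.g. in a [SetUp] method), Create returns
        // a package built from a known test workbook instead.
        public static FileInfo TestFileOverride { get; set; }

        public static ExcelPackage Create(FileInfo sourceFile)
        {
    #if DEBUG
            if (TestFileOverride != null)
                return new ExcelPackage(TestFileOverride);
    #endif
            return new ExcelPackage(sourceFile);
        }
    }
    ```

    This keeps the production call site almost unchanged while giving tests a seam to substitute a known file.
    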

However, refactoring that much legacy code is often neither fun nor efficient. So what I often do instead is write integration tests, often in NUnit or some UI testing system, that run against a real higher-level object, even if that is the website or console application itself. Often I will have a set of tests that run against the dev site as a post-deployment step. Depending on how you run them, they may not give you credit in code coverage. Code coverage is not an ideal metric anyway; branch coverage would be better, but it can be hard to sell management on letting go of code coverage metrics.
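As a rough sketch of what such an NUnit integration test could look like, exercising the real EPPlus package against a workbook checked into the repo. The test file path, the `IndustryBAL` constructor, and the argument values are all hypothetical:

```csharp
using System.IO;
using NUnit.Framework;
using OfficeOpenXml; // EPPlus

[TestFixture]
public class SourceHierarchyIntegrationTests
{
    [Test]
    public void SaveSourceHierarchy_WithKnownWorkbook_ReturnsResult()
    {
        // Arrange: a real test workbook committed alongside the tests
        // (path is an assumption for illustration).
        var file = new FileInfo("TestData/SourceHierarchy.xlsx");

        using (var package = new ExcelPackage(file))
        {
            var worksheet = package.Workbook.Worksheets[0];

            // Act: run the real business-layer code, no mocks.
            // (IndustryBAL construction here is a guess at the API.)
            var result = new IndustryBAL()
                .SaveSourceHierarchy(1, "test-user", worksheet);

            // Assert something meaningful about the outcome.
            Assert.That(result, Is.Not.Null);
        }
    }
}
```

Tests like this are slower and need real files, but they cover exactly the code that the `using` blocks make hard to unit test.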

  • Option 3: Add the ExcludeFromCodeCoverage attribute above this method. There is no business logic in the example you shared, so there isn't much practical use in writing a worthless test just to get your code coverage up. It only does file I/O.

    [ExcludeFromCodeCoverage]
    public void Foo()
    {
        // excel code here
    }

Dan Csharpster
  • In the end I suppose I have to exclude the code, because I am not allowed to change the source code. – indrajit bagchi Mar 15 '23 at 18:24
  • Some code will always need to be excluded from code or branch coverage. Aiming for 100% code coverage without exclusions is both really hard and a waste of time, in my opinion. As code changes, you often end up having to maintain unit tests that will never ever catch a bug. Some examples would be controller actions with no business logic in them and ORM viewmodels that are autogenerated by a tool. And then you have legacy code that is impossible to unit test, especially if you are not allowed to change it. It's better to just wrap it in integration tests. – Dan Csharpster Mar 15 '23 at 18:34
  • 1
    If you're not allowed to change the source code, there's going to be a LOT of code that you won't be able to unit test... – Thomas Levesque Mar 16 '23 at 14:11
  • Agreed. Low code coverage and widespread inability to unit test is the price of not designing for testing in the past (Inversion of Control, etc.) combined with locking the code down now. And I'm not saying that locking it for changes is the wrong choice; we'd need more context around that. But for now, I'd suggest unit testing what you can, excluding the rest, and adding a healthy amount of integration tests as early in the build and release pipeline as possible. – Dan Csharpster Mar 16 '23 at 15:11