
I need to verify log information (error messages) along with the result set. In my case, logging can also be understood as report generation.

Externalized Logging

Should I store the log messages (for any errors) along with the result, and do the logging after the business-logic step?

Advantages:

  1. It gives me log information that I can use to verify negative cases during unit testing, instead of parsing the log file.
  2. It separates the logging from the business logic.
  3. I can implement logging as a separate feature and log in different formats depending on the implementation (HTML, JSON, etc.).

Disadvantages:

  1. It introduces code duplication, as I end up with the same loops for logging as for computing the result set.
  2. During the logging phase the parent has to fetch the child info, and storing all this info makes the code complex and unreadable.
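
For illustration, a minimal sketch of what I mean by externalized logging (all names here are made up):

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class FetchResult:
    resource: str
    value: Optional[Any] = None   # fetched value, or None when not found
    error: Optional[str] = None   # stored error message for later logging

def fetch_all(props, data):
    # business logic only: compute results and *store* error messages
    results = []
    for prop in props:
        value = data.get(prop)
        error = None if value is not None else f"property '{prop}' not found"
        results.append(FetchResult(prop, value, error))
    return results

def log_results(results):
    # separate logging step, re-looping over the stored results
    for r in results:
        if r.error:
            print(f"[ERROR MISSING-PROPERTY {r.resource}] {r.error}")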

Internalized Logging

Should I do the logging at the same time as I perform the business logic?

Advantages:

  1. No intermediate information needs to be stored, which simplifies the solution; the context of the parent objects is effectively passed down to the child object.
  2. Logging happens at the moment the exception occurs.

Disadvantages:

  1. Logging/reporting cannot be separated from the business logic.
  2. I will not get log information to verify negative cases in unit tests, so I will need to parse the log file to verify them.
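
And the internalized variant, roughly:

import logging

log = logging.getLogger("comparator")

def fetch(prop, data):
    # business logic and logging interleaved: log right where the
    # error is detected, while the full context is still in scope
    value = data.get(prop)
    if value is None:
        log.error("MISSING-PROPERTY: property '%s' not found", prop)
    return value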

More context below:

I am building this tool, in Python, for comparing properties in two resources that can be of type JSON, properties file, VM, REST API, etc.

The tool reads a metadata JSON with a structure like the following:

{
  "run-name": "Run Tests",
  "tests": [
    {
      "name": "Test 1",
      "checks": [
        {
          "name": "Dynamic Multiple",
          "type": "COMPARE",
          "dynamic": [
            {
              "file": "source.json",
              "type": "JSON",
              "property": "topology.wlsClusters.[].clusterName"
            }
          ],
          "source": {
            "file": "source.json",
            "type": "JSON",
            "property": "topology.wlsClusters.[clusterName == ${1}].Xms"
          },
          "target": {
            "file": "target.properties",
            "type": "PROPERTY",
            "property": "fusion.FADomain.${1}.default.minmaxmemory.main",
            "format": "-Xms{}?"
          }
        }
      ]
    }
  ]
}

The above JSON tells my tool to:

  1. Fetch 'clusterName' from each wlsCluster object in topology.wlsClusters. This gives a list of clusterNames.
  2. From source.json, fetch the Xms value from each wlsCluster object whose 'clusterName' belongs to the above list.
  3. Similarly, fetch all Xms values from the target.properties file using the above list.
  4. Compare each value in the source Xms list to the corresponding value in the target Xms list.
  5. If all match, the result is SUCCESS, else FAILURE.
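
For example, step 1 reduces to something like this (simplified; the real tool would resolve the property path generically):

import json

with open("source.json") as f:
    data = json.load(f)

# "topology.wlsClusters.[].clusterName" -> one value per list element
cluster_names = [c["clusterName"] for c in data["topology"]["wlsClusters"]]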

Intuitively, the above JSON can be mapped to its corresponding objects:

  • Test
  • Check
  • Resource

Now, ideally, I know I should be doing the following steps:

  • Run all tests, and all checks in each test.
  • For each check, if its type is COMPARE:
    • Read and compute the 'dynamic' values.
    • Read 'source', replace the dynamic values in the property field, and fetch the corresponding properties.
    • Similarly, read 'target' and fetch the corresponding properties.
    • Compare and return 'PASSED' or 'FAILED'.
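
In code, the outer loop would look roughly like this (resolve_dynamic and fetch are placeholders for the actual fetch logic):

def resolve_dynamic(dynamic_specs):
    # placeholder: read each spec's file and resolve its property path
    return []

def fetch(resource_spec, dynamic_values):
    # placeholder: substitute ${1}, ${2}, ... and fetch the properties
    return []

def run(metadata):
    results = []
    for test in metadata["tests"]:
        for check in test["checks"]:
            if check["type"] != "COMPARE":
                continue
            dynamic_values = resolve_dynamic(check.get("dynamic", []))
            source = fetch(check["source"], dynamic_values)
            target = fetch(check["target"], dynamic_values)
            status = "PASSED" if source == target else "FAILED"
            results.append((test["name"], check["name"], status))
    return results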

So broadly I have these steps:

  1. FETCH and STORE VALUES.
  2. COMPARE VALUES.

I also want to print logs in the following format:

[<TIMESTAMP> <RUN-NAME> <TEST-NAME> <CHECK-NAME> <ERROR-LEVEL> <MESSAGE-TYPE> <RESOURCE-NAME>] custom-msg

where

ERROR-LEVEL: INFO, DEBUG, etc.
MESSAGE-TYPE: COMPARE, SYNTAX-ERROR, MISSING-PROPERTY, etc.
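
With the standard logging module, that prefix can be expressed as a format string with extra fields (the field names are my own):

import logging

fmt = ("[%(asctime)s %(run_name)s %(test_name)s %(check_name)s "
       "%(levelname)s %(message_type)s %(resource_name)s] %(message)s")
logging.basicConfig(format=fmt)
log = logging.getLogger("comparator")

# every call site must supply the full run/test/check context
log.error("value not found", extra={
    "run_name": "Run Tests",
    "test_name": "Test 1",
    "check_name": "Dynamic Multiple",
    "message_type": "MISSING-PROPERTY",
    "resource_name": "source.json",
})

This already hints at the problem: whoever logs needs the full context supplied to it.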

Now if I follow the above object model, where each object is responsible for handling its own logging, it would not have all this information. So I need to either:

  • pass this information down to the child objects,
  • or have the parent read the information of the child object.

I prefer the second approach, as then I can store the results of the fetch and delay the logging (if any) until after the comparison. This way I can also run validations (unit tests) that verify the error message (negative scenario) as well, as sketched below.
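
That is, a unit test could assert on the stored result instead of scraping the log file. Hypothetically (fetch and its result object do not exist yet; this is the API I am aiming for):

def test_missing_property():
    # hypothetical API: fetch() returns a result object instead of logging
    result = fetch({"file": "target.properties",
                    "type": "PROPERTY",
                    "property": "no.such.key"}, [])
    assert result.value is None
    assert result.error_type == "MISSING-PROPERTY"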

But this is where my solution is getting complicated.

  • I need to store the result of the fetch in each object, which can be the value found, or 'None' when no value is found. In the latter case I also need to store the error type and error message. Let's call this class Value.
  • Each Property can produce a list of such Value objects.
  • Each Resource can produce a list of such Property objects.
  • Each Check can produce a list of such Resource objects.
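
So the storage model ends up looking roughly like this:

from dataclasses import dataclass, field
from typing import Any, List, Optional

@dataclass
class Value:
    value: Optional[Any] = None       # fetched value, or None when not found
    error_type: Optional[str] = None  # e.g. "MISSING-PROPERTY"
    error_msg: Optional[str] = None

@dataclass
class Property:
    name: str
    values: List[Value] = field(default_factory=list)

@dataclass
class Resource:
    name: str
    properties: List[Property] = field(default_factory=list)

@dataclass
class Check:
    name: str
    resources: List[Resource] = field(default_factory=list)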

NOTE: This is developed in Python. (If it matters to you.)

arvindkgs

1 Answer


Each class should be responsible for its own state. When you let classes make decisions based on properties in other classes, you will end up with spaghetti code eventually.

Code like if (test.check.resource.AProperty == aValue) is a clear indication that your spaghetti has started cooking.

In this case you don't want to log in the classes at all. You want to decide whether a sequence of actions completed successfully or not, and, as a consequence of that, log the result.

With that in mind, don't let the classes log at all; let them only report what they tested/checked and the result of that.

A common approach is to supply a context object which is used to receive the results.

Here is some C# code to illustrate (I don't know Python well enough):

public enum SomeEnum
{
    LengthRestriction
}

public interface VerifierContext
{
    void AddSuccess(string checkName, string resourceName, string message);
    void AddFailure(string checkName, string resourceName, SomeEnum failureType, string message);
}

public class SomeChecker
{
    public void Validate(VerifierContext context)
    {
        context.AddFailure("CompanyNameLength", "cluster.Company", SomeEnum.LengthRestriction, "Company name was 30 chars, can only be 10");
    }
}

That will give you a flat list of validations. If you want to get nested results you can add Enter/Exit methods:

public class SomeChecker
{
    // assumes Enter/Exit have been added to the VerifierContext interface
    private readonly List<SomeChecker> _childValidators = new List<SomeChecker>();

    public void Validate(VerifierContext context)
    {
        context.Enter("CompanyValidations");

        foreach (var validator in _childValidators)
            validator.Validate(context);

        context.Exit("CompanyValidations");
    }
}

You can of course design it in a lot of different ways. My main point is that each class in your checker/parser should just decide whether everything went OK or not. It should not decide how things should be logged.

The class that triggers the work can then go through all the results and choose a log level depending on the error type, etc.

All classes are also easily tested since they only depend on the context.
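
Translated into Python (the language of the question), the context idea might look roughly like this; a sketch only, with illustrative names:

import logging
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Result:
    check_name: str
    resource_name: str
    error_type: Optional[str] = None  # None means success
    message: str = ""

@dataclass
class VerifierContext:
    results: List[Result] = field(default_factory=list)

    def add_success(self, check, resource, message=""):
        self.results.append(Result(check, resource, None, message))

    def add_failure(self, check, resource, error_type, message):
        self.results.append(Result(check, resource, error_type, message))

def log_results(context):
    # the class that triggered the work decides how to log, afterwards
    for r in context.results:
        level = logging.ERROR if r.error_type else logging.INFO
        logging.log(level, "[%s %s %s] %s", r.check_name,
                    r.error_type or "OK", r.resource_name, r.message)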

jgauffin
  • Thank you. I was also aiming for that. But as I tried refactoring, I got bogged down/intimidated by the number of classes I ended up having. Maybe I will try this approach again. – arvindkgs May 02 '19 at 16:40