Versions of the libraries we're using:

    snowconn==3.7.1
    snowflake-connector-python==2.3.10
    snowflake-sqlalchemy==1.2.3
    SQLAlchemy==1.3.23
    great_expectations==0.13.10
    pandas==1.1.5

Note we're grabbing data from Snowflake on our own and then feeding a dataframe of it into Great Expectations. I'm aware GE has a Snowflake data source and it's on my list to add it. But I think this setup should work even without using that data source.
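
For reference, here's roughly how the dataframe gets built. This is a sketch using snowflake-sqlalchemy directly rather than our snowconn wrapper, and every connection parameter is a placeholder:

    import pandas as pd
    from snowflake.sqlalchemy import URL
    from sqlalchemy import create_engine

    # Placeholder credentials -- we pull the real values from config.
    engine = create_engine(URL(
        account='my_account',
        user='my_user',
        password='my_password',
        database='my_database',
        schema='my_schema',
        warehouse='my_warehouse',
    ))

    # Pull the columns under test into an in-memory dataframe for GE.
    df = pd.read_sql('SELECT a, b, c FROM my_table', engine)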

We have the following Great Expectations data context config:

    from great_expectations.data_context import BaseDataContext
    from great_expectations.data_context.types.base import (
        DataContextConfig,
        DatasourceConfig,
        S3StoreBackendDefaults,
    )

    data_context_config = DataContextConfig(
        datasources={
            datasource_name: DatasourceConfig(
                class_name='PandasDatasource',
                data_asset_type={
                    'module_name': 'dataqa.dataset',
                    'class_name': 'CustomPandasDataset'
                }
            )
        },
        store_backend_defaults=S3StoreBackendDefaults(
            default_bucket_name=METADATA_BUCKET,
            expectations_store_prefix=EXPECTATIONS_PATH,
            validations_store_prefix=VALIDATIONS_PATH,
            data_docs_prefix=DATA_DOCS_PATH,
        ),
        validation_operators={
            "action_list_operator": {
                "class_name": "ActionListValidationOperator",
                "action_list": [
                    {
                        "name": "store_validation_result",
                        "action": {"class_name": "StoreValidationResultAction"},
                    },
                    {
                        "name": "store_evaluation_params",
                        "action": {"class_name": "StoreEvaluationParametersAction"},
                    },
                    {
                        "name": "update_data_docs",
                        "action": {"class_name": "UpdateDataDocsAction"},
                    },
                ],
            }
        }
    )
    ge_context = BaseDataContext(project_config=data_context_config)
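
For context, here's roughly how we turn the in-memory dataframe into a batch against this context, using the legacy (v2) batch API; suite_name is a placeholder:

    # Create (or overwrite) a suite, then wrap the dataframe in a batch.
    suite_name = 'my_suite'
    ge_context.create_expectation_suite(suite_name, overwrite_existing=True)

    batch = ge_context.get_batch(
        batch_kwargs={'datasource': datasource_name, 'dataset': df},
        expectation_suite_name=suite_name,
    )
    # Because the datasource's data_asset_type points at CustomPandasDataset,
    # the returned batch exposes the custom expectation defined below.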

CustomPandasDataset is defined as:

    from great_expectations.dataset import MetaPandasDataset, PandasDataset

    class CustomPandasDataset(PandasDataset):
        _data_asset_type = "CustomPandasDataset"

        @MetaPandasDataset.multicolumn_map_expectation
        def expect_column_A_equals_column_B_column_C_ratio(
            self,
            column_list,
            ignore_row_if='any_value_is_missing'
        ):
            # column_list arrives as a dataframe of just the requested
            # columns; check that a == 1 - b/c within a small tolerance.
            column_a = column_list.iloc[:, 0]
            column_b = column_list.iloc[:, 1]
            column_c = column_list.iloc[:, 2]

            return abs(column_a - (1.0 - (column_b / column_c))) <= 0.001

and called like:

    cols = ['a', 'b', 'c']
    batch.expect_column_A_equals_column_B_column_C_ratio(
        cols,
        catch_exceptions=True
    )
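
To make the expected skip behavior concrete, here's a toy run directly against the dataset class (the data values are made up):

    import numpy as np
    import pandas as pd

    toy = CustomPandasDataset(pd.DataFrame({
        'a': [0.5, np.nan, 0.25],
        'b': [1.0, 2.0, 3.0],
        'c': [2.0, 4.0, 4.0],
    }))

    result = toy.expect_column_A_equals_column_B_column_C_ratio(['a', 'b', 'c'])
    # With ignore_row_if='any_value_is_missing' I expect row 1 (null a) to be
    # skipped, row 0 to pass (0.5 == 1 - 1/2), and row 2 to pass
    # (0.25 == 1 - 3/4), so success should be True with zero unexpected rows.
    print(result.success, result.result['unexpected_count'])

On our real data, though, the null rows are not skipped, as the output further down shows.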

Later on we validate the data context like so:

    return ge_context.run_validation_operator(
        "action_list_operator",
        assets_to_validate=batches,
        run_id=run_id)["success"]

Oftentimes, columns a and b are null in our data. Since I've set ignore_row_if='any_value_is_missing' on the custom expectation, I expect rows with null values in any of columns a, b, or c to be skipped. But Great Expectations doesn't skip them; instead it adds them to the unexpected ("failed") portion of the output:

    result
        element_count                  1000
        missing_count                  0
        missing_percent                0
        unexpected_count               849
        unexpected_percent             84.89999999999999
        unexpected_percent_total       84.89999999999999
        unexpected_percent_nonmissing  84.89999999999999

    partial_unexpected_list
        0:
            a    null
            b    null
            c    1.63

I'm unsure why this is happening. In the Great Expectations source, the multicolumn_map_expectation decorator does:

    ...
                elif ignore_row_if == "any_value_is_missing":
                    boolean_mapped_skip_values = test_df.isnull().any(axis=1)
    ...
                boolean_mapped_success_values = func(
                    self, test_df[boolean_mapped_skip_values == False], *args, **kwargs
                )
                success_count = boolean_mapped_success_values.sum()
                nonnull_count = (~boolean_mapped_skip_values).sum()
                element_count = len(test_df)

                unexpected_list = test_df[
                    (boolean_mapped_skip_values == False)
                    & (boolean_mapped_success_values == False)
                ]
                unexpected_index_list = list(unexpected_list.index)

                success, percent_success = self._calc_map_expectation_success(
                    success_count, nonnull_count, mostly
                )

which I interpret as ignoring null-containing rows: they should neither be added to the unexpected list nor counted toward percent_success. I've dropped a pdb into our code and verified that the dataframe we call the expectation on can be masked the expected way (test_df.isnull().any(axis=1) flags the null rows), but for some reason Great Expectations is letting those nulls slip through. Anyone know why?
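
For reference, this is essentially the check I ran under pdb, with df and cols as above:

    # Reproduce the library's masking logic by hand on our dataframe.
    test_df = df[cols]
    skip_mask = test_df.isnull().any(axis=1)

    print(skip_mask.sum())             # rows that *should* be skipped
    print(test_df[~skip_mask].head())  # rows that should actually be evaluated
    # The mask flags the null rows correctly here, yet GE's unexpected list
    # still contains rows where a or b is null.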


1 Answer

I believe the poster filed a GitHub issue for this: https://github.com/great-expectations/great_expectations/issues/2460. Progress can be tracked there.