Is there a way to split a batch of data into two streams:
- one for which the expectations are met
- another for which they fail
That is, can the tested batch be split into two tables/pandas DataFrames, one that is clean and one that is not? I am trying to use Great Expectations with Postgres tables as the data source in an ETL pipeline. The catch is that I don't want a failed expectation to abort the entire ETL process; I just want to quarantine the rows that fail the test cases.
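For context, here is a minimal sketch of the split I have in mind. Great Expectations can report the row indices of failing values for column-map expectations when the validation is run with `result_format="COMPLETE"` (via `unexpected_index_list` in the result); the helper below assumes those indices have already been collected and just partitions the frame. `split_by_failed_indices` and the toy `age` check are my own illustrative names, not part of the GE API:

```python
import pandas as pd

def split_by_failed_indices(df: pd.DataFrame, failed_indices) -> tuple:
    """Split a DataFrame into (clean, quarantine) frames.

    `failed_indices` is the collection of row indices that failed
    validation, e.g. the union of `unexpected_index_list` values taken
    from a validation result produced with result_format="COMPLETE".
    """
    failed_mask = df.index.isin(failed_indices)
    return df[~failed_mask], df[failed_mask]

# Toy batch: pretend rows with a negative `age` failed an expectation.
batch = pd.DataFrame({"id": [1, 2, 3, 4], "age": [25, -3, 40, -1]})

# Stand-in for the indices GE would report as unexpected.
failed_idx = batch.index[batch["age"] < 0].tolist()

clean, quarantine = split_by_failed_indices(batch, failed_idx)
# `clean` can continue down the ETL pipeline; `quarantine` can be
# written to a separate table for inspection instead of failing the run.
```

This keeps the ETL run alive: the pipeline loads `clean` as usual and routes `quarantine` to a side table, rather than raising on the first failed expectation.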