I have a pandas dataframe like:
df = pd.DataFrame({'Last_Name': ['Smith', None, 'Brown'],
                   'First_Name': ['John', None, 'Bill'],
                   'Age': [35, 45, None]})
I could filter it manually using:
df[df.Last_Name.isnull() & df.First_Name.isnull()]
but this is tedious, as I need to write duplicated code for each column/condition, and it is not maintainable with a large number of columns. Is it possible to write a function that generates this filter for me?
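Something like the following is what I have in mind, building the mask over a list of columns instead of chaining conditions by hand (a sketch only; the function name and column list are my own invention):

```python
import pandas as pd

def filter_all_null(df, columns):
    """Return the rows where every one of the given columns is null."""
    mask = df[columns].isnull().all(axis=1)
    return df[mask]

df = pd.DataFrame({'Last_Name': ['Smith', None, 'Brown'],
                   'First_Name': ['John', None, 'Bill'],
                   'Age': [35, 45, None]})

# Equivalent to df[df.Last_Name.isnull() & df.First_Name.isnull()]
print(filter_all_null(df, ['Last_Name', 'First_Name']))
```

But ideally I would not even have to spell out the column lists myself.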
Some background: my pandas dataframe is based on an initial SQL-based multi-dimensional aggregation (grouping sets, see https://jaceklaskowski.gitbooks.io/mastering-spark-sql/spark-sql-multi-dimensional-aggregation.html), so in each group a different subset of columns is NULL. Now I want to efficiently select these different groups and analyze them separately in pandas.
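To illustrate what I mean by the groups: each distinct pattern of NULL columns corresponds to one grouping set, so one way I could imagine splitting the frame (my own sketch, not code from the original query) is to group by that pattern:

```python
import pandas as pd

df = pd.DataFrame({'Last_Name': ['Smith', None, 'Brown'],
                   'First_Name': ['John', None, 'Bill'],
                   'Age': [35, 45, None]})

# For each row, record which columns are null; each distinct tuple
# of null columns corresponds to one grouping set from the SQL side.
null_pattern = df.isnull().apply(lambda row: tuple(df.columns[row]), axis=1)

for pattern, group in df.groupby(null_pattern):
    print('null columns:', pattern)
    print(group)
```

This feels clumsy, though, which is why I am asking whether there is a cleaner, generated way to do it.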