Suppose I have the following pyspark dataframe:
>>> df = spark.createDataFrame([('A', 'Amsterdam', 3.4), ('B', 'London', None), ('C', None, None), ('D', None, 11.1)], ['c1', 'c2', 'c3'])
>>> df.show()
+---+---------+----+
| c1|       c2|  c3|
+---+---------+----+
|  A|Amsterdam| 3.4|
|  B|   London|null|
|  C|     null|null|
|  D|     null|11.1|
+---+---------+----+
How can I now select or filter for every row containing at least one null value, like so:
>>> df.SOME-COMMAND-HERE.show()
+---+---------+----+
| c1|       c2|  c3|
+---+---------+----+
|  B|   London|null|
|  C|     null|null|
|  D|     null|11.1|
+---+---------+----+
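For context, the kind of thing I imagine is building an OR over per-column null checks, something like this sketch (using functools.reduce and the standard pyspark.sql.functions API; I'm not sure this is the idiomatic way):
>>> from functools import reduce
>>> from pyspark.sql import functions as F
>>> # keep any row where at least one column is null
>>> cond = reduce(lambda a, b: a | b, [F.col(c).isNull() for c in df.columns])
>>> df.filter(cond).show()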