
I have a pyspark dataframe where occasionally the columns will have a wrong value that matches another column. It would look something like this:

| Date       | Latitude   |
|------------|------------|
| 2017-01-01 | 43.4553    |
| 2017-01-02 | 42.9399    |
| 2017-01-03 | 43.0091    |
| 2017-01-04 | 2017-01-04 |

Obviously, the last Latitude value is incorrect. I need to remove every row like this. I thought about using .isin(), but I can't seem to get it to work. If I try

df['Date'].isin(['Latitude'])

I get:

Column<(Date IN (Latitude))>

Any suggestions?


1 Answer


If you're more comfortable with SQL syntax, here is an alternative approach using a pyspark-sql condition inside filter():

df = df.filter("Date NOT IN (Latitude)")
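
For illustration, here is a minimal self-contained sketch of that filter in action (the sample rows are made up to mirror the question's table; both columns are strings, which matches the scenario where a date value leaked into Latitude):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Sample data mirroring the question, with one bad row where
# the Date value leaked into the Latitude column
df = spark.createDataFrame(
    [("2017-01-01", "43.4553"),
     ("2017-01-02", "42.9399"),
     ("2017-01-03", "43.0091"),
     ("2017-01-04", "2017-01-04")],
    ["Date", "Latitude"],
)

# Keep only rows where Date does not match Latitude
df.filter("Date NOT IN (Latitude)").show()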

Or equivalently using pyspark.sql.DataFrame.where():

df = df.where("Date NOT IN (Latitude)")
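
For completeness, the same condition can also be written with the Column API instead of a SQL string; this is a sketch, not part of the original answer:

from pyspark.sql import functions as F

# Equivalent filter using Column expressions rather than a SQL string
df = df.filter(F.col("Date") != F.col("Latitude"))

Note that all of these forms also drop rows where either column is NULL, since the comparison then evaluates to NULL and the filter discards it.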