I have a dataframe with two columns, address and street_name.
from pyspark.sql.functions import *

df = spark.createDataFrame(
    [
        ['108 badajoz road north ryde 2113, nsw, australia', 'north ryde'],
        ['25 smart street fairfield 2165, nsw, australia', 'smart street'],
    ],
    ['address', 'street_name'])
df.show(2, False)
+------------------------------------------------+---------------+
|address |street_name |
+------------------------------------------------+---------------+
|108 badajoz road north ryde 2113, nsw, australia|north ryde |
|25 smart street fairfield 2165, nsw, australia |smart street |
+------------------------------------------------+---------------+
I want to find whether street_name exists in address and return a boolean in a new column. I can search for the pattern manually, like below.
df.withColumn("new col", col("street").rlike('.*north ryde.*')).show(20,False)
+------------------------------------------------+------------+-------+
|address                                         |street_name |new col|
+------------------------------------------------+------------+-------+
|108 badajoz road north ryde 2113, nsw, australia|north ryde  |true   |
|25 smart street fairfield 2165, nsw, australia  |smart street|false  |
+------------------------------------------------+------------+-------+
But I want to replace the manual value with the street_name column, as below.
df.withColumn("new col", col("street")\
.rlike(concat(lit('.*'),col('street_name'),col('.*))))\
.show(20,False)
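For reference, a minimal sketch of an alternative, assuming Column.rlike does not accept a column-built pattern in your Spark version: expr() lets the SQL rlike operator take a pattern assembled from street_name for each row. The output column name street_in_address is just an example.

from pyspark.sql.functions import expr

# Sketch: build the regex from the street_name column inside a SQL expression,
# so each row's address is matched against its own pattern.
df.withColumn(
    "street_in_address",
    expr("address rlike concat('.*', street_name, '.*')")
).show(20, False)

If a plain substring test is enough, col('address').contains(col('street_name')) may also work without a regex.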