
I have a dataframe with two columns, address and street_name.

from pyspark.sql.functions import col, concat, lit

df = spark.createDataFrame([
    ['108 badajoz road north ryde 2113, nsw, australia', 'north ryde'],
    ['25 smart street fairfield 2165, nsw, australia', 'smart street']
  ],
  ['address', 'street_name'])

df.show(2, False)

+------------------------------------------------+---------------+
|address                                         |street_name    |
+------------------------------------------------+---------------+
|108 badajoz road north ryde 2113, nsw, australia|north ryde     |
|25 smart street fairfield 2165, nsw, australia  |smart street   |
+------------------------------------------------+---------------+

I want to find whether street_name exists in address and return a boolean in a new column. I can search for the pattern manually like below:

df.withColumn("new col", col("address").rlike('.*north ryde.*')).show(20, False)

+------------------------------------------------+------------+-------+
|address                                         |street_name |new col|
+------------------------------------------------+------------+-------+
|108 badajoz road north ryde 2113, nsw, australia|north ryde  |true   |
|25 smart street fairfield 2165, nsw, australia  |smart street|false  |
+------------------------------------------------+------------+-------+

But I want to replace the manual value with the column street_name, as below:

df.withColumn("new col",
    col("address").rlike(concat(lit('.*'), col('street_name'), lit('.*')))
).show(20, False)

This, however, raises an error: rlike expects a literal string pattern, not a Column.
Ali

3 Answers


You can do that by simply using the contains function:

from pyspark.sql.functions import col, when

df = df.withColumn('new_Col',when(col('address').contains(col('street_name')),True).otherwise(False))
df.show(truncate=False)

+------------------------------------------------+------------+-------+ 
|address                                         |street_name |new_Col|      
+------------------------------------------------+------------+-------+ 
|108 badajoz road north ryde 2113, nsw, australia|north ryde  |true   | 
|25 smart street fairfield 2165, nsw, australia  |smart street|true   | 
+------------------------------------------------+------------+-------+
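Note that contains already returns a boolean column, so the when/otherwise wrapper is optional; an equivalent shorter form would be:

from pyspark.sql.functions import col

# contains() yields true/false directly, so when/otherwise is not needed
df = df.withColumn('new_Col', col('address').contains(col('street_name')))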
cph_sto

A simple solution would be to define a UDF and use that. For example:

from pyspark.sql.functions import udf
from pyspark.sql.types import BooleanType

def contains_address(address, street_name):
  return street_name in address

contains_address_udf = udf(contains_address, BooleanType())
df.withColumn("new_col", contains_address_udf("address", "street_name")).show(truncate=False)

Here, a simple in check is enough, but if more complicated matching is required, simply replace it with a regular expression instead.
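For instance, here is a sketch of a whole-word variant using Python's re module (the helper name contains_street_regex is illustrative, not part of the original answer):

import re

from pyspark.sql.functions import udf
from pyspark.sql.types import BooleanType

def contains_street_regex(address, street_name):
  # \b anchors make street_name match only as whole words, not inside another word
  return re.search(r'\b' + re.escape(street_name) + r'\b', address) is not None

contains_street_regex_udf = udf(contains_street_regex, BooleanType())
df.withColumn("new_col", contains_street_regex_udf("address", "street_name")).show(truncate=False)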

Shaido

Just use the expr function:

from pyspark.sql import functions as F

df.select(
    "address", 
    "street_name", 
    F.expr("address like concat('%',street_name,'%')")
).show()

+--------------------+------------+--------------------------------------+
|             address| street_name|address LIKE concat(%, street_name, %)|
+--------------------+------------+--------------------------------------+
|108 badajoz road ...|  north ryde|                                  true|
|25 smart street f...|smart street|                                  true|
+--------------------+------------+--------------------------------------+
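If the auto-generated column name is too noisy, an alias can be attached; this is just a cosmetic variation on the same expression:

from pyspark.sql import functions as F

# alias() renames the generated boolean column
df.select(
    "address",
    "street_name",
    F.expr("address like concat('%', street_name, '%')").alias("new_col")
).show(truncate=False)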
Steven