
I have a PySpark dataframe as below and need to create a new dataframe with only one column, made up of all the 7-digit numbers in the original dataframe. The values are all strings. COLUMN1 should be ignored. Ignoring non-numbers and handling single 7-digit numbers in COLUMN2 is fairly straightforward, but for the values that contain two separate 7-digit numbers, I'm having difficulty pulling them out individually. This needs to be automated and able to run on other similar dataframes. The numbers are always 7 digits long and always begin with a '1'. Any tips?

+-----------+--------------------+
|    COLUMN1|             COLUMN2|
+-----------+--------------------+
|     Value1|           Something|
|     Value2|     1057873 1057887|
|     Value3| Something Something|
|     Value4|                null|
|     Value5|             1312039|
|     Value6|     1463451 1463485|
|     Value7|     Not In Database|
|     Value8|     1617275 1617288|
+-----------+--------------------+

The resulting dataframe should be as below:

+-------+
|Column1|
+-------+
|1057873|
|1057887|
|1312039|
|1463451|
|1463485|
|1617275|
|1617288|
+-------+
UPDATE:

The responses are great, but unfortunately I'm using an older version of Spark that doesn't agree with them. I used the below to solve the problem; it's a bit clunky, but it works.

from pyspark.sql import functions as F

# Keep only the column that holds the numbers
new_df = df.select(df.COLUMN2)

# Split each value on spaces, then explode so each token gets its own row
new_df = new_df.withColumn('splits', F.split(new_df.COLUMN2, ' '))
new_df = new_df.select(F.explode(new_df.splits).alias('column1'))

# Keep only tokens containing seven consecutive digits (the raw string
# avoids Python's invalid-escape warning for '\d')
new_df = new_df.filter(new_df.column1.rlike(r'\d{7}'))
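
One caveat worth flagging (my note, not from the original post): `rlike` matches anywhere in the string, so any token containing seven consecutive digits passes, including longer runs. Given the guarantee that the numbers always begin with '1', an anchored pattern is a stricter drop-in, sketched below:

# Stricter variant (a sketch relying on the '1'-prefix rule stated above):
# keep only tokens that are exactly '1' followed by six digits
new_df = new_df.filter(new_df.column1.rlike(r'^1\d{6}$'))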
Dr.Data

3 Answers


Here is an approach using higher-order lambda functions, for Spark 2.4+, wherein we split the column by space, filter for the tokens which start with a digit (0-9) and have length n (7), then explode:

n = 7
df.selectExpr(f"""explode(filter(split(COLUMN2, ' '),
    x -> x rlike '^[0-9]+' and length(x) = {n})) as COLUMN1""").show(truncate=False)

+-------+
|COLUMN1|
+-------+
|1057873|
|1057887|
|1312039|
|1463451|
|1463485|
|1617275|
|1617288|
+-------+
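
As a side note (my own sketch, not part of the answer): since the question guarantees every number starts with '1', the two lambda predicates can be folded into a single anchored regex:

# Folding the digit and length checks into one anchored pattern
# (a sketch relying on the '1'-prefix guarantee from the question)
df.selectExpr("explode(filter(split(COLUMN2, ' '), x -> x rlike '^1[0-9]{6}$')) as COLUMN1").show()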
anky
  • I'm running Spark 2.3.0.cloudera3 and there may be issues with versions. For example, my setup does not recognize the '->' – Dr.Data Mar 09 '22 at 21:50
  • @anky stated 2.4+; any reason why you may not want to upgrade? Beware of issues with legacy code. – wwnde Mar 09 '22 at 23:48
  • @Dr.Data maybe try `df.selectExpr("""explode(regexp_extract_all(COLUMN2,'[0-9]{7}',0))""").show()` – not sure if it will run in 2.3, but you can give it a shot – anky Mar 10 '22 at 02:57
  • Nope, regexp_extract_all is undefined for 2.3, apparently. I'd upgrade, but I'm not in charge of that! I did find a way to do it, though not as straightforward as these one-liners (I'll update the OP). It does the trick. I'm sure the above responses work great for 2.4+, though. – Dr.Data Mar 10 '22 at 19:42

I like @anky's answer and voted for it. As an alternative, you can also use PySpark's `exists` higher-order function in 3.0+:

from pyspark.sql import functions as F

new = df.selectExpr("explode(split(COLUMN2, ' ')) as COLUMN1") \
        .where(F.expr("exists(array(COLUMN1), element -> element rlike '([0-9]{7})')"))

new.show()

+-------+
|COLUMN1|
+-------+
|1057873|
|1057887|
|1312039|
|1463451|
|1463485|
|1617275|
|1617288|
+-------+
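
Since `array(COLUMN1)` wraps a single element here, the `exists` call reduces to testing that one value; a plainer equivalent (my sketch, same regex) is:

# Equivalent filter without the single-element array/exists detour
new = (df.selectExpr("explode(split(COLUMN2, ' ')) as COLUMN1")
         .where(F.col('COLUMN1').rlike('[0-9]{7}')))
new.show()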
wwnde

IIUC, you could use a regex and pandas' `str.extractall`:

df2 = (df['COLUMN2'].str.extractall(r'(\b\d{7}\b)')[0]
      .reset_index(drop=True).to_frame(name='COLUMN1')
      )

output:

   COLUMN1
0  1057873
1  1057887
2  1312039
3  1463451
4  1463485
5  1617275
6  1617288

regex:

(      start capturing
\b     word boundary
\d{7}  7 digits       # or 1\d{6} for "1" + 6 digits
\b     word boundary
)      end capture
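
For reference, a self-contained sketch of the same approach; the sample frame below is reconstructed from the question, so treat it as illustrative, and it uses the `1\d{6}` variant noted in the comment above:

import pandas as pd

# Illustrative data reconstructed from the question
df = pd.DataFrame({'COLUMN2': ['Something', '1057873 1057887', None, '1312039']})

# extractall returns one row per match; [0] picks the capture group,
# and reset_index flattens the (row, match) MultiIndex
df2 = (df['COLUMN2'].str.extractall(r'(\b1\d{6}\b)')[0]
       .reset_index(drop=True).to_frame(name='COLUMN1'))
print(df2)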
mozway
  • Apologies. I should have specified a PySpark df rather than pandas. This does work great for pandas, though, and I'll use it if I can't figure this out directly in PySpark. – Dr.Data Mar 09 '22 at 21:37