I have a PySpark dataframe as below and need to create a new dataframe with only one column, made up of all the 7-digit numbers in the original dataframe. The values are all strings. COLUMN1 should be ignored. Ignoring non-numbers and handling values with a single 7-digit number in COLUMN2 is fairly straightforward, but for the values that contain two separate 7-digit numbers, I'm having difficulty pulling them out individually. This needs to be automated and able to run on other similar dataframes. The numbers are always 7 digits and always begin with a '1'. Any tips?
+-----------+--------------------+
| COLUMN1| COLUMN2|
+-----------+--------------------+
| Value1| Something|
| Value2| 1057873 1057887|
| Value3| Something Something|
| Value4| null|
| Value5| 1312039|
| Value6| 1463451 1463485|
| Value7| Not In Database|
| Value8| 1617275 1617288|
+-----------+--------------------+
The resulting dataframe should be as below:
+-------+
|Column1|
+-------+
|1057873|
|1057887|
|1312039|
|1463451|
|1463485|
|1617275|
|1617288|
+-------+
- UPDATE:
The responses are great, but unfortunately I'm using an older version of Spark that doesn't agree. I used the below to solve the problem; it's a bit clunky, but it works.
from pyspark.sql import functions as F

# Keep only the column that holds the numbers.
new_df = df.select(df.COLUMN2)
# Split each value on spaces into an array of tokens.
new_df = new_df.withColumn('splits', F.split(new_df.COLUMN2, ' '))
# Give each token its own row; null values produce no rows.
new_df = new_df.select(F.explode(new_df.splits).alias('column1'))
# Keep only tokens that contain a run of 7 digits.
new_df = new_df.filter(new_df.column1.rlike(r'\d{7}'))
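For reference, on a newer Spark (3.1+, where the SQL function regexp_extract_all is available) the same result can be sketched in a single select. This is only a sketch based on the example above, and it goes through F.expr so it doesn't depend on a newer Python wrapper for that function:

from pyspark.sql import functions as F

# Sketch only, assuming Spark 3.1+ where the SQL function regexp_extract_all exists.
# regexp_extract_all returns an array of every match in COLUMN2; explode turns that
# array into one row per 7-digit number. Nulls and rows with no matches produce
# nothing and are dropped by explode.
numbers_df = df.select(
    F.explode(
        F.expr("regexp_extract_all(COLUMN2, '1[0-9]{6}', 0)")
    ).alias('column1')
)

The pattern '1[0-9]{6}' encodes the "always 7 digits, always starts with 1" rule; it could be loosened to '[0-9]{7}' if that assumption ever changes.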