I have a DataFrame called 'df' like the following:

+-------+-------+-------+
|  Atr1 |  Atr2 |  Atr3 |
+-------+-------+-------+
|   A   |   A   |   A   |
+-------+-------+-------+
|   B   |   A   |   A   |
+-------+-------+-------+
|   C   |   A   |   A   |
+-------+-------+-------+

I want to add a new column to it with incremental values and get the following updated DataFrame:

+-------+-------+-------+-------+
|  Atr1 |  Atr2 |  Atr3 |  Atr4 |
+-------+-------+-------+-------+
|   A   |   A   |   A   |   1   |
+-------+-------+-------+-------+
|   B   |   A   |   A   |   2   |
+-------+-------+-------+-------+
|   C   |   A   |   A   |   3   |
+-------+-------+-------+-------+

How can I do this?

jartymcfly

1 Answer

If you only need incremental values (like an ID), and there is no constraint that the numbers be consecutive, you can use monotonically_increasing_id(). The only guarantee this function gives is that the values are increasing from row to row; the actual values themselves can differ between executions.

from pyspark.sql.functions import monotonically_increasing_id

# Adds a column of increasing (but not necessarily consecutive) 64-bit IDs
df = df.withColumn("Atr4", monotonically_increasing_id())
Shaido
    Thanks! Nice solution! – jartymcfly Sep 14 '17 at 09:18
  • Note that this answer does address the question. However, since the example specifies a dataframe "like the following", one might assume it extends to an arbitrary number of rows with consecutive numbers. `monotonically_increasing_id()` does not produce consecutive numbers, only monotonically increasing ones, so that assumption breaks down with a larger dataframe. – Jomonsugi Jul 18 '19 at 20:58
  • @Jomonsugi: That is correct. I highlighted that part of the answer to make this constraint more obvious. – Shaido Jul 19 '19 at 01:55