
I have these labels and features:

labels   features
[2.3]    1   5.1 7.2 5 5 5
[5.4]    4.5 3   2   4 6 4
[6.3]    3.3 1.3 5.4 6

I have more than 10K entries like this. How can I use logistic regression to train a model in Spark? I know we can use linear regression, but I still want to try logistic regression and check its performance. So far I have mapped those labels to discrete classes, like (2.3 -> 0, 5.4 -> 1, 6.3 -> 2), and found 11101 unique labels, but the computation is taking a lot of time.
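For reference, the mapping described above (each unique continuous label assigned a discrete class id) can be sketched in plain Python like this (the variable names are illustrative, not from the question):

```python
# Sample labels like the question's first column, with repeats.
labels = [2.3, 5.4, 6.3, 2.3, 5.4]

# dict.fromkeys deduplicates while preserving first-seen order,
# so each unique label gets the next integer id: 2.3 -> 0, 5.4 -> 1, 6.3 -> 2.
label_to_index = {lab: i for i, lab in enumerate(dict.fromkeys(labels))}

# Replace every label with its class id.
indexed = [label_to_index[lab] for lab in labels]

print(label_to_index)  # {2.3: 0, 5.4: 1, 6.3: 2}
print(indexed)         # [0, 1, 2, 0, 1]
```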

Kiran Gali
  • Logistic regression is used for classification problems, so it doesn't make much sense to use logistic regression if your labels are continuous. – antonioACR1 Sep 26 '18 at 15:42

1 Answer


The labels you have appear to be continuous variables (not discrete). As far as I know, logistic regression in Spark can only be used for classification, not for regression (https://spark.apache.org/docs/2.2.0/ml-classification-regression.html).

Guasacaca