
I am running Spark 1.1.0 with PySpark.

When I run the example taken straight from the documentation:

from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.classification import SVMWithSGD
import array

data = [
    LabeledPoint(0.0, [0.0]),
    LabeledPoint(1.0, [1.0]),
    LabeledPoint(1.0, [2.0]),
    LabeledPoint(1.0, [3.0])
]

svm = SVMWithSGD.train(sc.parallelize(data))
svm.predict(array([1.0])) > 0

I get an error:

TypeError                                 Traceback (most recent call last)
<ipython-input-5-68bcda022b28> in <module>()
     12 
     13 svm = SVMWithSGD.train(sc.parallelize(data))
---> 14 svm.predict(array([1.0])) > 0

TypeError: 'module' object is not callable

What could be the issue?

zero323
poiuytrez

1 Answer


I was importing the wrong `array`. The example expects NumPy's `array` function, but I had imported the standard library `array` module:

import array

instead of

from numpy import array
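
A minimal sketch (standard library only, no Spark required) of why the error occurs: `import array` binds the name `array` to the stdlib module, which is not callable, so `array([1.0])` raises the TypeError from the traceback. NumPy's `array`, by contrast, is a function:

```python
import array

# The stdlib `array` name refers to a module, so calling it fails --
# this is exactly the TypeError from the question.
try:
    array([1.0])
except TypeError as e:
    print(e)  # 'module' object is not callable

# The stdlib module is used through its array.array type instead:
a = array.array('d', [1.0])
print(a[0])  # 1.0
```

With `from numpy import array`, `array([1.0])` returns a NumPy ndarray, which is what `svm.predict` expects in the documentation example.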
poiuytrez