I followed the QuickStart guide at http://predictionio.incubator.apache.org/templates/recommendation/quickstart/

I installed PredictionIO; the EventServer works fine, a new app was created successfully, GET and POST requests work, and importing the sample data also succeeded.

pio build

completed without errors, but when I run

pio train

I get this error:
[INFO] [Engine] Extracting datasource params...
[INFO] [WorkflowUtils$] No 'name' is found. Default empty String will be used.
[INFO] [Engine] Datasource params: (,DataSourceParams(MyApp1,None))
[INFO] [Engine] Extracting preparator params...
[INFO] [Engine] Preparator params: (,Empty)
[INFO] [Engine] Extracting serving params...
[INFO] [Engine] Serving params: (,Empty)
[WARN] [Utils] Your hostname, damiano-asus resolves to a loopback address: 127.0.0.1; using 10.0.10.150 instead (on interface wlp3s0)
[WARN] [Utils] Set SPARK_LOCAL_IP if you need to bind to another address
[INFO] [Remoting] Starting remoting
[INFO] [Remoting] Remoting started; listening on addresses :[akka.tcp://sparkDriver@10.0.10.150:33231]
[WARN] [MetricsSystem] Using default name DAGScheduler for source because spark.app.id is not set.
[INFO] [Engine$] EngineWorkflow.train
[INFO] [Engine$] DataSource: damiano.company.DataSource@483b0690
[INFO] [Engine$] Preparator: damiano.company.Preparator@fb0a08c
[INFO] [Engine$] AlgorithmList: List(damiano.company.ALSAlgorithm@6a5e167a)
[INFO] [Engine$] Data sanity check is on.
[INFO] [Engine$] damiano.company.TrainingData does not support data sanity check. Skipping check.
[INFO] [Engine$] damiano.company.PreparedData does not support data sanity check. Skipping check.
[WARN] [BLAS] Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
[WARN] [BLAS] Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
[WARN] [LAPACK] Failed to load implementation from: com.github.fommil.netlib.NativeSystemLAPACK
[WARN] [LAPACK] Failed to load implementation from: com.github.fommil.netlib.NativeRefLAPACK
[Stage 29:> (0 + 0) / 4][ERROR] [Executor] Exception in task 1.0 in stage 40.0 (TID 137)
[ERROR] [Executor] Exception in task 0.0 in stage 40.0 (TID 136)
[ERROR] [Executor] Exception in task 3.0 in stage 40.0 (TID 139)
[ERROR] [Executor] Exception in task 2.0 in stage 40.0 (TID 138)
[WARN] [TaskSetManager] Lost task 1.0 in stage 40.0 (TID 137, localhost): java.lang.StackOverflowError
at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2901)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1700)
I guess this is due to the JVM heap/stack size? Does anyone know how to fix this? Thank you.
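From what I've read, arguments after `--` on the `pio train` command line are forwarded to `spark-submit`, so one thing I was considering is raising the JVM thread stack size for both the driver and the executors. The exact sizes below are guesses, not values I know to be correct:

```shell
# Everything after `--` is passed through to spark-submit.
# -Xss raises the per-thread stack size (default is often 512k-1m);
# spark.executor.extraJavaOptions applies the same flag to executor JVMs.
pio train -- \
  --driver-memory 4g \
  --driver-java-options "-Xss8m" \
  --conf "spark.executor.extraJavaOptions=-Xss8m"
```

Would that be the right approach, or is there a better fix (e.g. lowering numIterations in engine.json)?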