When I try to use the SparkGraphComputer to count the number of vertices in a Titan graph on a cluster, I get an error that I have no idea how to deal with. I am using TinkerPop 3.1.1-incubating and Titan 1.1.0-SNAPSHOT in my code; on the cluster I have installed DataStax Community Edition 2.1.11 and spark-1.5.2-bin-hadoop2.6.
I have put together a minimal Java example to reproduce my problem:
private void strippedDown() {
    // a normal Titan cluster
    String titanClusterConfig = "titan-cassandra-test-cluster.properties";
    // a Hadoop graph with Cassandra as input and Gryo as output
    String sparkClusterConfig = "titan-cassandra-test-spark.properties";

    String edgeLabel = "blank";

    // add a graph
    int n = 100;
    Graph titanGraph = GraphFactory.open(titanClusterConfig);
    Vertex superNode = titanGraph.addVertex(T.label, String.valueOf(0));
    for (int i = 1; i < n; i++) {
        Vertex currentNode = titanGraph.addVertex(T.label, String.valueOf(i));
        currentNode.addEdge(edgeLabel, superNode);
    }
    titanGraph.tx().commit();

    // count with Titan (OLTP)
    Long count = titanGraph.traversal().V().count().next();
    System.out.println("The number of vertices in the graph is: " + count);

    // count the graph using the Titan graph computer
    count = titanGraph.traversal(GraphTraversalSource.computer(FulgoraGraphComputer.class)).V().count().next();
    System.out.println("The number of vertices in the graph is: " + count);

    // count the graph using the Spark graph computer
    Graph sparkGraph = GraphFactory.open(sparkClusterConfig);
    count = sparkGraph.traversal(GraphTraversalSource.computer(SparkGraphComputer.class)).V().count().next();
    System.out.println("The number of vertices in the graph is: " + count);
}
The OLTP count and the OLAP count with the FulgoraGraphComputer both return the correct answer. The OLAP count using the SparkGraphComputer, however, throws org.apache.spark.SparkException: Job aborted due to stage failure:
Interestingly, if I run a similar script through the Gremlin console packaged with Titan, I get a different error for what seems to be the same algorithm:
graph = GraphFactory.open('titan-cassandra-test-cluster.properties')
graph.addVertex(T.label,"0")
graph.addVertex(T.label,"1")
graph.addVertex(T.label,"2")
graph.tx().commit()
sparkGraph = GraphFactory.open('titan-cassandra-test-spark.properties')
sparkGraph.traversal(computer(SparkGraphComputer)).V().count()
This throws org.apache.thrift.protocol.TProtocolException: Required field 'keyspace' was not present! Struct: set_keyspace_args(keyspace:null)
twice, but it completes and returns 0, which is incorrect.
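My suspicion (just a guess) is that the keyspace configured under the titanmr.* prefix never reaches the underlying thrift client. A quick self-contained check, with the relevant lines of my Spark properties file inlined as a string, shows that only the prefixed key exists and the raw InputFormat key does not:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class KeyspaceCheck {
    // inline copy of the relevant lines from titan-cassandra-test-spark.properties
    static final String PROPS =
            "titanmr.ioformat.conf.storage.backend=cassandrathrift\n"
          + "titanmr.ioformat.conf.storage.cassandra.keyspace=mindmapstest\n";

    static String lookup(String key) throws IOException {
        Properties p = new Properties();
        p.load(new StringReader(PROPS));
        return p.getProperty(key);
    }

    public static void main(String[] args) throws IOException {
        // the Titan-prefixed keyspace key is present...
        System.out.println(lookup("titanmr.ioformat.conf.storage.cassandra.keyspace"));
        // ...but the raw key a Cassandra InputFormat would read is not
        System.out.println(lookup("cassandra.input.keyspace"));
    }
}
```

I don't know whether Titan is supposed to translate the prefixed key into the raw one at runtime, which is part of what I'm asking.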
I am aware of this article in the mailing list, but I am having trouble understanding it and solving the issue. Could anyone explain what is happening and how to fix it? I have pasted my configs below.
gremlin.graph=com.thinkaurelius.titan.core.TitanFactory
storage.backend=cassandrathrift
storage.hostname=node1
storage.cassandra.keyspace=mindmapstest
storage.cassandra.replication-factor=3
cache.db-cache=true
cache.db-cache-clean-wait=20
cache.db-cache-time=180000
cache.db-cache-size=0.5
and
gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphInputFormat=com.thinkaurelius.titan.hadoop.formats.cassandra.CassandraInputFormat
gremlin.hadoop.graphOutputFormat=org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
gremlin.hadoop.jarsInDistributedCache=true
gremlin.hadoop.inputLocation=none
gremlin.hadoop.outputLocation=none
####################################
# Cassandra Cluster Config #
####################################
titanmr.ioformat.conf.storage.backend=cassandrathrift
titanmr.ioformat.conf.storage.cassandra.keyspace=mindmapstest
titanmr.ioformat.conf.storage.hostname=node1,node2,node3
####################################
# SparkGraphComputer Configuration #
####################################
spark.master=spark://node1:7077
spark.executor.memory=250m
spark.serializer=org.apache.spark.serializer.KryoSerializer
####################################
# Apache Cassandra InputFormat configuration
####################################
cassandra.input.partitioner.class=org.apache.cassandra.dht.Murmur3Partitioner
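If the 'keyspace' error means the titanmr.ioformat.conf.* keys are not being translated into the raw Cassandra InputFormat keys, would it help to set the raw key explicitly as well? For example (an untested guess; cassandra.input.keyspace is the key Cassandra's Hadoop ConfigHelper reads):

```properties
# untested guess: expose the keyspace under the raw InputFormat key too
cassandra.input.keyspace=mindmapstest
```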
EDIT: this script reproduces the error:
graph = TitanFactory.open('titan-cassandra-test-cluster.properties')
superNode = graph.addVertex(T.label,"0")
for (i in 1..100) {
    currentNode = graph.addVertex(T.label, i.toString())
    currentNode.addEdge("blank", superNode)
}
graph.tx().commit()
graph.traversal().V().count()
graph.traversal(computer()).V().count()
sparkGraph = GraphFactory.open('titan-cassandra-test-spark.properties')
sparkGraph.traversal(computer(SparkGraphComputer)).V().count()