I am developing a semantic web application and using the JUNG library to apply some graph calculations, such as closeness, betweenness, etc. I was able to compute the betweenness value for each node in my RDF graph and normalize it too. However, this is not the case with ClosenessCentrality, as I got a NaN (not a number) score for some nodes. Below is my code:
int n = graph.getVertexCount(); // number of vertices
double d = (double) (n - 1) * (n - 2) / 2.0d; // normalization factor for the node values
System.out.println("Applying ClosenessCentrality");
ClosenessCentrality<RDFNode, Statement> closeness =
        new ClosenessCentrality<RDFNode, Statement>(graph);
double[] closenessValues = new double[n];
Collection<RDFNode> closenessVertices = graph.getVertices();
int i = 0;
for (RDFNode vertex : closenessVertices)
    closenessValues[i++] = closeness.getVertexScore(vertex) / d; // normalized score for each node
for (double score : closenessValues)
    System.out.println(score); // print all values
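To narrow down where the NaN might come from, here is a minimal, JUNG-free sketch of the average-distance form of closeness centrality (plain Java with a hand-rolled BFS; the tiny `edges` graph and the class name are made up for illustration, and this is only my reconstruction, not JUNG's actual implementation). My suspicion is that an isolated or unreachable vertex leaves the scorer with no distances to average, and `0 / 0.0` in Java floating-point arithmetic is NaN:

```java
import java.util.*;

public class ClosenessNaNDemo {
    public static void main(String[] args) {
        // Undirected toy graph: vertices 0-1-2 form a path, vertex 3 is isolated.
        Map<Integer, List<Integer>> edges = new HashMap<>();
        edges.put(0, Arrays.asList(1));
        edges.put(1, Arrays.asList(0, 2));
        edges.put(2, Arrays.asList(1));
        edges.put(3, new ArrayList<Integer>()); // isolated vertex

        for (int v : edges.keySet()) {
            // BFS shortest-path distances from v to every reachable vertex.
            Map<Integer, Integer> dist = new HashMap<>();
            dist.put(v, 0);
            Deque<Integer> queue = new ArrayDeque<>();
            queue.add(v);
            while (!queue.isEmpty()) {
                int u = queue.poll();
                for (int w : edges.get(u)) {
                    if (!dist.containsKey(w)) {
                        dist.put(w, dist.get(u) + 1);
                        queue.add(w);
                    }
                }
            }
            // Closeness as inverse average distance to the reachable others.
            double sum = 0.0;
            int reachableOthers = dist.size() - 1; // exclude v itself
            for (int dd : dist.values()) sum += dd;
            double closeness = reachableOthers / sum; // 0 / 0.0 -> NaN for vertex 3
            System.out.println(v + " -> " + closeness);
        }
    }
}
```

With this sketch, vertices 0-2 get finite scores, but the isolated vertex 3 prints NaN, which matches the behavior I see on my RDF graph.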
So, as I mentioned before, for some reason I get a NaN score for some nodes. I suspect there is a bug in the ClosenessCentrality algorithm implementation, since I got NaN. Any explanation, guys? Am I doing something wrong?
Thanks for the help.