Although it has been a while since this question was posted, I thought I'd pitch into this thread with my experience.
Regarding your concern about processing time: it depends on how much processing you are doing with your data. Are you doing all of the above calculations in a single MR job, or in multiple MR jobs within the same program? If the latter, it is quite possible that it will take time. Also, how many iterations are you running to calculate PageRank? What is the size of your cluster?
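For context, the core of PageRank is a simple iterative computation, and each iteration roughly corresponds to one MR job in a Hadoop-based implementation, which is where the time goes. A minimal single-machine sketch (the toy graph, damping factor, and iteration count here are just illustrative defaults):

```python
# Minimal PageRank power iteration; each pass over the graph
# corresponds roughly to one MapReduce job on Hadoop.

def pagerank(graph, damping=0.85, iterations=20):
    """graph: dict mapping node -> list of outgoing neighbors."""
    n = len(graph)
    ranks = {node: 1.0 / n for node in graph}
    for _ in range(iterations):
        # Every node starts each round with the teleport share.
        new_ranks = {node: (1.0 - damping) / n for node in graph}
        for node, neighbors in graph.items():
            if neighbors:
                share = ranks[node] / len(neighbors)
                for neighbor in neighbors:
                    new_ranks[neighbor] += damping * share
        ranks = new_ranks
    return ranks

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
# "C" is linked from both "A" and "B", so it ends up ranked highest.
```

On a real cluster each iteration rereads and rewrites the whole graph through HDFS, which is exactly the overhead that makes many-iteration algorithms like PageRank slow in plain MapReduce.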
I would go with Masoud's answer of selecting Giraph for graph processing, and I would like to add a bit more. There are several reasons why graph processing is hard with the MapReduce programming model:
- You would need to partition your graph, since it won't fit on a single machine. (Range partitioning keeps neighborhoods together; for example, if you had nodes/users from 5 different universities, you would most likely want all nodes from a single university on the same machine.)
- You might need to replicate your data.
- You need to reduce cross-partition communication.
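To illustrate the partitioning point above: a toy comparison of hash partitioning versus range partitioning, counting how many edges end up crossing partition boundaries (the node numbering, community sizes, and two-worker setup are invented for illustration):

```python
# Toy comparison: hash vs. range partitioning of a graph across 2 workers.
# Edges whose endpoints land on different workers require network traffic.

def cross_partition_edges(edges, assign):
    """Count edges whose endpoints fall in different partitions."""
    return sum(1 for u, v in edges if assign(u) != assign(v))

# Nodes 0-4 form one dense community, nodes 5-9 another,
# with a single edge bridging the two.
edges = [(i, j) for i in range(5) for j in range(i + 1, 5)]
edges += [(i, j) for i in range(5, 10) for j in range(i + 1, 10)]
edges.append((4, 5))  # the lone bridge edge between communities

# Range partitioning keeps each community on one worker.
range_cut = cross_partition_edges(edges, lambda n: n // 5)
# Hash partitioning scatters nodes arbitrarily across workers.
hash_cut = cross_partition_edges(edges, lambda n: n % 2)
# Range partitioning cuts only the bridge edge; hash partitioning
# cuts many intra-community edges.
```

This is why a locality-aware partitioning scheme (like the university example above) matters so much: it directly reduces the shuffle traffic between machines.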
Coming back to your second concern: not having any knowledge of Apache Twister, I would go for Apache Giraph, as it is built specifically for large-scale distributed graph algorithms, and the framework handles all of the heavy lifting that comes with them. That lifting is needed because of the nature of graph algorithms: traversing a graph, passing information along edges to other nodes, and so on.
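To give a feel for the model: Giraph follows the Pregel-style "think like a vertex" approach, where in each superstep every vertex processes the messages sent to it in the previous superstep and sends new messages along its out-edges. Here is a minimal single-machine simulation of that loop (this is not the actual Giraph API; the function names and the max-propagation example are mine, just to show the idea):

```python
# Single-machine simulation of the Pregel/Giraph superstep model:
# each superstep, every vertex consumes incoming messages and
# emits new messages along its out-edges.
from collections import defaultdict

def run_supersteps(graph, compute, initial, supersteps):
    """graph: node -> list of out-neighbors.
    compute(state, messages) -> (new_state, outgoing_message).
    initial: node -> starting state."""
    state = dict(initial)
    inbox = defaultdict(list)
    for _ in range(supersteps):
        outbox = defaultdict(list)
        for node in graph:
            new_state, msg = compute(state[node], inbox[node])
            state[node] = new_state
            for neighbor in graph[node]:
                outbox[neighbor].append(msg)
        inbox = outbox  # messages become visible next superstep
    return state

# Example: propagate the maximum value through a 3-node cycle.
graph = {"A": ["B"], "B": ["C"], "C": ["A"]}
initial = {"A": 3, "B": 7, "C": 1}
compute = lambda s, msgs: (max([s] + msgs),) * 2  # new state = message = max seen
final = run_supersteps(graph, compute, initial, supersteps=3)
# After 3 supersteps every vertex holds the global maximum, 7.
```

In Giraph itself you would extend a computation class and override its per-vertex compute method, but the superstep/message-passing structure is the same, and it maps onto graph algorithms like PageRank far more naturally than chained MR jobs do.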
I recently used Giraph for one of my big data projects and it was a great learning experience. You should look into it, if I am not replying too late.
You could refer to these slides for a detailed explanation.