To see an example of Typed Dependencies, check out the end of the output from this online example.
When I run the Stanford Parser on the command line using lexparser.sh, it outputs both the tree and the typed dependencies.
But when I run it using nltk.parse.stanford, all I get is the tree, with no typed dependencies. I can modify it to return the dependencies by setting -outputFormat="penn,typedDependencies" as documented here, but that would just give me the raw text. I wonder if somebody else has already done the work to process this into a more useful form.
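For what it's worth, the raw typedDependencies text looks easy enough to parse by hand. Here's a minimal sketch (the regex and the `Dependency` tuple are my own, not from NLTK or any of these libraries) that turns lines like `nsubj(loves-2, John-1)` into structured tuples:

```python
import re
from collections import namedtuple

# One typed dependency: relation plus governor/dependent words and indices.
Dependency = namedtuple("Dependency", "rel gov gov_idx dep dep_idx")

# Matches lines like: nsubj(loves-2, John-1)
DEP_RE = re.compile(r"(\S+)\((.+)-(\d+), (.+)-(\d+)\)")

def parse_typed_dependencies(text):
    """Turn the parser's typedDependencies output into Dependency tuples."""
    deps = []
    for line in text.splitlines():
        m = DEP_RE.match(line.strip())
        if m:
            rel, gov, gov_idx, dep, dep_idx = m.groups()
            deps.append(Dependency(rel, gov, int(gov_idx), dep, int(dep_idx)))
    return deps

example = """\
nsubj(loves-2, John-1)
root(ROOT-0, loves-2)
dobj(loves-2, Mary-3)
"""
for d in parse_typed_dependencies(example):
    print(d.rel, d.gov, d.dep)
# prints:
# nsubj loves John
# root ROOT loves
# dobj loves Mary
```

That at least gets you (relation, governor, dependent) triples you can filter or build a graph from, without any extra dependencies.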
The Stanford CoreNLP website lists several extensions for Python, though most of them seem like related forks. From glancing at the source code, this one looks promising for dealing with dependencies, though it is totally undocumented and I'm not sure how to use it.
Several of these libraries offer to run as a service and communicate over HTTP. I wonder whether that would be faster than the way NLTK invokes the parser, since it might avoid starting a new JVM for each parse.
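If one of them does talk to a CoreNLP server over HTTP, the request shape is roughly this, a sketch assuming the server's documented interface (a JSON-encoded `properties` query parameter, raw text as the POST body); the host, port, and annotator list here are my assumptions:

```python
import json
import urllib.parse
import urllib.request

def build_corenlp_request(text, host="http://localhost:9000"):
    """Build a POST request for a locally running CoreNLP server.

    The annotators and output format go in a JSON-encoded 'properties'
    query parameter; the raw text to annotate is the request body.
    """
    props = {"annotators": "tokenize,ssplit,pos,depparse",
             "outputFormat": "json"}
    url = host + "/?properties=" + urllib.parse.quote(json.dumps(props))
    return urllib.request.Request(url, data=text.encode("utf-8"))

req = build_corenlp_request("John loves Mary.")
print(req.get_full_url())
# Actually sending it (only works if a server is listening on that port):
#     result = json.load(urllib.request.urlopen(req))
```

The appeal is that the JVM starts once when the server launches, and every subsequent parse is just an HTTP round trip.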
I'm not quite sure what the difference between CoreNLP and the Stanford Parser is.
I also found this, though it uses JPype and I wasn't able to get JPype to compile.