This should look basically the same as counting anything else in Python. spaCy lets you just iterate over the document, and you get back a sequence of Token objects, which you can use to access the annotations.
from __future__ import print_function, unicode_literals
from collections import defaultdict, Counter
import spacy

nlp = spacy.load('en')

# Count how often each word form occurs under each part-of-speech tag.
pos_counts = defaultdict(Counter)
doc = nlp(u'My text here.')
for token in doc:
    pos_counts[token.pos][token.orth] += 1

# Map the integer IDs back to strings for display.
for pos_id, counts in sorted(pos_counts.items()):
    pos = doc.vocab.strings[pos_id]
    for orth_id, count in counts.most_common():
        print(pos, count, doc.vocab.strings[orth_id])
Note that the .orth and .pos attributes are integers. You can get the strings they map to via the .orth_ and .pos_ attributes. The .orth attribute is the unnormalised view of the token; there are also .lower, .lemma etc. attributes, each with a corresponding string view (.lower_, .lemma_, and so on). You might want to bind your own .norm function to do custom string normalisation. See the docs for details.
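For instance, here's a rough sketch of the integer IDs next to their string views, with a hypothetical normalise() helper standing in for whatever normalisation you'd bind (it's not part of spaCy's API):

def normalise(token):
    # Hypothetical custom normalisation: lowercase, and collapse digit strings.
    text = token.lower_
    return u'#NUM' if text.isdigit() else text

doc = nlp(u'My text here.')
for token in doc:
    # Integer IDs on the left, their string views on the right.
    print(token.orth, token.orth_, token.lower_, token.lemma_, normalise(token))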
The integers are useful for your counts because they make your counting program much more memory efficient if you're counting over a large corpus. You could also store the frequency counts in a numpy array, for additional speed and efficiency. If you don't want to bother with this, feel free to count with the .orth_ attribute directly, or use its alias .text.
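As a rough sketch of the numpy idea (this assumes the orth IDs are small sequential integers, as in the spaCy version this snippet targets; newer releases use 64-bit hash IDs, where a Counter keyed by the integer is the safer choice):

import numpy
# Collect the integer word IDs and count them in a dense array.
orth_ids = numpy.array([token.orth for token in doc], dtype=numpy.int64)
counts = numpy.bincount(orth_ids)
# Print the non-zero entries, mapping the IDs back to strings.
for orth_id in counts.nonzero()[0]:
    print(doc.vocab.strings[int(orth_id)], counts[orth_id])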
Note that the .pos attribute in the snippet above gives a coarse-grained set of part-of-speech tags. The richer treebank tags are available on the .tag attribute.
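For example, to see the two tag sets side by side:

doc = nlp(u'My text here.')
for token in doc:
    # Coarse universal POS vs. fine-grained treebank tag.
    print(token.orth_, token.pos_, token.tag_)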