Does Unicode store stroke count information about Chinese, Japanese, or other stroke-based characters?
5 Answers
A little googling came up with Unihan.zip, a file published by the Unicode Consortium which contains several text files including Unihan_RadicalStrokeCounts.txt
which may be what you want. There is also an online Unihan Database Lookup based on this data.
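For illustration, here is a minimal sketch (not from the answer) of reading stroke counts out of the unzipped Unihan data in Python. It assumes the usual tab-separated "U+XXXX<TAB>kFieldName<TAB>value" layout and a kTotalStrokes field; which of the Unihan text files carries that field varies between Unicode versions, so the file name used below is only an assumption.

# Minimal sketch: build a {character: stroke count} dict from a Unihan data file.
# Assumes the standard three-column tab-separated Unihan format.
def load_stroke_counts(path, field="kTotalStrokes"):
    counts = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue  # skip comment and blank lines
            codepoint, key, value = line.rstrip("\n").split("\t", 2)
            if key == field:
                char = chr(int(codepoint[2:], 16))    # "U+65E5" -> 日
                counts[char] = int(value.split()[0])  # some entries list two values; take the first
    return counts

# Usage (the file name is an assumption; adjust to your Unihan version):
# counts = load_stroke_counts("Unihan_IRGSources.txt")
# counts.get("日")  # -> 4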

In Python there is a library for that:
>>> from cjklib.characterlookup import CharacterLookup
>>> cjk = CharacterLookup('C')
>>> cjk.getStrokeCount(u'日')
4
Disclaimer: I wrote it

Thanks for the great package! Today I got it working with these changes: 1. pip install cjklib3; 2. in "C:\Users\your_name\AppData\Local\Programs\Python\Python310\Lib\site-packages\cjklib\util.py", change "from collections import MutableMapping" to "from collections.abc import MutableMapping"; 3. use "cjk = characterlookup.CharacterLookup('C')" rather than "cjk = CharacterLookup('C')". – Mark K Jun 06 '23 at 03:38
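Putting the answer and that comment together, a rough sketch of the updated usage might look like this. The module path and the 'C' locale argument are taken from the answer and comment above, and whether the util.py patch is still needed depends on the cjklib3 release you install.

# Sketch based on the comment above: cjklib3 on a modern Python,
# assuming the lookup API is unchanged from the original cjklib.
from cjklib import characterlookup

cjk = characterlookup.CharacterLookup('C')  # 'C' locale, as used in the answer above
print(cjk.getStrokeCount('日'))             # expected: 4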
You mean, is it encoded somehow in the actual code point? No. There may well be a table somewhere you can find on the net (or you could create one), but it's not part of the Unicode mandate to store this sort of metadata.

If you want to do character recognition, google HanziDict.
Also take a look at the Unihan data site:
http://www.unicode.org/charts/unihanrsindex.html
You can look up a stroke count and then get the character info. You might be able to build your own lookup.
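If you do end up building your own lookup, one rough sketch is to invert a character-to-stroke-count mapping (for example one built from the Unihan data, such as the hypothetical load_stroke_counts helper sketched earlier) so you can go from a stroke count to the characters that have it:

from collections import defaultdict

# Invert a {character: strokes} mapping into {strokes: [characters]}.
def by_stroke_count(counts):
    index = defaultdict(list)
    for char, strokes in counts.items():
        index[strokes].append(char)
    return index

# index = by_stroke_count(load_stroke_counts("Unihan_IRGSources.txt"))
# index[4]  # -> all characters recorded with four strokes, e.g. 日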

On iOS, UILocalizedIndexedCollation can provide a complete solution.

First, call "UILocalizedIndexedCollation sectionForObject:collationStringSelector:" to get the section index for an object, then look up which section that index maps to in "UILocalizedIndexedCollation.sectionTitles". – Jerry Juang Mar 27 '14 at 17:43