I'm still unsure about what you want, but assuming Glenn Jackman's interpretation is correct, you can take his idea a bit further so that you can search for a given field name. E.g.,
awk -v FN="xxxx" -F '"' '{
    i = 1
    while (i <= NF-2) {
        if ($i == FN) {
            # Found the field name: its value is two fields further on.
            print $(i+2) "\t" $0
            next
        } else {
            i++
        }
    }
}' filename | sort | cut -d $'\t' -f 2-
Here you would replace "xxxx" with "name", "age" or whatever field you want to use for sorting.
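For example, to sort on the age field (filename being whatever your input file is called), the same loop can also be written a bit more compactly with for:

awk -v FN="age" -F '"' '{
    for (i = 1; i <= NF-2; i++)
        if ($i == FN) { print $(i+2) "\t" $0; next }
}' filename | sort | cut -d $'\t' -f 2-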
This script is not foolproof, of course. Fields cannot contain tab characters, and no field value can be exactly equal to a key name such as "name" or "age".
Edit: I will briefly describe what this script does. Basically, awk takes a given field name and, for each input line, extracts that field's value. It then outputs the same line, but with the value prepended to it and separated from it by a tab character. This output is piped to the sort command, which sorts lexicographically; since the prepended value comes first on each line, the lines are effectively sorted by the field value you selected. The sorted output is then piped to the cut command, which splits each line on the tab character, discards the prepended sort key, and prints only the rest (your original lines, now in the order you wanted).
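To make this concrete, suppose the input file contains these two lines (the second one is invented here just for illustration):

a:2:{s:4:"name";s:12:"Jim Morrison";s:3:"age";s:2:"25";}
a:2:{s:4:"name";s:10:"Jane Smith";s:3:"age";s:2:"21";}

With FN="age", the awk stage prints:

25	a:2:{s:4:"name";s:12:"Jim Morrison";s:3:"age";s:2:"25";}
21	a:2:{s:4:"name";s:10:"Jane Smith";s:3:"age";s:2:"21";}

sort then swaps the two lines (21 sorts before 25), and cut strips everything up to and including the tab, leaving the original lines sorted by age:

a:2:{s:4:"name";s:10:"Jane Smith";s:3:"age";s:2:"21";}
a:2:{s:4:"name";s:12:"Jim Morrison";s:3:"age";s:2:"25";}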
Some more details:
In AWK the -v switch defines a variable, in this case named FN (this option is standard POSIX awk, not specific to the Gawk variant). The -F switch defines a field separator, which is used to split every line that AWK reads from its input file. The main block between curly braces is the AWK program, which is run once for every input line. Each field of the line, as split according to the -F switch, is referenced with $1, $2, ..., $(NF-1), $NF. (NF is a builtin variable that is always equal to the number of fields on the current line.)
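As a minimal illustration of these three pieces together (the variable name label is just a placeholder):

awk -v label="this line has" -F '"' '{ print label, NF, "fields" }' filename

For the example line discussed below, this would print: this line has 9 fields.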
As I said, AWK reads the input line by line and runs this program for each one. For example, if it takes this line:
a:2:{s:4:"name";s:12:"Jim Morrison";s:3:"age";s:2:"25";}
Then it splits it on the double quotes, like this:
$1 = a:2:{s:4:
$2 = name
$3 = ;s:12:
$4 = Jim Morrison
$5 = ;s:3:
$6 = age
$7 = ;s:2:
$8 = 25
$9 = ;}
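You can reproduce this split yourself with a throwaway one-liner (not part of the sorting pipeline, just a way to inspect the fields):

echo 'a:2:{s:4:"name";s:12:"Jim Morrison";s:3:"age";s:2:"25";}' |
awk -F '"' '{ for (i = 1; i <= NF; i++) print "$" i " = " $i }'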
The script then iterates over the fields searching for an exact match on FN. So if, for example, we have defined FN="age", the loop stops at $6; the script then prints $8 (i.e., $(6+2), which is "25" here), followed by a tab character and then the whole input line itself ($0). Then the next line is read and the whole process begins again.
This script relies on the assumption that the key names do not occur as field values anywhere else, and that assumption is not easy to work around. You need some additional insight into how the input file is structured if you want to relax it. For most purposes such insight is achievable, because the same ambiguity would also trip up any serialization parser. For example, if you know that the field name (say, "age") can appear verbatim inside other fields, but only in fields that come after the age field itself, then this script is fine as-is, because it stops at the first match. In the given example, it would be strange to have a name field whose value is exactly "age" (like that, with no capitalization, etc.). Anyway, parsing ambiguous input is a difficult problem and entire books deal with it, so I won't summarize it here. Google for "compiler theory" if you're interested.
One such insight might be the one you mention: knowing the order of the fields. In that case, this whole script is not much better than Glenn's, and you could simply adapt his shorter script to pick whichever field you want. For example, consider:
awk -F '"' '{print $8 "\t" $0}' filename |
sort |
cut -d $'\t' -f 2-
This pipeline is almost identical to the one Glenn proposed; it just sorts on the eighth field ("age") instead of the fourth ("name").
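One caveat, since age is numeric: plain sort compares strings, so an age of "9" would sort after "25". If that matters for your data, you can ask sort for a numeric comparison on the first tab-separated field (-t, -k and -n are standard POSIX sort options):

awk -F '"' '{print $8 "\t" $0}' filename |
sort -t $'\t' -k 1,1n |
cut -d $'\t' -f 2-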