There is absolutely no need to use protobuf with redis; the key is simply to pick a serialization format that will reliably get your data back today, tomorrow, and next year. You could just as well use JSON, XML, etc. In many cases, a single string value is more than sufficient, bypassing serialization completely (unless you count "encoding" as serialization).
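For illustration, here is a minimal sketch in Python using the redis-py client (the client library, host, and key name are my own choices for the example, nothing mandated by redis): a plain JSON string round-trips with no serialization framework beyond the standard library.

```python
import json

import redis

# Assumes a local Redis instance on the default port.
r = redis.Redis(host="localhost", port=6379)

user = {"id": 42, "name": "alice"}
r.set("user:42", json.dumps(user))   # stored as a plain string value

raw = r.get("user:42")               # bytes, or None if the key is missing
restored = json.loads(raw)
assert restored == user
```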
I would usually advise against platform-proprietary serializers, as they won't help you if you need to get the data back in (say) C++ in a year's time, and they are usually less flexible in terms of versioning.
Protobuf is a reasonable choice, as its key features are:
- small output (reduces bandwidth between app and redis, and storage requirements)
- CPU-efficient processing (reduces load in your app)
- designed for version tolerance
- cross-platform
However, other serializers would work too. You could even use plain text and a redis hash, i.e. one hash field per object property. In most cases, though, you want the entire object back, so a simple "get" followed by handing the data to a suitable serialization API is usually more appropriate; both approaches are sketched below.
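A sketch contrasting the two approaches (again Python/redis-py; the key and field names are illustrative only):

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379)

# Hash-per-property: handy when you often read or update a single field
# without touching the rest of the object.
r.hset("user:7", mapping={"id": 7, "name": "alice", "email": "a@example.com"})
name = r.hget("user:7", "name")      # fetches just one property, as bytes

# Whole-object: a single GET plus a serialization API; simpler when you
# almost always want the entire object back.
r.set("user:7:blob", json.dumps({"id": 7, "name": "alice"}))
obj = json.loads(r.get("user:7:blob"))
```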
In our own use of redis, we do happen to use protobuf, but we also do a speculative check: does the protobuf output compress with gzip at all? If it does, we store the gzipped data; if not, we store the original uncompressed data (whichever is smaller), along with a marker to say which it is. A sketch of that idea follows.
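This sketch shows the general shape of the technique in Python; the one-byte marker scheme and function names are my own illustration, not our exact wire format, and `payload` stands in for the already-serialized protobuf bytes.

```python
import gzip

import redis

r = redis.Redis(host="localhost", port=6379)

RAW, GZIP = b"\x00", b"\x01"  # marker byte: which form the stored body takes

def store(key: str, payload: bytes) -> None:
    """payload is the already-serialized (e.g. protobuf) bytes."""
    packed = gzip.compress(payload)
    if len(packed) < len(payload):
        r.set(key, GZIP + packed)     # gzip actually won; keep it
    else:
        r.set(key, RAW + payload)     # compression didn't help; keep the original

def load(key: str) -> bytes:
    blob = r.get(key)
    marker, body = blob[:1], blob[1:]
    return gzip.decompress(body) if marker == GZIP else body
```

The marker costs one byte per value but means the reader never has to guess whether to decompress, and incompressible payloads (already-compact protobuf, encrypted data, images) pay no gzip overhead on the way back out.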