Sounds possible. Kafka Streams uses RocksDB as its default storage engine, which can spill to disk, so a properly scaled-out app can hold very large state. One main consideration is how many shards you need for good performance: besides the actual storage requirement, the input data rate also needs to be taken into account.
Also note that because RocksDB spills to disk, if an instance goes down and you restart it on the same machine, it does not need to re-load the state from the Kafka changelog topic, as the local state will still be there. (For Kubernetes deployments, using StatefulSets helps in this case.) In general, if you have large state and want to avoid state migration (i.e., trade some partial/temporary unavailability for a more "stable" deployment), you should consider using static group membership.
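To illustrate, static group membership is enabled by giving each instance a stable `group.instance.id`. A minimal config sketch (the application id, bootstrap server, timeout value, and state directory below are placeholder assumptions; use string keys here so the snippet needs no Kafka jars on the classpath, though in a real app you would use the `StreamsConfig` constants):

```java
import java.util.Properties;

public class Example {
    // Build a Streams config with static membership for one instance.
    // instanceId must be stable across restarts of that instance.
    static Properties streamsConfig(String instanceId) {
        Properties props = new Properties();
        props.put("application.id", "my-streams-app");       // hypothetical app id
        props.put("bootstrap.servers", "localhost:9092");    // hypothetical broker
        // Static membership: a stable id lets a restarted instance rejoin
        // without triggering a rebalance (and thus state migration).
        props.put("group.instance.id", instanceId);
        // Give the instance time to come back before the broker reassigns
        // its partitions; tune to your expected restart time.
        props.put("session.timeout.ms", "300000");
        // Pin the RocksDB state directory so local state survives restarts.
        props.put("state.dir", "/var/lib/kafka-streams");
        return props;
    }

    public static void main(String[] args) {
        Properties p = streamsConfig("worker-1");
        System.out.println(p.getProperty("group.instance.id"));
    }
}
```

With a StatefulSet, the pod's ordinal name (e.g. `worker-0`, `worker-1`) is a natural choice for the instance id, since it is stable across pod restarts.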
For a size estimation, note that the number of input topic partitions determines the maximum number of instances you can use to scale out your application. Thus, you need to configure your input topic with enough partitions. For client-side storage estimation, check out the capacity planning docs: https://docs.confluent.io/current/streams/sizing.html
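As a rough back-of-envelope sketch of that sizing step (the throughput numbers here are made-up assumptions; measure your own per-partition throughput): you need at least enough partitions to carry the input rate, and at least one partition per instance you plan to run, since an instance without a partition sits idle.

```java
public class Example {
    // Rough partition-count estimate. All inputs are assumptions you
    // must measure for your own workload; this is not a benchmark.
    static int partitionsNeeded(double inputMbPerSec,
                                double perPartitionMbPerSec,
                                int plannedInstances) {
        // Partitions needed to sustain the input data rate.
        int byThroughput = (int) Math.ceil(inputMbPerSec / perPartitionMbPerSec);
        // Also need at least one partition per instance, or some
        // instances would receive no work.
        return Math.max(byThroughput, plannedInstances);
    }

    public static void main(String[] args) {
        // e.g. 50 MB/s input, ~10 MB/s per partition, 8 planned instances
        System.out.println(partitionsNeeded(50.0, 10.0, 8));
    }
}
```

Since partitions are cheap relative to repartitioning a live topic later, it is common to over-provision partitions up front to leave headroom for scaling out.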