Elaborating on VoiceOfUnreason's answer: if there's a chance you'll ever want to replay all events (and there almost certainly is, considering that all manner of operational snafus can result in a need to replay all events after some point in time), then the projection processes will need access to some durable log of the events.
The journal table serving as the write model is one example of such a durable log (in this case, a log represented as a table).
Another possibility is a message bus that events are published to, if that message bus happens to meet the durability requirements. Kafka is potentially close enough, though it has limitations to be aware of: by default it expires old records after a retention period, so keeping events from the beginning of time requires unlimited retention or tiered storage. Pulsar doesn't have the same limitations in this area, since it can offload older segments to object storage natively. Things that are more MQ-like probably won't be usable for this, because a traditional message queue deletes a message once it's consumed, which rules out replay.
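As an illustration, here's a hedged sketch of creating a Kafka topic with indefinite retention via the Java AdminClient; the broker address, topic name, and partition/replication counts are placeholders:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

public class CreateEventTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder broker

        try (AdminClient admin = AdminClient.create(props)) {
            // retention.ms=-1 and retention.bytes=-1 tell Kafka never to expire
            // records by age or size, so the topic behaves as a durable log.
            NewTopic events = new NewTopic("account-events", 12, (short) 3)
                    .configs(Map.of("retention.ms", "-1", "retention.bytes", "-1"));
            admin.createTopics(Collections.singleton(events)).all().get();
        }
    }
}
```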
Any stream of the events is itself a projection of the events emitted by the aggregate when processing commands: a projection into a durable log of the events is thus perhaps the simplest projection there is. Accordingly, if projectors might not have access to the journal table (or the journal table aggressively purges events after snapshotting) and the message bus is not durable, the first projector can be one which consumes events from the message bus and writes them to a durable log for the other projectors to consume. If this projector is the only one which acks to the message bus, it can ensure that every event eventually makes it to the durable log. This "ur-projector" is likely to be so simple that it never needs to evolve.
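To make that concrete, here's a minimal sketch of such an ur-projector. The `MessageBus` and `Delivery` types are hypothetical stand-ins for whatever bus you consume from, and a local append-only file stands in for the durable log:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/** Hypothetical minimal message-bus contract with deliver-then-ack semantics. */
interface MessageBus {
    Delivery next() throws InterruptedException;  // blocks until a message arrives
    void ack(Delivery d);                         // bus may redeliver anything unacked
}

record Delivery(String eventId, byte[] payload) {}

/** The "ur-projector": append each event to a durable log, ack only afterwards. */
class UrProjector {
    private final MessageBus bus;
    private final Path log;

    UrProjector(MessageBus bus, Path log) {
        this.bus = bus;
        this.log = log;
    }

    void run() throws Exception {
        while (true) {
            Delivery d = bus.next();
            // Append before acking: if we crash between the write and the ack,
            // the bus redelivers, so the worst case is a duplicate append
            // (downstream projectors can dedupe by eventId).
            String line = d.eventId() + "\t"
                    + new String(d.payload(), StandardCharsets.UTF_8) + "\n";
            Files.write(log, line.getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND,
                    StandardOpenOption.SYNC);
            bus.ack(d);  // ack only after the event is durably written
        }
    }
}
```

The key design point is the ordering: the write to the durable log happens before the ack, so an unacked (and therefore redeliverable) message is the only kind that can be missing from the log.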
You can take this a step further and have a hierarchy of projectors, each effectively taking the durable log published by another projector and building a further durable log of the same events. For instance, you might have a retention policy under which only the most recent 90 days of events are guaranteed to be in a particular Kafka cluster, while events from the beginning of time are kept in object storage like S3; this can be accomplished with a projector from Kafka to S3.
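Here's a hedged sketch of such a Kafka-to-S3 projector using the standard Kafka consumer and the AWS SDK v2. The topic, group id, and bucket names are placeholders, and a single-partition topic is assumed so that the last offset gives a simple object key:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class KafkaToS3Projector {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "s3-archiver");        // placeholder group id
        props.put("auto.offset.reset", "earliest");  // archive from the start
        props.put("enable.auto.commit", "false");    // commit only after the S3 write
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             S3Client s3 = S3Client.create()) {
            consumer.subscribe(List.of("account-events"));  // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                if (records.isEmpty()) continue;
                StringBuilder batch = new StringBuilder();
                long lastOffset = -1;
                for (ConsumerRecord<String, String> r : records) {
                    batch.append(r.value()).append('\n');
                    lastOffset = r.offset();
                }
                // Keying objects by offset means a replayed batch overwrites the
                // same object rather than duplicating events in the archive.
                String key = "events/" + String.format("%020d", lastOffset) + ".ndjson";
                s3.putObject(
                        PutObjectRequest.builder().bucket("event-archive").key(key).build(),
                        RequestBody.fromString(batch.toString()));
                consumer.commitSync();  // advance offsets only after S3 accepted the batch
            }
        }
    }
}
```

The 90-day guarantee itself lives in the Kafka topic's retention settings; this projector's only job is to make sure everything is copied to S3 before it ages out.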