We are currently architecting a system that needs to process large volumes of sensor events.
Since the requirement is to handle millions of individual sensors, I thought the Service Fabric Actor Model would be a perfect fit. The idea was to have one actor responsible for processing the events of one sensor (SensorId = ActorId).
The mapping is trivial, and since we only ever query the data by a specific SensorId, everything for one sensor lives in one place, which enables really fast lookups.
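To make the current design concrete, here is a minimal sketch of the routing we use today (the function name is made up for illustration, not a Service Fabric API):

```python
def actor_id_for(sensor_id: str) -> str:
    """Direct SensorId -> ActorId mapping: one actor owns all state
    for one sensor, so both event routing and queries resolve to
    exactly one actor with a single constant-time lookup."""
    return sensor_id
```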
The problem is that (a few) sensors are now sending data at rates a single actor can no longer handle.
This is where we are stuck: we can't hint the system to distribute the load for specific sensors like Sensor123 and Sensor567 across more actors.
Is there any possibility to solve this with the virtual Actor System provided by Service Fabric?
Update 1:
I don't think we have a problem scaling a single actor as such: we get around 5k messages/s per actor. But some sensors need a target throughput of 50-100k messages/s, so by design (turn-based, single-threaded execution) a single actor will never be able to accomplish this.
So to clarify the initial question: We are looking more or less for a way to automatically partition "some" actors.
(Of course we could create 10 actors for every sensor to partition the load. But that would make lookups inefficient, and we would also need roughly 10x more RAM. That doesn't seem justifiable when only 0.5-1% of the sensors need the higher throughput.)
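To make the workaround we are considering concrete, here is roughly what manual sub-partitioning of only the hot sensors might look like. This is a plain-Python sketch; `HOT_SENSOR_SHARDS` and both helper names are hypothetical, not Service Fabric APIs. Cold sensors keep the plain SensorId = ActorId mapping, so there is no extra RAM or fan-out cost for the 99% that don't need it:

```python
import zlib

# Hypothetical hot-list: only the ~0.5-1% of sensors that exceed a
# single actor's throughput get fanned out into N sub-actors.
HOT_SENSOR_SHARDS = {"Sensor123": 10, "Sensor567": 10}

def actor_id_for_event(sensor_id: str, event_key: str) -> str:
    """Route an incoming event to its actor. Hot sensors are split
    across N sub-actors via a stable hash of some per-event key
    (crc32 rather than Python's hash(), which varies per process)."""
    shards = HOT_SENSOR_SHARDS.get(sensor_id, 1)
    if shards == 1:
        return sensor_id
    shard = zlib.crc32(event_key.encode()) % shards
    return f"{sensor_id}#{shard}"

def actor_ids_for_query(sensor_id: str) -> list[str]:
    """Queries for a hot sensor must fan out to all of its sub-actors;
    cold sensors still resolve to a single actor."""
    shards = HOT_SENSOR_SHARDS.get(sensor_id, 1)
    if shards == 1:
        return [sensor_id]
    return [f"{sensor_id}#{s}" for s in range(shards)]
```

The downside is visible in `actor_ids_for_query`: every read for a hot sensor becomes a scatter-gather over N actors, which is exactly the lookup inefficiency mentioned above, just confined to the hot sensors instead of all of them.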