I think you need to step back and question why it's a problem that a sensor consumes a full worker slot.
Airflow is a scheduler, not a resource allocator. You can limit resource usage with worker concurrency, pools and queues, but only very crudely. In effect, Airflow naively assumes that a sensor uses the same resources on a worker node as a BashOperator that spawns a multi-process genome sequencing utility. Since sensors are cheap and sleep 99.9% of the time, that is a bad assumption.
So, if you want to solve the problem of sensors consuming all your worker slots, just bump your worker concurrency. You should be able to have hundreds of sensors running concurrently on a single worker.
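For example, with the Celery executor you can raise the per-worker slot count via the worker concurrency setting in airflow.cfg or directly on the command line (flag and setting names vary slightly between Airflow versions); the value below is purely illustrative:

    # Start the Celery worker with far more slots than the default,
    # since sensors spend almost all of their time sleeping.
    # (Airflow 1.x CLI; 128 is only an example value.)
    airflow worker --concurrency 128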
If you then run into problems with very uneven workload distribution across your cluster, or nodes under dangerously high system load, you can limit the number of expensive jobs using either of the following (a short sketch of both follows the list):
- pools that expensive jobs must take a slot from (the task is queued and only runs once a pool slot is free). This creates a cluster-wide limit.
- special workers on each node that only take the expensive jobs (using airflow worker --queues my_expensive_queue) and have a low concurrency setting. This creates a per-node limit.
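As a rough sketch of both options (the pool name, queue name and numbers are made up, and the exact CLI syntax differs slightly between Airflow versions):

    # Cluster-wide limit: a pool with 4 slots that expensive tasks
    # must take a slot from.
    airflow pool -s my_expensive_pool 4 "expensive compute jobs"

    # Per-node limit: on each node, a dedicated worker that only listens
    # to the expensive queue and runs at most 2 of those jobs at once.
    airflow worker --queues my_expensive_queue --concurrency 2

In the DAG itself, the expensive operators then get pool and/or queue arguments (e.g. pool="my_expensive_pool", queue="my_expensive_queue") so they are routed there, while cheap sensors keep using the defaults.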
If you have more complex requirements than that, consider shipping all non-trivial compute jobs to a dedicated resource allocator, e.g. Apache Mesos, where you can specify the exact CPU, memory and other requirements per job, so your cluster load is distributed across nodes far more efficiently than Airflow will ever manage.