
I need to limit the number of clients processing the same resource at the same time,
so I tried to implement an analogue of

lock.lock();
try {
     do work
} finally {
    lock.unlock();
}

but in a non-blocking manner with the Reactor library. And I've got something like the code below.

But I have a question:
Is there a better way to do this?
Does anyone know of an existing implementation?
Or maybe this is not how it should be done in the reactive world, and there is another approach for such problems?

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import reactor.core.publisher.EmitterProcessor;
import reactor.core.publisher.Flux;
import reactor.core.publisher.FluxSink;

import javax.annotation.Nullable;
import java.time.Duration;
import java.util.Objects;
import java.util.concurrent.atomic.AtomicInteger;

public class NonblockingLock {
    private static final Logger LOG = LoggerFactory.getLogger(NonblockingLock.class);

    private String currentOwner;
    private final AtomicInteger lockCounter = new AtomicInteger();
    private final FluxSink<Boolean> notifierSink;
    private final Flux<Boolean> notifier;
    private final String resourceId;

    public NonblockingLock(String resourceId) {
        this.resourceId = resourceId;
        EmitterProcessor<Boolean> processor = EmitterProcessor.create(1, false);
        notifierSink = processor.sink(FluxSink.OverflowStrategy.LATEST);
        notifier = processor.startWith(true);
    }

    /**
     * Nonblocking version of
     * <pre><code>
     *     lock.lock();
     *     try {
     *         do work
     *     } finally {
     *         lock.unlock();
     *     }
     * </code></pre>
     * */
    public <T> Flux<T> processWithLock(String owner, @Nullable Duration tryLockTimeout, Flux<T> work) {
        Objects.requireNonNull(owner, "owner");
        return notifier.filter(it -> tryAcquire(owner))
                .next()
                .transform(locked -> tryLockTimeout == null ? locked : locked.timeout(tryLockTimeout))
                .doOnSubscribe(s -> LOG.debug("trying to obtain lock for resourceId: {}, by owner: {}", resourceId, owner))
                .doOnError(err -> LOG.error("can't obtain lock for resourceId: {}, by owner: {}, error: {}", resourceId, owner, err.getMessage()))
                .flatMapMany(it -> work)
                .doFinally(s -> {
                    if (tryRelease(owner)) {
                        LOG.debug("release lock resourceId: {}, owner: {}", resourceId, owner);
                        notifierSink.next(true);
                    }
                });
    }

    private boolean tryAcquire(String owner) {
        boolean acquired;
        synchronized (this) {
            if (currentOwner == null) {
                currentOwner = owner;
            }
            acquired = currentOwner.equals(owner);
            if (acquired) {
                lockCounter.incrementAndGet();
            }
        }
        return acquired;
    }

    private boolean tryRelease(String owner) {
        boolean released = false;
        synchronized (this) {
            // owner.equals(currentOwner) avoids a NullPointerException when the lock
            // was never acquired (currentOwner may be null, e.g. after a tryLockTimeout)
            if (owner.equals(currentOwner)) {
                int count = lockCounter.decrementAndGet();
                if (count == 0) {
                    currentOwner = null;
                    released = true;
                }
            }
        }
        return released;
    }
}

And this is how I suppose it should work:

import java.time.Duration;

import org.junit.Test;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.test.StepVerifier;

@Test
public void processWithLock() throws Exception {
    NonblockingLock lock = new NonblockingLock("work");
    String client1 = "client1";
    String client2 = "client2";
    Flux<String> requests = getWork(client1, lock)
            //emulate async request for resource by another client
            .mergeWith(Mono.delay(Duration.ofMillis(300)).flatMapMany(it -> getWork(client2, lock)))
            //emulate async request for resource by the same client
            .mergeWith(Mono.delay(Duration.ofMillis(400)).flatMapMany(it -> getWork(client1, lock)));
    StepVerifier.create(requests)
            .expectSubscription()
            .expectNext(client1)
            .expectNext(client1)
            .expectNext(client1)
            .expectNext(client1)
            .expectNext(client1)
            .expectNext(client1)
            .expectNext(client2)
            .expectNext(client2)
            .expectNext(client2)
            .expectComplete()
            .verify(Duration.ofMillis(5000));
}

private static Flux<String> getWork(String client, NonblockingLock lock) {
    return lock.processWithLock(client, null,
            Flux.interval(Duration.ofMillis(300))
                    .take(3)
                    .map(i -> client)
                    .log(client)
    );
}
Roman M.
  • Could you please describe a real-world scenario where you would use this kind of lock? I mean, it's more or less clear what you're trying to achieve, but why? – Oleg Kurbatov Oct 26 '18 at 09:58
  • I have a web application with in-memory storage, and I need to provide consistency in its data. So it is necessary that only one client can apply changes to the data within a "transaction". Another use case is to make a pool of resources: if there is no available resource at the moment, just wait until one frees up. – Roman M. Oct 26 '18 at 14:20
  • It can also be used for a non-blocking cache. Mono.cache() has the particularity of preserving Error and Complete-without-value signals, which is not desirable behavior if I want to cache only a successful result with data. And Mono.cache() is not as flexible as a blocking cache (like a Guava cache). So with such a Lock I can use a blocking cache for the data store and fill it after a successful non-blocking recalculation of an expensive operation. I think there are a few more use cases, but I was surprised that this is not implemented yet, so I have a feeling that I am doing something wrong. – Roman M. Oct 26 '18 at 16:41
  • I saw the answers from [Cache the result of a Mono from a WebClient call...](https://stackoverflow.com/questions/52787925/cache-the-result-of-a-mono-from-a-webclient-call-in-a-spring-webflux-web-applica) by @brian-clozel and alexander-pankin, but in the case of 10 simultaneous requests their solution would make 10 recalculations (invocations of the remote service), which is a waste of server and client resources if they all eventually get the same result. But with such a Lock it is possible to make just 1 expensive invocation while the other subscribers wait for the result. – Roman M. Oct 26 '18 at 18:52
  • I implemented a lock for exclusive calls of a remote service with the same parameters using CacheMono in one of my projects. I don't think it would be a good answer to your more general question, but I could share it in a couple of days. – Alexander Pankin Oct 26 '18 at 19:54
  • @alexander-pankin, yes, that would be nice and helpful; please share your solution. – Roman M. Oct 27 '18 at 07:46

3 Answers


Now that Reactor has introduced Sinks, it is easier to implement such locks. I have written a library with which you can code like this:

import party.iroiro.lock.Lock;
import party.iroiro.lock.ReactiveLock;

Flux<String> getWork(String client, Duration delay, Lock lock) {
    return Mono.delay(delay)
            .flatMapMany(l -> lock.withLock(() ->
                    Flux.interval(Duration.ofMillis(300))
                            .take(3)
                            .map(i -> client)
                            .log(client)));
}

It internally uses a queue of Sinks.Empty to keep track of lock requests. On each unlock it just polls from the queue and emits an ON_COMPLETE signal to the corresponding Mono, which might work slightly better than broadcasting to all requesters with Sinks.many().multicast(). It makes use of the fact that a Sinks.Empty cannot be emitted to more than once: cancelling the lock request (for those who want to set a timeout or handle complex cases) will block the emission of ON_COMPLETE, and vice versa.
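
For illustration, the queue-of-sinks idea might look roughly like this (a minimal sketch with illustrative names, not the library's actual internals; a portion of the real implementation appears below):

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import reactor.core.publisher.Mono;
import reactor.core.publisher.Sinks;

// Minimal sketch of the queue-of-sinks idea (illustrative names only)
class SinkQueueSketch {
    private final Queue<Sinks.Empty<Void>> queue = new ConcurrentLinkedQueue<>();

    Mono<Void> enqueueRequest() {
        Sinks.Empty<Void> request = Sinks.empty();
        queue.add(request);
        return request.asMono(); // completes once release() emits to this sink
    }

    void release() {
        Sinks.Empty<Void> next = queue.poll();
        // A Sinks.Empty can only terminate once: if the requester already gave
        // up (e.g. terminated the sink on timeout), tryEmitEmpty() returns a
        // failure result and we simply move on to the next pending request.
        while (next != null && next.tryEmitEmpty() != Sinks.EmitResult.OK) {
            next = queue.poll();
        }
    }
}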

And by wrapping Flux.using around the lock, one can make sure that the lock is released correctly in all cases, just like try-finally.
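
For reference, Flux.using is the reactive analogue of try-finally; here is a generic illustration (acquireLock and doWork are hypothetical helpers, not the library's API):

// try-finally shape with Flux.using (hypothetical helper methods)
Flux<String> guarded = Flux.using(
        () -> acquireLock(),        // "try": obtain the resource
        lock -> doWork(lock),       // use the resource while held
        lock -> lock.unlock());     // "finally": runs on complete, error, or cancel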

Here is a portion of the implementation, if you are interested. The original version of this answer used synchronized and could block under contention; the following is rewritten with CAS operations so that the locks are non-blocking. (In the library, all the locks are implemented with CAS operations now.)

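    // Context for this fragment: COUNT is an atomic field updater over "count",
    // "queue" holds the pending Sinks.Empty lock requests, and "fairness" is
    // chosen at construction (inferred from the surrounding description).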
    private volatile int count = 0;  // 0 if unlocked

    public LockHandle tryLock() {
        if (COUNT.compareAndSet(this, 0, 1)) {
            // Optimistic acquiring
            return LockHandle.empty();
        } else {
            LockHandle handle = SinkUtils.queueSink(queue);
            fairDecrement(false);
            return handle;
        }
    }

    public void unlock() {
        if (fairness) {
            fairDecrement(true);
        } else {
            COUNT.set(this, 0);
            fairDecrement(false);
        }
    }

    /*
     * If not "unlocking", fairDecrement first increments COUNT so that it does not end up unlocking a lock.
     * If "unlocking", we jump directly to the decrementing.
     */
    private void fairDecrement(boolean unlocking) {
        /*
         * COUNT states:
         * - COUNT == 0: The lock is unlocked, with no ongoing decrement operations.
         * - COUNT >= 1: Either the lock is being held, or there is an ongoing decrement operation.
         *               Note that the two are mutual exclusive, since they both require COUNT++ == 0.
         *
         * If "unlocking", then we are responsible for decrements.
         *
         * Otherwise,
         * 1. If COUNT++ >= 1, either someone is holding the lock, or there is an ongoing
         *    decrement operation. Either way, some thread will eventually emit to pending requests.
         *    We increment COUNT to signal to the emitter that the queue could have potentially been
         *    appended to after its last emission.
         * 2. If COUNT++ == 0, then we are responsible for decrementing.
         */
        if (unlocking || COUNT.incrementAndGet(this) == 1) {
            do {
                if (SinkUtils.emitAnySink(queue)) {
                    /*
                     * Leaves the decrementing job to the next lock holder, who will unlock somehow.
                     */
                    return;
                }
                /*
                 * It is now safe to decrement COUNT, since there is no concurrent decrements.
                 */
            } while (COUNT.decrementAndGet(this) != 0);
        }
    }

Also, if you want to limit the number of clients to N instead of one, the library provides ReactiveSemaphore, which corresponds to java.util.concurrent.Semaphore.
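
A usage sketch (the ReactiveSemaphore constructor argument and the withLock call are assumptions based on the Lock example above, not confirmed by this answer; check the library's documentation for the actual API):

// Assumed usage, mirroring the withLock style above; the exact
// ReactiveSemaphore API is an assumption. callRemoteService is hypothetical.
ReactiveSemaphore semaphore = new ReactiveSemaphore(5); // up to 5 concurrent holders
Flux<String> limited = semaphore.withLock(() -> callRemoteService());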

Kana

I have a solution for exclusive calls of a remote service with the same parameters. Maybe it could be helpful in your case.

It is based on an immediate tryLock that errors if the resource is busy, plus Mono.retryWhen to "wait" for the release.
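
In outline, the pattern looks like this (a sketch only; tryLockThen is a hypothetical stand-in for the Lock wrapping shown below, and the full answer replaces the fixed delay with unlock-event notifications):

// Sketch of the fail-fast-and-retry idea ("tryLockThen" is hypothetical)
Mono<String> guarded = tryLockThen(source) // errors with LockIsNotAvailableException when busy
        .retryWhen(errors -> errors
                .map(error -> {
                    if (error instanceof LockIsNotAvailableException) {
                        return error; // a retry signal
                    }
                    throw Exceptions.propagate(error); // any other error terminates
                })
                .delayElements(Duration.ofMillis(100))); // back off before resubscribing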

So I have a LockData class for the lock's metadata:

public final class LockData {
    // Lock key to identify same operation (same cache key, for example).
    private final String key;
    // Unique identifier for equals and hashCode.
    private final String uuid;
    // Date and time of the acquiring for lock duration limiting.
    private final OffsetDateTime acquiredDateTime;
    ...
}

The LockCommand interface is an abstraction over blocking operations on the LockData:

public interface LockCommand {

    Tuple2<Boolean, LockData> tryLock(LockData lockData);

    void unlock(LockData lockData);
    ...
}

The UnlockEventsRegistry interface is an abstraction for collecting unlock event listeners:

public interface UnlockEventsRegistry {
    // initialize event listeners collection when acquire lock
    Mono<Void> add(LockData lockData);

    // notify event listeners and remove collection when release lock
    Mono<Void> remove(LockData lockData);

    // register event listener for given lockData
    Mono<Boolean> register(LockData lockData, Consumer<Integer> unlockEventListener);
}
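
For context, a minimal in-memory implementation of this interface could look like the following (a sketch only; the actual implementation in the linked repository may differ, and this version ignores the race between register and remove that a production version must handle):

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

import reactor.core.publisher.Mono;

// Illustrative in-memory UnlockEventsRegistry (sketch, not the repository's code)
public class InMemoryUnlockEventsRegistry implements UnlockEventsRegistry {
    // LockData equality is based on its uuid, so it can serve as a map key
    private final Map<LockData, List<Consumer<Integer>>> listeners = new ConcurrentHashMap<>();

    @Override
    public Mono<Void> add(LockData lockData) {
        return Mono.fromRunnable(() -> listeners.put(lockData, new CopyOnWriteArrayList<>()));
    }

    @Override
    public Mono<Void> remove(LockData lockData) {
        return Mono.fromRunnable(() -> {
            List<Consumer<Integer>> registered = listeners.remove(lockData);
            if (registered != null) {
                registered.forEach(listener -> listener.accept(1)); // notify waiters
            }
        });
    }

    @Override
    public Mono<Boolean> register(LockData lockData, Consumer<Integer> unlockEventListener) {
        return Mono.fromCallable(() -> {
            List<Consumer<Integer>> registered = listeners.get(lockData);
            if (registered == null) {
                return false; // lock already released; the caller retries immediately
            }
            registered.add(unlockEventListener);
            return true;
        });
    }
}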

And the Lock class can wrap a source Mono with lock and unlock, and wrap a CacheMono writer with unlock:

public final class Lock {
    private final LockCommand lockCommand;
    private final LockData lockData;
    private final UnlockEventsRegistry unlockEventsRegistry;
    private final EmitterProcessor<Integer> unlockEvents;
    private final FluxSink<Integer> unlockEventSink;

    public Lock(LockCommand lockCommand, String key, UnlockEventsRegistry unlockEventsRegistry) {
        this.lockCommand = lockCommand;
        this.lockData = LockData.builder()
                .key(key)
                .uuid(UUID.randomUUID().toString())
                .build();
        this.unlockEventsRegistry = unlockEventsRegistry;
        this.unlockEvents = EmitterProcessor.create(false);
        this.unlockEventSink = unlockEvents.sink();
    }

    ...

    public final <T> Mono<T> tryLock(Mono<T> source, Scheduler scheduler) {
        return Mono.fromCallable(() -> lockCommand.tryLock(lockData))
                .subscribeOn(scheduler)
                .flatMap(isLocked -> {
                    if (isLocked.getT1()) {
                        return unlockEventsRegistry.add(lockData)
                                .then(source
                                        .switchIfEmpty(unlock().then(Mono.empty()))
                                        .onErrorResume(throwable -> unlock().then(Mono.error(throwable))));
                    } else {
                        return Mono.error(new LockIsNotAvailableException(isLocked.getT2()));
                    }
                });
    }

    public Mono<Void> unlock(Scheduler scheduler) {
        return Mono.<Void>fromRunnable(() -> lockCommand.unlock(lockData))
                .then(unlockEventsRegistry.remove(lockData))
                .subscribeOn(scheduler);
    }

    public <KEY, VALUE> BiFunction<KEY, Signal<? extends VALUE>, Mono<Void>> unlockAfterCacheWriter(
            BiFunction<KEY, Signal<? extends VALUE>, Mono<Void>> cacheWriter) {
        Objects.requireNonNull(cacheWriter);
        return cacheWriter.andThen(voidMono -> voidMono.then(unlock())
                .onErrorResume(throwable -> unlock()));
    }

    public final <T> UnaryOperator<Mono<T>> retryTransformer() {
        return mono -> mono
                .doOnError(LockIsNotAvailableException.class,
                        error -> unlockEventsRegistry.register(error.getLockData(), unlockEventSink::next)
                                .doOnNext(registered -> {
                                    if (!registered) unlockEventSink.next(0);
                                })
                                .then(Mono.just(2).map(unlockEventSink::next)
                                        .delaySubscription(lockCommand.getMaxLockDuration()))
                                .subscribe())
                .doOnError(throwable -> !(throwable instanceof LockIsNotAvailableException),
                        ignored -> unlockEventSink.next(0))
                .retryWhen(errorFlux -> errorFlux.zipWith(unlockEvents, (error, integer) -> {
                    if (error instanceof LockIsNotAvailableException) return integer;
                    else throw Exceptions.propagate(error);
                }));
    }
}

Now if I have to wrap my Mono with CacheMono and lock, I can do it like this:

private Mono<String> getCachedLockedMono(String cacheKey, Mono<String> source, LockCommand lockCommand, UnlockEventsRegistry unlockEventsRegistry) {
    Lock lock = new Lock(lockCommand, cacheKey, unlockEventsRegistry);

    return CacheMono.lookup(CACHE_READER, cacheKey)
            // Lock and double check
            .onCacheMissResume(() -> lock.tryLock(Mono.fromCallable(CACHE::get).switchIfEmpty(source)))
            .andWriteWith(lock.unlockAfterCacheWriter(CACHE_WRITER))
            // Retry if lock is not available
            .transform(lock.retryTransformer());
}
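
For context, CACHE, CACHE_READER and CACHE_WRITER are elided from the answer. Against a plain map of Signals, the reader and writer could look like this (a sketch, not the answer's actual code; CACHE itself, used for the double check, is a separate store):

// Hypothetical CacheMono reader/writer over a map of Signals (illustration only)
static final Map<String, Signal<? extends String>> CACHE_MAP = new ConcurrentHashMap<>();

static final Function<String, Mono<Signal<? extends String>>> CACHE_READER =
        key -> Mono.justOrEmpty(CACHE_MAP.get(key));

static final BiFunction<String, Signal<? extends String>, Mono<Void>> CACHE_WRITER =
        (key, signal) -> Mono.fromRunnable(() -> CACHE_MAP.put(key, signal));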

You can find the code and tests with examples on GitHub: https://github.com/alex-pumpkin/reactor-lock

Alexander Pankin
  • Thanks @Alexander, this approach can solve my problem, and it is very similar to my first attempt at a non-blocking lock, except that instead of Mono.error() and retry() on a failed tryLock() I just return Mono.empty() and then do Mono.repeatWhenEmpty(..), so it doesn't create an unnecessary exception that was just a signal to retry. The second concern about such an approach is that it is a handmade non-blocking loop, and maybe it is not as efficient as an event-driven approach; it also adds latency to getting the result, in the worst case about a 100 ms delay from the actual calculation of the result. – Roman M. Oct 28 '18 at 17:10
  • If I return an empty Mono when the lock is busy, I can't tell whether the source Mono is empty too. The retry function could be enhanced to react to the lock being released; it's a good comment. For my cases the non-blocking loop is OK, because concurrent executions are rare and the delay is not critical. I'll play with my retry function to react to lock releasing and maybe change my mind about the loop. Thanks for the feedback. – Alexander Pankin Oct 28 '18 at 19:43
  • Changed this solution to react to the unlock events. – Alexander Pankin Nov 03 '18 at 15:07
  • @AlexanderPankin thanks for sharing this. Now that EmitterProcessor is deprecated, do you have an equivalent sample without EmitterProcessor (using Sinks.Many)? – hmble May 04 '21 at 19:53
  • @hmble, hello. I only have a smartphone this week, so I will share a new solution later. For now you can find some examples on GitHub (the link is at the end of my answer): https://github.com/alex-pumpkin/reactor-lock – Alexander Pankin May 05 '21 at 04:22

I know this already has a couple of reasonable answers, but I thought there was a (subjectively) simpler solution that leverages flatMap (for a semaphore-like use case) or concatMap (for a lock/synchronized use case) to control parallelism.

This solution uses only Sinks and Reactor operators to support locking. Publishers that are never subscribed to will not consume a lock.
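
The operator behavior this relies on can be shown in isolation (run is a hypothetical function returning a Publisher per task):

// concatMap subscribes to one inner publisher at a time (lock-like), while
// flatMap with a concurrency argument caps in-flight subscriptions
// (semaphore-like). "run" is a hypothetical per-task publisher factory.
Flux<Integer> tasks = Flux.range(0, 10);
Flux<Integer> oneAtATime = tasks.concatMap(i -> run(i));  // at most 1 in flight
Flux<Integer> twoAtATime = tasks.flatMap(i -> run(i), 2); // at most 2 in flight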

import org.reactivestreams.Publisher;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.publisher.Sinks;
import reactor.util.function.Tuple2;
import reactor.util.function.Tuples;

import javax.annotation.PreDestroy;
import java.util.function.Supplier;

public class ReactiveSemaphore {

    /**
     * This can be thought of as a queue of lock handles. The first argument of the tuple is a signaler that emits a
     * value when a lock is available. The second argument is a Mono that completes when the lock is released.
     */
    private final Sinks.Many<Tuple2<Sinks.One<Boolean>, Mono<Boolean>>> taskQueue;
    private final Sinks.One<Boolean> close = Sinks.one();

    /**
     * Creates a ReactiveSemaphore that only allows one Publisher to be subscribed at a time. Executed by order
     * of subscription.
     */
    public ReactiveSemaphore() {
        this(1);
    }

    /**
     * Creates a ReactiveSemaphore that allows up to poolSize Publishers to be subscribed in parallel.
     * @param poolSize The number of allowed subscriptions to run in parallel.
     */
    public ReactiveSemaphore(int poolSize) {
        taskQueue = Sinks.many().unicast().onBackpressureBuffer();

        Flux<Boolean> tasks;
        if (poolSize <= 1)
            // We could use flatMap with parallelism of 1, but that seems weird
            tasks = taskQueue
                    .asFlux()
                    .concatMap(ReactiveSemaphore::dispatchTask);
        else {
            tasks = taskQueue
                    .asFlux()
                    .flatMap(ReactiveSemaphore::dispatchTask, poolSize);
        }

        tasks
                .takeUntilOther(close.asMono())
                .subscribe();
    }

    private static Mono<Boolean> dispatchTask(Tuple2<Sinks.One<Boolean>, Mono<Boolean>> task) {
        task.getT1().tryEmitValue(true); // signal that lock is available and consume lock
        return task.getT2(); // return Mono that completes when lock is released
    }

    @PreDestroy
    private void cleanup() {
        close.tryEmitValue(true);
    }

    public <T> Publisher<T> lock(Publisher<T> publisher) {
        return Flux.defer(() -> this.waitForNext(publisher));
    }

    public <T> Mono<T> lock(Mono<T> publisher) {
        return Mono.defer(() -> this.waitForNext(publisher).next());
    }

    public <T> Flux<T> lock(Flux<T> publisher) {
        return Flux.defer(() -> this.waitForNext(publisher));
    }

    /**
     * Waits for an available lock in the taskQueue. When ReactiveSemaphore is ready, a lock will be allocated for the task
     * and will not be released until the provided task errors or completes. For this reason this operation should
     * only be performed on a hot publisher (a publisher that has been subscribed to). Therefore, this method should
     * always be wrapped inside a call to {@link Flux#defer(Supplier)} or {@link Mono#defer(Supplier)}.
     * @param task The task to execute once the ReactiveSemaphore has an available lock.
     * @return The task wrapped in a Flux
     * @param <T> The type of value returned by the task
     */
    private <T> Flux<T> waitForNext(Publisher<T> task) {
        var ready = Sinks.<Boolean>one();
        var release = Sinks.<Boolean>one();
        taskQueue.tryEmitNext(Tuples.of(ready, release.asMono()));
        return ready.asMono()
                .flatMapMany(ignored -> Flux.from(task))
                .doOnComplete(() -> release.tryEmitValue(true))
                .doOnError(err -> release.tryEmitValue(true));
    }
}

Usage:

ReactiveSemaphore semaphore = new ReactiveSemaphore();
Publisher<String> guarded = semaphore.lock(someFluxMonoOrPublisher);
// nothing runs until `guarded` is subscribed

Example test: we create 10 Monos that each emit a value after 1 second and try to run all of them in parallel, but we wrap them in a ReactiveSemaphore with a pool size of 2 so that no more than 2 ever run in parallel:

@Test
public void testParallelExecution() {
    var semaphore = new ReactiveSemaphore(2);
    var monos = IntStream.range(0, 10)
            .mapToObj(i -> Mono.fromSupplier(() -> {
                        log.info("Executing Mono {}", i);
                        return i;
                    })
                    .delayElement(Duration.ofMillis(1000)))
            .map(mono -> semaphore.lock(mono));

    var allMonos = Flux.fromStream(monos).flatMap(m -> m).doOnNext(v -> log.info("Got value {}", v));

    StepVerifier.create(allMonos)
            .expectNextCount(10)
            .verifyComplete();
}

/* OUTPUT:
12:52:40.752 [main] INFO my.package.ReactiveSemaphoreTest - Executing Mono 0
12:52:40.755 [main] INFO my.package.ReactiveSemaphoreTest - Executing Mono 1
12:52:41.762 [parallel-1] INFO my.package.ReactiveSemaphoreTest - Got value 0
12:52:41.765 [parallel-1] INFO my.package.ReactiveSemaphoreTest - Executing Mono 2
12:52:41.767 [parallel-2] INFO my.package.ReactiveSemaphoreTest - Got value 1
12:52:41.767 [parallel-2] INFO my.package.ReactiveSemaphoreTest - Executing Mono 3
12:52:42.780 [parallel-3] INFO my.package.ReactiveSemaphoreTest - Got value 2
12:52:42.780 [parallel-4] INFO my.package.ReactiveSemaphoreTest - Executing Mono 4
12:52:42.780 [parallel-3] INFO my.package.ReactiveSemaphoreTest - Got value 3
12:52:42.780 [parallel-4] INFO my.package.ReactiveSemaphoreTest - Executing Mono 5
12:52:43.790 [parallel-6] INFO my.package.ReactiveSemaphoreTest - Executing Mono 6
12:52:43.790 [parallel-5] INFO my.package.ReactiveSemaphoreTest - Got value 4
12:52:43.790 [parallel-5] INFO my.package.ReactiveSemaphoreTest - Got value 5
12:52:43.791 [parallel-6] INFO my.package.ReactiveSemaphoreTest - Executing Mono 7
12:52:44.802 [parallel-7] INFO my.package.ReactiveSemaphoreTest - Got value 6
12:52:44.802 [parallel-7] INFO my.package.ReactiveSemaphoreTest - Got value 7
12:52:44.802 [parallel-8] INFO my.package.ReactiveSemaphoreTest - Executing Mono 8
12:52:44.802 [parallel-8] INFO my.package.ReactiveSemaphoreTest - Executing Mono 9
12:52:45.814 [parallel-10] INFO my.package.ReactiveSemaphoreTest - Got value 9
12:52:45.814 [parallel-10] INFO my.package.ReactiveSemaphoreTest - Got value 8
*/

Wet Noodles