
Assume RequestID is a Long value

I have 2 threads which keep getting called to process a "RequestID".

These 2 threads can keep working in parallel if they are working on different RequestIDs, but they cannot process the same RequestID simultaneously.

I want to get some sort of lock on a RequestID so that the other thread cannot work on it until the first thread is done with that RequestID.

What is the best way to do this?

  • _I have 2 Threads which keep getting called to process a "RequestID" ._ What does that mean? Show some code! – Andrew S Jul 06 '23 at 18:46

4 Answers


EDIT: after some discussions, this is not safe for use! :)

I wrote something similar already, but it's definitely untested in production. I had some tests for it, but it's hard to test something like this.

The idea is to have an internal static concurrent hash map which stores a semaphore for each key. Each thread looks up the semaphore for its key in this map and creates it if it doesn't exist.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

public class Blocking {

  private static final ConcurrentHashMap<String, Semaphore> internalMap = new ConcurrentHashMap<>();

  public static <O> O byKey(String key, Supplier<O> action) throws InterruptedException {
    // One semaphore per key; the first thread to arrive creates it.
    Semaphore semaphore = internalMap.computeIfAbsent(key, k -> new Semaphore(1));
    semaphore.acquire();
    try {
      return action.get();
    } finally {
      // NOTE: unconditionally removing the semaphore here is what the comments below show to be unsafe.
      internalMap.remove(key);
      semaphore.release();
    }
  }
}

Usage:

Blocking.byKey("KEY", () -> doYourStuff())
Loading
  • I had a different version and edited it on the fly. I think the idea is clear here. – Loading Jul 06 '23 at 20:40
  • Are you sure you should unconditionally remove the semaphore after a thread is done processing it? Another thread might still be using it ... and if yet another thread were to arrive, they wouldn't notice this conflict and enter the critical section. – meriton Jul 06 '23 at 22:02
  • If a thread is done processing it, it is not a problem if another thread takes over, even if the semaphore is still blocked. The critical section is already done. If we switch the finally statements, then it could happen that we release a semaphore and then remove it, but another thread cuts in between and acquires the semaphore before the original thread removes it from the map. Then another thread wouldn't find the semaphore in the map and creates a new one – Loading Jul 07 '23 at 10:02
  • I think you misunderstood the scenario I am concerned about. Suppose there are 3 threads, and the following sequence of synchronization events: T1: computeIfAbsent -> Semaphore1, acquire. T2: computeIfAbsent -> Semaphore1, acquire (blocks). T1: remove, release. T3: computeIfAbsent -> Semaphore2, acquire. T2: acquire (succeeds since it's on Semaphore1). That is, unconditionally removing a semaphore from the map, while another thread still holds a reference to it, can result in different threads having different semaphores for the same key, and therefore allow both to enter the critical section. – meriton Jul 09 '23 at 15:59
  • This is a good point. Maybe we really need a sync point in between, or we never remove the semaphores from the map and purge the oldest unused ones (a sketch of that variant follows below) – Loading Jul 09 '23 at 18:44
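
For reference, a minimal, untested sketch of the variant suggested in the last comment, where semaphores are never removed from the map (the class name BlockingNoRemoval is just a placeholder, and the map grows with the number of distinct keys, so some purge strategy would eventually be needed):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

public class BlockingNoRemoval {

  private static final ConcurrentHashMap<String, Semaphore> semaphores = new ConcurrentHashMap<>();

  public static <O> O byKey(String key, Supplier<O> action) throws InterruptedException {
    // The semaphore for a key is created once and never removed,
    // so every thread working on the same key sees the same instance.
    Semaphore semaphore = semaphores.computeIfAbsent(key, k -> new Semaphore(1));
    semaphore.acquire();
    try {
      return action.get();
    } finally {
      semaphore.release();
    }
  }
}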

NOTE: I have not tried the code shown. Shutdown responsibility is not handled in RequestProcessor; you could add a shutdown method to that class which delegates to the wrapped executor (a sketch of such a method follows after the RequestProcessor class below).

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Main {

    public static void main(String[] args) {
        final ExecutorService executor = Executors.newCachedThreadPool();
        final long requestId = 5;
        
        executor.execute(() -> {
            //you could create processors for request which returns different types
            //For instance Boolean or any custom type
            //You could provide different implementation of ExecutorService
            final var booleanProcessor = new RequestProcessor<Boolean>(executor);
            
            final Callable<Boolean> aTask = new Callable<>() {

                @Override
                public Boolean call() throws Exception {
                    System.out.println("TASK 1 TRUE wait 5 seconds");
                    
                    Thread.sleep(5000);
                    
                    return true;
                }
                
            };
            
            booleanProcessor.runATaskForId(aTask, requestId);
            
            booleanProcessor.runATaskForId(() ->  {
                System.out.println("TASK 2 FALSE wait 4 seconds" );
                
                Thread.sleep(4000);
                
                return false;
            }, requestId);
            
    
        });
        
        executor.submit(()-> {
            final var stringProcessor = new RequestProcessor<String>(executor);
            
            //another task with a lambda expression
            stringProcessor.runATaskForId(() -> {
                    System.out.println("Another Task That Returns String For Request Id Given");
                    
                    System.out.println("TASK 3 wait 10 seconds" );
                    
                    Thread.sleep(10000);
                    
                    return "";
                },
                requestId
            );
            
        });
        
        System.out.println("END");
    }


}

import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public class RequestProcessor<T> {
    private class RequestTask implements Callable<T>{
        private final long requestId;
        private final Callable<T> wrappedCallable;
        
        private T result;
        
        public RequestTask(long requestId, Callable<T> wrappedCallable) {
            this.requestId = requestId;
            this.wrappedCallable = wrappedCallable;
        }
        
        public long getRequestId() {
            return requestId;
        }
        
        @Override
        public T call() throws Exception {
            return wrappedCallable.call();
        }

        public void setResult(T result) {
            this.result = result;
        }

        public T getResult() {
            return result;
        }
    }

    private static final ConcurrentHashMap<Long, Future<?>> inProgressRequestIds = new ConcurrentHashMap<>();
    
    private final ExecutorService executor;

    public RequestProcessor(ExecutorService executor) {
        this.executor = executor;
    }
    
    public T runATaskForId(Callable<T> task, long id) {
        return processRequest(new RequestTask(id, task));
    }
    
    private T processRequest(RequestTask task) {
        inProgressRequestIds.compute(
            task.getRequestId(), 
            (Long key, Future<?> existingFuture) -> {
                task.setResult(retrieveResultOf(executor.submit(task)));
                
                return null;
            }
        );
    
        return task.getResult();
    }
    
    private T retrieveResultOf(Future<T> future) {
        boolean isInterrupted = false;
        T value = null;
        
        while(true) {
            try {
                value = future.get();
                break;
            } catch (InterruptedException e) {
                 isInterrupted = true;
            } catch (Exception e) {
                throw new RequestProcessingException(e);
            }
        }
        
        if(isInterrupted)
            Thread.currentThread().interrupt();
        
        return value;
    }

}
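
As suggested in the note above, a shutdown method delegating to the wrapped executor could be added to RequestProcessor. A minimal sketch (the timeout value is arbitrary) might look like this:

    // Hypothetical addition to RequestProcessor: delegate shutdown to the wrapped executor.
    public void shutdown() {
        executor.shutdown();
        try {
            // Give in-flight tasks a bounded amount of time to finish.
            if (!executor.awaitTermination(30, java.util.concurrent.TimeUnit.SECONDS)) {
                executor.shutdownNow();
            }
        } catch (InterruptedException e) {
            executor.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }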

public class RequestProcessingException extends RuntimeException {

    private static final long serialVersionUID = 1775615938643533445L;

    public RequestProcessingException(String message) {
        super(message);
    }

    public RequestProcessingException(String message, Throwable cause) {
        super(message, cause);
    }

    public RequestProcessingException(Throwable cause) {
        super(cause);
    }
    
}

   
Ahmet M

You need to perform 2 operations:

  • Check whether the requestId is being used by another thread
  • If not, mark the requestId as "in process"

The above 2 operations need to be atomic, which could be achieved by using a lock (either an implicit lock via synchronized or an explicit external lock). Either way it will lead to contention, since every thread needs to acquire the same lock before doing any operation.
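
For illustration only, here is a minimal sketch of that lock-based variant (a single intrinsic lock guarding a plain HashSet of in-progress ids; the class and method names are illustrative, not from the answer):

import java.util.HashSet;
import java.util.Set;

class InProgressTracker {

    private final Set<Long> inProgress = new HashSet<>();

    // The check and the insert happen while holding the same intrinsic lock,
    // so the check-then-act sequence is atomic.
    public synchronized boolean tryStart(long requestId) {
        return inProgress.add(requestId);
    }

    public synchronized void finish(long requestId) {
        inProgress.remove(requestId);
    }
}

Every caller contends on this single lock, which is exactly the contention the ConcurrentHashMap approach below reduces.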

ConcurrentHashMap finds good use here, since putIfAbsent is atomic and internally uses bucket-level locking, which reduces contention between different requestIds. The code snippet below shows one possible implementation:

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

public class LongThreadSafe implements Runnable {

    private final ConcurrentHashMap<Long, Long> map;

    public LongThreadSafe(ConcurrentHashMap<Long, Long> map) {
        this.map = map;
    }

    @Override
    public void run() {

        List<Long> list = Arrays.asList(2L, 3L, 4L, 5L, 23L, 43L);

        for (Long requestId : list) {
            //we don't have any problem if multiple threads are updating the value
            Long previousValue = map.putIfAbsent(requestId, requestId);
            if (previousValue == null) {
                try {
                    //run your task
                    //update database record using (requestId)
                } finally {
                    //always un-mark the requestId, even if the task throws
                    map.remove(requestId);
                }
            } else {
                System.out.println("Current requestId: " + requestId + " is being processed by another thread");
            }
        }
    }
}
class Client {

    public static void main(String[] args) {

        ConcurrentHashMap<Long, Long> map = new ConcurrentHashMap<>();

        Thread t1 = new Thread(new LongThreadSafe(map));
        Thread t2 = new Thread(new LongThreadSafe(map));
        t1.start();
        t2.start();
    }

}
Sagar

A simple way to control access to objects shared between threads, when there can be an unbounded number of such objects, is to use a striped lock.

A striped lock is simply an array of locks (or plain objects used as monitors).

So, based on the hash of the object, you determine an index into this array and then acquire the lock at that index.

Example illustrating the approach:

   // e.g. part of a constructor
   locks = new Object[1024];
   for (int k = 0; k < locks.length; k++) {
      locks[k] = new Object();
   }


   // part of the processing function
   int hash = requestId.hashCode();
   int lockIndex = hashToIndex(hash, locks.length);
   synchronized (locks[lockIndex]) {
      // ... do your stuff
   }


   public static int hashToIndex(int hash, int length) {
        if (length <= 0) {
           throw new IllegalArgumentException();
        }

        if (hash == Integer.MIN_VALUE) {
            return 0;
        }

        return Math.abs(hash) % length;
    }
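
Put together as one self-contained class (the name RequestLockStripes and the stripe count are placeholders, not from the fragments above):

public class RequestLockStripes {

    private final Object[] locks;

    public RequestLockStripes(int stripes) {
        locks = new Object[stripes];
        for (int k = 0; k < locks.length; k++) {
            locks[k] = new Object();
        }
    }

    public void process(Long requestId, Runnable work) {
        int lockIndex = hashToIndex(requestId.hashCode(), locks.length);
        // All requestIds that hash to the same stripe are serialized on the same monitor.
        synchronized (locks[lockIndex]) {
            work.run();
        }
    }

    public static int hashToIndex(int hash, int length) {
        if (length <= 0) {
            throw new IllegalArgumentException();
        }
        if (hash == Integer.MIN_VALUE) {
            return 0;
        }
        return Math.abs(hash) % length;
    }
}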

The big advantage of a striped lock is that you don't need to deal with the destruction of locks since they can be kept for the duration of the program. So you don't get any litter and you have nice and simple code.

This is a very basic solution to the problem; if the locks are held for a long period, this can lead to contention (even false contention, because different requestIds can map to the same lock).

In that case, restructuring the program might be a better solution. E.g. you could have an array of single-threaded executors and route each requestId to one of them based on the hash of the requestId, using the hashToIndex function above. This way the same requestId is guaranteed to always be processed by the same thread, and hence no locking is needed.
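
A minimal, untested sketch of that restructuring, assuming a hypothetical StripedExecutor wrapper (the class name and stripe count are illustrative, and executor shutdown is omitted):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class StripedExecutor {

    private final ExecutorService[] executors;

    public StripedExecutor(int stripes) {
        executors = new ExecutorService[stripes];
        for (int i = 0; i < stripes; i++) {
            // One single-threaded executor per stripe.
            executors[i] = Executors.newSingleThreadExecutor();
        }
    }

    public void submit(long requestId, Runnable task) {
        // The same requestId always hashes to the same stripe,
        // so its tasks are executed sequentially by a single thread.
        int index = hashToIndex(Long.hashCode(requestId), executors.length);
        executors[index].execute(task);
    }

    private static int hashToIndex(int hash, int length) {
        if (hash == Integer.MIN_VALUE) {
            return 0;
        }
        return Math.abs(hash) % length;
    }
}

Usage could then look like new StripedExecutor(8).submit(requestId, () -> process(requestId)), where process(...) stands in for whatever per-request work you need.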

pveentjer