
I am just beginning with repast4py. I want to simulate a small order-handling process with multiple steps.

Let's say there are 1000 orders with different order placement timestamps. There are 3 steps after an order is received: picking (10 - 15 mins), packing (8 - 12 mins), and shipping (5 - 10 mins). Each step has a dedicated number of workers, let's say 10 for picking, 5 for packing, and 2 for shipping.

All the workers are independent and can work in parallel. Once a worker is done with the assigned activity for an order, they can move on to the next order and process it.

How can I create a queue variable that is accessible to all the processes in repast4py?

I can't find any logistics-based examples for repast4py. I have also explored other libraries like SimPy, but they are not scalable for large problems.

In the Random Walk example in the repast4py documentation, we run the program using

mpirun -n 4 python rndwalk.py random_walk.yaml 

This will run the program on multiple ranks, but they all share a SharedGrid to interact. Is there something similar for creating shared queues for each step of the process, like an order queue, picking queue, packing queue, etc., that can be accessed by all workers?

Vinay

1 Answer


Without knowing more of the details, I think you'll need to select a particular rank (e.g., rank 0) to manage the queues and synchronize them across processes. Rank 0 could create a queue for each rank from the full queue and use mpi4py to share those with itself and the other ranks. At some appropriate interval, the full queue could be updated from the rank queues and new rank queues created. See the mpi4py documentation for how to send and recv Python objects between ranks. For example,

https://mpi4py.readthedocs.io/en/stable/tutorial.html#collective-communication

Broadcast, scatter, gather, etc., are MPI collective communication concepts. This is a good introduction to them: https://mpitutorial.com/tutorials/mpi-broadcast-and-collective-communication/, although the examples are in C.
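As a rough illustration of the pattern above (this is only a sketch, not repast4py code; names like full_queue and process_orders are hypothetical placeholders), rank 0 could scatter per-rank sub-queues and then gather the processed orders back at some interval:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    # Rank 0 holds the full order queue and splits it into one chunk per rank.
    full_queue = list(range(20))
    chunks = [full_queue[i::size] for i in range(size)]
else:
    chunks = None

# Each rank (including rank 0) receives its own sub-queue.
my_queue = comm.scatter(chunks, root=0)

def process_orders(orders):
    # Hypothetical stand-in for the picking/packing/shipping work done by a rank's workers.
    return [(order, rank) for order in orders]

processed = process_orders(my_queue)

# Rank 0 collects the processed orders and could rebuild the full queue here
# before scattering new sub-queues on the next interval.
all_processed = comm.gather(processed, root=0)
if rank == 0:
    print("processed:", all_processed)

Run it the same way as the repast4py examples, e.g. mpirun -n 4 python scatter_queues.py.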

Lastly, repast4py runs just fine on a single process (mpirun -n 1), in which case there's no need to share queues. So, if your simulation runs fast enough on a single process, you'd avoid the issue entirely.

Nick Collier
  • At first, I didn't understand the first paragraph, but after a little bit of exploration of mpi4py I get what you are saying. My actual problem has millions of orders, so I want to use more processes. This set of videos helped me understand the MPI concepts: https://www.youtube.com/watch?v=3F7P2lAD6hw&list=PLxDvEmlm4QvgcMJLy3BiFZZ0J8fLXCuD4 I still need to figure out how the synchronization in Repast works along with MPI communication. As I understand, you are a core developer of repast4py; it would be great if you could create agent-based-modelling examples that are not grid related. Thanks! – Vinay Jun 17 '23 at 07:30
  • I was exploring the repast4py example documentation. Wouldn't self.context.synchronize(restore_agent) create ghost agents on the new rank? What is the need to use mpi4py? – Vinay Jun 19 '23 at 12:44