
I've been having some issues with a couple of tasks I'm running. I have three tasks: one is an LCD update task and the other two are motor driver tasks. I also have two ISRs that post messages to both motor driver tasks. To pass pointers safely, I was thinking about creating a struct:

  typedef enum {BUTTON_1 = 0, BUTTON_2 = 1, NO_BUTTON = 3} button_t; //which button the ISR saw, to increase motor drive
  typedef struct message {
      button_t button;
      int timestamp; //A timestamp for the RPM of the motors
  } message_t;

Now the issue with shared memory comes in, so I was thinking:

  struct message* update_msg = (struct message*)malloc(sizeof(struct message)); //from here I don't know how to create an object that fills the space allocated.

I would then send the pointer to the struct through the queue:

  OSTaskQPost((void *)update_msg,
  ....
  )

At the end, after the last task gets the message and does what it needs to with the member variables, I would have to deallocate the memory:

  free(update_msg);

Would something like this be plausible?


1 Answer


This is the 'Inter-Thread Comms 101' method of communicating data between threads, and it will work fine. Assuming 32-bit wide queues, posting struct addresses or object instances starts to win quite quickly over directly posting the data by value as the message size increases.
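
For reference, a minimal sketch of that malloc/post/free pattern, assuming µC/OS-III (where OSTASKQPOST would be OSTaskQPost, and the receiving task was created with an internal message queue); the TCB name is a placeholder, and the struct/enum are the ones from the question:

  #include <stdlib.h>
  #include "os.h"                              // µC/OS-III

  extern OS_TCB MotorTaskTCB;                  // placeholder: TCB of the receiving motor task

  // Sender side - a task (or other non-ISR context), since it calls malloc():
  static void post_button(button_t b)
  {
      OS_ERR err;
      struct message *update_msg = malloc(sizeof *update_msg);
      if (update_msg == NULL) return;          // handle allocation failure as appropriate
      update_msg->button    = b;
      update_msg->timestamp = (int)OSTimeGet(&err);
      OSTaskQPost(&MotorTaskTCB, update_msg, sizeof *update_msg,
                  OS_OPT_POST_FIFO, &err);
  }

  // Receiver side - inside the motor task's loop:
  static void motor_task_poll(void)
  {
      OS_ERR      err;
      OS_MSG_SIZE size;
      CPU_TS      ts;
      struct message *msg = OSTaskQPend(0, OS_OPT_PEND_BLOCKING, &size, &ts, &err);
      if (msg != NULL) {
          // ...use msg->button and msg->timestamp...
          free(msg);                           // the final consumer releases the memory
      }
  }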

There are other mechanisms. On my ARM embedded projects, where RAM is limited and memory space is more important than speed, I tend to use an array of 255 global message instances as a pool (it's useful to reserve one value, 255 say, for 'invalid index'). This means that each message can be referenced by just one byte, and two bytes in each message allow them to be linked into and out of a list. A linked-list head byte, a mutex and a semaphore make a blocking queue for inter-thread comms - no extra storage space needed. All the messages are linked into a 'pool' queue at startup and are popped, queued between threads and released back onto the pool by the app threads.
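
A rough sketch of that kind of index pool and blocking queue, again assuming µC/OS-III primitives; the field layout, queue instances and init code (linking everything onto the pool queue at startup) are left out:

  #include <stdint.h>
  #include "os.h"

  #define POOL_SIZE   255
  #define IDX_INVALID 255                      // reserved value meaning 'no message'

  typedef struct message {
      uint8_t  next;                           // link to the next message in a list
      uint8_t  source;                         // bitfield identifying the producer
      int      timestamp;
  } message_t;

  static message_t msg_pool[POOL_SIZE];        // the only message storage in the system

  typedef struct {
      uint8_t   head, tail;                    // byte indexes into msg_pool
      OS_MUTEX  lock;                          // protects the links
      OS_SEM    count;                         // counts queued messages; pop blocks on it
  } msg_queue_t;

  static void q_push(msg_queue_t *q, uint8_t idx)
  {
      OS_ERR err;
      OSMutexPend(&q->lock, 0, OS_OPT_PEND_BLOCKING, 0, &err);
      msg_pool[idx].next = IDX_INVALID;
      if (q->head == IDX_INVALID) q->head = idx;
      else                        msg_pool[q->tail].next = idx;
      q->tail = idx;
      OSMutexPost(&q->lock, OS_OPT_POST_NONE, &err);
      OSSemPost(&q->count, OS_OPT_POST_1, &err);
  }

  static uint8_t q_pop(msg_queue_t *q)         // blocks until a message is available
  {
      OS_ERR  err;
      uint8_t idx;
      OSSemPend(&q->count, 0, OS_OPT_PEND_BLOCKING, 0, &err);
      OSMutexPend(&q->lock, 0, OS_OPT_PEND_BLOCKING, 0, &err);
      idx = q->head;
      q->head = msg_pool[idx].next;
      if (q->head == IDX_INVALID) q->tail = IDX_INVALID;
      OSMutexPost(&q->lock, OS_OPT_POST_NONE, &err);
      return idx;
  }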

ISR's that receive data from hardware cannot call malloc, get mutexes or wait on semaphores for message indexes. I use another queue class that has no locks, just a circular queue of byte indexes. I push in a few messages at startup. The interrupt-handlers can dequeue messages from this 'ISRpool', fill them from hardware, set an int, (bitfield!), to identify the ISR, push the message index onto a 'ISRout' circular queue, signal a semaphore and exit via OS. The thread waiting on the semaphore wakes up and knows that there is data on the ISRout, pops it and queues it off to whatever thread handles messages from that ISR. That 'ISRhandler' thread is also responsible for 'topping up' the ISRpool with messages so that the ISR's always have messages ready when data arrives. This simple, shared 'ISRpool' system only works if interrupts do not re-enable higher-priority interrupts!
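
A minimal lock-free ring of byte indexes along those lines; it assumes a single producer and a single consumer per ring, and that byte reads/writes are atomic on the target:

  #include <stdint.h>

  #define RING_SIZE 16                         // power of two; must hold the worst-case burst

  typedef struct {
      volatile uint8_t buf[RING_SIZE];         // message indexes, not the messages themselves
      volatile uint8_t in;                     // written only by the producer
      volatile uint8_t out;                    // written only by the consumer
  } idx_ring_t;

  // Producer side (the thread topping up ISRpool, or the ISR pushing to ISRout).
  // Returns 0 if the ring is full.
  static int ring_put(idx_ring_t *r, uint8_t idx)
  {
      uint8_t next = (uint8_t)((r->in + 1) & (RING_SIZE - 1));
      if (next == r->out) return 0;            // full
      r->buf[r->in] = idx;
      r->in = next;                            // publish last
      return 1;
  }

  // Consumer side. Returns 255 ('invalid index') if the ring is empty.
  static uint8_t ring_get(idx_ring_t *r)
  {
      uint8_t idx;
      if (r->out == r->in) return 255;         // empty
      idx = r->buf[r->out];
      r->out = (uint8_t)((r->out + 1) & (RING_SIZE - 1));
      return idx;
  }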

In a similar manner, messages for tx ISRs are pushed onto a circular queue for the ISR to pick up (interrupts are disabled briefly to check whether the hardware is idle and the hardware FIFO needs 'priming' to start the tx interrupts off again). 'Used' tx messages are dumped onto the rx ISRpool - they may as well be re-used for input.
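
The 'prime if idle' step might look something like this, building on the index ring above; uart_tx_idle() and uart_kick_tx() are hypothetical stand-ins for whatever the UART driver provides, and the critical-section macros are the µC/CPU ones:

  static idx_ring_t txRing;                    // drained by the tx ISR

  void tx_post(uint8_t msg_idx)
  {
      CPU_SR_ALLOC();                          // µC/CPU critical-section macros

      ring_put(&txRing, msg_idx);              // hand the message to the tx ISR (full-ring handling omitted)

      CPU_CRITICAL_ENTER();
      if (uart_tx_idle()) {                    // hypothetical: no tx interrupt currently running
          uart_kick_tx();                      // hypothetical: prime the FIFO so tx interrupts restart
      }
      CPU_CRITICAL_EXIT();
  }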

Pooling schemes have some advantages that are not immediately obvious. One is 'no malloc, no free'. Messages can certainly be leaked, but I notice quickly - the terminal prompt from the UART run by my 'monitor/debugger' is '223>'. The number is the pool level. If this number goes down and doesn't come back up again, I know I've leaked. This is very important when you cannot just run the app under Valgrind :)

Martin James