
Can anybody suggest ideas on how to solve this design problem:

I have an embedded system that generates events (of all sorts; let's just abstract events for the sake of this discussion as data structures that can be serialised).

These events are typically sent directly via an internet connection to a server. It is also required that the events are backed up to a file when the internet connection is not available, then later sent to the server in order of generation once the connection is available again. An added bonus would be maintaining a history of events in the log file (up to so many days, so the file size is bounded).

The tools I have to use are an ARM Cortex-M4 micro, FatFS on an SD card (http://elm-chan.org/fsw/ff/00index_e.html), FreeRTOS, and GCC. All are set up and working.

A previous system I built managed head and tail pointers to events in a block of EEPROM that acted like a FIFO queue. I am not sure how best to implement a similar thing using a file system.

So, the problem is mostly around how to do this using a file system.

Any advice appreciated, thanks in advance

Edit: There could be up to 10,000 events per day. The device could be offline for up to 10 days. Events contain small amounts of data, such as a timestamp and a status value or location. The file system contains significantly more storage than required for the maximum buffered history; e.g. 10 MB would cover 10 days of storage, and the SD card will be at least 1 GB.

Ashley

Ashley Duncan
  • Name the files with a time stamp or sort by time/date? Is there a RTC in the system? – Lundin Jun 16 '17 at 06:33
  • Are you getting a confirm response for each log message that was received by the server? Are you looking for a concept of "fixed buffer (file) size" like you probably had in EEPROM or "dynamic storage until close to full" on the SD card? How "valuable" is a single event loss? – tofro Jun 16 '17 at 12:23
  • Lundin: I do have an RTC available. This thing can generate up to 10,000 events per day (which I probably should have mentioned earlier) and can go extended periods without being online. I am not sure if having individual files would be too much of a burden. Tofro: I believe the server does confirm (yet to be tested). I am looking for suggestions on what might be a good concept for managing this. I have much, much more storage than needed, so I could allocate portions or do swap files, like filling up a log for 30 days, then starting a new one for the next 30 days, then overwriting the previous. – Ashley Duncan Jun 18 '17 at 22:45

2 Answers


The way I solved this problem in the end was to use a file as random access memory: I implemented a queue on the disk just as I would in any other random access memory. The FatFS file system I used supports allocating a contiguous block and a fast-seek function, both of which helped here.

The file has a header and CRC error checking to help detect corruption. At startup the file is opened and the header is loaded. Beyond that, it is not much more than a queue based on a linked list.
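A header-plus-CRC scheme like the one described might look something like this. This is a hypothetical sketch, not the author's actual layout: the field names, the magic value, and the fixed-record assumption are all illustrative. The CRC lets startup code distinguish a valid header from one corrupted by a power loss mid-write.

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical on-disk header stored at the start of the queue file.
// The CRC covers every field before it, so a torn write is detectable.
struct QueueHeader {
    uint32_t magic;       // identifies the file format
    uint32_t recordSize;  // bytes per event record
    uint32_t capacity;    // maximum number of records
    uint32_t head;        // index of the oldest record
    uint32_t tail;        // index where the next record is written
    uint32_t count;       // records currently queued
    uint32_t crc;         // CRC-32 over the fields above
};

// Standard reflected CRC-32 (polynomial 0xEDB88320), bitwise for clarity;
// a table-driven version would be faster on the Cortex-M4.
uint32_t crc32(const void *data, size_t len) {
    const uint8_t *p = static_cast<const uint8_t *>(data);
    uint32_t crc = 0xFFFFFFFFu;
    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; ++i)
            crc = (crc >> 1) ^ ((crc & 1u) ? 0xEDB88320u : 0u);
    }
    return ~crc;
}

// At startup: load the header, then accept it only if the CRC matches.
bool headerValid(const QueueHeader &h) {
    return crc32(&h, offsetof(QueueHeader, crc)) == h.crc;
}
```

Writing the header after every enqueue/dequeue is what makes the queue state recoverable after a reset; the ~2 ms figure quoted below includes that header store.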

I implemented it as a base class with two derived classes. The derived classes only do the reads and writes to their storage: one for the disk storage and another for storage in RAM (for testing). With this I was able to simply queue and dequeue items in the application without the caller having any knowledge of where they were going to or coming from.
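The base/derived split described above could be sketched like this. All names here are hypothetical; on the target, a disk-backed derived class would wrap FatFS calls (`f_lseek`, `f_read`, `f_write`), but only the RAM-backed test double is shown so the sketch stays self-contained and host-testable.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Abstract storage: the queue logic only sees read/write at an offset.
class EventStore {
public:
    virtual ~EventStore() = default;
    virtual bool read(uint32_t offset, void *buf, uint32_t len) = 0;
    virtual bool write(uint32_t offset, const void *buf, uint32_t len) = 0;
};

// RAM-backed store used for unit testing on the host; a FatFS-backed
// sibling class would implement the same two methods against the file.
class RamStore : public EventStore {
public:
    explicit RamStore(uint32_t size) : mem_(size, 0) {}
    bool read(uint32_t off, void *buf, uint32_t len) override {
        if (off + len > mem_.size()) return false;
        std::memcpy(buf, mem_.data() + off, len);
        return true;
    }
    bool write(uint32_t off, const void *buf, uint32_t len) override {
        if (off + len > mem_.size()) return false;
        std::memcpy(mem_.data() + off, buf, len);
        return true;
    }
private:
    std::vector<uint8_t> mem_;
};

// Fixed-size-record FIFO layered on top of any EventStore.
class EventQueue {
public:
    EventQueue(EventStore &store, uint32_t capacity, uint32_t recordSize)
        : store_(store), cap_(capacity), recSize_(recordSize) {}
    bool enqueue(const void *rec) {
        if (count_ == cap_) return false;  // queue full
        if (!store_.write(tail_ * recSize_, rec, recSize_)) return false;
        tail_ = (tail_ + 1) % cap_;
        ++count_;
        return true;
    }
    bool dequeue(void *rec) {
        if (count_ == 0) return false;     // queue empty
        if (!store_.read(head_ * recSize_, rec, recSize_)) return false;
        head_ = (head_ + 1) % cap_;
        --count_;
        return true;
    }
    uint32_t size() const { return count_; }
private:
    EventStore &store_;
    uint32_t cap_, recSize_;
    uint32_t head_ = 0, tail_ = 0, count_ = 0;
};
```

The payoff of the abstraction is exactly what the answer says: application tasks call `enqueue`/`dequeue` and never know whether the bytes land in RAM or on the SD card.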

It worked beautifully for this job, and performance was reasonable given the small embedded platform (~2 ms average per queue/dequeue, including SD card access and the header store). Tasks queue log items as they occur; another task removes them and bundles them up for sending to the server when an internet connection is available.

Ashley Duncan

Log to one log file per day, with another task that transfers log lines (oldest file first) when you get the connection back.
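One detail that makes the one-file-per-day scheme easy is the naming. As a hypothetical sketch (the helper name and `.log` extension are illustrative): a zero-padded `YYYYMMDD` name sorts lexicographically in date order, so the transfer task can find the oldest file with a plain string comparison while walking the directory (e.g. with FatFS's `f_readdir`).

```cpp
#include <cstdio>
#include <string>

// Build a daily log filename from the RTC date. Zero padding is what
// makes lexicographic order equal chronological order.
std::string dailyLogName(int year, int month, int day) {
    char buf[32];
    std::snprintf(buf, sizeof(buf), "%04d%02d%02d.log", year, month, day);
    return buf;
}
```

With 8.3 short filenames in FatFS, `YYYYMMDD` plus a three-character extension fits exactly, so no long-filename support is needed.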

Make the server do the rejection of duplicate log lines; it keeps your small machine's code simpler. The transfer task just deletes files once they are fully transferred.

Resist having your connection-checking task transfer whole files.

I maintain a system that does that, and the file transfers are a pain.

On larger systems, rsyslog supports disk buffering out of the box, but I don't think you have enough OS resources to run rsyslog under FreeRTOS.

Tim Williscroft
  • Thanks for the good suggestions. I had been thinking along a similar line. I can't get away with one upload a day, but I could break it into smaller chunks. – Ashley Duncan Jun 21 '17 at 06:53