In my project I need to keep various records in EEPROM, but I also need to search for (by address), delete, and edit these records. The records look like this:

[n bytes address1][data1][data2][data3]
[n bytes address2][data1][data2]
[n bytes address3][data1][data2][data3][data4][data5][data6]

I'm afraid that if I just delete some records, the memory will become badly fragmented (because the records have varying data lengths).

What is the best solution for this task?

I work with an AVR ATxmega.

Joe
  • What is max and min record length? How many records? – Rev Jan 16 '17 at 13:38
  • About 3000 records; the minimum is about 40 bytes and the maximum about 80 bytes. I use external memory, but my problem is the organization: how to easily search for and access the records. – ZonderComand Jan 17 '17 at 15:22
  • You probably have 3000x80 bytes available, I would probably make all records 80 or maybe 128 bytes to align them with potential page boundaries. How you "search" the records depends on what you are looking for. I don't think that some kind of sorting would make sense, but maybe some kind of indexing/marking/grouping. – Rev Jan 17 '17 at 17:48

3 Answers

You can define a maximum size for a record and use that fixed size to store the data. You waste a few bytes per record, but it beats the hassle of keeping track of your memory yourself.

Also beware of sectors: a sector is the smallest unit that can be erased. If a record crosses a sector boundary, an erase can leave you with broken data.
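
As a rough illustration, here is a minimal sketch of such a fixed-slot layout. The slot size, the 4-byte record address field and the ee_read/ee_write wrappers are assumptions; the wrappers stand in for whatever driver talks to the (external) EEPROM.

    /* Fixed-slot layout: every record occupies one slot of SLOT_SIZE bytes. */
    #include <stdint.h>
    #include <string.h>

    #define SLOT_SIZE   128   /* fixed slot size, >= largest record (80 bytes) */
    #define SLOT_COUNT  3000  /* assumed capacity                              */
    #define ADDR_LEN    4     /* assumed length of the record address field    */
    #define SLOT_EMPTY  0xFF  /* first byte of an unused slot                  */

    /* Assumed driver functions for the external EEPROM. */
    extern void ee_read(uint32_t addr, void *buf, uint16_t len);
    extern void ee_write(uint32_t addr, const void *buf, uint16_t len);

    /* Linear search: return the slot whose stored record address matches,
     * or -1 if no such record exists. */
    static int16_t find_record(const uint8_t key[ADDR_LEN])
    {
        uint8_t hdr[1 + ADDR_LEN];                 /* [length][record address] */
        for (uint16_t slot = 0; slot < SLOT_COUNT; slot++) {
            ee_read((uint32_t)slot * SLOT_SIZE, hdr, sizeof hdr);
            if (hdr[0] != SLOT_EMPTY && memcmp(&hdr[1], key, ADDR_LEN) == 0)
                return (int16_t)slot;
        }
        return -1;
    }

    /* Deleting a record is a single byte write: mark its slot empty. */
    static void delete_record(uint16_t slot)
    {
        uint8_t empty = SLOT_EMPTY;
        ee_write((uint32_t)slot * SLOT_SIZE, &empty, 1);
    }

Editing a record then simply means rewriting its slot in place, since every slot has the same size.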

Wisse
    Good point about the sector erase issue, but on the AVR devices the EEPROM is byte erasable/writable, so this shouldn't be an issue. Other than that, I agree that this is a simple, straightforward approach, but only if the record lengths do not differ too much. – Rev Jan 16 '17 at 13:37

Memory fragmentation tends to be much less of a problem if you implement a "best fit" allocation strategy that (ideally, fully) re-uses "holes", instead of a "first fit" strategy (you then trade speed for efficiency). If your data has a certain granularity (which seems to be the case here), this can work quite efficiently.

Organize the empty areas in the EEPROM as a linked free list, and search the whole free list for an area where the piece of data you want to store fits exactly, or within a certain acceptable margin of overhead. If you cannot find such an area, use the biggest free area. This severely reduces (and, depending on your data, may even avoid) fragmentation.
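
As an illustration only, here is a minimal sketch of such a best-fit walk. The hole header layout (2-byte hole size plus 2-byte EEPROM address of the next hole) and the ee_read driver function are assumptions:

    /* Best-fit search over a free list kept inside the EEPROM itself. */
    #include <stdint.h>

    #define FREELIST_END 0xFFFF   /* next-pointer value marking the end of the list */

    /* Assumed driver function for the external EEPROM. */
    extern void ee_read(uint32_t addr, void *buf, uint16_t len);

    struct free_hdr {
        uint16_t size;   /* size of this hole, header included          */
        uint16_t next;   /* EEPROM address of the next hole in the list */
    };

    /* Return the EEPROM address of the hole that fits `need` bytes best:
     * the smallest hole that is still large enough.  If nothing fits,
     * fall back to the biggest hole (the caller must then split the data
     * or compact).  FREELIST_END means there are no holes at all. */
    static uint16_t best_fit(uint16_t free_head, uint16_t need)
    {
        uint16_t best = FREELIST_END, best_size = 0xFFFF;
        uint16_t biggest = FREELIST_END, biggest_size = 0;

        for (uint16_t cur = free_head; cur != FREELIST_END; ) {
            struct free_hdr h;
            ee_read(cur, &h, sizeof h);

            if (h.size >= need && h.size < best_size) {
                best = cur;               /* smallest hole that still fits */
                best_size = h.size;
            }
            if (h.size > biggest_size) {
                biggest = cur;            /* remember the biggest hole too */
                biggest_size = h.size;
            }
            cur = h.next;
        }
        return (best != FREELIST_END) ? best : biggest;
    }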

tofro

There are several approaches; the choice depends on the EEPROM size, the maximum number of records (let's denote it N) and the maximum size of a record (let's call it S).

The first approach is pretty obvious: if (N * S) <= free EEPROM size, then you can just allocate an equal block of the maximum size for each record. For example, if the EEPROM size is 2048 bytes, each record is at most 31 bytes, and there are no more than 64 records, you can allocate 64 slots of 32 bytes each, using the first byte of each slot to hold the size of the record.
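
A minimal sketch of storing a record with this first approach, using the example numbers above (64 slots of 32 bytes, the first byte of each slot holding the record length, 0xFF marking an empty slot); the ee_read/ee_write driver functions are assumptions:

    /* 64 slots of 32 bytes; first byte of each slot = record length, 0xFF = empty. */
    #include <stdint.h>

    #define SLOT_SIZE  32
    #define SLOT_COUNT 64
    #define SLOT_FREE  0xFF

    /* Assumed driver functions for the external EEPROM. */
    extern void ee_read(uint32_t addr, void *buf, uint16_t len);
    extern void ee_write(uint32_t addr, const void *buf, uint16_t len);

    /* Store `len` bytes (len <= 31) in the first empty slot.
     * Returns the slot index, or -1 if the EEPROM is full. */
    static int8_t store_record(const uint8_t *data, uint8_t len)
    {
        for (uint8_t slot = 0; slot < SLOT_COUNT; slot++) {
            uint8_t first;
            ee_read((uint32_t)slot * SLOT_SIZE, &first, 1);
            if (first == SLOT_FREE) {
                ee_write((uint32_t)slot * SLOT_SIZE, &len, 1);        /* size byte   */
                ee_write((uint32_t)slot * SLOT_SIZE + 1, data, len);  /* record data */
                return (int8_t)slot;
            }
        }
        return -1;
    }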

If the size of each record can vary over a wide range, or the total number is undefined (you want to fit as many as possible), then there are two ways to deal with fragmentation:

1) Defragment the data. Each time there is no contiguous block of the required size available, move (compact) the stored data until a free block of the required size appears.

For example, if the record size stays within 127 bytes, you can use a single leading byte to encode both the state and the size of the block: the highest bit is 1 when the block is free and 0 when it contains data, and the lower 7 bits hold the block size. This approach works well enough, but since data gets moved around, every reference to the data may need to be updated accordingly.
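
A compaction sketch for this scheme, assuming the one-byte header just described and the same hypothetical ee_read/ee_write driver functions. The whole area is assumed to be formatted as consecutive blocks from the start (erased 0xFF bytes already decode as free 127-byte blocks):

    /* One-byte block headers: bit 7 = free flag, bits 0..6 = payload length. */
    #include <stdint.h>

    #define EE_SIZE   2048u
    #define HDR_FREE  0x80

    /* Assumed driver functions for the external EEPROM. */
    extern void ee_read(uint32_t addr, void *buf, uint16_t len);
    extern void ee_write(uint32_t addr, const void *buf, uint16_t len);

    /* Slide every used block towards address 0 so that all free space ends
     * up as one contiguous region at the top.  NOTE: anything that stored a
     * raw EEPROM address of a record must be updated afterwards. */
    static void compact(void)
    {
        uint16_t read_pos = 0, write_pos = 0;
        uint8_t buf[1 + 127];                     /* header + maximum payload */

        while (read_pos < EE_SIZE) {
            uint8_t hdr;
            ee_read(read_pos, &hdr, 1);
            uint8_t len = hdr & 0x7F;

            if (!(hdr & HDR_FREE)) {              /* used block: move it down */
                if (write_pos != read_pos) {
                    ee_read(read_pos, buf, (uint16_t)len + 1);
                    ee_write(write_pos, buf, (uint16_t)len + 1);
                }
                write_pos += (uint16_t)len + 1;
            }
            read_pos += (uint16_t)len + 1;        /* free block: just skip it */
        }

        /* Mark the reclaimed tail as a chain of free blocks (one header can
         * only describe up to 127 payload bytes). */
        while (write_pos < EE_SIZE) {
            uint16_t remain = EE_SIZE - write_pos - 1;
            uint8_t chunk = (remain > 127) ? 127 : (uint8_t)remain;
            uint8_t free_hdr = HDR_FREE | chunk;
            ee_write(write_pos, &free_hdr, 1);
            write_pos += (uint16_t)chunk + 1;
        }
    }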

2) Store the data fragmented. You can allocate a number of blocks of a particular size (e.g. 32 bytes each, giving 64 blocks for a 2048-byte EEPROM). The first byte of each block contains the index of the block where the data continues; let's say 0xFE marks the last block of a chain and 0xFF marks an empty block. The other 31 bytes of the block hold the data. This makes reading slightly more complicated, but the location of each record stays unchanged for its whole lifetime.
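
A read sketch for this chained layout, with the link values from above (0xFE = last block of a chain, 0xFF = empty block); ee_read is again an assumed driver function:

    /* Chained blocks: the first byte of each block links to the next block. */
    #include <stdint.h>

    #define BLOCK_SIZE   32
    #define BLOCK_COUNT  64
    #define LINK_LAST    0xFE   /* this is the last block of the record */
    #define LINK_FREE    0xFF   /* this block is unused                 */

    /* Assumed driver function for the external EEPROM. */
    extern void ee_read(uint32_t addr, void *buf, uint16_t len);

    /* Follow the chain starting at `first_block` and copy the payload into
     * `out` (the caller guarantees enough room).  Returns the number of
     * bytes read. */
    static uint16_t read_record(uint8_t first_block, uint8_t *out)
    {
        uint16_t total = 0;
        uint8_t block = first_block;

        while (block < BLOCK_COUNT) {
            uint8_t raw[BLOCK_SIZE];
            ee_read((uint32_t)block * BLOCK_SIZE, raw, BLOCK_SIZE);

            /* raw[0] is the link byte, raw[1..31] is payload. */
            for (uint8_t i = 1; i < BLOCK_SIZE; i++)
                out[total++] = raw[i];

            if (raw[0] == LINK_LAST)
                break;                  /* end of this record's chain    */
            block = raw[0];             /* hop to the continuation block */
        }
        return total;
    }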

AterLux