
I'm working with a board with an ARM-based processor running Linux (3.0.35). The board has 1 GB of RAM and is connected to a fast SSD and to a 5 MP camera.

My goal is to capture high-resolution images and write them directly to disk.

Everything works until I try to save a very long video (over 1 GB of data).

After saving a large file, it seems that I'm unable to reload the camera driver: it fails to allocate a large enough DMA memory block for streaming (the call to dma_alloc_coherent() fails).
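
For reference, the failing allocation in the driver looks more or less like this (a sketch with names I made up, not the actual driver source; the ~15 MB buffer size is the one I mention below):

    /* Sketch of the driver's load-time allocation (illustrative names).
     * One large coherent buffer for the capture stream; the whole
     * region must be contiguous, and this is the call that fails. */
    #include <linux/device.h>
    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>

    #define STREAM_BUF_SIZE (15 * 1024 * 1024)  /* ~15 MB streaming buffer */

    static void *stream_buf;
    static dma_addr_t stream_dma;

    static int cam_alloc_stream_buffer(struct device *dev)
    {
        stream_buf = dma_alloc_coherent(dev, STREAM_BUF_SIZE,
                                        &stream_dma, GFP_KERNEL);
        if (!stream_buf)
            return -ENOMEM;  /* this is the failure I'm seeing */
        return 0;
    }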

I narrowed it down to this scenario: Linux boots (so most of the memory is available), I write random data into a large file (>1 GB), and then loading the camera driver fails.
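
In case it helps, the repro boils down to something like this (a userspace sketch; the file path and sizes are placeholders):

    /* Repro sketch: stream >1 GB to disk, close the file, then try
     * to load the camera driver.  Path and sizes are placeholders. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define CHUNK (1 << 20)              /* 1 MiB per write() */
    #define TOTAL (1200L * (1 << 20))    /* ~1.2 GB total */

    int main(void)
    {
        static char buf[CHUNK];
        long done;
        int fd = open("/tmp/bigfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        memset(buf, 0xA5, sizeof buf);   /* filler standing in for random data */
        for (done = 0; done < TOTAL; done += CHUNK)
            if (write(fd, buf, CHUNK) != CHUNK) { perror("write"); return 1; }

        fsync(fd);                       /* make sure it reaches the disk */
        close(fd);
        return 0;                        /* loading the driver now fails */
    }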

To my question:

When I open a file for writing, write a large amount of data, and close the file, isn't the memory that was used to write the data to disk supposed to be freed?

I can understand why the memory becomes fragmented during disk access, but once the transactions to the disk are complete, why is the memory still so fragmented that I cannot allocate 15 MB of contiguous RAM?

Thanks

oferlivny

1 Answer


[...] close the file, isn't the memory which was used for writing the data to HD supposed to be freed?

No, it will be cached; you can check /proc/meminfo to see this. Whether dma_alloc_coherent() uses only free memory is a good question.
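
You can watch this happen with something along these lines (a quick sketch; after the big write, Cached should have grown by roughly the file size while MemFree shrank):

    /* Print the MemFree and Cached lines from /proc/meminfo. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[128];
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f) { perror("fopen"); return 1; }

        while (fgets(line, sizeof line, f))
            if (!strncmp(line, "MemFree:", 8) || !strncmp(line, "Cached:", 7))
                fputs(line, stdout);

        fclose(f);
        return 0;
    }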

Turbo J
  • Indeed. Good odds are that the writes to flash are relatively slow and there aren't 15MB of free pages to pin for DMA. – marko Nov 10 '12 at 23:07
  • Thanks Turbo! But isn't cached memory considered FREE when it comes to new memory allocations? Also, if I drop the caches, the problem is still there... – oferlivny Nov 11 '12 at 07:40