Here's what I have:
- a Windows Service
- C#
- multithreaded
- the service uses a read-write lock (multiple concurrent readers; writing blocks all other reading/writing threads)
- a simple, self-written DB
- C++
- small enough to fit into memory
- big enough that I don't want to load it all on startup (e.g. 10 GB)
- read-performance is very important
- writing is less important
- tree structure
- the information held in the tree nodes is stored in files
- for faster performance, the files are loaded only the first time they are used and cached
- lazy initialization for faster DB startup
As the DB accesses this node information very often (on the order of several thousand times a second) and writes rarely, I'd like to use some kind of double-checked locking pattern.
I know there are many questions about the double-checked locking pattern here, but there seem to be so many different opinions that I don't know what's best for my case. What would you do with my setup?
Here's an example:
- a tree with 1 million nodes
- every node stores a list of key-value pairs (kept in a file for persistence; file size on the order of 10 kB)
- when a node is accessed for the first time, its list is loaded and stored in a map (something like std::map)
- the next time the node is accessed, I don't have to load the file again; I just read it from the map
- only problem: two threads might access the same node for the first time simultaneously and both try to write to the cache map. This is very unlikely, but not impossible. That's where I need thread-safety, and it should cost as little as possible, since most of the time I don't need it (especially once the whole DB is in memory).