I am wondering about the best way of backing up a HashMap to disk on each of the map's .put() calls. What I currently have coded writes each entry into a file whose name is the key's hash code and whose contents are a HashMap of the entries that share that hash (in case of collisions):
private File val2file(K k) {
    int hash = k.hashCode();
    // bitsIgnored lets several hash codes share one file if we ever
    // want to store multiple objects per file
    hash = hash >>> bitsIgnored;
    return new File(rootDir, Integer.toString(hash));
}
with each put handled by:
File toEdit = val2file(k);

// One lock per file; putIfAbsent guarantees every thread ends up with the same instance
ReentrantLock tempLock = new ReentrantLock();
ReentrantLock lock = lockMap.putIfAbsent(toEdit, tempLock);
if (lock == null) {
    lock = tempLock;
}

lock.lock();
try {
    HashMap<K, V> storageMap;
    if (toEdit.exists()) {
        // Load the existing bucket of entries whose keys hash to this file;
        // try-with-resources closes the stream even if readObject throws
        try (ObjectInputStream ois = new ObjectInputStream(
                new BufferedInputStream(new FileInputStream(toEdit)))) {
            storageMap = (HashMap<K, V>) ois.readObject();
        }
    } else {
        storageMap = new HashMap<>();
    }
    storageMap.put(k, v);
    // Rewrite the whole bucket including the new entry
    try (ObjectOutputStream oos = new ObjectOutputStream(
            new BufferedOutputStream(new FileOutputStream(toEdit)))) {
        oos.writeObject(storageMap);
    }
} catch (FileNotFoundException ex) {
    Logger.getLogger(MapSaver.class.getName()).log(Level.SEVERE, null, ex);
} catch (ClassNotFoundException ex) {
    Logger.getLogger(MapSaver.class.getName()).log(Level.SEVERE, null, ex);
} finally {
    // Release exactly once; the old isLocked() checks plus the extra unlock in the
    // IOException handler could unlock the same lock twice
    lock.unlock();
}
This does work, and I have validated it at 1,000 updates per second, with the limiting factors being the serialiser (CPU) and the drive I/O caused by the number of files being written. I have tried some database solutions and other "map to file" solutions suggested here, but none seem as fast. I would prefer a one-file solution over my current n-file solution, but I can't see any way to do that for each individual update. Does anyone have an alternative or an improvement to this code?
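For illustration, the nearest thing to a one-file scheme I can picture is an append-only log: every put appends one key/value record and the map is rebuilt by replaying the whole file. Below is a rough, untested sketch (AppendOnlyMapLog and all its names are placeholders, not code I actually run):

import java.io.*;
import java.util.HashMap;
import java.util.Map;

// Untested sketch of a single-file, append-only log of puts.
// Each append() adds one key/value record; the full map is rebuilt by replaying the file.
public class AppendOnlyMapLog<K extends Serializable, V extends Serializable> {

    private final File logFile;

    public AppendOnlyMapLog(File logFile) {
        this.logFile = logFile;
    }

    // Append one record. After the first write the stream header must be suppressed,
    // otherwise the single ObjectInputStream used for replay hits a corrupt second header.
    public synchronized void append(K k, V v) throws IOException {
        boolean appendToExisting = logFile.exists() && logFile.length() > 0;
        try (FileOutputStream fos = new FileOutputStream(logFile, true);
             ObjectOutputStream oos = appendToExisting
                     ? new AppendingObjectOutputStream(new BufferedOutputStream(fos))
                     : new ObjectOutputStream(new BufferedOutputStream(fos))) {
            oos.writeObject(k);
            oos.writeObject(v);
        }
    }

    // Rebuild the whole map by replaying every record; later puts overwrite earlier ones.
    @SuppressWarnings("unchecked")
    public synchronized Map<K, V> replay() throws IOException, ClassNotFoundException {
        Map<K, V> map = new HashMap<>();
        if (!logFile.exists() || logFile.length() == 0) {
            return map;
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new BufferedInputStream(new FileInputStream(logFile)))) {
            while (true) {
                try {
                    K k = (K) ois.readObject();
                    V v = (V) ois.readObject();
                    map.put(k, v);
                } catch (EOFException end) {
                    break; // end of log reached
                }
            }
        }
        return map;
    }

    // Skips the duplicate stream header when appending to an existing log file.
    private static class AppendingObjectOutputStream extends ObjectOutputStream {
        AppendingObjectOutputStream(OutputStream out) throws IOException {
            super(out);
        }
        @Override
        protected void writeStreamHeader() throws IOException {
            reset(); // write a TC_RESET marker instead of a second header
        }
    }
}

Even then, every append still pays the serialisation cost and the file grows without bound until it is compacted, so I am not sure it would actually beat the n-file version.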