I'm fairly new to Berkeley DB, and I'm attempting to use it from Python via bsddb3 with transactions for power-safety. With DB_AUTO_COMMIT and no transaction argument, reads and writes work just fine. But the moment I call get/put with a manual transaction, the call hangs indefinitely, using virtually no CPU (about 50k cycles/sec) and performing no disk I/O. Here is my setup code:
from bsddb3 import db as bdb

_data_store_env = bdb.DBEnv()
_data_store_env.log_set_config(bdb.DB_LOG_AUTO_REMOVE, True)
_data_store_env.set_lg_max(256 * 2**20)
_data_store_env.set_cachesize(0, 512 * 2**20)
_data_store_env.set_lg_dir(str(_journal_path))
_data_store_env.set_tmp_dir(str(tmp_dir))
# Transactional environment: locking, logging, shared cache, and transactions, with recovery run at open time
_data_store_env.open(str(_bulk_data_path), bdb.DB_CREATE | bdb.DB_INIT_LOCK | bdb.DB_INIT_LOG | bdb.DB_INIT_MPOOL | bdb.DB_INIT_TXN | bdb.DB_RECOVER | bdb.DB_THREAD)
# Originally I simply used DB_AUTO_COMMIT here, but I switched to an explicit open transaction to see if that would fix the hang. It didn't.
txn = _data_store_env.txn_begin()
# Primary data store
_data_store = bdb.DB(_data_store_env)
_data_store.set_flags(bdb.DB_CHKSUM)
_data_store.set_pagesize(65536)
_data_store.open("filestore.db", None, bdb.DB_HASH, bdb.DB_CREATE | bdb.DB_THREAD, 0o660, txn)
# Secondary index: the index key is the first 9 bytes of the primary key
_idx_store = bdb.DB(_data_store_env)
_idx_store.set_flags(bdb.DB_CHKSUM | bdb.DB_DUPSORT)
_idx_store.open("idxstore.db", None, bdb.DB_HASH, bdb.DB_CREATE | bdb.DB_THREAD, 0o660, txn)
_data_store.associate(_idx_store, lambda key, data: key[0:9], bdb.DB_IMMUTABLE_KEY, txn)
txn.commit()
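For reference, this is roughly what the opens looked like before I added the explicit open transaction (a sketch of the earlier version, not the exact code): DB_AUTO_COMMIT on the open call and no txn argument.
# Earlier version: no open transaction, DB_AUTO_COMMIT on the open calls instead
_data_store.open("filestore.db", None, bdb.DB_HASH, bdb.DB_CREATE | bdb.DB_THREAD | bdb.DB_AUTO_COMMIT, 0o660)
_idx_store.open("idxstore.db", None, bdb.DB_HASH, bdb.DB_CREATE | bdb.DB_THREAD | bdb.DB_AUTO_COMMIT, 0o660)
# (associate was done the same way, with no txn argument)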
...
# It doesn't matter whether this flag is present or not. Both produce the same result.
txn = _data_store_env.txn_begin(None, bdb.DB_TXN_BULK)
...
# Never returns
file_exists = _idx_store.has_key(entry_key, txn)
...
# Also never returns
_data_store.put(file_hash, file_data, txn)
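By contrast, the same operations with no transaction argument at all (relying on auto-commit, as mentioned above) return immediately:
# These return immediately and behave as expected
file_exists = _idx_store.has_key(entry_key)
_data_store.put(file_hash, file_data)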
Am I doing something wrong? Do transactions even work in bsddb3?