The Python InfluxDB client uses too much RAM during writes. Since we do not have access to the InfluxDB server configuration, we need to address this from the Python side. I tried tuning `batch_size`, `flush_interval`, and `jitter_interval`, with no success:
```python
from influxdb_client import InfluxDBClient, WriteOptions
from influxdb_client.client.write_api import WriteType

with client.write_api(
        success_callback=None,
        error_callback=callback.error,
        retry_callback=callback.retry,
        write_options=WriteOptions(
            write_type=WriteType.batching,
            batch_size=1_000,
            flush_interval=1_000,   # ms
            jitter_interval=1_000,  # ms
        )) as write_api:
    points = '\n'.join(df_converted)
    write_api.write(record=points, bucket=self.db_name,
                    write_precision=write_precision)
```
My question is: given a certain upper limit on RAM, can we somehow constrain the client so it never exceeds that limit during writes/queries?
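For reference, one direction I am considering is bounding memory myself by splitting the line-protocol records into fixed-size chunks instead of joining the whole DataFrame into a single string (which holds every point in memory at once). This is only a sketch; the `chunked` helper and the chunk size of 10_000 are my own choices, not part of the client API:

```python
def chunked(lines, chunk_size=10_000):
    """Yield successive joined batches of line-protocol strings,
    so only one chunk-sized payload string exists at a time."""
    for i in range(0, len(lines), chunk_size):
        yield '\n'.join(lines[i:i + chunk_size])

# Usage with the write_api from the snippet above (not executed here):
# for chunk in chunked(df_converted, chunk_size=10_000):
#     write_api.write(record=chunk, bucket=self.db_name,
#                     write_precision=write_precision)
```

Whether this actually caps the client's internal buffering, or only the payload construction, is part of what I am unsure about.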