I have an InfluxDB (version 2) setup where an entry is written to my bucket every second, together with an identifier (the UUID of the source) and a side value (a domain-specific measurement ranging from 1 to 6). Instead of a long list of such per-second log entries:
2020-05-18T15:57:18 side=1
2020-05-18T15:57:19 side=1
2020-05-18T15:57:20 side=3
2020-05-18T15:57:21 side=3
2020-05-18T15:57:22 side=3
2020-05-18T15:57:23 side=2
2020-05-18T15:57:24 side=2
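(For illustration, a point like the first row above might be written in line protocol roughly as follows; the measurement name status is just a stand-in, identifier is a tag and side an integer field:)
status,identifier=3c583530-0152-4ed1-b15f-5bb0747e771e side=1i 1589817438000000000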
I'd like to condense those entries so that I can calculate for how long each side=x value held true:
from 2020-05-18T15:57:18 to 2020-05-18T15:57:19 side=1 duration=2s
from 2020-05-18T15:57:20 to 2020-05-18T15:57:22 side=3 duration=3s
from 2020-05-18T15:57:23 to 2020-05-18T15:57:24 side=2 duration=2s
I also restrict the queried time period to a from/to range. This is my current query:
from(bucket: "mybucket")
  |> range(start: 2020-01-01T00:00:00.000Z, stop: 2020-12-31T00:00:00.000Z)
  |> filter(fn: (r) => r.identifier == "3c583530-0152-4ed1-b15f-5bb0747e771e")
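The start and stop values are not actually fixed; they come from the selected from/to range. A minimal sketch of how the query string could be assembled (buildQuery is just a hypothetical helper name):
// Sketch: assemble the Flux query for a given from/to range (ISO 8601 strings).
// Bucket and identifier are the ones from above.
function buildQuery(from, to) {
  return `from(bucket: "mybucket")
  |> range(start: ${from}, stop: ${to})
  |> filter(fn: (r) => r.identifier == "3c583530-0152-4ed1-b15f-5bb0747e771e")`;
}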
My current approach is to read the raw data and iterate over it, detecting changes of the side (something like current.side != last.side) and reporting each change as a new entry in my logbook. This is very inefficient, of course (JavaScript):
const data = fetchFromInfluxDB(rangeStart, rangeEnd); // raw per-second rows, ordered by time
// Sentinel entry so that the last run of identical sides is also reported.
const terminator = { 'time': new Date(), 'identifier': '', 'side': -1 };
data.push(terminator);
const logbook = [];
let lastSide = -1;
let from = undefined;
for (let e of data) {
  if (e.side != lastSide) {
    // The side changed: close the previous interval, if any.
    if (from != undefined) {
      let to = e.time;
      let diff = Math.floor((new Date(to) - new Date(from)) / 1000);
      if (diff > 0) {
        logbook.push({ 'from': from, 'to': to, 'side': lastSide, 'seconds': diff });
      }
    }
    lastSide = e.side;
    from = e.time;
  }
}
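For context, fetchFromInfluxDB can be sketched with the official JavaScript client roughly as follows; unlike the synchronous call above, this version is async and would have to be awaited. The url, token and org are placeholders, and the mapping assumes side is stored as a field:
// Sketch of the fetch helper, using the official @influxdata/influxdb-client package.
// buildQuery is the helper sketched further up.
const { InfluxDB } = require('@influxdata/influxdb-client');

const queryApi = new InfluxDB({ url: 'http://localhost:8086', token: 'my-token' })
  .getQueryApi('my-org');

async function fetchFromInfluxDB(from, to) {
  const rows = await queryApi.collectRows(buildQuery(from, to));
  // Assuming side is stored as a field, its value arrives in _value.
  return rows.map(r => ({ time: r._time, identifier: r.identifier, side: r._value }));
}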
Is there a way to group and sum up that data using the InfluxDB query language?
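Something in the direction of stateDuration() looks related, but as far as I can tell it only tracks a single predicate at a time (here side == 3; the filter on _field == "side" assumes side is stored as a field):
from(bucket: "mybucket")
  |> range(start: 2020-01-01T00:00:00.000Z, stop: 2020-12-31T00:00:00.000Z)
  |> filter(fn: (r) => r.identifier == "3c583530-0152-4ed1-b15f-5bb0747e771e")
  |> filter(fn: (r) => r._field == "side")
  |> stateDuration(fn: (r) => r._value == 3, unit: 1s)
That yields a running stateDuration column per row while side == 3 holds (and -1 otherwise), but it does not collapse the rows into one condensed entry per change of side, which is what I am after.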