I have the following query:
SELECT *
FROM MYTABLE
WHERE tagid = '65'
Output:
tagid  floatvalue   t_stamp
-----  -----------  -------------
65     25.51477051  1455897455214
65     35.71407318  1455897485215
65     36.05856323  1455897515215
65     35.72781372  1455897545214
65     35.99771118  1455897575215
65     35.87993622  1455897605215
65     36.23326111  1455897665215
65     35.8652153   1455897695215
65     35.73075485  1455897725216
65     35.94765472  1455897785216
65     36.36379242  1455897815217
65     35.93685913  1455897845216
65     36.64154816  1455898025219
65     36.44329071  1455898055218
65     36.07524872  1455898085219
65     36.40992355  1455898115217
65     38.13336182  1455898145217
The t_stamp column is a BIGINT of Unix time * 1000 (i.e. milliseconds since the epoch).
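For reference, the timestamps can be turned into readable datetimes with something like the following. This assumes SQL Server (the CAST is there because DATEADD wants an int); other dialects have their own conversion functions:

SELECT tagid,
       floatvalue,
       -- seconds since 1970-01-01 added to the epoch gives the reading time (UTC)
       DATEADD(SECOND, CAST(t_stamp / 1000 AS INT), '1970-01-01') AS logged_at
FROM MYTABLE
WHERE tagid = '65'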
This data is logged every ~30 seconds (30,000 ms) while the machine is running. I am trying to sum the time differences between consecutive rows whenever the difference is less than two minutes (120,000 ms). If the difference is greater than two minutes, I assume the machine was off and that row becomes the new start time for the next sum.
My goal is to get the total run time from these time stamps.
I am at a loss as to where to start on this. I have had a hard time making this explanation make sense even to myself, much less to you, so I apologize if I have made a mess of it.
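My rough idea, if it helps clarify what I am after, is something like the sketch below. It uses LAG() to compare each row's t_stamp to the previous row's, which assumes a dialect with window functions (e.g. SQL Server 2012+, PostgreSQL, or MySQL 8+); I do not know if this is the right direction:

WITH diffs AS (
    SELECT
        t_stamp,
        -- gap to the previous reading, in milliseconds
        t_stamp - LAG(t_stamp) OVER (ORDER BY t_stamp) AS diff_ms
    FROM MYTABLE
    WHERE tagid = '65'
)
SELECT SUM(diff_ms) / 1000.0 AS total_run_seconds  -- total run time in seconds
FROM diffs
WHERE diff_ms < 120000  -- ignore gaps of two minutes or more (machine assumed off)

The WHERE on diff_ms would drop both the very first row (whose LAG is NULL) and any gap of two minutes or more, so only the intervals where the machine was apparently running get counted, but I am not sure this is correct.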