I've posted a question similar to this, but I'm still having trouble with several things.
I have a list of tuples, tup_list, that looks like this:
[(1, 0.5, 'min'),
(2, 3, 'NA'),
(3, 6, 'NA'),
(4, 40, 'NA'),
(5, 90, 'NA'),
(6, 130.8, 'max'),
(7, 129, 'NA'),
(8, 111, 'NA'),
(9, 8, 'NA'),
(10, 9, 'NA'),
(11, 0.01, 'min'),
(12, 9, 'NA'),
(13, 40, 'NA'),
(14, 90, 'NA'),
(15, 130.1, 'max'),
(16, 112, 'NA'),
(17, 108, 'NA'),
(18, 90, 'NA'),
(19, 77, 'NA'),
(20, 68, 'NA'),
(21, 0.9, 'min'),
(22, 8, 'NA'),
(23, 40, 'NA'),
(24, 90, 'NA'),
(25, 92, 'NA'),
(26, 130.4, 'max')]
I am running experiments, and each experiment has exactly one "min" and one "max" value. I want to sum the first element of each tuple, grouping the tuples so that each group contains exactly one "min" and one "max". For example, this small dataset has three experiments because there are three mins and three maxes. The output would look like:
exp = [1+2+3+4+5+6+7+8+9+10, 11+12+13+14+15+16+17+18+19+20, 21+22+23+24+25+26]
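(With this data those three sums come out to 55, 155 and 141.)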
I would also like to keep track of the values being added, so that I also end up with this output:
exp_values = [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [11, 12, 13, 14, 15, 16, 17, 18, 19, 20], [21, 22, 23, 24, 25, 26]]
I am having trouble getting started and only have a general idea so far:
times = []    # one sum per experiment
sum_ = 0
for item in tup_list:
    if item[2] == "min" and sum_:   # a "min" marks the start of the next experiment
        times.append(sum_)
        sum_ = 0
    sum_ += item[0]                 # accumulate the first element of each tuple
times.append(sum_)                  # don't lose the last experiment's sum
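
To show where I'm trying to go, here is a rough sketch of a single pass that builds both exp and exp_values. It assumes the data is already in tup_list and that every experiment starts at a "min" row, as in the sample above; I'm not sure this is the idiomatic way to do it or how robust it is if the first row isn't a "min":

exp = []         # per-experiment sums of the first elements
exp_values = []  # per-experiment lists of the first elements
for item in tup_list:
    if item[2] == "min":            # assumes each "min" starts a new experiment
        exp.append(0)
        exp_values.append([])
    exp[-1] += item[0]              # add this tuple's first element to the current sum
    exp_values[-1].append(item[0])  # and remember which value was added

# With the sample data this should give:
# exp        == [55, 155, 141]
# exp_values == [[1, ..., 10], [11, ..., 20], [21, ..., 26]]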