I'm looking to get cumulative frequency data out of our database. I've created a simple temp table with every unique status-update count we've seen, together with the number of users who have that many status updates.

     Table "pg_temp_4.statuses_count_tmp"
     Column     |  Type   | Modifiers 
----------------+---------+-----------
 statuses_count | integer | 
 frequency      | bigint  | 
Indexes:
    "statuses_count_idx" UNIQUE, btree (statuses_count)

My current query is:

select statuses_count,
       frequency / (select * from total_statuses)::float,
       (select sum(frequency) / (select * from total_statuses)::float
          from statuses_count_tmp
         where statuses_count <= SCT.statuses_count) as cumulative_percent
from statuses_count_tmp as SCT
order by statuses_count desc;
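
Here total_statuses is just a one-row temp table holding the grand total, created along these lines:

create temp table total_statuses as
  select sum(frequency) from statuses_count_tmp;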

But this takes quite a while, and the amount of work grows quickly: the correlated subquery rescans the table for every output row, so with the ~50,000 rows I have I'm looking at on the order of 50,000²/2 ≈ 1.25 billion row reads. Sitting here watching the query grind away, I'm hoping there's a better solution that I haven't thought of yet.

Hoping to get something like this:

0       0.26975161      0.26975161
1       0.15306534      0.42281695
2       0.05513516      0.47795211
3       0.03050646      0.50845857
4       0.02064444      0.52910301

1 Answer

This should be solvable with window functions, assuming you have PostgreSQL 8.4 or later. I'm guessing that total_statuses is a view or temp table along the lines of select sum(frequency) from statuses_count_tmp? I've written it as a CTE here, which makes it calculate the result just once for the duration of the statement:

with total_statuses as (select sum(frequency) from statuses_count_tmp)
select statuses_count,
       -- per-row share of the grand total
       frequency / (select * from total_statuses) as frequency,
       -- running total up to and including this row, as a share of the grand total
       sum(frequency) over (order by statuses_count)
           / (select * from total_statuses) as cumulative_frequency
from statuses_count_tmp;
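
Note that over (order by statuses_count) only defines the frame for the running sum; if you want the rows themselves returned in a particular order, add an explicit order by statuses_count (or desc, to match the output in the question) at the end of the query.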

Without 8.4's window functions, your best bet is to process the data iteratively:

create type cumulative_sum_type as (
  statuses_count int,
  frequency numeric,
  cumulative_frequency numeric
);

create or replace function cumulative_sum()
returns setof cumulative_sum_type strict stable language plpgsql as $$
declare
  running_total bigint := 0;
  total bigint;
  data_in record;
  data_out cumulative_sum_type;
begin
  -- compute the grand total once
  select sum(frequency) into total from statuses_count_tmp;
  -- walk the rows in order, keeping a running total as we go
  for data_in in
    select statuses_count, frequency
    from statuses_count_tmp
    order by statuses_count
  loop
    data_out.statuses_count := data_in.statuses_count;
    running_total := running_total + data_in.frequency;
    data_out.frequency := data_in.frequency::numeric / total;
    data_out.cumulative_frequency := running_total::numeric / total;
    return next data_out;
  end loop;
end;
$$;
select * from cumulative_sum();
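
As a quick sanity check, the cumulative frequency should finish at 1; since it is nondecreasing, something like this (a sketch, using the function above) will confirm it:

-- cumulative_frequency is nondecreasing, so its max is its final value
select max(cumulative_frequency) from cumulative_sum();
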
  • ah, no such luck. 8.3.9 and no real hope of updating it in the next couple of days, but I will keep this solution in mind once we get it updated. – Peck Jan 12 '11 at 20:27
  • @Peck: I've added a solution that should work on 8.3, using a plpgsql function. – araqnid Jan 12 '11 at 20:35
  • Hmm. `frequency / (select * from total_statuses)` doesn't look right to me at all. How can you divide a number by a "set"? Shouldn't that be `frequency / (select count(*) from total_statuses)`? –  Jan 12 '11 at 20:39
  • assuming `with total_statuses as (select sum(frequency) from statuses_count_tmp)`, then `select * from total_statuses` can be interpreted as a scalar, since the relation has a single tuple with a single field. – araqnid Jan 12 '11 at 20:44
  • @a_horse_with_no_name: it's just a temp table with a single value, not a set. @araqnid made the assumption in his post; I should have been clearer about it. – Peck Jan 12 '11 at 20:45
  • @araqnid it works, and wow, that's fast. Thank you for saving me a lot of waiting time generating this and other CFDs today! – Peck Jan 12 '11 at 20:45
  • @Peck: hmm, still looks awfully confusing. I wouldn't want to maintain such a statement. Seems to be a **very** fragile way of doing this. –  Jan 12 '11 at 20:48