I'm trying to figure out a way to reduce the size of a table that is growing to an alarming size by consolidating similar entries. I've got a working 'Group By' query that rolls up the data the way I'd like; I'm just not sure how to delete the existing entries and insert new ones based on that select statement. I suppose the most straightforward approach would be to run the select, save those results into memory, delete the table entries, then insert the consolidated entries. But that seems inefficient and would use a lot of memory, so I'm hoping there is a way to perform some SQL magic and make this happen all at once.
This is my 'Group By' select:
SELECT SUM(user_in),
SUM(user_out),
SUM(user_total),
name,
device_type,
device_os,
ownership,
host_name
FROM user_usage
GROUP BY name,
device_type,
device_os,
ownership,
host_name;
Thanks!
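Edit: for anyone who just wants a single-statement version, a data-modifying CTE can delete the old rows and feed them straight into the rolled-up insert. This is only a sketch; it assumes PostgreSQL 9.1 or newer and that the columns listed in the query above are the only ones in user_usage:
WITH moved AS (
    -- Pull every existing row out of the table in one shot...
    DELETE FROM user_usage
    RETURNING user_in, user_out, user_total,
              name, device_type, device_os, ownership, host_name
)
-- ...and put back one consolidated row per group, in the same statement.
INSERT INTO user_usage (user_in, user_out, user_total,
                        name, device_type, device_os, ownership, host_name)
SELECT SUM(user_in), SUM(user_out), SUM(user_total),
       name, device_type, device_os, ownership, host_name
FROM moved
GROUP BY name, device_type, device_os, ownership, host_name;
Because the DELETE and the INSERT run as one statement (and the INSERT reads only from the CTE), the table is never left empty between the two steps.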
In case anyone else runs into something like this, the function below accomplishes the consolidation efficiently; everything happens inside the database, so nothing has to be pulled into application memory:
CREATE OR REPLACE FUNCTION consolidate_user_bandwidths(
    IN _tbl      REGCLASS,
    IN _savetime TIMESTAMP
) RETURNS void AS
$BODY$
BEGIN
    -- Roll the rows up into a temp table that only lives for this transaction.
    -- MAX(id) keeps one representative id per group, and _savetime is stamped
    -- onto every consolidated row as created_at.
    EXECUTE 'CREATE TEMP TABLE user_bandwidths_temp ON COMMIT DROP AS
             SELECT MAX(id) AS id,
                    SUM(bandwidth_in) AS bandwidth_in,
                    SUM(bandwidth_out) AS bandwidth_out,
                    ' || quote_literal(_savetime) || '::timestamp AS created_at,
                    SUM(bandwidth_total) AS bandwidth_total,
                    name, device_type, device_os, ownership, host_name
             FROM ' || _tbl || '
             GROUP BY name, device_type, device_os, ownership, host_name';

    -- Empty the original table, then refill it with the consolidated rows.
    -- (SELECT * relies on the temp table's column order matching _tbl.)
    EXECUTE 'TRUNCATE TABLE ' || _tbl;
    EXECUTE 'INSERT INTO ' || _tbl || ' SELECT * FROM user_bandwidths_temp';
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
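Assuming a table whose columns line up with the ones referenced inside the function (id, bandwidth_in, bandwidth_out, created_at, bandwidth_total, plus the grouping columns; the table name below is just an example), the call looks like this. The whole call runs in a single transaction, so the ON COMMIT DROP temp table is cleaned up automatically:
-- Roll up everything currently in user_bandwidths and stamp the consolidated
-- rows with the current time.
SELECT consolidate_user_bandwidths('user_bandwidths', now()::timestamp);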