I don't understand how PostgreSQL (9.2) calculates the column size (in kB). I have these tables:
Table d2:
 Column |     Type
--------+---------------
 id     | serial
 n      | numeric(17,2)

Table d4:
 Column |     Type
--------+---------------
 id     | serial
 n      | numeric(19,4)

Table d18:
 Column |     Type
--------+----------------
 id     | serial
 n      | numeric(35,18)

Table b1:
 Column |     Type
--------+---------------
 id     | serial
 n      | numeric(16,2)

Table b2:
 Column |     Type
--------+---------------
 id     | serial
 n      | numeric(4,2)
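For reference, DDL along these lines reproduces the tables above (just the two columns from the listings, nothing else):

CREATE TABLE d2  (id serial, n numeric(17,2));
CREATE TABLE d4  (id serial, n numeric(19,4));
CREATE TABLE d18 (id serial, n numeric(35,18));
CREATE TABLE b1  (id serial, n numeric(16,2));
CREATE TABLE b2  (id serial, n numeric(4,2));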
And I populate them with this code, in order to have 10000 rows in each table:
// 10000 identical rows per d* table, the value formatted with $n decimals
$tests = array(2, 4, 18);
foreach ($tests AS $n)
{
    $m = number_format(999999999999999.66549865, $n, '.', '');
    $prp_name = "insert_$n";
    $prp = pg_prepare($db, $prp_name, "INSERT INTO d$n (n) VALUES ($1)");
    for ($i = 0; $i < 10000; $i++)
    {
        pg_execute($db, $prp_name, array($m));
    }
}
$prp = pg_prepare($db, 'insert_b1', "INSERT INTO b1 (n) VALUES ($1)");
$m = 16512.67;
for ($i = 0; $i < 10000; $i++)
{
    pg_execute($db, 'insert_b1', array($m));
}

$prp = pg_prepare($db, 'insert_b2', "INSERT INTO b2 (n) VALUES ($1)");
$m = 99.36;
for ($i = 0; $i < 10000; $i++)
{
    pg_execute($db, 'insert_b2', array($m));
}
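The same population can also be done in plain SQL with generate_series(), which may be easier to reproduce without PHP. This is only a sketch of what the loops above do; note that writing the literal directly skips the PHP float rounding that number_format() goes through, so the stored digits can differ slightly from my run:

-- the cast to each column's type rounds the literal to that column's scale
INSERT INTO d2  (n) SELECT 999999999999999.66549865 FROM generate_series(1, 10000);
INSERT INTO d4  (n) SELECT 999999999999999.66549865 FROM generate_series(1, 10000);
INSERT INTO d18 (n) SELECT 999999999999999.66549865 FROM generate_series(1, 10000);
INSERT INTO b1  (n) SELECT 16512.67 FROM generate_series(1, 10000);
INSERT INTO b2  (n) SELECT 99.36    FROM generate_series(1, 10000);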
Now, what I don't understand is how it can be that:
SELECT pg_size_pretty(pg_total_relation_size('d2')) AS size_d2;
size_d2
---------
752 kB
SELECT pg_size_pretty(pg_total_relation_size('d4')) AS size_d4;
size_d4
---------
752 kB
SELECT pg_size_pretty(pg_total_relation_size('d18')) AS size_d18;
size_d18
----------
752 kB
SELECT pg_size_pretty(pg_total_relation_size('b1')) AS size_b1;
size_b1
---------
440 kB
SELECT pg_size_pretty(pg_total_relation_size('b2')) AS size_b2;
size_b2
---------
680 kB
So, the d* tables have the same size, even though the precision (and the length of the stored data) is very different;
Table b1 is smaller than b2 even though it has a bigger precision.
All tables have been vacuumed and analyzed before running pg_total_relation_size().
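If it matters, pg_total_relation_size() also counts indexes and TOAST, so a breakdown like the following (just a helper query, not part of the test above) separates the heap from the rest:

SELECT relname,
       pg_size_pretty(pg_relation_size(c.oid))       AS heap_only,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_with_indexes_and_toast
FROM pg_class c
WHERE relname IN ('d2', 'd4', 'd18', 'b1', 'b2')
ORDER BY relname;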
I have not been able to find an answer in PostgreSQL's data type documentation, so I am asking here: how does the size in kB grow in relation to the precision of numeric columns?
I am doing these tests in order to decide which precision/scale to use to store monetary values in the database for a CMS; I would like to have a single precision/scale for all item prices (not for the totals, where the scale must be 2 decimals).
The more decimals I can store, the better it is for the user (so I have no limit when a customer asks to store 12 decimals for a specific item), but I want to understand how this decision will affect database size and performance.
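For the record, this is the per-value check I plan to run next, to see how many bytes each stored numeric actually occupies regardless of the declared precision (pg_column_size() on one sample row per table, since every row holds the same value):

(SELECT 'd2'::text AS tbl, pg_column_size(n) AS n_bytes FROM d2  LIMIT 1)
UNION ALL
(SELECT 'd4',  pg_column_size(n) FROM d4  LIMIT 1)
UNION ALL
(SELECT 'd18', pg_column_size(n) FROM d18 LIMIT 1)
UNION ALL
(SELECT 'b1',  pg_column_size(n) FROM b1  LIMIT 1)
UNION ALL
(SELECT 'b2',  pg_column_size(n) FROM b2  LIMIT 1);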