There are two relevant measurements of a table's 'size': the allocated size and the used size (both in bytes).
_v_table_storage_stat will help you look at both sizes for a given table.
For small tables, the allocated size can be many times larger than the used size: assuming an even distribution of rows, a minimum of 3 MB will be allocated on each data slice. I do most of my work on a double rack MAKO system with 480 data slices, so any table smaller than roughly 1.4 GB of allocated space (480 x 3 MB) is more or less irrelevant when it comes to optimizing for 'size'.
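To check a specific table, something like the query below works. I'm quoting the ALLOCATED_BYTES/USED_BYTES column names from memory, so verify them against _v_table_storage_stat on your own system; MY_TABLE is just a placeholder:

    -- Compare allocated vs. used storage for one table.
    -- Column names are from memory; describe _v_table_storage_stat first to confirm them.
    SELECT tablename,
           allocated_bytes,
           used_bytes,
           allocated_bytes - used_bytes AS overhead_bytes
    FROM   _v_table_storage_stat
    WHERE  tablename = 'MY_TABLE';   -- placeholder table name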
Nevertheless I'll try to explain what you see:
You must realize that
1) All data in Netezza are compressed.
2) Compression is done per 'block' of data on each individual data slice.
3) The compression ratio (size after compression divided by size before) gets better (smaller) when the data in each block share many similarities, compared to the most 'mixed' situation imaginable.
4) 'distribute on' and 'organize on' can both affect this. So can an 'order by' or even a 'group by' in the select statement used when adding data to your table (see the sketch after this list).
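As a sketch of point 4 (the table and column names here are made up, not from the question): loading the rows pre-sorted tends to put similar values into the same 3 MB blocks, which usually compresses better.

    -- Hypothetical example: distribute on the key you also sort by when loading.
    CREATE TABLE sales_packed (
        customer_id BIGINT,
        sale_date   DATE,
        amount      NUMERIC(18,2)
    )
    DISTRIBUTE ON (customer_id);

    -- The ORDER BY groups similar rows into the same blocks before compression.
    INSERT INTO sales_packed
    SELECT customer_id, sale_date, amount
    FROM   sales_raw
    ORDER  BY customer_id, sale_date;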
In my system, I have a very wide table with several 'copies' per day of the bank accounts of our customers. Each copy is 99% identical to the previous one, and only things like 'balance' change.
By distributing on accountID and organizing on AccountID, Timestamp, I saw a 10-15% smaller size. Some data slices saw a bigger effect because they contained a lot of 'system' account IDs, which have a different pattern in the data. The DDL below sketches roughly what that looks like.
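Something along these lines (column names are illustrative, not my real ones; double-check the ALTER TABLE and GROOM syntax against your Netezza version):

    -- New table: pick the distribution and organizing keys up front.
    CREATE TABLE account_snapshot (
        accountid   BIGINT,
        snapshot_ts TIMESTAMP,
        balance     NUMERIC(18,2)
        -- ...plus the many other, rarely changing columns
    )
    DISTRIBUTE ON (accountid)
    ORGANIZE ON (accountid, snapshot_ts);

    -- Existing table: add the organizing keys, then groom so the rows are
    -- actually re-ordered (and re-compressed) on disk.
    ALTER TABLE account_snapshot ORGANIZE ON (accountid, snapshot_ts);
    GROOM TABLE account_snapshot RECORDS ALL;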
In short:
A) it's perfectly natural
B) don't worry too much about it since:
C) a 'large' table on a Netezza system is not the same as on a 4-core database with too little memory and sloooow disks :)