Suppose I have to build a networked application for an organisation, say a bank. Here a member can have different kinds of accounts such as loan accounts, GDCS accounts, deposit accounts, etc. To store the data I thought of two approaches, taking the deposit account as an example. (A member can deposit an amount on any day.)
1. Storing the deposit details of every member in a single table that has member_id as a field.
2. Storing the deposit details of each member in a separate table named member_id_deposits (one table per member).
In case 1, there will be more than one record with the same member_id, so there is data redundancy, since member_id is repeated. In case 2, there is no such redundancy, because all the deposit details of a member for different days are stored in that member's own table. But in this case, if there are 100,000 members there will be 100,000 tables.
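To make the comparison concrete, here is a rough sketch of approach 1 (the table and column names like deposits, member_id, deposit_date and amount are only illustrative, not my actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway database just to illustrate the idea

# Approach 1: one shared table; member_id is a column, so it repeats on every row.
conn.execute("""
    CREATE TABLE deposits (
        deposit_id   INTEGER PRIMARY KEY,
        member_id    INTEGER NOT NULL,
        deposit_date TEXT    NOT NULL,
        amount       REAL    NOT NULL
    )
""")

# The same member appears in many rows, one row per deposit.
conn.executemany(
    "INSERT INTO deposits (member_id, deposit_date, amount) VALUES (?, ?, ?)",
    [(1, "2024-01-05", 500.0),
     (1, "2024-01-20", 250.0),
     (2, "2024-01-07", 100.0)],
)

# All deposits of one member can still be fetched with a simple filter.
rows = conn.execute(
    "SELECT deposit_date, amount FROM deposits WHERE member_id = ?", (1,)
).fetchall()
print(rows)

# Approach 2 would instead repeat something like
#   CREATE TABLE member_1_deposits (deposit_date TEXT, amount REAL)
# once for every member, so 100,000 members would mean 100,000 tables.
```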
So which approach should be followed: the one with fewer tables, or the one that reduces redundancy but has a very large number of tables?
I know that a primary concern in database design is reducing redundancy, so from that point of view the second design looks better. But it has a lot of tables. Is there any problem with having a very large number of tables? Is there a limit on the maximum number of tables that can be stored in a database? Is a database with a large number of tables slow to query?