I have some questions and some possible bottleneck findings.
First, is this a CentOS 5 or 6 system? Because in 6 we have an excellent tool called blktrace, which is ideal for measuring IO impact in this kind of situation.
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/ch06s03.html
We can then parse the output with btt and find where the bottleneck is (application, filesystem, scheduler, or storage), that is, at which layer the IO spends most of its time.
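As a rough sketch of that workflow (the device name `/dev/sda` and the 30-second window are placeholders; blktrace needs root and a mounted debugfs):

```shell
# Make sure debugfs is mounted; blktrace depends on it.
mount -t debugfs debugfs /sys/kernel/debug 2>/dev/null

# Trace the block device for 30 seconds while the workload runs.
blktrace -d /dev/sda -w 30 -o trace

# Merge the per-CPU trace files into one binary stream for btt.
blkparse -i trace -d trace.bin > /dev/null

# Summarize: btt reports per-phase latencies such as Q2G, G2I,
# I2D and D2C. A large D2C points at the storage itself, while
# large Q2G/G2I values point at the scheduler/request layer.
btt -i trace.bin
```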
Now, coming to your question: in theory it will obviously increase the number of inodes, and as you keep creating or accessing files or directories inside those nested directories, access time will increase. The kernel has to traverse a deeper filesystem hierarchy, and that is without a doubt an overhead.
Another point to note is that as you increase the number of directories, inode and dentry cache usage will climb, consuming more RAM. This is accounted as slab memory, so if your server is running low on memory, that is another thing to consider.
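You can watch that slab consumption directly; for example (on some kernels `/proc/slabinfo` is readable by root only):

```shell
# Total slab memory, which includes the inode and dentry caches:
grep -E '^(Slab|SReclaimable)' /proc/meminfo

# Per-cache breakdown; look for the dentry and *_inode_cache lines:
grep -E 'dentry|inode_cache' /proc/slabinfo 2>/dev/null

# Or interactively, sorted by size:
# slabtop -o | head -15
```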
Speaking of a real-world example, I recently saw that on a highly nested ext3 filesystem, creating a subdirectory for the first time took around 20 seconds, whereas on ext4 it took around 4 seconds. That is because of how block allocation is structured in the different filesystems. If you use XFS or ext4, it is needless to say that you will get some performance boost, however minimal it might be.
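If you want to reproduce that kind of comparison on your own filesystems, a minimal timing sketch (the depth of 100 and the directory names are arbitrary; run it once per filesystem you want to compare):

```shell
# Build a deep directory tree under a temporary base directory.
base=$(mktemp -d)
dir="$base"
for i in $(seq 1 100); do
    dir="$dir/d$i"
done
mkdir -p "$dir"

# Time the first-ever creation of a new subdirectory at the bottom
# of the tree; this is the operation that differed so much between
# ext3 and ext4 in my case.
time mkdir "$dir/new"

# Clean up.
rm -rf "$base"
```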
So, if you are just asking what the right choice of filesystem is: ext3 is a bit outdated. That's all I can offer without further data and benchmarks.