Regarding "A) How many hits per second/minute/hour these pages are getting" - this information will be in the logs, and just about any log parser and/or web stats package will work it out for you. The common free/OSS ones are listed here.
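If you just want a quick look before installing anything, and assuming the standard common/combined log format with the log at /var/log/apache2/access.log (adjust the path for your setup), something along these lines will list the busiest minutes:

    awk '{ print substr($4, 2, 17) }' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head

Each output line is a request count followed by the minute those requests arrived in.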
For "B) How long it's taking to serve each page." - this can also be included in the logs if you use a custom log format, though you'll have to check the documentation of the log analysis tool you choose to see if it supports this extra information. Be careful about drawing conclusions from this figure without other evidence to back them up, as the time taken will obviously be affected by other load on the system as well as by the page itself.
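For example, on Apache 2.x the %D format directive records the time taken to serve the request in microseconds (%T gives whole seconds). A minimal sketch - the format name and log path here are just my own choices:

    LogFormat "%h %l %u %t \"%r\" %>s %b %D" timing
    CustomLog /var/log/apache2/access_timing.log timing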
One of the most likely sources of trouble in the circumstance you describe is the database. You don't say which database server you are using, so we can't be more specific here, but most databases allow logging of long-running queries, which, like the Apache "time taken" log field, you can use to spot places to look for optimisation opportunities. Specifically look for queries that perform table scans over large datasets.
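As an illustration only, since you haven't said which database you're on: with MySQL the slow query log is enabled with something like the following in my.cnf (option names vary slightly between versions, and the path and threshold are just examples):

    [mysqld]
    slow_query_log                = 1
    slow_query_log_file           = /var/log/mysql/mysql-slow.log
    long_query_time               = 2   # log anything taking more than 2 seconds
    log_queries_not_using_indexes = 1   # also log queries doing full scans

Most other databases have an equivalent (Postgres's log_min_duration_statement, for instance).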
The other main possibility is simply a glut of activity that your machine is not of a high enough spec to cope with - an Apache log analyser should make this apparent if it is the case. A sudden burst of traffic can result in extra Apache processes being launched and many extra database queries, and either can produce a lot of I/O activity, from the database access itself or from swapping if the extra processes push the machine past what fits in RAM.

It would be worth watching memory use and swap activity during one of the busy spots, or, if you can't catch one as it happens, leaving some logging in place so you can review what happened after the fact. I use collectd for such monitoring (there are other options around with similar features if collectd is not to your taste), and as well as monitoring system parameters like CPU use, I/O and memory+swap use, it has modules for logging specific Apache and MySQL/Postgres properties which you may find helpful. You state that you already have an I/O chart, which implies a solution like this is already installed - check what other property-logging options it has, specifically whether it can distinguish I/O to the partitions where your data lives from I/O caused by swap activity.
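As a rough sketch of the sort of collectd setup I mean (assuming collectd 4.7 or later with the apache plugin compiled in, and mod_status enabled on the web server with ExtendedStatus On; the URL and instance name are examples):

    # system-level metrics: CPU, memory, swap and disk I/O
    LoadPlugin cpu
    LoadPlugin memory
    LoadPlugin swap
    LoadPlugin disk

    # Apache metrics scraped from mod_status's machine-readable output
    LoadPlugin apache
    <Plugin apache>
      <Instance "local">
        URL "http://localhost/server-status?auto"
      </Instance>
    </Plugin>

There is a similar mysql plugin (and a postgresql one) if you want query and connection counts graphed alongside the system figures.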
If gluts of activity are the issue then you may find you need more RAM, a better I/O subsystem, or both, in order to serve the site's peak load - though there may also be places in your code or database design where optimisation would help: specifically, look at improving the indexing of your data in the database if full table scans are being performed where they shouldn't be necessary, and consider caching techniques to reduce the number of times dynamic content is rebuilt from scratch.
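To make the indexing point concrete (the table and column names here are made up for illustration), most databases will show you whether a query is doing a full scan via EXPLAIN, and an index on the filtered column is often the fix:

    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
    -- a plan showing "type: ALL" (MySQL) or "Seq Scan" (Postgres) means a full table scan;
    -- an index on the column used in the WHERE clause usually avoids it:
    CREATE INDEX idx_orders_customer_id ON orders (customer_id);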