It's been a while since I've used the FastCGI facilities in Rebol, so I can't really answer the first question very well, but I can help on your second question, though you might not like it.
No one has recreated the fastcgi:// scheme for R3 yet. I say "recreated" because R3 has a completely different port model than R2, so port schemes aren't portable between the two platforms at all. On top of that, the R2/Command port scheme is built-in native code, which wouldn't be portable to R3 even if it were open sourced, because R3's system model is different too. And regardless of portability, R2 contains a lot of commercially licensed code that Rebol Technologies doesn't have the right to open source - pretty much everything that could be opened has already made it into R3. So if something isn't there already, it's safe to assume it either isn't compatible or can't be open sourced.
It would be faster and easier to start over from scratch in R3 with a brand-new fastcgi:// scheme that follows the R3 model. The most the R2 source would help with, even if we had it, would be documenting the FastCGI protocol, and AFAIK the protocol is better documented elsewhere. In that case it would likely be a good idea to build a host port optimized for this kind of thing, which is a bit easier to do in R3.
On the plus side, from what I remember the FastCGI protocol is not that difficult to do, and the new R3 port model is a lot better for this kind of thing, so starting from scratch might not be too difficult. And if we're lucky, this can all be done in user code that just runs on the regular R3 interpreter, no host code adaptation necessary. So the news doesn't have to be that bad.
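To give a sense of how small the protocol is: every FastCGI message is an 8-byte record header followed by the content, and framing those records is most of the work. Here's a minimal sketch in Python, just to illustrate the wire format from the FastCGI 1.0 specification (an R3 scheme would of course do this with Rebol's binary parsing instead):

```python
import struct

# FastCGI 1.0 record header: 8 bytes, big-endian:
# version, type, requestId, contentLength, paddingLength, reserved
FCGI_HEADER = struct.Struct("!BBHHBB")

# A few record types from the spec
FCGI_BEGIN_REQUEST = 1
FCGI_END_REQUEST = 3
FCGI_PARAMS = 4
FCGI_STDIN = 5
FCGI_STDOUT = 6

def pack_record(rtype, request_id, content=b""):
    """Frame one FastCGI record (no padding, for simplicity)."""
    header = FCGI_HEADER.pack(1, rtype, request_id, len(content), 0, 0)
    return header + content

def unpack_record(data):
    """Split one complete record off the front of a byte buffer."""
    version, rtype, request_id, clen, plen, _ = FCGI_HEADER.unpack(data[:8])
    content = data[8:8 + clen]
    rest = data[8 + clen + plen:]
    return (rtype, request_id, content), rest
```

That's essentially the whole framing layer; the rest of a handler is reading PARAMS and STDIN records, doing the work, and writing STDOUT and END_REQUEST records back.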
Now an attempt to answer your first question: It depends.
It really depends on what you are trying to do, and how you have things set up. CGI has the overhead of starting a new process for every request, so it's only fast if that startup overhead is significantly less than the rest of the overhead of the request (filesystem or database access, for instance). Rebol, particularly R2, has a significant amount of process startup overhead because it is an interpreter with built-in interpreted code to load at startup. You can minimize that overhead by using the SDK to build your app with only the code you need, but that still might not help enough in your particular case (not knowing what you're trying to do).
FastCGI is a way to get rid of that process starting overhead by running an out-of-process app server instead of starting a new process per request. For something like Rebol which has significant process startup overhead, the savings can be just as significant.
One thing to consider is that R2 has a mostly single-thread-per-process model, so if you want to handle multiple concurrent requests you either have to interleave them within the same process (the way Node.js does), or have FastCGI allocate multiple server processes to handle requests independently, or both. If that prospect is intimidating, ask the Rebol experts for advice, or just configure FastCGI to spawn more app server processes to run at the same time.
So, how many requests you can do per second with a FastCGI setup depends on how you configure FastCGI, how you write your FastCGI handler code, and how much and what kind of work your requests are doing.
It's telling, though, that you are getting 4-5 requests per second in CGI mode. That's unusually slow - Rebol's startup overhead isn't anywhere near that bad, even in the worst case. That means either your requests are doing a lot of work, or you don't have enough RAM to run more than a couple of CGI processes at a time, or your CGI is configured badly. I'm not sure FastCGI would help as much in that case as better hardware or a better Apache configuration would. Nonetheless, make sure you have enough FastCGI worker processes, write your handler to serve multiple requests concurrently, and you'll save as much overhead as you can.
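As a rough way to size the worker pool (hypothetical numbers again), Little's law gives the minimum number of single-threaded workers needed to sustain a target rate:

```python
import math

# Back-of-the-envelope FastCGI pool sizing.
# Both numbers below are hypothetical, for illustration only.
service_time = 0.25   # seconds one request occupies one single-threaded worker
target_rps = 20       # requests per second you want to sustain

# Little's law: requests in flight = arrival rate x service time,
# which is the minimum number of single-threaded workers you need.
workers = math.ceil(target_rps * service_time)
print(workers)
```

In practice you'd add some headroom on top of that minimum, and measure your real per-request service time rather than guessing it.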
Good luck!