If that's how long it's taking, then that's how long it takes. The interpreter has to start up and establish a full working PostScript environment, then fully interpret the input, including all the fonts, and pass that to the output device. The output device records the font, point size, orientation, colour and position, and attempts to calculate the Unicode code points for all the text. Then, depending on the options (which you haven't given), it may reorder the text before output. Finally it outputs the text, closes the input and output files, releases all the memory used and cleanly shuts down the interpreter.
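For context, a typical text-extraction invocation looks something like the sketch below; this is an assumption on my part since you haven't shown your command line, and "input.pdf" and "out.txt" are placeholder names:

```shell
# Hypothetical txtwrite invocation; file names are placeholders.
# -q suppresses startup chatter; -o sets the output file and
# implies -dBATCH -dNOPAUSE. The guard makes this a no-op where
# Ghostscript or the input file is not present.
if command -v gs >/dev/null 2>&1 && [ -f input.pdf ]; then
  gs -q -sDEVICE=txtwrite -o out.txt input.pdf
fi
```

Every run of a command like that pays the full startup, interpretation and shutdown cost described above.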
You haven't given an example of the file you are using, but half a second doesn't seem like a terribly long time to do all that.
In part you can blame all the broken PDF files out there: every time another broken file turns up ('but Acrobat reads it!'), another test has to be made and a workaround established, and generally all of these slow the interpreter down.
The resolution will have no effect, and I find it very hard to believe the media size makes any difference at all, since it isn't used. Don't use -dNOGC; that's a debugging tool and will cause memory usage to increase.
The most likely way to improve performance would be not to shut the interpreter down between jobs, since startup and shutdown are probably the largest part of the time spent when a job is that quick. Of course, that means you can't simply fork Ghostscript, which likely means doing some development with the API, and that could potentially mean you were infringing the AGPL, depending on your eventual plans for this application.
If you would like to supply a complete command line and an example file I could look at it (you could also profile the program yourself), but I very much doubt that your goal is attainable, and it is definitely not reliably attainable for any random input file.