There are multiple ways to convert a CFG into a piece of code that does actual parsing, each with its strengths and weaknesses.
Some algorithms, like the CYK algorithm, Unger's algorithm, and (my personal favorite) Earley's algorithm, can take an arbitrary CFG and a string as input, then use dynamic programming to determine a parse tree for that string if one exists. The operation of these algorithms doesn't resemble a typical pushdown automaton, since they work by filling in tables of values while processing the input one symbol at a time.
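To give a feel for the table-filling approach, here's a minimal sketch of a CYK recognizer. CYK requires the grammar in Chomsky normal form; the toy grammar and all the names here are illustrative, not from any library. The table entry for `(i, j)` collects every nonterminal that derives the substring `s[i:j]`.

```python
# Toy grammar in Chomsky normal form (illustrative):
#   S -> A B | B A,   A -> 'a',   B -> 'b'
binary_rules = {("A", "B"): {"S"}, ("B", "A"): {"S"}}
terminal_rules = {"a": {"A"}, "b": {"B"}}

def cyk_recognize(s):
    n = len(s)
    if n == 0:
        return False
    # table[(i, j)] = set of nonterminals deriving the substring s[i:j]
    table = {}
    for i, ch in enumerate(s):
        table[(i, i + 1)] = set(terminal_rules.get(ch, ()))
    # Fill in longer spans from shorter ones - classic dynamic programming
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            cell = set()
            for k in range(i + 1, j):  # try every split point
                for left in table[(i, k)]:
                    for right in table[(k, j)]:
                        cell |= binary_rules.get((left, right), set())
            table[(i, j)] = cell
    return "S" in table[(0, n)]
```

Note that nothing here looks like a pushdown automaton - it's just nested loops over a table, and with a little bookkeeping (recording which rule filled each cell) the same table yields a parse tree.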
Some parsing algorithms, especially LR(1) and the general family of LR parsers, more directly maintain a parsing stack and use a finite-state control to drive the parser. LR(1) parsers can't handle all possible CFGs, though - they only handle the LR(1) grammars, which generate exactly the deterministic context-free languages - but variations like GLR parsers can handle arbitrary CFGs by essentially running multiple stacks in parallel. The parser generators bison and yacc produce parsers in this family, and if you take a look at how their input files work you'll get a sense of how CFGs are encoded in software.
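To make the stack-plus-finite-control idea concrete, here's a minimal sketch of the shift/reduce loop an LR parser runs. The tables below are hand-built for a toy grammar (E -> E '+' 'n' | 'n'); in practice a tool like yacc computes them for you, and everything here is illustrative rather than what any particular tool emits.

```python
# Hand-built SLR(1) tables for the toy grammar E -> E '+' 'n' | 'n'
ACTION = {
    (0, "n"): ("shift", 2),
    (1, "+"): ("shift", 3),
    (1, "$"): ("accept", None),
    (2, "+"): ("reduce", ("E", 1)),   # E -> n
    (2, "$"): ("reduce", ("E", 1)),
    (3, "n"): ("shift", 4),
    (4, "+"): ("reduce", ("E", 3)),   # E -> E + n
    (4, "$"): ("reduce", ("E", 3)),
}
GOTO = {(0, "E"): 1}

def lr_parse(tokens):
    stack = [0]                       # stack of automaton states
    tokens = list(tokens) + ["$"]     # "$" marks end of input
    pos = 0
    while True:
        action = ACTION.get((stack[-1], tokens[pos]))
        if action is None:
            return False              # syntax error
        op, arg = action
        if op == "shift":
            stack.append(arg)
            pos += 1
        elif op == "reduce":
            lhs, rhs_len = arg
            del stack[-rhs_len:]      # pop one state per right-hand-side symbol
            stack.append(GOTO[(stack[-1], lhs)])
        else:                         # accept
            return True
```

The finite-state control is entirely in the ACTION/GOTO tables; the loop itself never changes, which is why these parsers are so amenable to machine generation.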
LL(1) parsers and simple backtracking parsers work top-down and typically use a stack (often, the runtime call stack) to parse input strings. They can't handle all grammars, though. The ANTLR parser generator produces parsers in this family.
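As a sketch of the top-down style, here's a hand-written recursive-descent parser - the kind of code an LL(1) grammar leads to - where the runtime call stack tracks the pending nonterminals. The grammar is a toy one for illustration, and the helper names are mine.

```python
# Toy grammar (illustrative):
#   expr -> term ('+' term)*
#   term -> 'n' | '(' expr ')'
def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def expect(tok):
        nonlocal pos
        if peek() != tok:
            raise SyntaxError(f"expected {tok!r} at position {pos}")
        pos += 1

    # One function per nonterminal; recursion mirrors the grammar
    def expr():
        term()
        while peek() == "+":
            expect("+")
            term()

    def term():
        if peek() == "n":
            expect("n")
        elif peek() == "(":
            expect("(")
            expr()
            expect(")")
        else:
            raise SyntaxError(f"unexpected {peek()!r}")

    expr()
    if pos != len(tokens):
        raise SyntaxError("trailing input")
```

Because each nonterminal decides what to do by looking at only the next token, no backtracking is needed - that single-token-lookahead property is exactly what the "1" in LL(1) refers to.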
Packrat parsers work on parsing expression grammars (PEGs), which look like CFGs but make the choice operator ordered: alternatives are tried in a fixed priority order and the parser commits to the first one that succeeds, with memoization keeping the running time linear. Code using these parsers tends to closely mirror the shape of the grammar. Parser combinators are another modern technique where the parsing logic looks a lot like the CFG.
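To show how parser combinators mirror the grammar, here's a minimal sketch (all names are mine, not from any particular library): a parser is a function from an input string and a position to a list of possible (result, next position) pairs, and small functions assemble bigger parsers from smaller ones.

```python
def lit(ch):
    """Parser matching a single literal character."""
    def parse(s, i):
        return [(ch, i + 1)] if s[i:i + 1] == ch else []
    return parse

def seq(p, q):
    """Run p, then q on whatever input remains after p."""
    def parse(s, i):
        return [((a, b), k) for a, j in p(s, i) for b, k in q(s, j)]
    return parse

def alt(p, q):
    """Try both alternatives, keeping every successful result."""
    def parse(s, i):
        return p(s, i) + q(s, i)
    return parse

# The combinator expression reads like the grammar A -> 'a' 'b' | 'a' 'c'
ab_or_ac = alt(seq(lit("a"), lit("b")), seq(lit("a"), lit("c")))
```

Returning a list of results makes ambiguity and backtracking fall out for free; production libraries add many conveniences on top, but the grammar-shaped structure is the same.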
I would recommend taking a compilers course or picking up a copy of "Parsing Techniques: A Practical Guide" by Grune and Jacobs if you're interested in learning more about this.