I'm the guy responsible for DMS, which I believe is what you're referring to in your question.
When trying to generate code for multiple target domains, you somehow have to express how to map the specification to the individual targets, no matter what machinery you use to do it.
Hard issues show up when the semantic gap between the specification language and the target is different for each target. (Most of the multi-target code generators I've encountered produce output languages of the same general type, which mostly sidesteps this problem.)
One way to do this is to write a separate translator for each output language. That works, at the cost of a lot of work. Another way is to translate the specification language to an intermediate language/representation/domain in which most of the translation issues have already been handled (e.g., an abstract procedural language), and then build translators from that intermediate domain to the individual targets. This tends to be a lot easier. If you have a real variety of targets, you may find that some targets have some things in common, but not with the others; in that case, what you want is multiple intermediate representations, one for each set of commonalities.
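To make the intermediate-domain idea concrete, here's a minimal sketch (Python, with made-up node shapes; this is not how DMS represents things): the expensive lowering from the specification to an abstract procedural form is written once, and each target then needs only a small, cheap emitter from that form:

    # Sketch: lower the spec once, then write one small emitter per target.
    # All names and node shapes here are hypothetical.

    def lower_to_procedural(spec):
        # The one hard translation: spec-domain constructs become abstract
        # procedural ops (e.g., a declarative "sum" becomes an explicit loop).
        return [("assign", "total", "0"),
                ("foreach", "item", spec["collection"]),
                ("assign", "total", "total + item"),
                ("end",)]

    def emit_python(ops):
        lines, indent = [], 0
        for op in ops:
            if op[0] == "assign":
                lines.append("    " * indent + f"{op[1]} = {op[2]}")
            elif op[0] == "foreach":
                lines.append("    " * indent + f"for {op[1]} in {op[2]}:")
                indent += 1
            elif op[0] == "end":
                indent -= 1
        return "\n".join(lines)

    def emit_c(ops):
        ...  # a second, equally small emitter; the hard lowering is shared

    print(emit_python(lower_to_procedural({"collection": "items"})))

Adding an Nth target means writing one more small emitter against the intermediate form, rather than redoing the hard spec-level translation N times.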
All of this is orthogonal to how you actually express the translators. You can write them as classic compilers; you'll be busy for a long time. You can write them as some kind of syntax-directed translation over an input specification captured as a graph ("crawl the graph and spit out text for each node"), which seems pretty common (most "model-driven" code generators seem to be of this type), but doing it this way doesn't offer much help beyond the organizing insight itself.
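For contrast, here's that graph-crawling style in miniature (Python, hypothetical node shapes). Notice that the spec-to-target mapping is buried in opaque imperative code; there are no rules you can inspect, reuse, or re-apply:

    # "Crawl the graph and spit out text": a typical syntax-directed generator.
    # Node shape (hypothetical): ("add"|"mul", left, right) or ("num", value).

    def gen(node):
        kind = node[0]
        if kind == "num":
            return str(node[1])
        if kind == "add":
            return f"({gen(node[1])} + {gen(node[2])})"
        if kind == "mul":
            return f"({gen(node[1])} * {gen(node[2])})"
        raise ValueError(f"unknown node kind: {kind}")

    # (1 + 2) * 3
    print(gen(("mul", ("add", ("num", 1), ("num", 2)), ("num", 3))))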
The way I like, and the reason I built DMS (and others built TXL and Stratego), is to use source-to-source transformations. This way you can write down the mapping from your input language to your output language as rules you can inspect, rules that are essentially independent of the underlying transformation machinery. That is a big win if you are going to write, in effect, lots of rules, which happens especially often when you are targeting multiple languages. Transformation engines have another major advantage over code generators that just spit out text: you can process the output of one stage of the translator by applying more transformations. This means you can optimize code, you can build simpler rules (because you can use a chain of rules instead of one computation that represents the cross product, which is always big and hairy), and you can translate through several levels of intermediate domains.
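Here's a toy illustration of that style (plain Python over tuples; real transformation systems like DMS match against the grammars of the domains involved, not ad hoc data structures). The point is that the rules are inspectable, and the output of one rewrite feeds further rewrites:

    # Toy source-to-source rewriting: each rule maps a matching subtree to a
    # replacement subtree, and rewrites chain until nothing more applies.

    def rule_add_zero(node):          # x + 0  ->  x
        if node[:1] == ("add",) and node[2] == ("num", 0):
            return node[1]

    def rule_mul_one(node):           # x * 1  ->  x
        if node[:1] == ("mul",) and node[2] == ("num", 1):
            return node[1]

    def rewrite(node, rules):
        # Bottom-up: rewrite children first, then try each rule at this node.
        if isinstance(node, tuple) and len(node) == 3:
            node = (node[0], rewrite(node[1], rules), rewrite(node[2], rules))
        for rule in rules:
            out = rule(node)
            if out is not None:
                return rewrite(out, rules)  # chain: output feeds later rules
        return node

    tree = ("mul", ("add", ("var", "x"), ("num", 0)), ("num", 1))
    print(rewrite(tree, [rule_add_zero, rule_mul_one]))  # ('var', 'x')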
Now, the other reason I built DMS the way I did: it forces a clear separation of each of the "domains" (input spec, output domains, intermediate domains). You (and the transforms) are NEVER confused as to what a structure represents. Stratego and TXL IMHO goofed here; they only manipulate "one" representation at a time. If you are translating between two notations A and B, you have to set up a (IMHO screwball) "union" domain having both A and B in it. And then, if you have a "+" piece of syntax in both A and B, you have to somehow worry about whether that "+" means the "+" of the A domain or the "+" of the B domain. If you can't tell, how are you going to know which transformations to apply to it?
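A crude way to picture what that separation buys you (Python again, entirely hypothetical machinery): if every tree carries its domain and every rule declares the domain it applies to, a rule written for A's "+" can never accidentally fire on B's "+":

    # Keeping domains separate: trees are tagged with their domain, and rules
    # declare theirs, so identical-looking syntax in A and B stays distinct.

    from dataclasses import dataclass

    @dataclass
    class Tree:
        domain: str     # "A", "B", or some intermediate domain
        node: tuple

    def apply_rule(rule, tree):
        if rule["domain"] != tree.domain:
            return None                  # rule can't even see foreign trees
        return rule["fn"](tree)

    # Suppose "+" in A means string concatenation; the A->B rule must map it
    # to B's explicit concat operator, never leave it as B's numeric "+".
    a_to_b_plus = {
        "domain": "A",
        "fn": lambda t: Tree("B", ("concat", t.node[1], t.node[2]))
              if t.node[0] == "+" else None,
    }

    t = Tree("A", ("+", "s1", "s2"))
    print(apply_rule(a_to_b_plus, t))                   # lands in domain B
    print(apply_rule(a_to_b_plus, Tree("B", t.node)))   # None: wrong domain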
The idea of refining across multiple domains, and using transformations to do it, alas, isn't mine. It was proposed by James Neighbors (the source of the term "domain analysis") back in the 1980s, for his Draco system. But I get to stand on the shoulders of giants. DMS inherits Neighbors' domain concepts and transformational foundations. A difference is that DMS is designed for scale, arbitrary domains (DSLs as well as current programming languages; it has predefined language modules for C++, C#, Java, JavaScript, ...), and carrying out deep analyses to support the transformations.