I have only developed a toy language for teaching purposes, so take this with
two grains of salt. But I had a great experience with compiling to abstract
syntax trees, with semantic analysis and type checking in the parser so that
the trees were fully decorated with zero additional passes. Then it was easy
to build a tree-walking interpreter to support language development. It
included an arg/result stack to mimic the runtime stack of compiled code.
Consequently, there was only a small conceptual jump from the interpreter for
a given tree node to its corresponding code generator. Because the tree node
types were OO classes, the interpreter and generator code were side-by-side.
Careful design of the node types (I remember static link counts in id
reference nodes worked out very well) allowed the interpreter to be extremely
clean. And it was easily fast enough to construct a nice test suite.
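A minimal sketch of that layout, assuming Python and invented names (`Node`, `IntLit`, `Add`, `interp`, `gen`) rather than the original implementation, showing the arg/result stack and the interpreter and generator methods side by side in each node class:

```python
# Hypothetical sketch: each AST node class carries both its interpreter
# method and its code generator method, so the two stay side by side.

class Node:
    def interp(self, stack): ...   # tree-walking interpreter
    def gen(self, out): ...        # code generator (emits assembly lines)

class IntLit(Node):
    def __init__(self, value):
        self.value = value          # decorated during parsing
    def interp(self, stack):
        stack.append(self.value)    # push result on the arg/result stack
    def gen(self, out):
        out.append(f"push {self.value}")

class Add(Node):
    def __init__(self, left, right):
        self.left, self.right = left, right
    def interp(self, stack):
        self.left.interp(stack)
        self.right.interp(stack)
        b, a = stack.pop(), stack.pop()
        stack.append(a + b)         # mirrors the compiled calling convention
    def gen(self, out):
        self.left.gen(out)
        self.right.gen(out)
        out.append("add")           # pops two, pushes sum, like interp above

stack = []
Add(IntLit(2), IntLit(3)).interp(stack)
print(stack)  # prints [5]
```

Because `interp` and `gen` consume and produce values in the same stack discipline, each generator method can be written by transliterating the interpreter method directly above it.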
Interpreter and code generator had beautiful symmetries. I remember
illustrating how code generation for IF ELSE could work by first changing the
interpreter to use GOTOs to choose the correct recursive call and then
translating that almost line-for-line to the code generator for similar jumps.
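That IF/ELSE translation could look roughly like this (again a hedged Python sketch with invented names, not the original code): the interpreter's branch choice maps almost line-for-line onto conditional jumps in the generator.

```python
# Hypothetical sketch of the IF/ELSE symmetry between interpreter
# and code generator.

import itertools

_labels = itertools.count()

def new_label():
    return f"L{next(_labels)}"

class IntLit:
    def __init__(self, v): self.v = v
    def interp(self, stack): stack.append(self.v)
    def gen(self, out): out.append(f"push {self.v}")

class IfElse:
    def __init__(self, cond, then_part, else_part):
        self.cond, self.then_part, self.else_part = cond, then_part, else_part

    def interp(self, stack):
        self.cond.interp(stack)
        if stack.pop() == 0:            # like "jz else_lbl" below
            self.else_part.interp(stack)
        else:
            self.then_part.interp(stack)
                                        # falling out here is "jmp end_lbl"

    def gen(self, out):
        else_lbl, end_lbl = new_label(), new_label()
        self.cond.gen(out)
        out.append(f"jz {else_lbl}")    # condition false: jump to else part
        self.then_part.gen(out)
        out.append(f"jmp {end_lbl}")    # skip over the else part
        out.append(f"{else_lbl}:")
        self.else_part.gen(out)
        out.append(f"{end_lbl}:")
```

The only genuinely new ingredient in `gen` is label allocation; the control flow itself is the interpreter's `if`/`else` made explicit as jumps.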
So the students (and I, alongside them) easily implemented simple tree
optimizations (constant folding, as I recall) and a code generator, with the
interpreter there as a reference for the language semantics. At the end, we
could say "tc foo.toy" to interpret the source foo.toy or "tc -c foo.toy" to
compile to assembler code, run the assembler, and then load and run the
compiled assembly code. It was all rather nice and convenient.
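The constant-folding optimization mentioned above is a natural fit for this node-class style; a minimal sketch, again with assumed Python node names rather than the original implementation:

```python
# Hypothetical sketch of constant folding as a bottom-up tree rewrite:
# each node folds its children first, then collapses itself if all
# operands are now literal constants.

class IntLit:
    def __init__(self, v): self.v = v
    def fold(self): return self         # a literal is already folded

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right
    def fold(self):
        left, right = self.left.fold(), self.right.fold()
        if isinstance(left, IntLit) and isinstance(right, IntLit):
            return IntLit(left.v + right.v)   # fold e.g. 2 + 3 into 5
        return Add(left, right)                # otherwise keep the node

tree = Add(Add(IntLit(2), IntLit(3)), IntLit(4))
folded = tree.fold()
print(folded.v)  # prints 9
```

Because the pass runs before interpretation or code generation, both back ends benefit from it for free, and the interpreter doubles as a check that folding preserved the semantics.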

The obvious conjecture is that this would be an efficient way to
develop a language and its prototype implementation: build the
interpreter first because it's clean and easy. But ensure the AST
structures and interpreter map easily onto a runtime model for
compiled code. Then the code generator need only be developed after
the language is stable.