I know there are different ways to combine programming languages (Haskell's FFI, Boost with C++ and Python, etc.). I have an odd interest in combining programming languages; however, I have only found it "necessary" once (I didn't want to rewrite some older code). I also notice that this interest is widely shared: there is an abundance of questions about integrating languages on SO.

My question is simply: are there any other benefits to combining programming languages? Is there value in mixing different programming paradigms (e.g. functional + OO, procedural + aspect-oriented)?

Any from-the-field examples would be much appreciated.

UPDATE

When I say "combine two languages" I mean using them in conjunction, in ways not necessarily originally intended. For example, suppose I use Boost to embed Python code in C++.

8 Answers

A typical example shows up in computer games, particularly AAA titles, where a C++ backend is the norm. The interface layer is often written in a scripting language such as Python or Lua. This lets developers test out new interface designs without touching the highly complex physics and graphics engines underneath, and at the same time allows modification by players who may not have the coding chops to handle a full game engine but can competently make a few interface tweaks.
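The engine/scripting split can be sketched in a few lines of Python. This is an illustrative toy, not any real engine's API: a compiled core exposes a small set of hooks, and the interface behaviour lives in a script string that a designer could edit without touching the core.

```python
# Toy sketch of the engine/scripting split: the "engine" core exposes a few
# safe hooks, and UI behaviour lives in a script that designers can edit
# without touching the engine. All names here are illustrative.

class Engine:
    def __init__(self):
        self.widgets = []
        self.title = ""

    # Hooks are deliberately small: the script never touches engine internals.
    def add_button(self, label):
        self.widgets.append(("button", label))

    def set_title(self, text):
        self.title = text

# In a real game this would live in a .lua or .py file shipped with the game.
UI_SCRIPT = """
set_title("Main Menu")
add_button("New Game")
add_button("Quit")
"""

def run_ui_script(engine, script):
    # Expose only the whitelisted hooks to the script's namespace.
    hooks = {"add_button": engine.add_button, "set_title": engine.set_title}
    exec(script, {"__builtins__": {}}, hooks)

engine = Engine()
run_ui_script(engine, UI_SCRIPT)
```

The point of the pattern is the narrow interface: the script can only call what the engine chooses to expose, so iterating on the UI never risks the physics or rendering code.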

Another widespread use case is the web itself: JavaScript, CSS and HTML combine to form a front end, with whatever you want as the backend on the server. Windows 8 apps use a similar approach, with declarative XAML defining interfaces and providing class outlines while C# fills in the details and runs the thing.

Thus, the typical divide will be a fast, statically typed language running a solidly tested codebase handling heavily numerical work with a scripting language running on top of it to provide easily changeable frontends handling visual and presentation functions as well as input.

Other use cases exist as well. Data processing languages like R can be bolted onto other codebases to provide analytic and presentation functions more easily than the native codebase could. As far as combinations of paradigms go, using Prolog for database work alongside some other language for the standard program functions is a known case. Another is having Fortran or Assembly routines for fast numerical computation inside some scripting language like Python, which is exactly what NumPy and SciPy do.
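The NumPy/SciPy pattern can be shown in miniature with nothing but the standard library: keep the interface in the scripting language but push the inner numeric loop into compiled code. Here Python's built-in `math.fsum` (a C routine) stands in for a Fortran/C kernel:

```python
# Miniature of the "scripting language over compiled kernel" pattern:
# the same sum computed by an interpreted Python loop and by math.fsum,
# whose loop runs entirely in C.
import math
import timeit

data = [0.1] * 100_000

def python_sum(xs):
    total = 0.0
    for x in xs:          # every iteration pays interpreter overhead
        total += x
    return total

def compiled_sum(xs):
    return math.fsum(xs)  # one call; the loop runs in C

t_py = timeit.timeit(lambda: python_sum(data), number=20)
t_c = timeit.timeit(lambda: compiled_sum(data), number=20)
# The C-backed version is typically several times faster, and fsum is also
# more accurate (it compensates for floating-point rounding error).
```

Real numeric libraries take this much further (vectorized operations, BLAS/LAPACK underneath), but the division of labour is the same.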

Programming languages are fundamentally abstractions that help you tell a computer what to do. Because they are abstractions, they hide some of the details and so simplify the overall task, enabling you to work out how to do some particular piece of programming without having to understand the whole thing at once. But that hiding also means they restrict what you can do: going outside the abstraction is hard, and so is having to build a lot of new abstractions on top of the base level provided by the language.

The net effect is that a particular programming language has a range of programming tasks it is well suited to, but it is not going to be good at everything. A low-level programming language allows you to do low-level work, which often happens to be performance critical (such as numeric-heavy processing). That's fine, but doing high-level work in the same language is nowhere near as easy as using a language that specializes in higher-level abstractions. What's more, there's no perfect set of abstractions to choose: some problems are better described in one style (e.g., functional programming), and others require a different approach (e.g., logic programming).

The best way out of this situation is to use multiple programming languages: let each language focus on the tasks it is good at, and transfer control to code written in another language for the other parts. It's specialization (and also componentization), and it is a good thing indeed. Of course, some languages happen to work particularly well together because they have strongly compatible ABIs (e.g., Java and Scala, C# and VB, C++ and Lua), but those are certainly not the only levels at which you can plug things together. Indeed, when you're accessing a web service you're probably using the same idea: the languages that clients are written in are probably not those that servers are written in, because even if there's no theoretical reason a language couldn't do both sides, some languages have better abstractions for one side or the other.
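The web-service point can be made concrete: the two sides only share a wire format, not a language. A minimal sketch, assuming a JSON payload; the "server" here could be written in Java, C#, or anything else:

```python
# Client and server languages can differ freely because they only agree on
# the wire format. Here a server (in whatever language) emitted JSON, and a
# Python client consumes it; neither side knows the other's implementation.
import json

# What some server put on the wire:
wire = '{"user": "alice", "roles": ["admin", "dev"], "active": true}'

payload = json.loads(wire)          # parse into native Python types
assert payload["active"] is True    # JSON true -> Python True

# The client's reply is again just a language-neutral document:
reply = json.dumps({"ok": True, "seen_roles": len(payload["roles"])})
```

Everything language-specific stays on its own side of the wire; only the agreed document format crosses it.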

Another example that has not been mentioned is using a new programming paradigm with existing legacy code and/or libraries.

For example, Java has tons of existing libraries and frameworks that can be used directly by new languages like Scala and Clojure. In this way, one has the advantage of using a clean implementation of a different programming paradigm that is not supported by Java, while retaining backward compatibility with legacy code and existing libraries. (I chose Java, Scala and Clojure but there are probably other examples I do not know of).

I would say the benefit is to get the most expressive and best performing solution to the problem at hand. Also, embedding one language in another often lets you move to a higher level of abstraction for parts of your work.

I do iOS development, so naturally I write Objective-C++, which gives the awesome power and complexity of C++ plus the run-time binding and introspection of Objective-C. I would think this is perhaps the most popular "hybrid" language/paradigm currently. Often I will write .mm (Objective-C++) files with mostly C++ and STL and just a little Objective-C. For example, last week I wrote a state machine in C++ that called back into an Objective-C object to perform its actions. Writing the machine in C++ was easier, more expressive, and reusable.
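The shape of that design can be sketched (in Python rather than C++) as a generic state machine that knows nothing about the host layer and reports transitions through a callback, the way the C++ machine called back into Objective-C. The states and events below are purely illustrative:

```python
# Sketch of a reusable state machine that hands control back to the host
# layer via a callback, mirroring the C++-calls-into-Objective-C pattern.

class StateMachine:
    def __init__(self, transitions, start, on_enter):
        self.transitions = transitions  # {(state, event): next_state}
        self.state = start
        self.on_enter = on_enter        # callback into the host layer

    def fire(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"no transition for {key}")
        self.state = self.transitions[key]
        self.on_enter(self.state)       # host performs the actual action

log = []  # stands in for the Objective-C side receiving callbacks
sm = StateMachine(
    transitions={("idle", "connect"): "connecting",
                 ("connecting", "ok"): "online",
                 ("online", "drop"): "idle"},
    start="idle",
    on_enter=log.append,
)
sm.fire("connect")
sm.fire("ok")
```

Keeping the transition logic in one generic component is what makes it reusable; the host layer supplies only the actions.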

In this particular project, I also wrote a Scheme interpreter that is embedded in the program and is used to control a communication channel, evaluating Scheme at runtime based on environmental conditions. Scheme is a good choice for this because it is so small and simple, and because "code is data." It also raises the level of abstraction, and a functional approach works well in this situation.
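A toy illustration of why a small Lisp is attractive here: a few dozen lines suffice to evaluate Scheme-style prefix expressions that arrive as plain strings at runtime. This is a sketch of the idea, not the answerer's actual interpreter:

```python
# Minimal evaluator for Scheme-style prefix expressions, showing how little
# machinery "code is data" needs: programs arrive as strings, are parsed
# into plain Python lists, and those lists are evaluated directly.

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # discard ")"
        return expr
    try:
        return int(tok)
    except ValueError:
        return tok  # a symbol

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def evaluate(expr):
    if isinstance(expr, int):
        return expr
    head, *args = expr
    if head == "if":                       # (if cond then else)
        cond, then, alt = args
        return evaluate(then) if evaluate(cond) else evaluate(alt)
    return OPS[head](*map(evaluate, args))

# A "program" that could arrive over a channel at runtime:
program = "(if (+ 0 1) (* 3 4) 99)"
result = evaluate(parse(tokenize(program)))
```

Because the parsed program is an ordinary data structure, the host can inspect, transform, or restrict it before running it, which is exactly what makes embedded Lisps convenient for runtime control.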

I don't find embedding little languages to abstract complex problems unusual at all, whether it's a set of tokens in a string that drive an interpreter through some complex decision tree, a full-blown specialty language of my own design for a unique problem, or a more off-the-shelf solution like Lua.

There is no such thing as a "general purpose" programming language. Every language is strong in one area and useless in others. Therefore, in a complex project covering many different problem domains, it is most efficient to use multiple languages. As far as I know, there are no benefits whatsoever in using a single language for a project where you could use multiple languages.

Of course, the most efficient language for any given problem domain is a language tailored specifically to that narrow domain. This approach is known as Language Oriented Programming, and such languages are called Domain Specific Languages. But even if you, for some weird reason, do not want to implement your own languages, there are multiple ready-to-use languages out there, and picking the one most suitable for a task will still be much more efficient than using a language that does not fit the problem domain's constraints.

One of the most efficient and convenient ways of mixing different languages together is using embedded DSLs hosted inside a single meta-language. This way you don't need any FFI, you can reuse your host language's libraries from within your DSLs, and you can reuse the host language's optimising compiler.
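An internal DSL can be sketched in a few lines of Python via operator overloading: query predicates are built as ordinary host-language objects, so there is no FFI and the host interpreter, libraries, and tooling all apply directly. The `Field`/`where` names are invented for this sketch:

```python
# Minimal internal-DSL sketch: comparison operators on Field objects build
# predicate functions, giving a small query language inside plain Python.

class Field:
    def __init__(self, name):
        self.name = name

    def __gt__(self, value):
        return lambda row: row[self.name] > value

    def __eq__(self, value):
        return lambda row: row[self.name] == value

def where(rows, *preds):
    return [r for r in rows if all(p(r) for p in preds)]

age, city = Field("age"), Field("city")
people = [{"age": 31, "city": "Oslo"},
          {"age": 19, "city": "Oslo"},
          {"age": 40, "city": "Lima"}]

# Reads like a query language, but it is ordinary Python expressions:
adults_in_oslo = where(people, age > 21, city == "Oslo")
```

This is the embedded-DSL trade-off in miniature: the "language" costs nothing to integrate because it never leaves the host.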

Another common approach is to host different languages within a runtime of a single common VM, like Java+Clojure+Scala or C#+VB.NET+F#.

I can't agree with that first statement. Of course there are general purpose programming languages. Most scripting languages such as Python and Ruby fall into this category. You can do desktop GUIs, web apps, text processing, networking, interactive shells, games, etc. They are very general purpose, useful for a very wide range of programs.
– Bryan Oakley, Mar 26 '13 at 11:27


@SK-logic: yes, for each application I listed there's a better fitting language. That's why python is a good general purpose language -- it will do fine for a large number of tasks if you don't have a highly tuned special purpose language. You seem to think that because there's a better language than python for each specific task, that python can't be a good general purpose language? That's the very definition of a general purpose language -- useful for a wide variety of programming tasks. The fact that there may be better languages doesn't change the fact that, often, python is Good Enough.
– Bryan Oakley, Mar 26 '13 at 12:04


@sk-logic: "[Python] does not even run on my PIC16". I'm not sure I see your point. A language doesn't have to run on every known platform to be general purpose. You seem to think that "general purpose" is defined as "perfect for every task". To me it means "good enough for most tasks".
– Bryan Oakley, Mar 26 '13 at 12:07


@BryanOakley: Every language has an application area or domain. Python is definitely very good in certain areas (e.g. web applications, prototyping, as a scripting language to be used on top of some existing engine) while it is unusable in other areas (like writing a device driver or avionic software). I would not say that "Python is not perfect for writing device drivers" but rather that "Python is unusable for writing device drivers".
– Giorgio, Mar 26 '13 at 12:22


@BryanOakley I think "general purpose" has been abused in the past by various authors to mean that a language can do everything and do it equally well. I've seen and used "multi purpose" as a better term for languages that are good enough for a variety of distinct tasks.
– Yannis Rizos♦, Mar 26 '13 at 12:35

Tkinter is a graphical library for Python that is built upon Tcl/Tk. It's actually an embedded Tcl interpreter that is called from Python. Normally this goes mostly unnoticed, but it's possible to put Tcl code in a string and run it directly in the interpreter. This lets you do things that native Tcl/Tk can do but Tkinter cannot, because the Tkinter wrapper is incomplete.
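The embedded interpreter is directly reachable from the standard library: `tkinter.Tcl()` gives a Tcl interpreter without creating any window, and its `eval()` runs raw Tcl source:

```python
# Talking to the Tcl interpreter embedded in every tkinter installation.
# tkinter.Tcl() creates an interpreter without a Tk window, so this works
# even on a headless machine.
import tkinter

tcl = tkinter.Tcl()
answer = tcl.eval("expr {6 * 7}")   # evaluated by Tcl; results come back as strings
info = tcl.eval("info tclversion")  # query the embedded interpreter itself
```

Note that Tcl returns everything as strings, which is one of the seams you feel when crossing the language boundary this way.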

I wrote a system that imported CSV and XLS spreadsheets into a database using custom mapping documents. I wrote it in Java and it worked fine. Then came the requirement to read those documents from an FTP server that would need to be polled at set intervals. Ruby is perfect for that; I could write that code far more efficiently in Ruby than in Java. So I wrote a Ruby script to poll and retrieve the documents and feed them to the original Java application, which could still be called standalone for additional inserts or testing.
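The glue pattern here is worth sketching: a small script in one language fetches files and shells out to the existing standalone application written in another. A minimal Python sketch follows; the real command would be something like `java -jar importer.jar <path>`, but to keep the sketch runnable anywhere it invokes the current Python interpreter as a stand-in for the external program:

```python
# Glue-script pattern: a polling script hands each retrieved file to an
# existing standalone application via its command line. The command invoked
# here is a stand-in; swap in the real one, e.g.
#   subprocess.run(["java", "-jar", "importer.jar", path], check=True)
import subprocess
import sys

def import_document(path):
    result = subprocess.run(
        [sys.executable, "-c", f"print('imported', {path!r})"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

status = import_document("report.xls")
```

Because the boundary is just a process invocation, neither side needs to know what language the other is written in.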

I'm working on a system written in an old language that's probably going the way of the dodo soon (losing support from tool vendors, training providers, etc.). Porting the entire system to a new environment in one go would be too risky for the business, take too long to satisfy marketing, and deny customers new capabilities they've been asking for. So over several releases we are creating a hybrid system, with part in the old technology and part in the new, gradually transitioning from one to the other until, in a year or two, the old will all have been retired.

Every system using web services (or other EDI) to communicate is inherently hybrid, at least potentially. It doesn't matter to your client whether the web service provider is written in Java, C#, Cobol, or whatever, as long as the documents exchanged are in a format both sides understand. This makes for great flexibility when you have several teams with different skill sets working side by side: each can employ the technology it knows best, and the end result will be better for it.

If there actually is a benefit in mixing two languages, soon enough either one language picks up elements or concepts of the other, or a new language emerges "between" the two (one could argue this happened with Scala, which sits pretty much in the middle between Java and Haskell).

As far as I know, Scala may have taken many concepts directly from ML, which is a predecessor of Haskell. Also, there are many features and idioms that are commonplace in the functional-language community, and different languages support a different mix of them. That's why Scala may seem to sit between Java and Haskell, whereas in fact both Scala and Haskell took certain features from a common source.
– Giorgio, Mar 26 '13 at 12:46

Incorrect. If two tools each serve a purpose well, there's no need to corrupt either into a bland mix of both; just use them together.
– jwenting, Mar 26 '13 at 13:02

Some languages are unmixable as their runtime environments are designed for mutually exclusive purposes.
– SK-logic, Mar 26 '13 at 13:14


"I would still prefer to code in hideous Java 8 than in "simple" Java 1.2 (at least until a Scala job comes along)": I would not be so sure. Sometimes one prefers a missing feature to a poorly designed feature.
– Giorgio, Mar 26 '13 at 13:54


Well said, Giorgio. The whole "closures" thing in Java 8 is a prime example of how to ruin a programming language for the sake of bolting on something the language was never designed for.
– jwenting, Mar 27 '13 at 6:38