Two colleagues at $work frequently complain about the other's programming language of choice. The fight is between C++ and Perl. Today, not for the first time, the C++ supporter mocked Perl's lax attitude towards numbers.

In a SWIG layer we use, many functions expect integer arguments. If you pass in a Perl string -- or what counts as a number in Perl terms -- the layer will frequently croak that it cannot resolve the overloaded method call: the call is ambiguous because there are at least two matching methods, one taking integer arguments and one taking strings. So you need to force numeric context with 0 + $var.
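For illustration, a minimal sketch of the workaround (the wrapped object and its set_size method are hypothetical stand-ins for the actual SWIG layer):

```perl
use strict;
use warnings;

my $var = "42";        # arrived as a string, e.g. from a config file or <STDIN>

# Ambiguous for the SWIG overload resolver, which sees a string:
#   $obj->set_size($var);
#
# Forcing numeric context first steers it to the integer overload:
#   $obj->set_size(0 + $var);

my $num = 0 + $var;    # numifies the scalar
print "$num\n";        # prints 42
```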

The C++ thinking is that this is retarded -- a number is a sequence of bits (usually 32 or 64) that fits inside a register or a memory cell of a computer. Not only that, but the number types form a strict hierarchy: integer (short, long), float(ing point), and double(-precision floating point). Beyond strict typing, there is a related model of thinking here: that numbers are whatever can be represented in this particular way in a computer.

The Perl thinking is that a number is what looks like a number to a human being. If it is stored in a machine-dependent number format, it is a number. If it is a string that looks like a number, it is a number. In essence, numbers are what people think are numbers. The string representation of a number is not the number; it is only a representation. Similarly, what is stored in memory is not the number, but only its representation. The C++ people, however, seem to think that the number stored in memory is the number itself.

There are problems with both approaches and good arguments on both sides. However, the Perl way is better, because it lets us program more on our own terms, not on the terms of the computer or the underlying hardware. Humans do not excel at laborious or repetitious tasks, nor even at describing them in detail; hence automatic memory management and garbage collection. Similarly, the concept of a number is far larger than the small hierarchy of the narrow-minded, hardware-centric view of C++ (and C) programmers.

The Perl thinking is that a number is what looks like a number to a human being.

No, the Perl thinking is that if something is used as a number, it is a number. When you evaluate $a + $b (and they are not objects that overload +), they are used as numbers, independently of what they look like. If a value has no numeric representation, a default (0) is used, and optionally a warning is emitted.
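A quick demonstration of that behavior (the variable names are arbitrary):

```perl
use strict;
use warnings;

my $x = "3 apples";    # numifies to 3, with an "isn't numeric" warning
my $y = "oranges";     # no leading digits: numifies to 0, also with a warning

{
    no warnings 'numeric';    # silence the warnings for this block
    print $x + 5, "\n";       # prints 8
    print $y + 5, "\n";       # prints 5
}
```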

Interestingly, the Perl interpreter is written in C, which is (loosely speaking) a subset of C++, so whatever you can do in Perl you can also do in C and in C++. Actually, all three languages are Turing complete, so each can do anything the others can, at least in principle.

At the end of the day this sort of debate is silly because the different languages are designed for different purposes. At best you can say that for some particular problem or application domain one or other of the languages is better suited. There is however no overall "best" language.

C and C++ represent a limited subset of numbers to the compiler as a sequence of bits, because sometimes it's worthwhile to burden the programmer for the benefit of the computer.

Perl represents data as containers around scalar values, and the language deals with the same scalar container holding different types of scalar values for the programmer. This is because it's often worthwhile to burden the computer for the benefit of the programmer. Perl does sometimes need a hint about how to treat a particular value, but usually the right thing just happens, and the Perl programmer can be more efficient in implementing a program because of that.
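One scalar, several faces -- a small example of the same container doing the right thing in each context (Scalar::Util is a core module):

```perl
use strict;
use warnings;
use Scalar::Util qw(looks_like_number);

my $answer = "42";               # initialized from a string
print length($answer), "\n";     # 2  -- string context
print $answer * 2, "\n";         # 84 -- numeric context, converted automatically
print looks_like_number($answer) ? "looks numeric\n" : "does not\n";
```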

The issue with passing a Perl string to C or C++ is that C and C++ do not perform the automatic data conversion that Perl does. They need the programmer to specify things for them, for efficiency and for automated type checking, because that's the niche those languages were meant to fill.

That C and C++ don't handle a value that Perl handles quite well could as readily be seen as a weakness in those languages as in Perl, but it's really not a weakness of either. It's just a matter of working at two different abstraction levels. The complaining C++ guy would have had to specify a type for his variables in C++ from the start anyway; in this situation the type still needs to be specified, just not until the value is about to be passed into C++. That he thinks that's retarded shows his bias, not a concrete advantage of one language over the other.

There was a time when people programmed in assembly when they wanted fast efficient code (burden the programmer). They used C/C++ in order to write code more efficiently (burden the computer). In my opinion dynamic languages such as Perl (and Python, Ruby & Lua) are just the next steps in the evolution of programming languages.

++, but I think it's more a gradient. Even assembly puts more burden on the computer than twiddling the bits directly, but less on the programmer. C and C++ burden the computer a bit more than assembly, but still place a substantial burden on the programmer.

Lisp and Forth both move certain burdens for the programmer out of the way in exchange for other burdens on the programmer. Lisp places additional burdens on the computer (recursion) for the benefit of the programmer, while removing burdens from the computer (syntax parsing) by placing them on the programmer. Forth lowers the barrier for the computer (postfix syntax) by placing it on the programmer (postfix syntax). ;-/ Forth's generic storage cells remove a burden on the computer (type checking), and can alternately be a boon or a headache for the programmer. Other languages make similar trades.

Perl is a language for getting work done. As such, it puts more burden on the computer and less on the programmer than many other languages. Implementing an efficient algorithm, and knowing a few things about the interior of perl, can make a Perl program much faster than not knowing those things. In many cases, Perl can run as fast, or nearly as fast, as a program written in languages that are much more restrictive of the programmer. In some cases it can't, and that's the price we pay for more freedom and more power of expression.

Languages can continue to evolve both above and below the level of Perl. The only reason evolution at the level of C and C++ has been so slow is because despite their warts they accomplish their tasks very well. Even CPU microprogramming, which is below assembly, continues to evolve. Even as programs are written in languages at levels higher than Perl, Perl continues to evolve. It will continue to evolve so long as people use it and see where it could fit its niche even better. There are already many languages at far higher levels than Perl, although most of them are domain-specific. Yet Perl is still the glue that holds many of those together. C is still the language, for now, that holds Perl together and that is used to write many of the libraries upon which Perl depends. It's not so much an evolution of a single species as new layers in the food chain or of superstructure being built on substructure.

There is an impedance mismatch among natural-language concepts, Perl, and C++. As mr_mischief has pointed out, it is centered on the representation of the concept 'number'.

Natural language concepts are vague and flexible. Like Humpty Dumpty, we make words mean whatever we choose them to mean.
Numbers may be represented as spelled-out words ('two' or 'tres') or in various numeral systems ('25' or 'XXV'), and they may be cardinal or ordinal.

Perl numbers are fluid and flexible. They may be internally represented as integers, floats or strings, or all of the above depending on how they are initialized and used. Most importantly there is a Perl concept of 'number' and it maps reasonably well onto the natural concept of cardinal numbers.
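You can watch that fluidity with the core Devel::Peek module: the scalar starts out with only a string slot (PV), and after numeric use perl caches a numeric slot (NV) alongside it in the same scalar. (Dump prints to STDERR, and its exact output is version-dependent.)

```perl
use strict;
use warnings;
use Devel::Peek qw(Dump);

my $n = "3.14";     # stored as a string (PV) only
Dump($n);           # FLAGS show POK; no numeric slot yet

my $sum = $n + 1;   # numeric use forces a conversion...
Dump($n);           # ...and the NV (3.14) is now cached in the same scalar
```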

C++ has a group of concepts that together take the role of 'number'--there is no single concept of number. C and C++ operate on a lower level of abstraction. To simplify things, C and C++ define standard ways to automatically coerce (promote) specific types of number into other types as needed to perform common calculations. This creates an illusion of 'number'-ness. These allied concepts bear similarities to the natural concept, but fail to offer the flexibility we expect--you can promote a 16 bit integer into a 32 bit integer, but you can't do the converse. Your C++ library does not accept numbers, it accepts integers.

In theory, SWIG should be smart enough to handle the impedance mismatch and coerce or reject invalid input. That is what it is designed to do. It looks like the overloading on the method in your C++ library is confusing SWIG, since it sees two possible target methods. That you should have to fix the ambiguity for SWIG is not unreasonable. DWIMish systems may need help at times.

I don't know that I am narrow-minded when I write C or assembly, I try to always be mindful of what I am doing. That means that when I write C, I don't say to myself "this is a number", I say instead "this is an unsigned 16 bit integer". When I write 8051 assembly language, I say "this register contains the low order byte of an unsigned 16 bit integer". When I write Perl, I can safely say "this scalar contains a number" and only get more specific when the need arises.
