I've been using Python for a few days now, and I think I understand the difference between dynamic and static typing. What I don't understand is under what circumstances dynamic typing would be preferred. It is flexible and readable, but at the expense of more runtime checks and additional required unit testing.

Aside from non-functional criteria like flexibility and readability, what reasons are there to choose dynamic typing? What can I do with dynamic typing that isn't possible otherwise? What specific code example can you think of that illustrates a concrete advantage of dynamic typing?

We're looking for long answers that provide some explanation and context. Don't just give a one-line answer; explain why your answer is right, ideally with citations. Answers that don't include explanations may be removed.

This question came from our site for professional and enthusiast programmers.

5

Theoretically there's nothing you can't do in either, as long as the languages are Turing Complete. The more interesting question to me is what's easy or natural in one vs. the other. There are things I do regularly in Python that I wouldn't even consider in C++ even though I know it's capable.
–
Mark RansomOct 3 '12 at 21:19

27

As Chris Smith writes in his excellent essay What to know before debating type systems: "The problem, in this case, is that most programmers have limited experience, and haven't tried a lot of languages. For context, here, six or seven doesn't count as "a lot." ... Two interesting consequences of this are: (1) Many programmers have used very poor statically typed languages. (2) Many programmers have used dynamically typed languages very poorly."
–
Daniel PrydenOct 3 '12 at 21:38

3

@suslik: If language primitives have nonsensical types, then of course you can do nonsensical things with types. That has nothing to do with the difference between static and dynamic typing.
–
Jon PurdyOct 4 '12 at 4:36

10

@CzarekTomczak: That is a feature of some dynamically-typed languages, yes. But it is possible for a statically-typed language to be modifiable at runtime. For example, Visual Studio allows you to rewrite C# code while you're at a breakpoint in the debugger, and even rewind the instruction pointer to re-run your code with new changes. As I quoted Chris Smith in my other comment: "Many programmers have used very poor statically typed languages" -- don't judge all statically typed languages by the ones you know.
–
Daniel PrydenOct 4 '12 at 15:33

10

@WarrenP: You assert that "dynamic type systems reduce the amount of extra cruft I have to type in" -- but then you compare Python to C++. That isn't a fair comparison: of course C++ is more verbose than Python, but that's not because of the difference in their type systems, it's because of the difference in their grammars. If you just want to reduce the number of characters in your program source, learn J or APL: I guarantee they'll be shorter. A more fair comparison would be to compare Python to Haskell. (For the record: I love Python and prefer it over C++, but I like Haskell even more.)
–
Daniel PrydenOct 4 '12 at 15:38

16 Answers
16

Rob Conery's Massive ORM is 400 lines of code. It's that small because Rob is able to map SQL tables and provide object results without requiring a lot of static types to mirror the SQL tables. This is accomplished by using the dynamic data type in C#. Rob's web page describes this process in detail, but it seems clear that, in this particular use case, the dynamic typing is in large part responsible for the brevity of the code.

Note that the usual disclaimers apply, and your mileage may vary; Dapper has different goals than Massive does. I just point this out as an example of something that you can do in 400 lines of code that probably wouldn't be possible without dynamic typing.

Whether you use a dynamically-typed language or a statically-typed one, your type choices must still be sensible. You're not going to add two strings together and expect a numeric answer unless the strings contain numeric data, and if they do not, you're going to get unexpected results. A statically typed language will not let you do this in the first place.

Proponents of statically typed languages point out that the compiler can do a substantial amount of "sanity checking" of your code at compile time, before a single line executes. This is a Good Thing™.

C# has the dynamic keyword, which allows you to defer the type decision to runtime without losing the benefits of static type safety in the rest of your code. Type inference (var) eliminates much of the pain of writing in a statically-typed language by removing the need to always explicitly declare types.

Dynamic languages do seem to favor a more interactive, immediate approach to programming. Nobody expects you to have to write a class and go through a compile cycle to type out a bit of Lisp code and watch it execute. Yet that's exactly what I'm expected to do in C#.

If I added two numeric strings together, I still wouldn't expect a numeric result.
–
pdrOct 3 '12 at 20:40

22

@Robert I agree with most of your answer. However, note that there are statically-typed languages with interactive read-eval-print loops, such as Scala and Haskell. It may be that C# just isn't a particularly interactive language.
–
Andres F.Oct 3 '12 at 20:57

@RobertHarvey: You might be surprised/impressed with F# if you haven't already tried it. You get all of the (compile-time) type safety that you normally get in a .NET language, except that you rarely ever have to declare any types. The type inference in F# goes beyond what's available/works in C#. Also: similar to what Andres and Daniel are pointing out, F# interactive is part of Visual Studio...
–
Steve EversOct 3 '12 at 21:40

7

"You're not going to add two strings together and expect a numeric answer unless the strings contain numeric data, and if they do not, you're going to get unexpected results" sorry, this has nothing to do with dynamic vs static typing, this is strong vs weak typing.
–
vartecOct 5 '12 at 12:50

Phrases like "static typing" and "dynamic typing" are thrown around a lot, and people tend to use subtly different definitions, so let's start by clarifying what we mean.

Consider a language that has static types that are checked at compile-time. But say that a type error generates only a non-fatal warning, and at runtime, everything is duck-typed. These static types are only for the programmer's convenience, and do not affect the codegen. This illustrates that static typing does not by itself impose any limitations, and is not mutually exclusive with dynamic typing. (Objective-C is a lot like this.)

But most static type systems do not behave this way. There are two common properties of static type systems that can impose limitations:

The compiler may reject a program that contains a static type error.

This is a limitation because many type safe programs necessarily contain a static type error.

For example, I have a Python script that needs to run as both Python 2 and Python 3. Some functions changed their parameter types between Python 2 and 3, so I have code like this:
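The original snippet isn't shown; here's a minimal sketch of the kind of version-gated code meant (the specific calls, `decode` and the `unicode` built-in, are my illustration, not the answerer's original):

```python
import sys

# Each branch is only valid under the Python version it targets, yet the
# program as a whole is type safe: the "wrong" branch never executes.
if sys.version_info >= (3, 0):
    def decode(b):
        return b.decode("utf-8")      # Python 3: bytes have .decode
else:
    def decode(b):
        return unicode(b, "utf-8")    # Python 2: the 'unicode' built-in exists

print(decode(b"hello"))
```

Under Python 3, the `else` branch would fail a Python-3 static check (`unicode` does not exist), but at runtime it is never reached.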

A Python 2 static type checker would reject the Python 3 code (and vice versa), even though it would never be executed. My type safe program contains a static type error.

As another example, consider a Mac program that wants to run on OS X 10.6, but take advantage of new features in 10.7. The 10.7 methods may or may not exist at runtime, and it's on me, the programmer, to detect them. A static type checker is forced to either reject my program to ensure type safety, or accept the program, along with the possibility of producing a type error (function missing) at runtime.

Static type checking assumes that the runtime environment is adequately described by the compile time information. But predicting the future is perilous!

Here's one more limitation:

The compiler may generate code that assumes the runtime type is the static type.

Assuming the static types are "correct" provides many opportunities for optimization, but these optimizations can be limiting. A good example is proxy objects, e.g. remoting. Say you wish to have a local proxy object that forwards method invocations to a real object in another process. It would be nice if the proxy were generic (so it can masquerade as any object) and transparent (so that existing code does not need to know it is talking to a proxy). But to do this, the compiler cannot generate code that assumes the static types are correct, e.g. by statically inlining method calls, because that will fail if the object is actually a proxy.

Examples of such remoting in action include ObjC's NSXPCConnection or C#'s TransparentProxy (whose implementation required a few pessimizations in the runtime - see here for a discussion).

When the codegen is not dependent on the static types, and you have facilities like message forwarding, you can do lots of cool stuff with proxy objects, debugging, etc.

So that's a sampling of some of the stuff you can do if you are not required to satisfy a type checker. The limitations are not imposed by static types, but by enforced static type checking.

"A Python 2 static type checker would reject the Python 3 code (and vice versa), even though it would never be executed. My type safe program contains a static type error." Sounds like what you really need there is some kind of "static if", where the compiler / interpreter doesn't even see the code if the condition is false.
–
David StoneOct 26 '12 at 2:09

Duck-typed variables are the first thing everyone thinks of, but in most cases you can get the same benefits through static type inference.

But duck typing in dynamically-created collections is hard to achieve in any other way:

>>> import json
>>> d = json.loads(foo)
>>> d['bar'][3]
12
>>> d['baz']['qux']
'quux'

So, what type does json.loads return? A dictionary of arrays-of-integers-or-dictionaries-of-strings? No, even that isn't general enough.

json.loads has to return some kind of "variant value" that can be null, bool, float, string, an array of any of these types recursively, or a dictionary from string to any of these types recursively. The main strengths of dynamic typing come from having such variant types.

So far, this is a benefit of dynamic types, not of dynamically-typed languages. A decent static language can simulate any such type perfectly. (And even "bad" languages can often simulate them by breaking type safety under the hood and/or requiring clumsy access syntax.)

The advantage of dynamically-typed languages is that such types cannot be inferred by static type inference systems, so in a static language you have to write the type explicitly. But in many such cases, including this one, the code to describe the type is exactly as complicated as the code to parse/construct the objects without describing the type, so that still isn't necessarily an advantage.
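To make the "write the type explicitly" point concrete, here is one way to spell the variant type in Python's typing module (the alias name is my own); note the description is about as long as the data shapes it covers:

```python
import json
from typing import Dict, List, Union

# The recursive "variant value": everything json.loads can return.
# Writing this out explicitly is the cost the answer refers to.
JSONValue = Union[None, bool, int, float, str,
                  List["JSONValue"], Dict[str, "JSONValue"]]

doc: JSONValue = json.loads('{"bar": [0, 1, 2, 12], "baz": {"qux": "quux"}}')
print(doc["bar"][3], doc["baz"]["qux"])  # 12 quux
```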

OK, my answer wasn't clear enough; thanks. That JSValue is an explicit definition of a dynamic type, exactly what I was talking about. It's those dynamic types that are useful, not languages that require dynamic typing. However, it's still relevant that dynamic types cannot be automatically generated by any real type inference system, while most of the common examples people give are trivially inferrable. I hope the new version explains it better.
–
abarnertOct 3 '12 at 20:37

4

@MattFenwick Algebraic Data Types are pretty much restricted to functional languages (in practice). What about languages like Java and C#?
–
spircOct 5 '12 at 6:25

@spirc you can emulate ADTs in a classical OO language using multiple classes that all derive from a common interface, run-time calls to getClass() or GetType(), and equality checks. Or you can use double dispatch, but I think that pays off more in C++. So you might have a JSObject interface, and JSString, JSNumber, JSHash, and JSArray classes. You would then need some code to turn this "untyped" data structure into an "application typed" data structure. But you would probably want to do this in a dynamically-typed language, too.
–
Daniel YankowskyOct 9 '12 at 1:42

Because every remotely practical static type system is severely limited compared to the programming language it governs, it cannot express all the invariants that code could check at runtime. To avoid undermining the guarantees it attempts to give, a type system therefore opts to be conservative: it disallows use cases that would pass those runtime checks but cannot be proven safe within the type system.

I'll give an example. Suppose you implement a simple data model to describe data objects, collections of them, and so on, which is statically typed in the sense that if the model says attribute x of an object of type Foo holds an integer, it must always hold an integer. Because this is a runtime construct, you cannot type it statically. Now suppose you store the described data in YAML files. You create a hash map (to be handed to a YAML library later), fetch the x attribute, store it in the map, fetch another attribute that happens to be a string... hold on a second. What's the type of the_map[some_key] now? We know that some_key is 'x' and the result must therefore be an integer, but the type system can't even begin to reason about this.
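A hypothetical sketch of that situation (the object and key names are invented for illustration):

```python
# The data model (a runtime construct) guarantees 'x' is an int and
# 'name' is a str, but a static checker can only type the_map's values
# as, at best, "int or str" -- the per-key guarantee is invisible to it.
obj = {"x": 3, "name": "widget"}

the_map = {}
for key in ("x", "name"):
    the_map[key] = obj[key]

some_key = "x"
print(type(the_map[some_key]).__name__)  # int
```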

Some actively researched type systems may work for this specific example, but these are exceedingly complicated (both for compiler writers to implement and for the programmer to reason in), especially for something this "simple" (I mean, I just explained it in one paragraph).

Of course, today's solution is boxing everything and then casting (or having a bunch of overridden methods, most of which raise "not implemented" exceptions). But this isn't statically typed; it's a hack around the type system to do the type checks at runtime.

Conversely, you can also translate any statically typed program into an equivalent dynamic one. Of course, you would lose all compile-time assurances of correctness that the statically typed language provides.

Edit: I wanted to keep this simple, but here are more details about an object model

A function takes a list of Data as arguments and performs calculations with side effects in ImplMonad, and returns a Data.

type Function = [Data] -> ImplMonad Data

DMember is either a member value or a function.

data DMember = DMemValue Data | DMemFunction Function

Extend Data to include Objects and Functions. Objects are lists of named members.

data Data = .... | DObject [(String, DMember)] | DFunction Function

These static types are sufficient to implement every dynamically typed object system I'm familiar with.

You are mixing concepts of dynamic typing with weak typing in your example. Dynamic typing is about operating on unknown types, not defining a list of allowed types and overloading operations between those.
–
hcalvesOct 5 '12 at 14:52

2

@Jed Once you have implemented the object model, fundamental types, and primitive operations, no other groundwork is necessary. You can easily and automatically translate programs in the original dynamic language into this dialect.
–
NovaDenizenOct 5 '12 at 15:30

2

@hcalves Since you're referring to overloading in my Haskell code, I suspect you don't quite have the right idea about its semantics. There I've defined a new + operator which combines two Data values into another Data value. Data represents the standard values in the dynamic type system.
–
NovaDenizenOct 5 '12 at 15:35

1

@Jed: Most dynamic languages have a small set of "primitive" types and some inductive way to introduce new values (data structures like lists). Scheme, for example, gets quite far with little more than atoms, pairs and vectors. You should be able to implement these in the same way as the rest of the given dynamic type.
–
Tikhon JelvisOct 10 '12 at 16:19

A membrane is a wrapper around an entire object graph, as opposed to a wrapper for just a single object. Typically, the creator of a membrane starts out wrapping just a single object in a membrane. The key idea is that any object reference that crosses the membrane is itself transitively wrapped in the same membrane.

Each type is wrapped by a type that has the same interface, but which intercepts messages and wraps and unwraps values as they cross the membrane. What is the type of the wrap function in your favorite statically typed language? Maybe Haskell has a type for that function, but most statically typed languages don't, or they end up using Object → Object, effectively abdicating their responsibility as type-checkers.
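A rough Python sketch of such a membrane (simplified; a real membrane would also wrap arguments passing inward and handle identity and revocation):

```python
class Membrane:
    """Wraps an object graph: every reference crossing the boundary
    is transitively wrapped in the same membrane."""

    def wrap(self, obj):
        # Primitives pass through; object references get proxied.
        if isinstance(obj, (int, float, str, bool, bytes, type(None))):
            return obj
        return _MembraneProxy(obj, self)


class _MembraneProxy:
    def __init__(self, target, membrane):
        self.__dict__["_target"] = target
        self.__dict__["_membrane"] = membrane

    def __getattr__(self, name):
        value = getattr(self._target, name)
        if callable(value):
            # Results of method calls are wrapped as they cross outward
            return lambda *a, **kw: self._membrane.wrap(value(*a, **kw))
        return self._membrane.wrap(value)


# usage: a reference coming back out is itself wrapped
class Node:
    def __init__(self, child=None):
        self.child = child

p = Membrane().wrap(Node(Node()))
print(type(p.child).__name__)  # _MembraneProxy
```

Note that `wrap` here has no meaningful static type: it accepts anything and returns either the value itself or a proxy, which is exactly the difficulty the answer describes.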

Yes, Haskell can indeed do this using existential types. If you have some type class Foo, you can make a wrapper around any type instantiating that interface: class Foo a where ...; data Wrapper = forall a. Foo a => Wrapper a
–
Jake McArthurOct 5 '12 at 15:22

2

Your membrane is an 'interface' and the types of the objects are "existentially typed" -- that is, we know they exist under the interface, but that's all we know. Existential types for data abstraction have been known since the 80s. A good ref is cs.cmu.edu/~rwh/plbook/book.pdf chapter 21.1
–
Don StewartOct 5 '12 at 19:27

As someone mentioned, in theory there is not much you can do with dynamic typing that you could not do with static typing if you implemented certain mechanisms on your own. Most languages provide type-relaxation mechanisms to support flexibility, like void pointers, a root Object type, or an empty interface.

A better question is why dynamic typing is more suitable and appropriate for certain situations and problems.

First, let's define a few terms:

Entity - a general notion of some thing in the code. It can be anything from a primitive number to complex data.

Behavior - let's say our entity has some state and a set of methods that allow the outside world to instruct the entity toward certain reactions. Let's call the state plus the interface of this entity its behavior. One entity can have more than one behavior, combined in whatever ways the language's tools provide.

Definitions of entities and their behaviors - every language provides some means of abstraction to help you define the behaviors (set of methods + internal state) of the entities in a program. You can assign a name to these behaviors and say that all instances having a given behavior are of a certain type.

This is probably not unfamiliar, and as you said, you already understand the difference. It's probably not the most complete or accurate explanation, but I hope it's fun enough to bring some value :)

Static typing - the behaviors of all entities in your program are examined at compile time, before the code starts to run. This means that if you want your entity of type Person to behave like a Magician, you have to define an entity MagicianPerson and give it the behaviors of a magician, like throwMagic(). If your code mistakenly calls throwMagic() on an ordinary Person, the compiler will tell you: "Error >>> this Person has no such behavior, doesn't know how to throw magic, no run!"

Dynamic typing - in dynamically typed environments, the available behaviors of entities are not checked until you actually try to do something with a given entity. Ruby code that calls Person.throwMagic() will not fail until execution actually reaches that line. That sounds frustrating, doesn't it? But it can also be a revelation. Based on this property you can do interesting things. Say you design a game where anything can turn into a Magician, and you don't really know who that will be until you reach a certain point in the code. Then a Frog comes along, you say HeyYouConcreteInstanceOfFrog.extend Magic, and from then on this particular Frog has magic powers. Other Frogs still don't. You see, in a statically typed language, you would have to define this relation through some standard means of combining behaviors (like implementing an interface). In a dynamically typed language, you can do it at runtime and nobody will care.
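The same per-instance trick in Python looks like this (the Frog/Magic names follow the example above; this is a sketch, not a Ruby translation):

```python
import types

class Frog:
    pass

def throw_magic(self):
    return "sparks!"

frog, other_frog = Frog(), Frog()

# Attach the Magician behavior to *this one instance* at runtime.
frog.throw_magic = types.MethodType(throw_magic, frog)

print(frog.throw_magic())                  # this frog can now throw magic
print(hasattr(other_frog, "throw_magic"))  # other frogs are unaffected
```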

Most dynamically typed languages have mechanisms to provide a generic behavior that catches any message passed to an object's interface, for example Ruby's method_missing and PHP's __call, if I remember correctly. That means you can do all kinds of interesting things at runtime and make type decisions based on the current program state. This provides tools for modeling a problem that are a lot more flexible than in, say, a conservative statically typed language like Java.
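Python's closest analogue is __getattr__, which is invoked only when normal attribute lookup fails; a minimal catch-all sketch:

```python
class CatchAll:
    def __init__(self):
        self.received = []

    def __getattr__(self, name):
        # Invoked only for lookups that fail normally -- the Python
        # counterpart of Ruby's method_missing and PHP's __call.
        def handler(*args):
            self.received.append((name, args))
            return None
        return handler

obj = CatchAll()
obj.any_message_at_all(1, 2)
print(obj.received)  # [('any_message_at_all', (1, 2))]
```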

In a statically typed language, where types are either stated in the source (manifest typing) or inferred by the compiler to some degree, could you theoretically write a function/procedure/class/method that takes valid source code for that language, compiles and evaluates it, and returns the result? If so, could you express the type of that result, or could the compiler infer it?

What if you fed such a routine its own source, would it work then?

Would you have to use the hack that some statically typed languages provide of effectively telling the type system, "don't check the type of this value, please"?

Sometimes type expressions can be horrendously complex, and we adapt the way we program to compensate. Dynamically typed languages have a different set of concerns that moulds the way their practitioners approach problems and the problems they choose to address. Eval/exec statements or functions are usually already available in a dynamic language, since it carries its own interpreter around. These statements are not a panacea, of course, but I bring them up as a case where statically typed languages are at a disadvantage compared to dynamically typed ones. That said, it is good to see people adding local runtime type checking to predominantly statically typed languages and vice versa, although there doesn't yet seem to be a happy medium in language design that allows the flexibility of typing I would like.

Readability and flexibility are small advantages? The early stages of software development are all about the latter; the late stages depend heavily on the former. It takes someone truly ignorant to say it's unimportant to have readable code.

As for cases where dynamic typing wins...

returning context-sensitive data like error strings / data arrays from the same function (as one argument, with no extra work required)

@MattFenwick You could've just said so. Anyway... I said "dynamic typing wins" not "impossible in static typing" - as it was noted in another answer, nothing is impossible. This is more related to something being natural to a language. Best tools for the job etc. That said, I'll try to find some time to add more examples. As for being rude - I don't see how it's rude to say things as they are. Also, there is no bias towards static typing; I didn't mention the good things about it because it's not what the question was about. For the record, I regularly use both kinds of languages myself.
–
snake5Oct 5 '12 at 12:49

I ran into an example recently, as part of implementing tasks in ActionScript 3. AS3 has no generics (sort of), so it's easy to accidentally end up in situations where you have a task containing a task containing an int (doubly-nested) but mistakenly believe you have a task containing an int (singly-nested).

One way that I could avoid this problem is implementing a function "UnwrapUntilSinglyNested" that takes any task nested to any amount and outputs the intuitively equivalent singly-nested task. The type of this function, which is something like Task<T>|Task<Task<T>>|Task<Task<Task<T>>>|... -> Task<T>, is not expressible in "typical" type systems (like C/C#/Java, but not Haskell/Coq). (On the other hand it wouldn't be necessary in typical type systems because the mistake is caught by the compiler.)

There are very few practical things you can do in a dynamically typed language that can't be catered for under some particular static type system. But there are many different static type systems, each with its own strengths and weaknesses, and there will always be something you can do under static type system X that you can't do under static type system Y. You can see this in the other answers to this question: "you can do that in type system Z", but you can't do it in another.

With a dynamic type system, you can do pretty much all of it, oftentimes with a lot less ceremony. You're not constrained by how the type system fits together and what rules it has that allow you to do one cool thing but not another.

Another example of this, easier to understand than the metaclass example, is Python's eval function. It takes a string and returns a value, and you have no idea ahead of time what type that value will be. This is not possible in a statically typed language.
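A quick illustration of the point:

```python
# The result type of eval depends entirely on the string, which may not
# even exist until runtime (e.g. read from a file or a network socket).
sources = ["1 + 1", "'a' * 3", "[n * n for n in range(3)]"]
values = [eval(src) for src in sources]
print([type(v).__name__ for v in values])  # ['int', 'str', 'list']
```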

That function could be described as returning type MyClass | list | int. Whether that's a useful type or not is another question, but it's closely related to the question of whether this is a useful function, and of how it's used.
–
abarnertOct 3 '12 at 21:23

3

@grieve: I understand why you make the distinction, but from a type-theoretical point of view, there is no difference between a function that returns one of three possible types and a function that returns a value of an algebraic data type with three data constructors. You say that you have never seen this feature in any statically typed language, but the point of abarnert's comment is that it is perfectly possible to do exactly this in almost any statically-typed functional language. So this feature, while useful, has nothing to do with whether the language is statically or dynamically typed.
–
Daniel PrydenOct 3 '12 at 23:49

1

@DanielPryden: it's not clear that eval is not possible for a typed language. It's an open problem.
–
Jonathan FischoffOct 5 '12 at 23:29

You can build whatever you want in both, but your way of programming will be vastly different.

Static typing means you have contracts, so some kinds of errors are wiped out up front, because the code won't compile otherwise. Globally, you know what you are calling when you invoke a method on an object of a given class.

In dynamic typing, often paired with duck typing, you care less about the class of your object. You care about the "messages" it can receive; you just care that it has the method you call.

You can partially do this in a statically typed language thanks to polymorphism or interfaces. The difference is mostly that you don't program the same way. Both approaches have merits and drawbacks.

Dynamic languages usually focus more on what you do than on how you do it, so the intent of the programmer is clearer. But this can lead to pretty annoying bugs, or make it harder to understand how the program really works.

Globally, it is a bit easier to abstract things in dynamic languages than in statically typed ones, but it comes at the cost of slower code.

Understand: you can achieve the same things, maybe even the same way (thanks to var or dynamic types in static languages, polymorphism, interfaces, ...).

Dynamic typing offers more elegant code... or uglier code, depending on the programmer. In static typing there is less magic and often one true way, so your code won't be as elegant as good dynamically typed code, but you will also see far fewer horrors.

A good comparison is Java versus Ruby. Java offers one true way: static typing and a fast language, but one that is ugly to some people, because you are bothered with trivial questions about types, contracts, etc. The code is less beautiful, but interns are also restricted in the horrors they can produce.

Ruby, on the other hand, is known as elegant but slow (no longer so true; it is now in the same league as PHP and Python), and you have many ways to do things. An expert can produce pretty, elegant, readable code in far fewer lines, without bothering the reader with trivial questions about types. But you are also free to produce the ugliest code ever written, which nobody, including yourself, can understand.

In dynamic typing you have more freedom, some may say more power and more productivity, but this comes at a price. You have to be self-disciplined, and you will suffer from the horrors written by novices.

Typing is often a mechanism for the creation of methods. If you can defer creation of a type until runtime, you can (in theory; not necessarily in Python, but in dynamic languages that allow this) create a method that specializes on a type that did not exist in your program when you compiled it.

When might this be helpful? In certain cases there are just too many possible types, and any given user will only ever use a few of them. A somewhat artificial example: say you have a program that lets you query a database containing 100 tables, each with 10 columns. If you wanted to do object mapping at compile time, the number of possible result-object types is combinatorially huge: generating them all would be not only your own lifetime's work, but something not even your grandchildren would finish. Deferring type creation to runtime lets you create only the types your program actually needs. And, voilà, C# can do this too (sort of). This is also very similar to how the V8 implementation of JavaScript works behind the scenes (it creates strongly typed objects from dynamically created shapes, not just string hashes). There may be plenty of valid uses.
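Python's type() built-in shows the mechanism directly (the table and column names are invented for illustration):

```python
# Build a row class at runtime from column metadata that might have
# come from a schema query -- the class did not exist before the
# program started running.
columns = ("id", "name", "email")

def _init(self, **kw):
    for col in columns:
        setattr(self, col, kw.get(col))

CustomerRow = type("CustomerRow", (object,), {"__init__": _init,
                                              "columns": columns})

row = CustomerRow(id=1, name="Ada", email="ada@example.com")
print(row.name, CustomerRow.__name__)  # Ada CustomerRow
```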

PS: you can also think of a property-access operator as one such method, to make this more generic.

Your first paragraph reads very confused. While dynamic typing allows naturally treating types as first class objects, this is neither necessarily the case nor the point of the exercise. I also don't know how you arrive at 100^10 types. And the note about V8 misrepresents things -- while V8 specializes for object layouts via maps, this is a mere implementation detail and it's solely about in-memory object layout, not about anything resembling a "type" by the common definitions.
–
delnanOct 3 '12 at 20:43

1

@wvxvw: Incorrect. Anonymous types are generated at compile time. See Anonymous Types (C# Programming Guide): "Anonymous types provide a convenient way to encapsulate a set of read-only properties into a single object without having to explicitly define a type first. The type name is generated by the compiler and is not available at the source code level." You can even use a decompiler to open the generated assembly and see the anonymous classes that were generated.
–
Allon GuralnekOct 7 '12 at 11:37

1

@wvxvw: The article is completely irrelevant to the topic at hand. dynamic and DynamicObject provides dynamic binding, not dynamic typing! At no point has any type been created at runtime. And of course you talked about anonymous types - your example talked about querying the database with an arbitrary projection plus object mapping, which with .NET involves anonymous types. Example: from c in db.Customers select new { c.FirstName, c.City }.
–
Allon GuralnekOct 7 '12 at 13:38

@Justin984 If you can't do this in your statically typed language of choice, that language's type system sucks or you don't understand how to utilize it. Heck, even Java could do it if it had operator overloading, or allowed adding interface implementations to types we didn't define.
–
delnanOct 3 '12 at 20:14

5

@Justin984 The solutions I refer to all only need a single implementation. So do solutions involving parametric polymorphism (e.g. in Haskell, where we can do this easily).
–
delnanOct 3 '12 at 20:19

6

This answer is totally wrong and misleading.
–
user39685Oct 3 '12 at 20:22

9

@Justin984 No types are being inferred here. Type inference has nothing to do with static/dynamic typing. For example, Haskell is as statically typed as it gets, and infers a static type for the function here (written adder x y = x + y, full stop) without any type annotations.
–
delnanOct 3 '12 at 20:25