Posted
by
by timothy on Sunday June 17, 2012 @11:14PM
from the from-this-day-forth-no-erlang dept.

New submitter SIGSTOP writes "The LINC [OpenFlow 1.2 software-based] switch has now been released as commercial-friendly open source through the FlowForwarding.org community website, encouraging users and vendors to use LINC and contribute to its development. The initial LINC implementation focuses on correctness and feature compliance. Through an abstraction layer, specialized network hardware drivers can easily be interfaced to LINC. It has been implemented in Erlang, the concurrent, soft real-time programming language invented by Ericsson to develop their next-generation networks."

I used Erlang once before, in a previous job. What a fucking nightmare.

It's certainly different from your average OO language, but it's no more "a fucking nightmare" than other functional languages like Haskell and OCaml.

I don't like either, because I believe garbage collection is a bad idea - the programmer should have full control, and it's the job of the language to expose flaws, not hide them. I know I'm in the minority here, but still I look back on the days of (pre-Turbo) Pascal and (pre-95) Ada with fondness.

But really, Erlang is one of the better functional languages, as they go. It may be tough to learn, but it has enough software written in it to prove its worth.

GC is sometimes a bad idea, and sometimes a good idea. It depends entirely on precisely what the programmer is doing.

Programs written in languages without some facility for automated or semi-automated memory management (be it full GC, reference-counted smart pointers, or what have you) tend to be inherently non-modular. Somebody has to know when nobody else wants some object any more. There are essentially only two ways to arrange that: a single owner that sees everything, or ownership rules threaded through every interface.

And just as manual memory management is a ball and chain for modularity in a single-threaded environment, manual locking and mutexes are an additional ball and chain for modularity in the multithreaded realm.

Erlang's actor model is an attempt to enable the production of modular concurrent communication software, just as Java's garbage collection model is an attempt to enable the production of modular business data software.
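The parent's point about actors can be sketched even outside Erlang. Here's a toy stand-in in Python (a thread playing the role of an Erlang process; all names here are invented for illustration): the "actor" owns its state outright and the outside world can only reach it through a message queue, so there are no locks and no shared memory.

```python
import queue
import threading

def counter_actor(mailbox, replies):
    """A toy 'actor': owns its state and is reachable only via messages."""
    count = 0
    while True:
        msg = mailbox.get()
        if msg == "incr":
            count += 1          # private state: no locks, no sharing
        elif msg == "get":
            replies.put(count)  # answer by message, never shared memory
        elif msg == "stop":
            break

mailbox, replies = queue.Queue(), queue.Queue()
t = threading.Thread(target=counter_actor, args=(mailbox, replies))
t.start()
for _ in range(3):
    mailbox.put("incr")
mailbox.put("get")
result = replies.get()
mailbox.put("stop")
t.join()
print(result)  # → 3
```

The FIFO mailbox serializes all access to `count`, which is exactly the modularity win the parent is describing: callers never need to know how the actor stores its state.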

Threads with shared memory, manual locking, mutexes, condition variables, restartable transactions and so on are hard to use. But that can also be okay. Sometimes the problem you're solving is so simple that it doesn't really matter, and sometimes it's so hard that there's no better way.

Having said that, of course, Erlang is closer to what Alan Kay meant when he coined the term "object oriented" than pretty much any other language has yet realised.

Erlang is closer to what Alan Kay meant when he coined the term "object oriented" than pretty much any other language has yet realised

I'm not sure it's still there, but there used to be a thing on the Erlang web page that said that Erlang was 'not an object oriented language like...' and then listed a number of other languages that were all less object oriented than Erlang.

Erlang is a functional language first, OO second. Smalltalk was designed to be OO from the ground up. In Smalltalk even first-class objects such as integers have methods and can be passed messages. That's taking it too far IMO, but you can't get any more OO than that.

Maybe. I think Erlang and Smalltalk are trying to solve two different problems. Smalltalk is trying to solve the 'ease of coding' problem, whereas Erlang is trying to solve the 'high reliability' problem.

If you think GC is only or even primarily for hiding programming errors, you aren't much of a programmer. Your programming language examples also suggest that you're very much a bondage-and-discipline programmer who doesn't mind having to be verbose unnecessarily.

For allocating lots of small objects, a good garbage-collecting allocator is both more time- and space-efficient. Think of something like a programming language compiler. Try building an AST (or an intermediate representation of the program, or some other graph of small nodes) without one.

Your examples are somewhat flawed, because a lot of projects do exactly that. In Clang, for example, which I think gets better parsing performance than anything else around, small objects are allocated with a bump-the-pointer allocator with a per-AST scope, and then are freed by just freeing the whole allocation at once. This is entirely possible to do with manual memory management, because you have complete control over allocation policy. With automatic garbage collection, the collector has to infer lifetimes at run time.
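The per-scope arena pattern described above can be sketched roughly like this (a toy Python model, not Clang's actual C++ allocator; the `Arena` and `Node` names are invented): allocation just appends to the arena, and the whole tree is released in one step instead of node by node.

```python
class Arena:
    """Toy per-scope arena: cheap allocation, one bulk free at the end."""
    def __init__(self):
        self._objects = []

    def alloc(self, cls, *args):
        obj = cls(*args)
        self._objects.append(obj)   # the arena, not the caller, owns it
        return obj

    def release_all(self):
        self._objects.clear()       # one 'free' for the whole AST

class Node:
    def __init__(self, kind, children=()):
        self.kind = kind
        self.children = list(children)

arena = Arena()
leaf = arena.alloc(Node, "IntLiteral")
root = arena.alloc(Node, "BinaryOp", [leaf, arena.alloc(Node, "IntLiteral")])
count_before = len(arena._objects)  # 3 nodes, all owned by the arena
arena.release_all()                 # drop the whole tree in one step
print(count_before, len(arena._objects))  # → 3 0
```

Because every node's lifetime is tied to the arena's scope, nobody ever frees an individual node - which is the whole point of the bump-the-pointer scheme.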

Of course you can use specialized allocation schemes, and they are sometimes really convenient, but you can't use them for everything, and it's always extra work. Perhaps ASTs were a bad example (although for ASTs, you have a lot less code - much of it pointless boilerplate - to write if you're using an ML-style variant instead of a class hierarchy).

For memory and time overhead, any data structure with a large number of nodes (a big std::map, for instance) will have significant per-node overhead, and you're not going to get rid of that just by managing memory manually.

If you think GC is only or even primarily for hiding programming errors, you aren't much of a programmer.

I hope you program better than you read. I don't think (and never said) it's for that, but I think it's an inherent side effect.

Manual memory management is good when your objects are large and long-lived, in which case you can handle them directly or by reference counting, but there are a lot of programming tasks where this doesn't even remotely apply. Maybe that isn't the type of programs you've written, but that's just your lack of experience.

You're making assumptions without any basis here, and then proceed to draw conclusions from them. And you're wrong too - small and short-lived objects are the prime example of when garbage collection is bad: they stick around when they shouldn't have to. This problem is exacerbated by concurrency and reentrance. If anything, manual memory management should always be the default for such objects.

For allocating lots of small objects, a good garbage collecting allocator is both more time- and space efficient.

Do good garbage-collecting allocators actually exist? I've heard of them, but have seen no droppings in the woods or other evidence. I'm pretty sure they are possible, but most of what I've dug into the guts of was total amateur hour inside - no realization that GC shouldn't run constantly when the program isn't actually doing anything, and should cap itself in proportion to the program's core CPU usage.
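For what it's worth, some collectors do expose exactly that kind of control. CPython's cycle collector, for example, can be inspected, disabled, and run only at points the programmer chooses (its reference-counting side still frees most objects immediately either way):

```python
import gc

thresholds = gc.get_threshold()  # per-generation triggers, e.g. (700, 10, 10)
gc.disable()                     # no automatic cycle collection from here on
# ... latency-sensitive work goes here, free of surprise GC pauses ...
found = gc.collect()             # one full collection, at a point we choose
gc.enable()
print(thresholds, found >= 0)
```

That doesn't settle the argument about GC quality, but it shows "GC runs whenever it feels like it" isn't inherent to the idea.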

For every 100 people doing their own memory management, I'm sure 101 of them are doing it right.

While I'm sure that you can do a better job at memory management every time than all the compiler & OS experts of the past 50 years, most people can't. Memory-management and pointer errors have not just caused decades of security problems - they have destroyed property and even killed people. All of that could have been avoided.

Erlang routinely beats Java and C++ in large server benchmarks, with lower latency and far less jitter.

It's certainly different from your average OO language, but it's no more "a fucking nightmare" than other functional languages like Haskell and OCaml

Well, in the words of the guy behind Erlang, Joe Armstrong: "Erlang is a so-so functional language with an excellent concurrency and distributed concurrency model" (paraphrase).

As a pure functional language, it shows its age IMHO - no Hindley-Milner type system, uninspired syntax (based on Prolog), etc. But as a systems language for distributed, soft real-time, highly available systems, nothing else can currently touch it. (Yes, I used to work at Ericsson and built routers based on Erlang/C.)

Anyway, an Ethernet switch or router has two engines. The low-level one (the 'forwarding plane') does high-speed 'copy packet from port A to port B'. It's not very bright, as that's almost all it CAN do, but it's very fast at that task. Of course, to be useful, an Ethernet switch has to have more smarts -- for example, learning which switch port has which MAC on it and things like that. In a normal switch it can, when confronted with a packet it doesn't recognize, ask its control plane what to do.

You can, for instance, implement Layer 3 routing by dynamically changing the switch matrix on Layer 2. Essentially, IP addresses then become fancy MACs: you know which IP addresses are reachable behind which port, allowing you to handle routing on Layer 2. Similarly, you could implement higher-level protocols that change your switch matrix, handle TCP errors already in the switch, saving hops and packets, etc. In the end you get a box which seems to run each layer at wire speed. In some ways this runs contrary to the classic strict-layering model.
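The "IP addresses become fancy MACs" idea boils down to a flow table keyed on higher-layer fields. Here's a minimal Python sketch of such a match/action table (field names and ports are invented; real OpenFlow matches are far richer and are evaluated in hardware):

```python
# A toy flow table: match fields -> output port, checked in order.
flow_table = [
    ({"eth_dst": "aa:bb:cc:dd:ee:01"}, "port1"),         # plain L2 switching
    ({"ip_dst": "10.0.0.7"}, "port2"),                   # L3 routing done at L2
    ({"ip_dst": "10.0.0.9", "tcp_dport": 80}, "port3"),  # even L4-aware steering
]

def forward(packet: dict) -> str:
    """Return the output port for the first matching flow entry."""
    for match, port in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return port
    return "controller"  # table miss: punt to the (slow) software path

print(forward({"ip_dst": "10.0.0.7"}))                   # → port2
print(forward({"ip_dst": "10.0.0.9", "tcp_dport": 80}))  # → port3
print(forward({"ip_dst": "192.168.1.1"}))                # → controller
```

The last line is also where the wire-speed argument below bites: anything that misses the table has to go to software.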

That is, if the data in the traffic do not change much. From the ONF site, the router has to consult the software about what to do with an unrecognized packet. That cannot be fast. What's more, unless they change the order of packets, the router would have to stall the traffic on the port. If a change of order is deemed acceptable, it can go on routing other packets, but in the meantime other packets requiring software introspection might come in and would have to be buffered. Buffering on a router is bad, because later comes the problem of how to insert the now-cleared-for-delivery packets back into the stream.

All in all, I do not see how even theoretically it can run "at wire speed": you get either lots of out-of-order packets with selectively long latencies, or long stalls. The more flexible and complicated you want it to be, and the more exotic cases you want to catch, the more overhead you introduce.

So you have never seen a Juniper router in action? That's essentially what they do: they take their routing table, look up the MACs of the respective gateways on their ports and then generate a switch matrix which the routing component writes through to the switch.

I did, but it was way too many years ago. The Juniper was huge and pretty dumb, but could route gigabit fiber in real time. :)

That's essentially what they do: they take their routing table, look up the MACs of the respective gateways on their ports and then generate a switch matrix which the routing component writes through to the switch.

Oh. So SDN means: the above, plus a standardized interface to install your own "plug-in" for, e.g., MAC resolution?

I had a (short) stint in telecom, and many years ago (I was told) it worked pretty much like that for PSTNs etc. That's also where I first heard the terms "control plane" and "data plane". I had this short employment where, among other things, I had to implement something along those lines.

You know celebrity marriages never last. What are we going to call the bastard child offspring of this unholy union? Erla Flow? No... sounds like a personal problem. Lang Open? No... that won't do either.

Screw it, let's just call it "Forever In Beta", since most parents name their children based on their hope for their future. -_-

A pure functional language is one that just relies on recursion, with only parameter variables, to carry out loops. Sorry, but while that may have some kind of academic aesthetic to various hard-core CS types, I really don't see why that is any better than OO or even procedural-style programming, especially since both of those paradigms support recursion anyway. To me, pure functional just seems to be a very quick way to obfuscate even the simplest potential solutions to a problem.

Functional programming leads to a completely different set of coding idioms. These can be very convenient in many applications. Since it is not the kind of programming that most people are used to, it can be awkward at first. There are a lot of things that are much more elegant and obvious when done this way, but you have to get over the barrier of thinking procedurally first. I would say that it isn't simply that hard-core CS types like the aesthetics of functional programming; I think it's that some problems genuinely map onto it better.

Of course there is nothing really stopping you from writing functional code in any language. It's just that, notationally, functional languages make it much easier. In the same way, you could write object-oriented code in any language, but the verbosity would negate its usefulness.
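As a small illustration of that point (in Python, which supports both styles), here is the same computation written procedurally and then functionally:

```python
data = [3, 1, 4, 1, 5, 9, 2, 6]

# Procedural: explicit loop and mutation of an accumulator.
total = 0
for x in data:
    if x % 2 == 0:
        total += x * x

# Functional: the same result as one expression, with no mutation.
total_fp = sum(x * x for x in data if x % 2 == 0)

print(total, total_fp)  # → 56 56
```

Neither version is wrong; the functional one just says *what* is computed rather than *how* the accumulator evolves.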

This seems to conflate dynamic vs static typing and functional vs procedural. The problem you discuss comes up in procedural languages all the time:

if (TEST) return bar();
else return baz();

That's more a "problem" with dynamic typing. Statically typed functional languages like Haskell or the ML family use type inference to detect these kinds of problems at compile time. There's been a lot of progress made on type systems since C/C++ were developed. As the previous poster mentioned, Haskell will reject such a mismatch before the program ever runs.
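Concretely, here is the Python equivalent of the snippet above (function names `bar`, `baz`, `choose` are made up for illustration): it runs fine, but returns different types on different branches, which is exactly what a Hindley-Milner-style checker rejects at compile time.

```python
def bar() -> int:
    return 42

def baz() -> str:
    return "oops"

def choose(test: bool):
    # A dynamic language happily returns int on one branch, str on the other.
    if test:
        return bar()
    return baz()

print(type(choose(True)).__name__)   # → int
print(type(choose(False)).__name__)  # → str
# A static checker (mypy, or an ML/Haskell compiler) flags the mismatch up
# front; plain Python only fails later, e.g. when you try choose(False) + 1.
```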

The profound thing you're missing is that, as a developer, if you feel the need to debate the qualities of your language or style of programming (i.e. functional versus OO or whatever), then you have no right to actually be part of the debate.

The language doesn't matter that much, and the people trying to convince you why one is better than the other aren't experienced enough to realize that yet. The more someone implies that one language is the way to go or one style is the way to go for a certain task, the less likely it is that you should believe them.

The more someone implies that one language is the way to go or one style is the way to go for a certain task, the less likely it is that you should believe them.

Obvious Troll is obvious. Or at least, I hope you're trolling, as opposed to trying to look wise by displaying intentional naivete. If you'd left out "for a certain task", I'd agree with you wholeheartedly -- there's no one right tool for every job -- but certainly some tools are better for some jobs than others.

A pure functional language is one that just relies on recursion with only parameter variables to carry out loops. [...] To me, pure functional just seems to be a very quick way to obfuscate even the simplest potential solutions to a problem.

Or am I missing something profound?

Yes, you're missing something profound.

The big deal (and it's a Really Big Deal) in functional programming is isolation of side effects. Pure functions transform inputs to outputs, and do nothing else: they don't mutate data, they don't perform I/O, and they don't depend on hidden state.
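A tiny Python contrast makes the distinction concrete (both functions here are invented examples): the impure version leaves a trace in shared state, while the pure one only maps inputs to outputs, so calling it twice is indistinguishable from calling it once.

```python
# Impure: depends on and mutates state outside the function.
log = []
def add_and_record(x, y):
    result = x + y
    log.append(result)   # hidden side effect on shared state
    return result

# Pure: the output depends only on the inputs, nothing else.
def add(x, y):
    return x + y

assert add(2, 3) == add(2, 3) == 5   # referentially transparent
add_and_record(2, 3)
add_and_record(2, 3)
print(log)  # → [5, 5]  (the impure version left a trace behind)
```

Once side effects are fenced off like this, the compiler and the programmer can both reason about pure code locally - which is the Really Big Deal.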