
Nerval's Lobster writes "A recent post on Reactive Programming triggered discussions about what is and isn't considered Reactive Logic. In fact, many have already discovered that Reactive Programming can help improve quality and transparency, reduce programming time and decrease maintenance. But for others, it raises questions like: How does Reactive differ from conventional event-oriented programming? Isn't Reactive just another form of triggers? What kind of an improvement in coding can you expect using Reactive and why? So to help clear things up, columnist and Espresso Logic CTO Val Huber offers a real-life example that he claims will show the power and long-term advantages Reactive offers. 'In this scenario, we'll compare what it takes to implement business logic using Reactive Programming versus two different conventional procedural Programming models: Java with Hibernate and MySQL triggers,' he writes. 'In conclusion, Reactive appears to be a very promising technology for reducing delivery times, while improving system quality. And no doubt this discussion may raise other questions on extensibility and performance for Reactive Programming.' Do you agree?"

Ok, there are a lot of people commenting on this below who have absolutely no idea what reactive programming is about. So I'll try to clear it up a bit.

Reactive programming is not polling.

If you call a function and wait for it to return a result, you aren't doing reactive programming.

If you are working in a REPL or command-line environment, and you have to type a command every time you want to obtain a result, your system is not reactive.

Reactive programming is not events and triggers. Well... let's say it this way: reactive programming is to events/triggers as writing in [C/C++/Java/C#/Haskell/etc.] is to writing in assembly. In other words, you really could do the exact same thing with events or triggers as you can with reactive programming. But events and triggers are very basic compared to what is meant by reactive programming.

Events and triggers are typically used when a little reactivity is needed. When you build your whole system around reactivity, using events and triggers quickly becomes inefficient and you need something built for the task. You would pick a reactive programming language or framework for such a complex job, just as (most of) you would choose high-level languages and frameworks over assembly for building a social media website.

Reactive programming isn't an agile framework. It's not some new way of describing object-oriented programming. And it's not the right tool for every job, but for some jobs, it's the perfect tool.

The downside of reactive programming, however, is exactly the same as the downside of the event/trigger logic that has been available in databases for decades, namely the difficult-to-trace side-effects caused by out-of-order execution that is basically mandated in reactive programming.
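To make that hazard concrete, here is a minimal Python sketch (toy observer code, not any particular reactive framework) where the final state depends entirely on the order in which two reactions happened to be registered:

```python
# Toy observer code: two reactions to the same change, where the result
# depends solely on registration order -- the hard-to-trace side effect
# described above. All names here are invented for illustration.

observers = []
state = {"price": 10, "discounted": 10, "log": []}

def on_change(fn):
    observers.append(fn)

def set_price(p):
    state["price"] = p
    for fn in observers:
        fn()                      # reactions fire in registration order

def apply_discount():
    state["discounted"] = state["price"] * 0.9

def audit():
    state["log"].append(state["discounted"])   # reads another rule's output

on_change(apply_discount)
on_change(audit)      # registered second, so it sees the discounted value
set_price(100)
print(state["log"])   # [90.0] -- swap the two on_change lines and it logs 10
```

Nothing in the source marks that ordering dependency; you only find it by tracing every reaction by hand, which is exactly the maintenance headache in question.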

It's not so obvious when you're first writing code, but it's a headache when you're maintaining it (I disagree with the article on this point). Take the example in the original article... let's say you go back and add a requ

Just like OOP eventually breaks down into CPU instructions, and you can use any Turing-complete language to implement anything, I think he has a fair point. It sounds like a way to design a language so that complex event/trigger-type stuff is easier to implement and debug systematically, perhaps using a slightly different way of thinking. It might not be a fancy thing, but I don't think there's anything wrong with giving it a name.

Don't kid yourself that it isn't fundamentally a style implemented with events.

I suppose that's why I said:

... you really could do the exact same thing with events or triggers as you can with reactive programming. But events and triggers are very basic compared to what is meant by reactive programming.

... you really could do the exact same thing with events or triggers as you can with reactive programming. But events and triggers are very basic compared to what is meant by reactive programming.

I kind of get the direction you're going in, but I still don't see it. Can you give an example? I see your analogy about C++ versus assembly, but I only really understand it because I understand C++.

Presumably reactive programming automates some (how? what?), or many, or most of the difficulties away...

The point of new paradigms in programming languages is to make the complexity of the expression match the complexity of the idea being expressed, not the complexity of the (platform specific) implementation.

Especially the bit about hardware. I guess if = just means a wire, then if one side changes, then so does the other. I guess you'd have to insert a D-latch to make it non-reactive. I guess nothing in the assignments is clocked.

I expect that with care and crazy templates one could make a reactive framework for C++ if one chose.

It's a level of abstraction. If the only difference is that a "reactive" paradigm allows you to write the same code in fewer lines, each of which is less complex, it's an improvement over creating events/triggers. I don't know whether that's actually the case, but if it is, the above applies.

So Reactive Programming is basically syntactic sugar that makes event handling a bit easier.

Fair enough... though I can't help thinking of Node.js and all those callbacks being heralded as a new revolution in programming when I hear about this stuff. As an old fart, I know there's nothing really new... just kids who think they need a new paradigm because they never had the balls to learn how the old stuff worked.

There really is no such thing; they just made up the term for attention. What he is describing might be called "tools" programming, but it's not new or different. I have written "tools" in various languages for over 20 years. If they think they are going to make a few bucks with a "re-branding" program, good for them. It worked for "Cloud" and I knew better then too.

I think it's supposed to be touted as a special new language because we need "reactive" programming, hence the name "Reactive". The language may be "new", but I doubt it. Honestly, I have not looked; I looked at the claim on why it's needed and call bollocks. I have mountains of libraries I have written for various tools in C, Python, Perl, and various scripting languages (TCL/WISH/SH). Most of those use bits of the mountains of libraries developed for those languages.

I'm getting cynical in my old age I guess, but the majority of the claims people make are simply fluff to try and make a buck or a name. Claiming that we need a new language for a special case is silly. Develop and release a library (or libraries) for your needs so that people can use it. Otherwise we end up with all these little tufts of crap that 3 people in the world use, and have to listen to them complain about why adoption is so low.

I'm getting cynical in my old age I guess, but the majority of the claims people make are simply fluff to try and make a buck or a name. Claiming that we need a new language for a special case is silly. Develop and release a library (or libraries) for your needs so that people can use it. Otherwise we end up with all these little tufts of crap that 3 people in the world use, and have to listen to them complain about why adoption is so low.

I can say this because your cynicism plays directly into my game. The new paradigm is Meta Programming. With my new Meta Programming tools I can write code in one single language and compile it into any other language that has an open source parser. My style guide language ensures the output code is consistent with whatever project I need to produce code for. You are both the perfect candidate for, and also the most likely not to learn, such a language / platform.

It looks more like a domain-specific language with a proprietary tool/library of some type. Anything that can, as they claim, reduce 500 lines of code to 5 lines probably is not terribly general-purpose. It looks like it is just a rebranding of event-driven programming, only done in their own tool rather than a standard language, probably doing a few things automagically and a whole bunch of stuff badly.

1) The proactive, forward looking teams adopt it first, and have great success.

2) The "emerging trend followers" hop on board, and have reasonable results.

3) The rest of the industry follow and have mixed results, without it being any more successful than any other methodology.

Don't be blinded - initial results always look very promising.

Anybody around here remember Jackson Structured Programming [wikipedia.org]? The initial OOP wave? The whole CASE movement? GUI application builders that were supposed to end the need for programmers?

The golden rule is that "whatever methodology or technology you choose, half of adopters will always get sub-average results". The question you have to ask yourself is: is your team smarter than the average team?

The longer you spend in programming, the more you realize it's all been done in years past and it's just some "new grad" thinking they've invented something because they never looked into the history of programming techniques used over the years, or because they never happened to touch the systems that did it before.

I've been programming for over 35 years.

I've come to the conclusion that it's all about marketing buzz-words and bullshit to try and sucker investors into spending money, not about actually improving the way people write code or think about problems.

The nice thing is that after 10-15 years or so, all problems start to look familiar to you, but not so much to the young guys or the managers. You can either rebel against the latest old-thing-made-new-again and seem a Luddite, or seem a genius for immediately seeing all the pitfalls and optimizations entailed in the "new" idea.

I started programming in the late 80s, went professional / full-time after about 15 years of hobbyist level programming (COBOL, Fortran77, Pascal, CA-Clipper / DBase III/IV, C, C++, VB, Java, JavaScript).

OOP with the concepts of public functions and private variables is a big step forward in terms of the compiler being able to check your work. For anything where you are passing around complex business objects, some sort of OOP is necessary.

Like every other question about software development, we should always start with The Mythical Man Month. In this case, the relevant chapter is There Is No Silver Bullet.

Now, the interesting thing about that chapter is that, while it was right, it was the least right of all the good predictions Brooks made. No technology is a silver bullet, but many produce noticeable improvements that, when put together, can give us an order of magnitude in productivity over older tech. It's not as if OO has been abandoned. CASE tools were replaced by the far more sensible powerful IDE. GUI builders are not used a lot nowadays, but we get many of their gains by having dev environments with tighter feedback loops. And there's of course the mother of all improvements, which is the creation of large, powerful libraries. How many of the things that people did for business applications 15 years ago are, today, just replaced by libraries?

Not that this denies your final thesis though: Hire bright programmers with people skills, and do your best to keep them. No technology will allow bad developers to make a good application.

You have to also realize that probably over half of software developers are willfully ignorant and therefore incompetent, which is why technology promotion on /. looks like a good path to larger acceptance.

Yes, but in most any setting where everyone in the room knows that there are more kinds of averages than the arithmetic mean and uses them regularly, they will still probably only use the unadorned term "average" to mean just that - unless context dictates that some other average is implied.

Go ahead - tell any mathematician, statistician, etc. to "calculate the average of these numbers" with no option for further clarification; I bet you at least 90% return the arithmetic mean - not because there's no other

Yes, but in most any setting where everyone in the room knows that there are more kinds of averages than the arithmetic mean and uses them regularly, they will still probably only use the unadorned term "average" to mean just that

Really? If I say our neighbours are "just an average family" I don't mean they have a non-integer number of members.

Yet another super awesome framework/system/language/whatever to make a shopping cart in as few lines as possible.

Then someone tries to build something remotely complex and it all falls to shit and the code ends up as spaghetti. The guy who built it then leaves the company and they can't find anyone else with the skills to understand how it works.

I've been looking at ways to make CRUD applications "declarative" using almost nothing but data dictionaries (field tables) and attributes. It can be done for relatively simple and controlled scenarios, but if you have unanticipated requirements or are stuck using an existing database that has built up years of baggage/cruft, then it requires custom interventions.

CRUD is conceptually relatively simple, but the devil's in the details and those details can really be devils.
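As a rough illustration of the data-dictionary idea in the parent, here is a hypothetical Python sketch in which a single field table drives both validation and SQL generation; every name in it is invented for the example:

```python
# Dictionary-driven CRUD sketch: the field table below is the single source
# of truth, and both validation and INSERT generation are derived from it
# instead of being hand-coded per form. All names are hypothetical.

FIELDS = [
    # (name, type, required, max_len)
    ("name",  str, True,  40),
    ("email", str, True,  80),
    ("age",   int, False, None),
]

def validate(record):
    """Check a dict against the field table; return a list of errors."""
    errors = []
    for name, ftype, required, max_len in FIELDS:
        value = record.get(name)
        if value is None:
            if required:
                errors.append(f"{name} is required")
            continue
        if not isinstance(value, ftype):
            errors.append(f"{name} must be {ftype.__name__}")
        elif max_len and len(value) > max_len:
            errors.append(f"{name} longer than {max_len}")
    return errors

def insert_sql(table, record):
    """Generate a parameterized INSERT from the same dictionary."""
    cols = [n for n, *_ in FIELDS if n in record]
    placeholders = ", ".join(["?"] * len(cols))
    return f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders})"

print(validate({"name": "Ada"}))              # ['email is required']
print(insert_sql("person", {"name": "Ada"}))  # INSERT INTO person (name) VALUES (?)
```

This works nicely until a legacy column or a one-off business rule doesn't fit the table's vocabulary, which is exactly the "custom interventions" problem described above.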

Can you screw your abstractions and forget to call them later? Do you have to use Gates condoms or will sheepskin work? What if your abstractions become pregnant? Is the offspring concrete? Should you have just prevented the problem by going with an int-er-face?

Can you screw your abstractions and forget to call them later? Do you have to use Gates condoms or will sheepskin work? What if your abstractions become pregnant? Is the offspring concrete?

I think you have the genders switched -- in this analogy, you're the woman. In particular, the abstraction is the boyfriend who can walk away scot-free if things don't work out, and you're the one who is stuck with an 18-year support obligation.

The line is blurry between them. Libraries for CRUD generally require or expect certain conventions be used. These conventions create a "framework" of sorts. But there are "controlling" frameworks and "voluntary" frameworks (for lack of a better term). Controlling frameworks control the "main" function(s) and flow, limiting you to writing event handler-like snippets.

Voluntary frameworks give you sample "main" function(s), and detail libraries, and let you use or ignore what you want. Controlling frameworks

I'd step back even further and say it's a repackaging of wiring things together. Once we got away from magnetic cores and punched-cards, wiring things together wasn't modeled in code very much. If all of that were really great, we would have already been doing more spreadsheet and/or circuit-simulator based programming. That's not to say it doesn't have some nifty applications. Aren't there some cool GUI front ends to audio processors that work that way?

In imperative (procedural) programming, A=B+C is an assignment statement. A is set when the assignment statement executes. Subsequent changes to B or C do not affect A, unless you specifically re-calculate A. Software developers are responsible for understanding and managing these dependencies.

Reactive programming is a declarative approach for defining logic. Variables are defined by expressions such as A=B+C, which automatically reacts to changes in B or C.
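A minimal Python sketch of that distinction (a toy Cell class, not any particular reactive library) might look like this:

```python
# Imperative assignment copies a value once; a reactive definition re-fires
# whenever an input changes. The Cell class is a toy, invented for this sketch.

class Cell:
    """A value that notifies dependents when it changes."""
    def __init__(self, value=None):
        self._value = value
        self._observers = []      # callbacks to run on change

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        self._value = new
        for fn in self._observers:
            fn()                  # push the change downstream

    def observe(self, fn):
        self._observers.append(fn)


def formula(target, inputs, compute):
    """Declare target = compute(); recompute whenever any input changes."""
    def recompute():
        target.value = compute()
    for cell in inputs:
        cell.observe(recompute)
    recompute()                   # establish the initial value


b, c = Cell(1), Cell(2)

# Imperative: a is set once and goes stale.
a_imperative = b.value + c.value

# Reactive: a is *defined* as b + c and tracks its inputs.
a = Cell()
formula(a, [b, c], lambda: b.value + c.value)

b.value = 10
print(a_imperative)   # still 3 -- nobody re-ran the assignment
print(a.value)        # 12 -- the definition reacted to the change
```

The developer declares the relationship once; the dependency bookkeeping that the imperative version leaves to the programmer is handled by the framework.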

This is just rebadged functional programming, except using deliberately confusing syntax.

Higher level functional specifications are like generics. They match some generic set of parameters regardless of the types of the parameters. They then invoke other functions, some of whose overloads specify types, and usually a generic fallback.

Until run-time, there is no information about what the generics passed to that higher level function are, so there is no way to use static typing from compile time analysis.

Another aspect of some functional languages is that they make it easy to pass generic

Not entirely. Of course you can model A=B+C as a function A which, when evaluated, returns the sum of B and C. And sure, you can evaluate A() several times and get different results if B or C have changed.

Reactive programming is about the future and expecting change. When B or C changes, notification of that change is pushed up to the eventual recipient.

But it's more than just simple event handling. For example, what if you wrote a complex SQL query against a few dozen tables located on a hierarchy of s

Not entirely. Of course you can model A=B+C as a function A which, when evaluated, returns the sum of B and C. And sure, you can evaluate A() several times and get different results if B or C have changed.

Reactive programming is about the future and expecting change. When B or C changes, notification of that change is pushed up to the eventual recipient.

So let me get this straight: instead of polling the keyboard and mouse port constantly, with Reactive Programming we can just have them generate an interrupt which calls a handler, and it calls functions which call functions to propagate this state change event into other waiting batches of code until the end result is generated -- updating a cursor location or activating a keystroke command, etc. Wow, that's pretty revolutionary. Someone should invent the shit out of that so that the CPU can just sit on a HA

I'm trying not to be as automatically dismissive as others, but this example isn't convincing. If your enterprise can be encapsulated by eleven use cases, any framework will suffice. But even in this toy example, we can see some obvious missing things.

For example, the "Purchase Order" entity includes a reference to a sales rep. From its inclusion we infer that it is important data, so it would also need to be added as a "reaction".
There's also no hint on how these "reactions" are actually implemented. The original article claims that consistency is enforced but doesn't really explain why.

The customer.balance Rx, specified for the place order use case, states that its value is the sum of the unpaid order totals. This expression is automatically reused over all transactions that use the field. So, the balance is increased if an order is added, decreased when it is deleted or paid, increased when you raise the quantity, and so forth.
With imperative code, it is easy to overlook one of the use cases, introducing 'corner case' design and coding errors. Reactive practically eliminates this class of error.

The above might be what they mean, but that's not sufficient for consistency. If your database field is merely a duplicate of derived data in other entities, you shouldn't pat yourself on the back for avoiding a problem you created in the first place! You can also overlook a use case in the analysis phase too, and then you'll fail to include it regardless of the framework.

I think I'd like to see an example with all the entities included and some normalisation. Then maybe I might be convinced that RP has any advantages over triggers.

It also doesn't help that he's clearly trying to exaggerate how complex this simple logic would be in other programming languages. The "500 lines of Java code" is mostly whitespace, curly braces, and comments. When you remove the comments from the code he provides while keeping the generous line-spacing, it's only 275 lines of code.

Equally troubling is the screwy premise. In most cases, once the PO issues the price is fixed. Reacting to changes in product.price would be incorrect. That's the whole reason for having a product_price field in lineitem. If that isn't the case, then as you say the duplicate is self-inflicted damage. I also don't see why you would ever reassign a line-item row to another product, you toss it and create a new one.

I know it's hard to present a complete example in a neat little package, but when I see an examp

Their presentation makes analogies between "reactive programming" and spreadsheets and specifically references the power of "chaining" to have multiple functions firing as the result of changes.

There are a number of issues with this kind of event chaining that you run into as you get past the toy cases.

1) Fan-out. How many actions are being kicked off by a simple change?

2) Latency - this is a direct corollary to the fan-out. Are all of the chained functions being run synchronously? If so, what happens when someone introduces a very slow function that gets run as the result of a user input? So the user changes the price of a part and every purchase order in the system is suddenly being updated?

3) Synchronicity - of course, as soon as you find out that your synchronously run chained functions slow things down, you start running them in the background. Now you have a problem where you don't know if something is up-to-date or not. And, in this model, it's not possible to find out if something is up-to-date.
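The fan-out point can be sketched in a few lines of toy Python (no particular framework is implied): a single synchronous publish fans out to every subscribed order, and the caller waits for all of them.

```python
# Toy pub/sub showing fan-out: one price change synchronously fires an
# update for every dependent purchase order. All names are invented.

calls = []                       # trace of handler invocations
subscribers = {}                 # topic -> list of handlers

def on(topic, handler):
    subscribers.setdefault(topic, []).append(handler)

def publish(topic, value):
    for handler in subscribers.get(topic, []):
        handler(value)           # synchronous: the caller waits for all of them

orders = {f"PO-{i}": {"qty": 2, "total": 0} for i in range(1000)}

for po in orders.values():
    def update(price, po=po):
        po["total"] = po["qty"] * price
        calls.append(1)          # in real life this might be a slow DB write
    on("part.price", update)

publish("part.price", 5)         # one user edit...
print(len(calls))                # ...1000 synchronous updates
```

If each handler were a real database write, that one keystroke would stall the user for the duration of a thousand writes; push them to the background and you land squarely in problem 3.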

The examples that they gave are very poor use cases for triggers even. Most general ledger systems I've looked at, running on top of a database, would just recalculate the balance on demand. If your database is large enough that the recalculation starts to take significant time, you cache the result and invalidate it using a trigger. Most GL systems typically make entries much more frequently than they need to calculate the balance for an account. If the recalculation of the balance takes significant time, you probably don't want to do it every time an entry is made anyhow.
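A rough Python sketch of that recalculate-on-demand pattern, with a trigger-like invalidation hook (all names invented for illustration):

```python
# Recalculate-on-demand with caching: posting an entry only invalidates the
# cached balance (the "trigger"); the full recalculation happens lazily on
# the first read after a change. This class is a toy for illustration.

class Account:
    def __init__(self):
        self.entries = []
        self._balance_cache = None
        self.recalcs = 0          # count full recalculations, for illustration

    def post(self, amount):
        self.entries.append(amount)
        self._balance_cache = None    # invalidate; do NOT recompute here

    def balance(self):
        if self._balance_cache is None:
            self.recalcs += 1
            self._balance_cache = sum(self.entries)  # full recalculation
        return self._balance_cache

acct = Account()
for amount in (100, -30, 25):
    acct.post(amount)             # three writes, zero recalculations
print(acct.balance())             # 95 -- the first read pays for the sum
print(acct.balance())             # 95 -- the second read hits the cache
print(acct.recalcs)               # 1
```

The point being: when entries vastly outnumber balance reads, invalidate-and-recompute-lazily beats eagerly reacting to every write.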

Their presentation makes analogies between "reactive programming" and spreadsheets and specifically references the power of "chaining" to have multiple functions firing as the result of changes.

Sort of an off-topic question -- why is Excel the chosen programming 'language'/model for non-programmers? Are there design concepts that come into play to assist the non-programmer or any human when you lay out your data/variables visually?

I would guess because a spreadsheet makes dealing with variables -- both scalars and arrays -- easy, where "dealing with" includes data reduction, visualization and editing. Derived values are updated automatically, you can trivially pull up at least a few common visualizations, and the software includes a solid library of basic statistical and other mathematical functions. I am a professional programmer, proficient in GNU Octave in addition to various compiled languages, but the reasons I listed are why

That's pretty much the summary of why I use them, as well as to initially organize sample data sets so I can quickly rearrange and visualize them. But as anyone who has attempted to use Excel to calculate a result on a million-plus-row spreadsheet knows, automatic calculation quickly becomes an enemy of the state - it slows to a crawl as the data sets grow or the calculations become more complex than just sum/average/etc. Another post above points out the cause of the issue - one change can kick off a fannin

Stored procedures and triggers are already here and I see no evidence that there is anything new here. If the database is in a correct normalized form this will not reduce the amount of code one iota.

Do ***NOT*** put this sort of logic in the application code. Use properly written stored procedures, foreign key constraints, and database triggers and don't let "application" programmers (especially not agile ones or those who invent new terminology for well known and previously solved problems) within 100 miles of the logic.

And as far as the users being able to understand it "better" I have only one word to say: Bwahahahahahahahahahaha!

Indeed, it's shocking that anyone ever chooses a language or platform other than the database for doing complex application logic. Certainly couldn't be a limitation of the tools or languages available in the DB, those are top notch for every business case.

I don't know Val Huber, but I can judge his "technology" (hardly worth that name, though) by what he is doing: putting the logic of a server-side application inside an RDBMS. After years and years of Hibernate applications with abysmally bad maintainability, now that finally finally finally-thank-god there are mature NoSQL databases, this guy goes back to where we were in the 80s.

Moreover: comments on that article simply get suppressed. Which says enough about this guy's capability to sustain critical thought. Dupe. Total dupe.

There was once a language called Magic that allowed this to be done and was really good at it. A bank wrote their system with it.
Then 5 years later someone tried to re-write the banking app. 10 years later they're still reverse-engineering the original app to provide the functionality.

I don't know why people keep submitting this garbage from Espresso Logic, who is just taking advantage of the fact that the term "reactive" has been overloaded to mean different things to exploit the hype surrounding the Reactive Manifesto [reactivemanifesto.org] and related technologies (e.g., Akka [akka.io], Rx [codeplex.com], Node.js [nodejs.org], etc.) to push their own, completely unrelated product, which is based on the more traditional (i.e., the one you find in Wikipedia) definition of "Reactive Programming".

"Reactive programming", as defined by the Reactive Manifesto (which is what all the hype is about), is about designing applications that operate in an entirely asynchronous and non-blocking manner, so as to maximize CPU utilization and fully exploit parallelism, and ensure that the system is always responsive to new events (user input, incoming data streams, errors, changes in load, etc.) rather than having resources tied up waiting for external processes (e.g., blocking on I/O). It has nothing to do with "reactive databases".

This is just forward-chaining logic programming. I.e., if x changes then do y, and people have been using this in design for years - several large vertical applications have this feature built in.

The problem with this kind of code is that, when there is contextual dependency on what is to be done (which happens often in real life), the code becomes riddled with special cases accessing data from far flung parts of the system to determine the contextual state - this starts to look akin to spaghetti code with
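A toy forward-chaining sketch in Python, just to illustrate the "if x changes then do y" mechanism being described (the facts and rules are made up for the example):

```python
# Minimal forward-chaining: rules watch the fact base, and asserting a new
# fact re-runs every rule until no rule produces anything new (a fixed point).

facts = set()
rules = [
    # (premises, conclusion)
    ({"order_placed"},            "stock_reserved"),
    ({"stock_reserved", "paid"},  "shipped"),
    ({"shipped"},                 "invoiced"),
]

def assert_fact(fact):
    facts.add(fact)
    changed = True
    while changed:                # keep chaining forward until a fixed point
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

assert_fact("order_placed")
print(sorted(facts))   # ['order_placed', 'stock_reserved']
assert_fact("paid")
print(sorted(facts))   # now chains through 'shipped' and on to 'invoiced'
```

The contextual-dependency problem shows up as soon as a rule needs to peek at state outside the fact base to decide whether it should really fire; that is where the special cases creep in.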

We used to make websites by regenerating all the HTML pages whenever the database changed. Pages were really fast to deliver that way.

Yeah, that's "reactive programming" at the file level. I do it all the time, with Makefiles. Lots of people do. A nice thing about a Makefile is that you can easily control when the calculation of derivative files happens. You (re)create some sets of primary data, then use a simple "make" command to rebuild everything that depends on any of them.

This is yet another illustration that the perps who introduced this supposedly-new concept have mostly just found a new buzzword for an approach that has been reinvented repeatedly in the past.

One of the ongoing problems in many fields of science and technology is the constant rediscovery (or reinvention if you prefer) of concepts long known by others that just use different words for the concepts.

Actually, mathematicians long ago realized this, and have a standard term for things that are described differently but are actually identical: They're generally called "isomorphic". A classical example that's important in computers was George Boole's "Boolean algebra", invented primarily as an exercise in pure logic. Eventually people noticed that it was isomorphic to the "propositional calculus", and then as electricity became widely available, engineers found that their new "circuit switching calculus" was another isomorphism. That's why so many programming languages have those AND, OR, XOR, NOT, etc operators.

This crowd is merely giving us another entry in the rather long list of isomorphic systems that differ only in their terminology. This means time wasted re-developing your system from scratch, when you could have saved a lot of time if you'd only realized that others had already solved your problem for you. But we don't have a good way to discover that two systems described with different words really are the same thing.

(Or do we? Maybe some mathematical linguists are working on the problem right now, and we don't realize this because we don't recognize their terminology.)

Given how much anti-education rhetoric combined with fads and ageism I see in technology, it seems like tech is worse than most fields about reinventing the past. There seems to be an almost pathological desire not to acknowledge when something is a revisit of an older idea. Maybe too much ego wrapped up in being original?

There is an anti-learning bias, I think: more and more programmers who think the past was stupid and not worth studying, with languages that only old irrelevant people learned, and system designs that are irrelevant for today, and theorems and equations that are hard and difficult and thus to be skipped.

Yeah, that's "reactive programming" at the file level. I do it all the time, with Makefiles. Lots of people do. A nice thing about a Makefile is that you can easily control when the calculation of derivative files happens. You (re)create some sets of primary data, then use a simple "make" command to rebuild everything that depends on any of them.

But that isn't reactive; it's still polling.

Reactive is the opposite of polling. E.g. if you wanted Makefiles to be reactive, your filesystem would need to notify when a change occurs, and then that change would immediately cause a compile (or whatever it is in your data calculation case) of only the pieces that are directly affected by that, and so on up the chain until the final result is produced.

If you have to type "make [...]" or set up a cron job, you aren't doing reactive.
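For what it's worth, a push-driven "reactive make" could be sketched like this in Python (the files and the change notification are simulated here, not hooked to a real filesystem API):

```python
# A sketch of push-driven rebuilds: instead of scanning timestamps on
# demand like make, each file pushes a notification to its dependents,
# and only the affected chain rebuilds. All names are invented.

deps = {                      # target -> sources it is built from
    "util.o": ["util.c"],
    "main.o": ["main.c"],
    "app":    ["main.o", "util.o"],
}
rebuilt = []                  # trace of which targets got rebuilt

reverse = {}                  # source -> targets that depend on it
for target, sources in deps.items():
    for src in sources:
        reverse.setdefault(src, []).append(target)

def notify_changed(path):
    """What the filesystem would call when `path` is written."""
    for target in reverse.get(path, []):
        rebuilt.append(target)        # "run the recipe" for this target
        notify_changed(target)        # and push further up the chain

notify_changed("util.c")
print(rebuilt)    # ['util.o', 'app'] -- main.o was never touched
```

Nobody typed "make": the write itself drives the rebuild, and it propagates only along the chain that actually depends on the changed file.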

Yeah, that's "reactive programming" at the file level. I do it all the time, with Makefiles....

But that isn't reactive; it's still polling.... you have to type "make [...]" or set up a cron job, you aren't doing reactive.

So who said anything about typing "make"? It's quite common for, e.g. a web CGI program to start a "make" process after the client input has been written to the appropriate files. The CGI program can then proceed or exit, and make's updates will happen in the background. It's also not uncommon to see Makefile commands of the form "cd some/dir; make foo" as part of the update process.

This isn't materially different from a write/assign triggering a cascade of updates via some routine deep down in the

Your filesystem notifications or DB reactions are either coming from a polling loop in the FS/DB engine, or from an event processing loop in the FS/DB engine.

The system calls you use aren't the question. It's about how your implementation works, not about external implementations you don't control.

Let's think hierarchically. Make is smart and will check all intermediate output, all the way down to the leaves of its target tree, to see if it needs to do anything. If all the leaves are older than the final output, make doesn't do anything. If a level 5 leaf is younger than the final output file, make reevaluates that leaf and its level 4 parent, its level 3 gra

Yeah, that's "reactive programming" at the file level. I do it all the time, with Makefiles. Lots of people do. A nice thing about a Makefile is that you can easily control when the calculation of derivative files happens. You (re)create some sets of primary data, then use a simple "make" command to rebuild everything that depends on any of them.

I was thinking along the same lines, except that I would say 'relational' rather than 'SQL' because SQL is neither the only nor the ideal implementation of the relational model.

In one significant way they are different, however. Part of Codd's insight was that the deductive power of his model should be less than Turing-complete, so that issues of undecidability don't arise (triggers were not part of Codd's model.) This restriction made automatic query optimization and reasonable transaction times feasible,