Further, how does one isolate personal preferences? For example, my eyes simply spot more patterns if I tablize information than if I view the same info as code. I used to line up parameters in function calls, long before I ever touched a database, to help me spot patterns and typos. I was born that way. I have described many times how relational allows one to tune their view of something so as to better study it. That is subjective, but it is a truism for me. My eyes and brain simply work better with tables. In most OO apps I see, at least 50 percent of the code is attributes or can readily be converted into attributes. If this is true, then why not tablize those attributes so that my table-oriented head and eyes can better study and sift them?

[You seem to work under the assumption that you can understand the whole program. For many non-trivial systems, this isn't the case. OOP lets you hide away parts of the complexity so that you don't have to think about them. You can't "see" the patterns because there are none. If there are, you haven't properly factored your code. -- JonathanTang]

I never meant to suggest that such was possible. But schemas can come as close to a system-wide view as possible. By the way, if OO factors out all the patterns, then how come it has an InterfaceFactoring problem with regard to DatabaseVerbs? Further, some repetition cannot be factored out no matter how hard you try. For example, in many-to-one relationships, the many are going to have to be referenced/linked/pointed to the one thing one way or another. No way out. (There is a topic on that somewhere.) Tables just let one see that remaining bit of non-factorable repetition and patterns better. See near the bottom of ArgumentsAgainstOop for a list of ways OO tends to violate OnceAndOnlyOnce. Plus, you cannot factor out repetition if you cannot find it, and tables make it easier to spot. -- top

Because the tables would be full of instance data, which isn't especially interesting. Code shows the abstractions. Tables of data show the instances. Why should we look at the data? We're not clerks, we're programmers.

[What the man said... I don't want to look at data, data can't put itself on the screen, programs have to do that. Code is about behavior, how we manipulate that data, and get it on screen, or enforce rules on it, but I don't want to see the data, I want to see the code that does the work on the data. Having a Customer table with some data tells me nothing of what the program can do with customers. Databases are important for storage, because it is true, data outlives applications, but that doesn't mean data's more important than applications. Data and applications are partners and are equally important to the user. Data without a program to manipulate it is no better than paper. Programs are what the user sees, uses, curses, blames, loves, etc... programs are the user's connection to the computer, not data.]

{Most behavior can be converted to data, aka declarative.}

(Questions about such claim moved below.)

Schemas show the abstraction. One can clerkify [?] large parts of an app if they know how to tablize stuff.

But you don't talk about looking at schemas. You talk about looking at tables of data. How does that help you program?

I look at both. It is just that schemas are relatively easy to grok, so one does not have to stare at them for a long time.

That wasn't the question. When I look at the data in my programs (constants), they account for a tiny fraction of the code. Over 99% of the code describes behavior. When I look at the contents of my databases after they've been used for a while I see nothing that could provide insight about making the code better. How does looking at tables of data help you program?

Perhaps if somebody volunteered an interface from a typical class or API, if there is such a thing, it can be digested and dissected to see how much of it can be reworked to be declarative in nature, as was done with the charting example below. It shall be an interesting learning experience I expect. Learning how to convert back and forth from behavior to declarative is likely a good exercise, regardless of one's format of final preference. It would also be interesting to see what kind of things don't convert well, and are best left as behavior.

You still aren't answering my question. How does looking at tables of data help you program?

One factors repetition and links into data so that one is dealing with it as data instead of code. I generally find many things easier to deal with as data than as code. Code is generally ugly and hard to read in comparison because it is harder to type, sift, and query.

[Schemas show me nothing of how that fancy screen works,]

If you use a declarative approach, you don't have to know how it works. That is part of the beauty of declarative: ask for WHAT you want, and let something else worry about the HOW. It is DivideAndConquer. Somewhere around here I compared some declarative techniques to making a shopping list: you fill in the list and somebody else does the shopping. [see DataCentricThinking]
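The shopping-list idea can be made concrete with SQL, the most familiar declarative tool in this discussion. A minimal sketch in Python's built-in sqlite3 (the table, columns, and data are invented for illustration):

```python
import sqlite3

# In-memory database; schema and data are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "east", 50.0), (2, "west", 75.0), (3, "east", 25.0)])

# The query states WHAT is wanted -- a regional total -- and says nothing
# about HOW to scan, index, or aggregate. The engine worries about the HOW.
total = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE region = 'east'").fetchone()[0]
print(total)  # 75.0
```

The same query keeps working if the engine later adds an index or changes its scan strategy, which is the DivideAndConquer being claimed.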

If your job is to write the code for that fancy screen, you do have to know how it works. You can ask for what you want all day but the computer won't give it to you until you tell it how to do it. Users let "something else" worry about the how. That "something else" is programmers.

Could you provide a more specific scenario when the behavior needs to be looked into? The main issue is DivideAndConquer. You can mentally (and physically) departmentalize when you are concerned about behavior and when you are concerned about attributes. If you ask for something via attributes but don't get expected behavior, only then do you have to dig into implementation details. However, this should only be a small percentage of the time. And, since not everything works well as declarative, we have EventDrivenProgramming.

I can do better than provide a more specific scenario in which the behavior needs to be "looked into". I can provide a general scenario: Creating new software. If the software you need already exists, then all you have to do (after you pay for it, of course) is configure it (i.e. specify the attributes). If the software you need does not exist (or you can't afford to purchase it) then you have to describe its behavior in a way that a computer can understand. Every programming job I've had (creating a UML modeling tool, a check processing system, a portal development system, a computer-based-training system, a loan origination system, etc.) required the creation of new behavior. If all that is required is to specify attributes for existing software then I don't get hired. I'm not a configurer. I'm a programmer.

Top thinks declaring a schema is programming. Look at any of his samples, ask him for any samples, and he'll show you a schema. Rational discussion about programming won't work with top; programming will never be more to him than configuring databases and dumping resultsets to the screen. Don't waste your effort trying to show him otherwise, it's futile.

Most biz apps reduce down to domain schema design, UI config, and filling in event snippets. That is a powerful triplet in my observation. If you can show something wrong with it, please do. You have not shown OO code kicking p/r code's butt and appear to expect everyone to believe your anecdotes at face value. Also remember that CeeIsNotThePinnacleOfProcedural. Most OO'ers' bad memories of procedural seem to come from C apps. How about you give some details about OO helping you with the "computer-based-training" system?

[It doesn't exist, you can't make an argument by saying to use something that doesn't exist. There's no declarative language that can take you from db to screen and do the ui and rules and everything else, certainly not SQL.]

{I never claimed that SQL or declarative techniques do *everything*. Why do people keep implying I did? It is a matter of shifting the burden onto declarative techniques, not eliminating the burden entirely. I never claimed entirety. -- top}

[Programmers are the man behind the curtain; we make those things work. Sometimes you make me think you're just a user that learned a little code. You think all that stuff happens by magic? No way, a programmer did it.]

So, it is mostly implementation detail. How the declarative framework is "executed" does not have to be one's concern. That is abstraction and DivideAndConquer. You guys keep talking about how abstract OO allegedly is, yet here you are wanting to know exactly how every bit moves.

[or how that data gets from one system to the other, or how that business rule was enforced,]

Many business rules can be enforced using declarative techniques.

Bull... simple rules, nothing more.

Technically, you are wrong because DataAndCodeAreTheSameThing. A RunTimeEngineSchema is just filling in tables. In practice one usually does not go that far; it is balanced between declarative and (direct) code. It is easier to put Boolean expressions in code from a human interaction perspective, for example. But I've found some creative ways to tablize Boolean expressions and set arithmetic (the ideas need some tweaking and testing before being put into practice).
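One possible shape for "tablized" Boolean rules -- a sketch only, not top's actual tweaked scheme; the field names and sample rule are invented -- is to store each condition as a row and have one small generic evaluator AND the rows together:

```python
import operator

# Each row is one condition; a rule passes if all of its rows are satisfied.
# The rule itself is data and could live in a database table.
OPS = {"eq": operator.eq, "gt": operator.gt, "lt": operator.lt}

rule_rows = [
    {"field": "status",  "op": "eq", "value": "active"},
    {"field": "balance", "op": "gt", "value": 0},
]

def passes(record, rows):
    """Generic evaluator: the rule lives in rows, not in code."""
    return all(OPS[r["op"]](record[r["field"]], r["value"]) for r in rows)

print(passes({"status": "active", "balance": 10}, rule_rows))  # True
print(passes({"status": "closed", "balance": 10}, rule_rows))  # False
```

Changing a business rule then means editing rows, and the rows can be queried and viewed like any other data.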

One uses a combination of declarative techniques and imperative techniques in practice. OO just makes too much imperative that should be declarative IMO. A large part of OO interfaces are essentially DatabaseVerbs.

[or how the screen shows data that isn't in the database but is derived from it.]

Like a total? Throw in a few events and totals are a snap.

[Events are code... programs..]

I never recommended 100 percent declarative. Events are essentially procedural and made better GUI frameworks than anything OO has come up with. Nobody can even agree what MVC is outside of SmallTalk.

[Data can't show me a picture and let me drop it onto a grid to purchase that item, nor can it show me how it was done. Data can't show me a graph or chart to make it easier to understand, nope, more programs do that.]

Declarative graph interfaces are perfectly possible.

[Don't tell me about possible, tell me about is... pink elephants are possible, but that doesn't make em practical or make me likely to see one.]

[Give a user a database and a SQL interface, and no-one would use computers.]

It is for the developer, not the user. You are putting words in my mouth.

[The data is meaningless without a program to allow the user to interact with it. The user and the data interact with each other, the program is what allows it, enables it, prevents it, the program is what makes it all work. All data's the same, all programs aren't. When I see a crud screen, I don't even pay attention to the labels anymore, I want to see the code that makes it work, the data's irrelevant, uninteresting.]

I have used multiple CrudScreen frameworks without knowing the details of how they work. (Although I would provide more trace-ability options and make the priority rules more declarative if I built it.) Is your wanting to know how the GUI framework works necessary for the job, or merely technical curiosity? Many developers don't know how the compiler specifically works, how the file system works, or how the OS works, yet successfully use them. Why should the GUI framework be any different? Isn't that what some call "Encapsulation"?

Practical declarative approaches minimize the influence of behavior, not entirely get rid of it. One factors out the stuff that is better as declarative. What is left over are usually relatively small tasks or events. There can be a wide continuum to how much is turned into declarative attributes and how much remains imperative (code). The biggest advantage of an event-driven framework is that one does not have to worry about "large scale" code structures for the most part, and thus can focus on relatively small snippets of behavior code without having to navigate the big code picture. There is no "center". Of course for non-interactive applications (batch jobs), event-driven approaches are less effective and unnecessary. Traditional "top-down" procedural programming works better under such circumstances IMO. -- top

Is the Human Brain Mostly Declarative?

The human brain is mostly declarative in nature, or at least the models we use are. The "algorithm" to calculate signal "strength" and propagate signals is relatively simple. The "power" of the brain is not really in this algorithm, but in the "attributes" of the brain cells (the weighting level and links). Yes, one needs the algorithm (behavior) to set everything in motion, but the real bag of goodies is in the attributes. True, few would want to program this way, but it is certainly a powerful exposition of the significance of attributes.
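The point can be shown in a few lines: the propagation "algorithm" below is trivial and generic, while all the interesting structure lives in the link table (the weights). The node names and numbers are arbitrary.

```python
# Links as data: (from_node, to_node, weight). The "behavior" is one loop.
links = [("a", "c", 0.5), ("b", "c", 0.25)]
activation = {"a": 1.0, "b": 2.0}

def propagate(links, activation):
    """Generic propagation: sum each node's weighted inputs."""
    out = {}
    for src, dst, w in links:
        out[dst] = out.get(dst, 0.0) + activation.get(src, 0.0) * w
    return out

print(propagate(links, activation))  # {'c': 1.0}
```

Retraining such a system changes only the rows, never the loop, which is the attributes-over-algorithm claim in miniature.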

Good event-driven frameworks are a lot like this. The event engine does not really care about the meaning of the attributes and small snippets of code; it just processes them based on user inputs, its dispatching rules, and related propagation of events. It is "dumb" to the meaning of it all. (See "BrainSchema?" on TheAdjunct)

-- top

Yes, but that's like saying that the program that is input to a TuringMachine is a list of "attributes", whereas the "true algorithm" is the definition of the TuringMachine itself - i.e. that's not true. TuringEquivalent "attributes" are not attributes, they are algorithm. -- dm

Customizable View - It is or can be easier to customize one's view of attributes. See CodeAvoidance.

More "mathable" - There are more known ways to add discipline and rules to attributes to make it more math-like so that one can manipulate them with higher-level abstractions.

Divide and Conquer (or specialization) - One can separate the process of declaring something from processing it. See above.

"Restrict" implies that we limit ourselves to one view instead of many. Let's consider the behavior defined by a widely available application such as WebLogic 8.1. Are you saying that viewing that behavior as attributes (the bytecode values, I assume) is more compact than viewing the Java source code? Are you really arguing that there's some benefit to viewing behavior as a table of integer values? It sounds like you're arguing for machine code over any higher level language.

Bytecode? I think you are approaching it too literally. I will see if I can find a specific API or interface to declaratize, for WebLogic is a big product. Perhaps you wish to suggest one?

I'm not talking about the external interface to a program, I'm talking about the program itself. No matter how you present the API, there has to be code behind it to do what you want.

Pick something specific, and we can explore that further.

I picked WebLogic 8.1. I can't be more specific than that. How would we view WebLogic 8.1 (the program, not the configuration) as attributes? How would that be more beneficial than viewing it as Java source code?

WebLogic seems to be a classic CrudScreen type of application where the user configures and associates stuff via CrudScreens. I thought you wanted to get away from that. One could build it with a database, a VB-like GUI IDE, and GUI events in a straightforward manner.

For one, it talks about "role-based security". This can be handled via AccessControlList techniques. You can make whatever interface for it you want. One of the beauties of declarative approaches is that different languages can access the info. It does not matter as much which language implements the UI, for example.
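A bare-bones sketch of the AccessControlList idea as tables (the schema and sample rows are invented for illustration; real role-based security involves more than this):

```python
import sqlite3

# Policy as rows: who is in which role, and what each role may do.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE role_members (user_name TEXT, role TEXT);
CREATE TABLE role_perms   (role TEXT, resource TEXT, action TEXT);
INSERT INTO role_members VALUES ('alice', 'admin');
INSERT INTO role_perms   VALUES ('admin', 'reports', 'read');
""")

def allowed(user, resource, action):
    # The security policy is data; any language can query the same tables.
    row = conn.execute("""
        SELECT 1 FROM role_members m JOIN role_perms p ON m.role = p.role
        WHERE m.user_name = ? AND p.resource = ? AND p.action = ?""",
        (user, resource, action)).fetchone()
    return row is not None

print(allowed("alice", "reports", "read"))  # True
print(allowed("bob", "reports", "read"))    # False
```

Granting or revoking access is an INSERT or DELETE, visible to admin screens written in any language.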

Yes, it's trivial to map ACLs to a database in WebLogic. But we're not talking about administering WebLogic. We're talking about creating WebLogic.

[You're missing the point. Of course you can take an existing system and modify it to make it configurable by declaration. But you can't write new systems that way. Let me see you write a new system with new behavior without writing code for that behavior... you simply can't declare behavior that doesn't exist.]

On second reading, I realize that some of the confusion may be over what is meant by "new system".

Again again again, I never said it was all-or-nothing. As far as techniques for hooking up declarative info with code, look how HTML interfaces with JavaScript in an EventDrivenProgramming fashion, and EvalVsPolymorphism. (In the future I expect file systems will be replaced with relational databases, and the "integration" will be tighter and/or simpler.)

It may be implemented in OO, but OO is not the only way to implement it. The user of HTML+JavaScript (app developer) does not know and cannot tell, and is thus not concerned. If you mean the DOM model, then yes, that is OO, but not what is being addressed here. We are not talking specifically about UI frameworks here.

[Because events happen to objects. The event system comes from the DOM (Document Object Model); thus mytag is an object in the DOM and onClick is an event that it throws. Good code, however, isn't written that way, as it's not reliable. Good code uses mytag.addEventListener("click", aFunction). EventDrivenProgramming always uses objects... functions don't throw events, objects do; EventDrivenProgramming is OO by its very nature.]

You seem to be using a rather wide definition of OOP. Or perhaps the so-called "physical" definition. I doubt it is the consensus definition, but please work that out in DefinitionsForOo, not here. As far as your "good code uses...." comment, I would like to see more justification for that statement, if you don't mind. Personally, I usually find it easier to use a visual IDE to add event snippet code to a screen widget. But if one must deal with code, something like:

<widget name="buttonX" .... onClick="clickButtonX()">

is fine by me. DOM and OO are not the only way to get such, just the "in-style" approach. (Ideally, if one has 500+ widgets in a bunch of screens and several hundreds of events associated with them, a database starts to look like a better place to manage such rather than a bunch of files and/or RAM pointers IMO.)
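The widget-to-handler association could just as well live in a table and be served by one small generic dispatcher; a sketch with invented widget and handler names:

```python
# Bindings as rows: widget, event, handler name. With 500+ widgets these
# rows would sit in a database and be found with a query instead of a loop.
bindings = [
    {"widget": "buttonX", "event": "click",  "handler": "clickButtonX"},
    {"widget": "listY",   "event": "select", "handler": "pickItemY"},
]

# Event snippets: small pieces of behavior keyed by name.
handlers = {
    "clickButtonX": lambda: "saved",
    "pickItemY":    lambda: "picked",
}

def dispatch(widget, event):
    """Generic engine: look up the binding row and run its snippet."""
    for b in bindings:
        if b["widget"] == widget and b["event"] == event:
            return handlers[b["handler"]]()
    return None

print(dispatch("buttonX", "click"))  # saved
```

The dispatcher is "dumb" to the meaning of the snippets, matching the event-engine description above; whether the binding is a table row, an XML attribute, or an object pointer is a representation detail.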

[Anyone who dispatches every event to a database needs to learn to program better, if you're still writing code in the Visual IDE properties, don't come preaching about how to program, because you still don't know.]

I suggest you offer better reasoning for claiming that a given something is summarily bad. (Perhaps not under this topic, though.)

What if you need more than "event snippet code"? What if you need to write an entirely new program?

I am not quite sure what you are asking. If you need a new app, then simply make one. I see nothing stopping somebody from making one. Are you talking about creating an event-driven GUI framework from scratch itself?

If we put all of the code for our new app in one event "snippet", then making it isn't so simple.

Who ever proposed putting it in one big snippet? Most decent apps have lots of small snippets.

No-one proposed putting it in one big snippet. I'm trying to get you to explain how you would approach writing a program that doesn't consist of event snippet code attached to GUI triggers. I'm still trying to get you to explain how looking at the code as attributes will help me.

Our new app needs hundreds of thousands of lines of code. It needs structures and lists and queues and sorts and all that stuff they teach in first year CS courses.

Regardless of change patterns, the application needs lists of foos, queues of bars, bizarre sort routines that aren't commercially available, and other kinds of new code. I'm describing the kind of programming I do. It can't and won't be done entirely in databases or off the shelf applications, although it will interact with them.

I would like to see specific examples. Most of the limits I encounter are specific to SQL or a given database engine, not relational in general. Some tools let one supply extra "work" columns in the result set in order to do secondary processing of the result set, for example. In my dBASE days I would often make multiple passes. Sometimes it could be combined into fewer passes, but I kind of liked to divide it into steps for unit testing, etc.

Let's say it's a transaction processing system. Transactions are sent in multiple formats from multiple sources, converted to a common format, logged, sent through multiple workflows (determined by their content) consisting of automated and manual event-driven steps distributed across multiple physical machines, and eventually transmitted in various formats to multiple destinations. The database is used to log the transactions and share "static" data, but it can't be used to fake messaging between the steps. The GUI framework is only used in some of the manual steps. It is not a framework for the entire application.

Please clarify "faking messaging between steps". Note that most bigger RDBMS have scheduling utils to allow scheduled processes and/or periodic polling. Most OS's also have such utilities in case you don't want to write it all in a DBMS language such as PL/SQL.

"Faking messaging between steps" means multiple systems polling a database to see if a value has changed, using tables as their queues. It works for low volume systems where every process can afford to poll the database at a reasonable rate. It is not event driven, though, like a real messaging system. We don't need scheduling and we can't afford polling. We have to have n processes waiting on a transaction and processing it as soon as it arrives in the local queue.
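The difference can be sketched with Python's standard queue module: a blocking get() wakes the worker the instant a transaction arrives, where the table-as-queue approach would sleep and re-poll. (A real messaging system adds persistence, acknowledgement, and distribution across machines; this only shows the event-driven versus polling distinction.)

```python
import queue
import threading

q = queue.Queue()
results = []

def worker():
    while True:
        txn = q.get()      # blocks until a transaction arrives -- no polling
        if txn is None:    # sentinel: shut down
            break
        results.append(f"processed {txn}")

t = threading.Thread(target=worker)
t.start()
q.put("txn-1")
q.put("txn-2")
q.put(None)
t.join()
print(results)  # ['processed txn-1', 'processed txn-2']
```

With n workers all blocked on get(), the first free one processes each transaction immediately, which is the behavior the polling approach can only approximate at the cost of database load.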

Response below under the "emacs" comment.

Let's assume that the event-driven GUI framework already exists, but that one of these events has to trigger truly novel behavior. There's no off the shelf component we can plug in. There are no APIs to call. We have to create a non-trivial program.

I would have to investigate a specific example. Even if by chance our framework is limiting, there is usually a decent compromise if you ponder alternatives a while. It may not be cost effective to make something 100 percent open just to handle a handful of odd requirements. It may be letting the tail wag the dog. Besides, in the age of open source, there is always the source to tweak.

I don't think you understand. I'm not talking about modifying an existing application. I'm describing an entirely new application. No-one has ever written one of these before.

We don't seem to be communicating. I don't understand what you are getting at.

You spoke of "limits" in our framework. There is no framework, we are building the framework.

In the 1950s folks realized that sticking all the code in one "snippet" was hard to manage and wastefully repetitive.

Again, I never proposed One Big Snippet. I don't know where you got that impression.

Again, I'm trying to get you to talk about creating novel applications as opposed to associating event snippets with GUI triggers. I'm sorry if it sounded like I was saying you proposed one big snippet.

We'll need to break the code into manageable units that can be individually tested and reused where possible. One popular way to do that is to define separate procedures and structures. Another popular way to do that is to associate procedures and structures in classes. The way you seem to be advocating (viewing the instructions as attributes) hasn't been popular for 50 years or so. What benefit will your approach give us?

What specific 50's failure cases are you talking about? Lisp? You don't wanna piss off Lisp fans. They will make me look like a pussy cat in comparison.

Machine code. When you talk about "table-izing" code and viewing code as attributes, that's all I can imagine. Give us an example of how you would view the program I'm describing as attributes.

Which one? Several were mentioned.

Either WebLogic 8.1 or the transaction processing system described above.

[Any of em... Top, don't you understand, no-one ever knows what the hell you're talking about. You have all these fucked up weird little ideas about programming but you're in your own little world and you just assume we understand what you're saying. Well we don't. No-one does, quit jabbering nonsense and show us something, show us what you're talking about.]

The bottom line is that without specific requirements and UseCases to compare, communication will probably be difficult. Maybe there are ways to communicate without pointing to code and requirements, but right now I am stumped. -- top

[All you ever show are database schemas and a tidbit of simple procedural code, and you make grandiose claims about how powerful your system is, but you've never shown anything to back up your claims. We all know how powerful OO is, that's why it dominates the industry, but no-one even knows what you're proposing because you don't ever show any programs. You think TableOrientedProgramming is so powerful? Then put up a sample application showing off that power or shut the hell up and quit blowing smoke up everyone's ass like you know what you're talking about.]

My official claim is that TOP is not objectively LESS powerful than OO. I am under no obligation to prove it objectively better. It may all be subjective. I have pointed out actual TOP-based software and toys examples. If you want to show faults in those, be my guest. And, OO does not "dominate the industry". A lot of it is just lip service. -- top

The first one I assume you generally are familiar with. The second one gets into issues of GUI/UI frameworks. I don't think this is the place to battle over GUI framework paradigms, so let's mostly ignore it for now.

The event snippet approach I prefer is more or less like those found in the VB-like tools. I assume you are familiar with VB, right? You build the UI interface interactively with GUI tools. The design is generally declarative in nature, although it does not really matter because the developer does not see the "source" of the GUI itself, only the visual representation and "property lists". If you don't like visual GUI builders, it does not matter much here. You specify widgets either with the mouse or in code. Take your preference.

Then one adds event snippets to screen widgets either by mouse or by some kind of coded association. In VB-like tools an app developer does not know or care how this works internally, and generally doesn't have to. It might be implemented with a GOF Listener pattern or gerbils on wheels, but they don't care as long as the external rules are spelled out. The association between events and widgets can be via object pointers, database keys, or XML tags (such as the on-click example shown around here somewhere). I would prefer a database if I had to dig into the nitty gritty so that I can query and view it any way I please, and you probably prefer object pointers for whatever indescribable reason. Either way it is an association, regardless of its machine or paradigm representation. It is just a link between A and B.

I see nothing in WebLogic so far that cannot be built this way. Maybe the UML drawing engine is an exception because I have never built a UML diagrammer from scratch and thus have no experience there. But beyond that, WebLogic is just regular ol' CrudScreens system that allows users to add, change, delete, list, and search tons of attributes, instances (records), and associations.

You're going to build a distributed application server out of CRUD screens? And what "UML diagrammer" are you talking about?

WebLogic is kind of like Emacs: it is a lot of different tools. In some cases one probably has to get down into hardware-specific nitty gritty. One nice thing about using the database as the primary attribute and communications mechanism is that multiple languages can be mixed into an app. Thus, if some parts need to be in C, they can. C can still talk to the database to get attributes and leave "messages" to other language portions. Mix and match. As far as the "UML graphing", I put a link somewhere around here. I will agree that for some parts RDBMS may not be the best tool due to unpredictable timing due to garbage collection and the like. See AreRdbmsSlow. But just because RDBMS are not the right tool for some parts is no reason to abandon them entirely.

No one here has advocated abandoning RDBMSs. I'm asking how viewing the instructions as attributes will help us write something like WebLogic 8.1. Can you tell me?

It is very simple: You can focus on "what" more than "how". How becomes an implementation detail. I don't know how to communicate it any simpler. That is the power of declarative. What else can I say? An interface like ChartingExample can be designed and shipped off to contractors to implement if it is well-defined enough. (If not, then there will be tuning iterations.) -- top

So the TOP approach turns "how" behavior is implemented into an "implementation detail" that is delegated to someone else. That's not a programming methodology or paradigm. That's project management. The "contractors" are people like me who have to implement your "details".

The "contractor" comment was to illustrate a point, not necessarily to recommend how to divide up the workload. Even if you are the sole developer, it allows you to mentally separate the two. When the what and how get all mixed together, it is more confusing. Details are intermixed with interface info. OO is currently the pinnacle of imperative, but imperative seems to have inherent limits (at this stage). It would probably take something like 20 classes to implement ChartingExample. It takes more time to sift through 20 classes than through an interface that deals with only 2 entities. OO only hides away the details of individual classes. But how they are all interrelated cannot be hidden away. CantEncapsulateLinks. Plus, about half of each class interface is wasted on DatabaseVerbs. The declarative interfaces don't have to deal with most of those because they are "external" to specific APIs. That would be as true with, say, XML as it is with relational. The API does not care who wrote the XML or how. (I know I said some of this before, but it does not seem to be sinking in.)

My interfaces only describe what they will do, not how. Implementation details are not "intermixed" with interface info. I don't see anything you've proposed that will improve that. I can easily hide 20 classes of implementation behind one interface that deals with 2 entities. You seem to be arguing that since you don't have to write the implementation it can be ignored, and that's better than being someone who has to write the implementation and therefore can't ignore it.

Yes, you could hide the 20 behind two, but in practice it does not happen that often. There tends to be about 2 to 20 times more classes than tables for an equivalent application/system. Declarative approaches encourage a stronger division. However, I admit it is hard to give external evidence and metrics for "encourages". And the declarative approach is still a smaller interface because it does not have to include DatabaseVerbs. As far as "ignoring" the implementation, again it is as much about mentally ignoring as it is dividing up labor. It is about groking the system. Fewer "interfaces" and lack of DatabaseVerbs being replicated for each class improve groking.

[See, already we disagree. I have yet to see a typical custom business app that could be built like this. You've simply left no place for complex business rules and processes.]

What is an example that event-driven architectures could not handle? I realize you believe that it cannot easily handle a lot of stuff, but you are failing to communicate specifics on the failures. You are bothered by something in it, but I cannot read your mind. I need specs and/or code.

[Schemas are good at holding simple declarative rules on fields, but nothing too complex. I certainly wouldn't want discounting process workflow for a product line declared with triggers and constraints. It should be more like... keeping simple but rearranging.]

Why couldn't process workflows be done via various event triggers? Event triggers are TuringComplete. However, it sounds more like a periodic "batch job" in this case, so it may work better (from a human maintenance standpoint) using typical batch-job techniques. Again, without seeing specific requirements and UseCases, it is hard to tell.
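To make the "workflow via event triggers" idea concrete, here is a minimal sketch using SQLite triggers from Python's standard library. All table, column, and status names are invented for illustration; a real discounting workflow would of course be larger, but the mechanism is the same: one trigger vetoes an illegal state transition, another logs every legal one.

```python
# Hypothetical sketch: a status-workflow rule enforced by database triggers.
# Table/column/status names are invented; uses only the stdlib sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
CREATE TABLE audit (order_id INTEGER, old_status TEXT, new_status TEXT);

-- Workflow rule: an order may not jump straight from 'new' to 'shipped'.
CREATE TRIGGER order_workflow BEFORE UPDATE OF status ON orders
WHEN OLD.status = 'new' AND NEW.status = 'shipped'
BEGIN
    SELECT RAISE(ABORT, 'must be approved before shipping');
END;

-- Side-effect rule: log every successful status change.
CREATE TRIGGER order_audit AFTER UPDATE OF status ON orders
BEGIN
    INSERT INTO audit VALUES (OLD.id, OLD.status, NEW.status);
END;
""")

conn.execute("INSERT INTO orders VALUES (1, 'new')")
conn.execute("INSERT INTO orders VALUES (2, 'new')")

conn.execute("UPDATE orders SET status = 'approved' WHERE id = 1")  # legal
try:
    conn.execute("UPDATE orders SET status = 'shipped' WHERE id = 2")  # illegal
    blocked = False
except sqlite3.DatabaseError:
    blocked = True   # the trigger vetoed the transition; order 2 stays 'new'
```

Whether this scales to a full discounting workflow is exactly the open question in the debate above; the sketch only shows that simple transition rules fit triggers naturally.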

Interactive UI framework (UI widgets, grids, lists, etc.; event code is usually contained within this layer, i.e., code in the form)

[And that's at a minimum, I could easily add several more layers to any even slightly more complex application.]

Is OO really declarative in disguise?

Re: {Most behavior can be converted to data, aka declarative.}

[That's a bold claim, and I seriously doubt it's true. If that were the case, programmers wouldn't be needed 90% of the time. I have never seen a project where this was the case.]

I think Turing (or his colleagues?) pretty much proved they are the same thing in the end. The debate here is more about usefulness to humans and human grokkability than mere ability to achieve it.

Also note that roughly 40% to 80% of the class interfaces I have seen over the years are composed of essentially DatabaseVerbs or stuff that can readily be converted to declarative. With all the suggestions that "OO is about behavior", most methods are essentially actionized declarative elements. (Didn't I say this already? Deja vu.) OO's behavioral ability is apparently going to waste. Most of it is filling up, finding, linking, and emptying stuff. It is just bad InterfaceFactoring. Factor that all to the root object if you want to clean it up. But then you would have a..........DATABASE? Tada!
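The "replicated DatabaseVerbs" complaint can be sketched in a few lines. The class and method names below are invented; the point is that each collection class reimplements the same add/find/remove verbs, while the "factored" version states them once in a generic table-like structure.

```python
# Illustrative sketch (invented names): the same add/find/remove "DatabaseVerbs"
# reimplemented per class, versus factored once into a generic table.

class CustomerList:                      # verbs written out here...
    def __init__(self): self._items = []
    def add(self, c): self._items.append(c)
    def find(self, pred): return [c for c in self._items if pred(c)]
    def remove(self, c): self._items.remove(c)

class OrderList:                         # ...and duplicated again here
    def __init__(self): self._items = []
    def add(self, o): self._items.append(o)
    def find(self, pred): return [o for o in self._items if pred(o)]
    def remove(self, o): self._items.remove(o)

class Table:                             # factored out once: a mini "database"
    def __init__(self): self._rows = []
    def insert(self, row): self._rows.append(row)
    def select(self, pred): return [r for r in self._rows if pred(r)]
    def delete(self, pred): self._rows = [r for r in self._rows if not pred(r)]

# Any entity is now just rows in a Table; no per-entity verb boilerplate.
customers = Table()
customers.insert({"name": "Ann", "region": "west"})
customers.insert({"name": "Bob", "region": "east"})
west = customers.select(lambda r: r["region"] == "west")
```

An OO proponent would reply that generics or a common superclass remove the duplication too; the sketch only illustrates the pattern being argued about.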

[Most elements are actionized declarative elements when that's all you know how to do, sure, but we do far more than that. Don't blame OO because your programs are too simple to need it.]

Without something more specific, we are back to a DomainPissingMatch. The question remaining is where, when, and how do declarative approaches fail; or at least fail to be expressive enough, hence we return to the title of this topic.

[You're always missing the point, and I think you do it on purpose. There's a lot more to programming than just verbs/functions. Verbs and functions are the basic building blocks, the smallest abstractions; you use them to build bigger abstractions - objects - and you use them to animate a model, DomainDrivenDesign. Tough problems are solved by having entities interact with each other in intelligent ways; this greatly simplifies the problem by hiding complexity. As a programmer you have several forms of abstraction available to you - functions/objects/interfaces/closures - and you should be using them all, for they are all necessary in just about every single program. If you only use functions, then you're missing out; you're crippling yourself for no reason. Functions don't handle or model state very well, objects do; objects don't model verbs as well as functions. The best languages allow you to use both whenever you need them.]

[If I'm modelling a warehouse business, for example, then it makes sense to actually model it: to have a warehouse and products and move things into and out of inventory, just as the real items in the real world do. It's not just data; yes, the data is important, but it's making sure that data is valid and consistent that makes the model valuable. Some problems in the domain will be best modelled as functions that work on objects passed to them; others will be best modelled as objects that have capabilities or allow certain kinds of interactions. Your arguments always revolve around the idea of rejecting abstraction and having the database do everything, and that simply isn't an option in the real world. Abstraction is what enables us to make complex software while keeping it free of bugs. You either take advantage of all the tools available to you or you don't. If I can hide a complex piece of functionality under a declarative word, I will, if it makes sense to do so. If you can get away with being a one-trick pony, good for you, but don't criticize the rest of us for learning to use everything at our disposal.]

The biggest reason for "complexity" that I encounter is relationships AMONG things, not so much complex actions on individual, isolated things. A warehouse similarly involves lots of relationships. For example, asking how long an item has been in a given warehouse is about a relationship between the product and the warehouse itself. Most biz apps are about tracking and managing relationships between stuff, at least those that I see. OO has nothing built-in to help with these. If anything, encapsulation is anti-relationship, or at least does not help (see CantEncapsulateLinks and PrimaryNoun).

Even physical modeling involves a lot of relationship tracking. In most video games, things just don't spontaneously do things (at least that is not the hard part). They have to interact with other nouns before something happens, and the result depends on the kinds and attributes of the things interacting, such as the "energy level" attribute of each item. Somewhere in such games there is probably something similar to a many-to-many table which defines the results of an encounter for each kind of thing.
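The game-style lookup being guessed at above can be sketched as a small many-to-many table keyed by the kinds of the two interacting things. Everything here (the kinds, the outcomes) is invented for illustration; the point is that the interaction rules live in data rather than in per-class methods.

```python
# Sketch: encounter outcomes kept as a many-to-many table keyed by the kinds
# of the interacting things. All names/values are invented for illustration.

encounter = {
    ("fire", "ice"):   "melt",
    ("fire", "wood"):  "burn",
    ("water", "fire"): "extinguish",
}

def resolve(kind_a, kind_b):
    # Check both orderings, since the relationship is symmetric in this sketch;
    # fall back to "nothing" when no rule covers the pair.
    return (encounter.get((kind_a, kind_b))
            or encounter.get((kind_b, kind_a))
            or "nothing")

result = resolve("ice", "fire")   # found via the reversed key
```

Adding a new interaction is then a row insert, not a new method on either class, which is the declarative claim being made in the surrounding text.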

If you have a specific scenario where declarative approaches sour, I would like to hear about it. If we are to explore multiple paradigms, let's also identify their weaknesses with something more than anecdotal evidence. I don't claim that declarative approaches solve every single problem well, just the vast majority of those I encounter. If by chance it makes 90 percent twice as easy and 10 percent twice as hard, then declarative still wins in the end. If there is a class of problems that trips up declarative, I have yet to see it, or at least I don't see it often enough to turn the whole boat around.

As far as OO being more "abstract", I would like to see your justification for that claim. I find relational more abstract because it factors out DatabaseVerbs into the database instead of replicating them for each and every entity, as the laws of encapsulation dictate for OO. Encapsulation results in each class having to reinvent the DatabaseVerbs wheel in order to be "self handling", and thus scores lower on OnceAndOnlyOnce, which tends to indicate lower abstraction.

[Languages have statements; more abstract than statements are procedures/functions, which are assembled from statements and give scope to a group of statements. More abstract than procedures/functions are objects, which are assembled from procedures/functions and give scope to a group of procedures/functions. Each one is a higher-level abstraction built upon and with lower-level abstractions. I said nothing about relational; I said objects are a higher-level abstraction than procedures/functions, a statement of fact, as I've just demonstrated.]

Well, okay, I will agree that objects are indeed a higher level of abstraction than functions by themselves in general. But relational is higher yet than OOP IMO because it factors commonly-found state, attribute, and relationship management interfaces into a single shared interface, which OO cannot do (without turning into a database). And, OO is not orthogonal to relational for the most part; so given a choice, I will go with relational and move the purer behavioral stuff to functions. -- top

[Don't compare OO to a database. OO is a programming technique, a way to organize source code; databases are programs. Apples and oranges. You can compare relational to OO, but not databases, and when you do that... you need to get specific, so you're really comparing SQL, in various flavors, to OO. Let's at least get that straight.]

It is not just about SQL. For example, in ChartingExample the charting engine does not know or care how the chart config attributes got there. It only takes them and acts on them. An OO version would have methods to add series and link them with data sources, or even addDataPoint methods. There is no equivalent to an OO addSeries method in that interface. It does not exist as part of that interface. Think about it. OO interfaces spend a lot of methods and code on basic DatabaseVerbs stuff: filling things up and associating them. That is usually where OO conflicts with databases and declarative thinking. To me OO stinks largely because it spends so much code and interface room on stuff it is shitty at. You claim OO is all about behavior, but in practice it's mostly declarative stuff disguised as behavior (see above). Factor that crap out to a larger thing/paradigm/technique, and OO may then look more inviting to me.
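A minimal sketch of the declarative style being claimed for ChartingExample, with all names invented: the "engine" only consumes attribute rows, so there is no addSeries() in its interface, and anything that can produce rows (a table, a file, hand-typed text) can drive it.

```python
# Sketch (invented names): the charting engine consumes attribute rows and
# does not know or care how they were produced -- there is no addSeries() API.

series = [  # could equally have come from a table, a file, or hand-typed text
    {"name": "revenue", "color": "blue", "points": [1, 3, 2]},
    {"name": "costs",   "color": "red",  "points": [1, 1, 2]},
]

def render(series_rows):
    # A stand-in "engine": it just reads the attributes it needs from each row.
    return [f"{s['name']}({s['color']}): {len(s['points'])} pts"
            for s in series_rows]

lines = render(series)
```

The OO counter-argument in the reply below is that an addSeries() method is where validation naturally lives; this sketch only shows the interface-size difference being claimed.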

[See, you still don't get it. OO isn't trying to handle things it's bad at. Most OO programs in the business realm happily use relational databases. A database will store the relationship between an order and a line item, but the OO program enforces process and complex rules on the establishment or destruction of that relationship, something databases suck at; OO is augmenting and extending the capabilities of the database. Collections in OO programs are often nothing more than pretty wrappers over a generated SQL statement, using the database to filter the set to the proper result, exactly what databases are good at. Collections often offer a predicate-based query API native to the language that is easily translated into SQL, which is then executed to fill the collection. Rather than passing a collection of rows to a function, we reverse it and pass the function to the collection of objects. So while you work at the low abstraction level of "select", we get to work with higher-level abstractions such as "do, select, reject, detect, inject, first, rest, last", which can all be translated down to SQL's select, removing much of the burden from the programmer. I much prefer "detect" to "select top 1" and "reject" to "select where invert conditions"; abstraction is a good thing. OO is quite happy to take advantage of what relational techniques offer, while keeping all the advantages OO offers. You might see AddLineItem?(anItem) in the interface, but that's because that is the natural place to enforce rules about addition of line items to an order; it'll still be a foreign key in the database keeping track of the relationship. It'll still be SQL statements generating reports and doing complex joins and subqueries to get derived data, but once that data is moved into memory, it'll be wrapped in an object to simplify programming with it.]
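The Smalltalk-flavored collection protocol described above (select/reject/detect) can be sketched in-memory in a few lines. This is only an illustration of the API shape; a production version would, as the paragraph says, translate the predicate into SQL rather than filter a Python list.

```python
# Minimal in-memory sketch of a select/reject/detect collection protocol.
# A real implementation would compile the predicate to SQL; this only shows
# the API shape the paragraph above describes.

class Collection:
    def __init__(self, items):
        self._items = list(items)
    def select(self, pred):            # keep matching items
        return Collection(i for i in self._items if pred(i))
    def reject(self, pred):            # drop matching items
        return Collection(i for i in self._items if not pred(i))
    def detect(self, pred):            # first match, like "select top 1"
        return next((i for i in self._items if pred(i)), None)
    def to_list(self):
        return list(self._items)

orders = Collection([{"id": 1, "total": 50}, {"id": 2, "total": 500}])
big = orders.select(lambda o: o["total"] > 100).to_list()
first_small = orders.detect(lambda o: o["total"] < 100)
```

Note that select/reject return new collections, so the calls chain; detect returns a single item or None, mirroring the "detect vs. select top 1" comparison in the text.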

One can put function wrappers around frequently-used SQL statements also if the SQL grows ugly. However, doing it for SQL that is called/used from a single spot is often a waste of code. (There is another topic about that somewhere.) SQL is a fairly high-level language. It is often hard to beat its expressiveness with imperative code. Even Capers Jones' allegedly pro-OO study showed this. Wrapping high-level concepts with different high-level concepts is not a road to effective abstraction because of the translation tax/overhead. Only wrap if there is a big difference in abstraction levels. (See WrappingWhatYouDontLike.)

[See, I don't want to pass around rsCustomer, I want to pass around aCustomer, because aCustomer can protect itself, while rsCustomer can't. Procedures working on aCustomer, can't violate its integrity, but procedures working on rsCustomer can. aCustomer is easier to work with than rsCustomer. When I add aCustomer to anOrder, anOrder can protect itself against invalid customers, but when I set rsCustomerID field in rsOrder, rsOrder can't protect itself, I could link it to the wrong kind of customer. Objects let us pass around the data with the rules attached to it, because fundamentally, there's no difference between any of the following.... Integer, DateTime, Decimal, Double, String, Customer, Order, LineItem?, Product, Money... they are all datatypes, and programming is about working with datatypes and doing work with them. Simple datatypes come built in, as they apply to all problems, but more complex data types are very specific to certain problems, so you must build them yourself, things like Customer, Money, and Order. You seem happy programming with simple datatypes, why can't you accept the more complex ones that are domain specific?]

We are drifting here from "expressiveness" (the topic) toward integrity enforcement. Perhaps we should move such discussions to DatabaseNotMoreGlobalThanClasses, GateKeeper, and ThereAreNoTypes (I don't believe in the usefulness of "complex" types.) The tight relationship between nouns and verbs does not exist strongly in the real world. A one-to-one relationship is phoney. I reject the "self-handling nouns" approach of OO as superior until I see it in action. Artificial, forced tight-coupling between nouns and verbs is a ticking maintenance bomb.

Keep in mind that putting the integrity checks at the database (referential integrity, update triggers, etc.) allows multiple languages and multiple classes to be under the same rules. If you truly want global enforcement that app developers can't flub up, the database is the place to put it. In the end, OO classes are only conventions, not sure-shot integrity problem prevention mechanisms.

A better question is why bother; an object is already a data structure, and you don't gain anything by making it a struct.

Yes there is. One can alter, customize, and query one's view of it, as described in CodeAvoidance. I don't know about you, but I often don't like the original form code comes in. I like to customize my view. Maybe I am spoiled by my past use of NimbleDatabases, or just mentally deficient due to lack of being breast fed as an infant in that I cannot alter the view in my head instead of on the screen. But doesn't it at least seem like a good idea to separate meaning from presentation? It is better meta-ing, if you will. Even MS is slowly moving in that direction with their CLR, making debates about whether C# is better than VB.Net mooter and mooter. (Keep in mind that I would rather have relational table structures than map structures {OO}, but will take the second if I cannot get the first.) -- top

Expressions

I will concede that "expressions", such as Boolean expressions and mathematical expressions are often hard to deal with in a data-centric format (such as an AbstractSyntaxTree), unless they contain some kind of heavily-repeated pattern. Thus, expressions tend not to be items that I convert/factor into tables or data structures. (Although expressions can be stored in tables.) -- top
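Even the concession above notes that expressions *can* be stored in tables. Here is a minimal sketch of what that looks like: each row of an invented node table is one AST node, and a small interpreter walks the rows. It also shows why the concession is made, since `(2 + 3) * 4` is far more readable as an expression than as five rows.

```python
# Sketch: an expression stored in table form, one AST node per row.
# The layout is invented for illustration; it encodes (2 + 3) * 4.

nodes = {  # id -> (operator-or-literal, left child id, right child id)
    1: ("*", 2, 3),
    2: ("+", 4, 5),
    3: (4.0, None, None),
    4: (2.0, None, None),
    5: (3.0, None, None),
}

def evaluate(node_id):
    op, left, right = nodes[node_id]
    if left is None:                   # leaf row holds a literal value
        return op
    a, b = evaluate(left), evaluate(right)
    return a + b if op == "+" else a * b

result = evaluate(1)                   # (2 + 3) * 4
```

The repetition-free, heavily nested structure of typical expressions is exactly why they gain little from tablizing, which is the point being conceded.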

Some of us call those expressions... programming; maybe you should look into it.

My attempt to answer the question in the title: Is declarative less expressive?

Yes. Because a single declarative statement may map to several imperative implementations, the declarative is less expressive. In other words, it can't express which implementation to use without becoming less declarative.

Isn't this true of imperative implementations also? Thus, by itself it's not a distinguishing factor. -t

The extra expressiveness of an imperative program can be a disadvantage whenever automation can provide a pragmatic implementation from a declarative program, especially if that automation can take into account variables that are unknown until runtime; i.e., the same declarative program may end up being executed in different ways at different times depending on the runtime situation.
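That runtime-choice point can be sketched in miniature (all names invented): the caller states only *what* it wants, and the runner picks between two execution strategies based on a runtime condition the caller never mentions, much as a query optimizer picks between an index lookup and a full scan.

```python
# Sketch: one declarative request, two possible executions chosen at runtime.
# Names are invented; the "index" stands in for any runtime-only information.

def run_query(rows, field, value, index=None):
    if index is not None:
        # "Plan" 1: index lookup -- possible only when an index exists.
        return [rows[i] for i in index.get(value, [])]
    # "Plan" 2: full scan -- always possible.
    return [r for r in rows if r[field] == value]

rows = [{"city": "Oslo"}, {"city": "Lima"}, {"city": "Oslo"}]
idx = {"Oslo": [0, 2], "Lima": [1]}    # an index on "city"

scan_result    = run_query(rows, "city", "Oslo")
indexed_result = run_query(rows, "city", "Oslo", index=idx)
# Same declarative request, same answer, different execution path.
```

The caller's request carries no plan at all, which is precisely the "less expressive, and that can be an advantage" trade-off the paragraph describes.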