
In truth, the whole issue with OOP can be that doing things right means breaking everything into modules, which can turn something otherwise relatively simple into something very complex. That's not to say there aren't long-term benefits to doing so, but it increases code complexity, provides more opportunity for bugs because of the inflated line count from trying to force it, and of course costs more development time up front. Again I go back to the chatbot example I used earlier: a non-OOP imperative approach makes writing it trivial and, in most cases, relatively easy to follow. However, you get extensibility and modularity benefits in the long term from going the OOP route, even though it makes designing and writing the bot a pain in the neck. Seriously, just think about designing an OOP IRC bot and compare that to designing it with other imperative techniques; it's orders of magnitude more complex.

Also, some use cases don't really benefit from it. Scripting, for instance, is naturally antagonistic to OOP because of the sheer simplicity it requires, and even if you did try to write OOP scripts there wouldn't be much, if any, benefit to it.

The way I see it, it is not so much that OOP is 'bad'; the problem is thinking that OOP SHOULD be at the core of programming, that is, thinking that OOP is fundamental to programming. I know all this is more an academic exercise than anything else, but it is nevertheless worth discussing; just because the industry does it this way doesn't necessarily imply it is the natural, or best, way of doing things.
For example: why isn't mathematics central to programming (in practice)? It is as if you take all these 'empirical facts' and try to organize them into something called 'software engineering', which, by the way, sometimes seems to be everything but ENGINEERING.

I'd like to quote mathematician and programmer Alexander Stepanov:

"I think that object orientedness is almost as much of a hoax as Artificial Intelligence. I have yet to see an interesting piece of code that comes from these OO people. In a sense, I am unfair to AI: I learned a lot of stuff from the MIT AI Lab crowd, they have done some really fundamental work: Bill Gosper's Hakmem is one of the best things for a programmer to read. AI might not have had a serious foundation, but it produced Gosper and Stallman (Emacs), Moses (Macsyma) and Sussman (Scheme, together with Guy Steele). I find OOP technically unsound. It attempts to decompose the world in terms of interfaces that vary on a single type. To deal with the real problems you need multisorted algebras - families of interfaces that span multiple types. I find OOP philosophically unsound. It claims that everything is an object. Even if it is true it is not very interesting - saying that everything is an object is saying nothing at all. I find OOP methodologically wrong. It starts with classes. It is as if mathematicians would start with axioms. You do not start with axioms - you start with proofs. Only when you have found a bunch of related proofs, can you come up with axioms. You end with axioms. The same thing is true in programming: you have to start with interesting algorithms. Only when you understand them well, can you come up with an interface that will let them work."

I'd like to seriously question the foundational character that OOP has been receiving for some time now; personally, I'd take an "algebra for programming" over OOP any time, but then again, that might be practically unattainable.


What do you guys think?

Well, see, I disagree with that assessment, because as much as programming has to do with math, programming isn't math. Programming is really about language: expressing and executing ideas. This is, again, not to say that it doesn't sometimes require a serious amount of math, but in and of itself it is not math. At its fundamental level you're giving directives to the computer in order to make it do something. This requires a means of doing so that is easy for the people writing code to understand and easy for the computer to translate into something it can execute.

I would personally argue that OOP is, for the most part, the natural way this intermediate language shakes out. Here's the thing; let's take an example sentence:

Sally went to the store and picked up a bunch of bright red apples

fundamentally this breaks down into two noun-verb-noun clauses, where one object is acting upon another object; in short:

Sally goto store. Sally pickup apples

and of course these apple objects have the property of being bright red. Even databases are actually taught in an object-oriented fashion; guess what normalization is? It's the process of breaking tables of data down into objects. In fact, all of our data languages, such as XML, are object-oriented: you've got, say, a paragraph object with properties such as font type, font size, and color, and data languages really can't not be object-oriented.

Now, implementation-wise it's sometimes not the best thing to go with; however, from a conceptual standpoint OOP is the "best" approach, because it most closely matches how our non-programming languages, such as English, are designed.

EDIT: Math is arguably another language with a different, though similar, target than programming languages: expressing calculations, as opposed to expressing ideas to a computer. Math can then be used as another language underneath a programming language, since little further translation is needed for a computer to understand it.

Will Unity Next depend on Mir?
Or will you be able to use Unity Next on Wayland and X11 too?

I don't think Mir is in a position where anything could depend on it at the moment. And when it is, I don't think they would leave users without an X11 version for some time. Unless they are even crazier than I thought.


Well, Unity is an Ubuntu thing, and Mir is an Ubuntu thing.
So Ubuntu could easily make Unity depend on Mir and not care.

You either didn't understand what I was saying, or I expressed myself badly.

I know very well what modular means. Might I suggest rereading my post?

Your paragraph seemed to be going on about direct access, saying that making things private or protected cuts down on modularity, which is quite simply not the case. Direct access is actually arguably a bane of modularity, simply because it allows you to rely upon an implementation as opposed to an interface. The point of access levels is the capability to define those interfaces; the reason you declare things shouldn't simply be to stop bad usage, and in fact doing it for that purpose indicates the person writing the program is doing things wrong from an OOP perspective.

The entire point of access levels is to be able to define interfaces: you've got a public interface, which is how people interact with your objects; a protected interface of things useful to people deriving from your class that, in most cases, don't/shouldn't need to be changed; and a private interface/implementation that, together with the protected interface, forms the implementation of how it works.

Let me put this another way for you: a class is like a miniature program. It takes switches (member functions) and parameters (properties), does stuff with them, and gives you a result. With ls or grep or whatever, you don't really care how it's implemented; all you care about is the results. Same idea with a class, so separating the interface (public) from the implementation (protected, private) is very important to good OOP design, and again helps increase modularity, since you're relying upon an interface as opposed to an implementation. It's like relying upon the HTML standard as opposed to how IE does things.

And again, yes, people can get it wrong, but people can also crash cars; that doesn't mean either cars or OOP is bad. It simply means that the person writing the code or driving the car was bad. Just because Java is a shitty implementation of an OOP language doesn't mean that all OOP language implementations are bad, or that the idea is innately wrong. Just because Java developers don't understand OOP, and think those features exist so they can be control freaks and limit "bad use", doesn't mean the rest of us are like that. For the most part we're going "WTF, Java? Being a megalomaniacal control freak is not how OOP works" and are instead simply using access levels for interface/implementation separation.