This might be a good thing for Qt. It is the BEST C++ toolkit for many high-quality applications. It was being dragged down by Nokia's anemic policy on where to head with a mobile OS. Let's hope it doesn't end up at Oracle. :p

And hopefully this means that Qt will go back to focusing on the desktop widgets instead of QML and the inane pipe dreams about mobile devices that never came to exist.

Nooo... QML (once the desktop components are finished) is the best thing to happen to Qt in a long time. Hardware accelerated, easily animated, interpreted -- it's really slick and much easier to create custom widgets with.

Qt's largest growth sector is embedded systems, and QML is the driving force behind this market. You cannot get the performance from widgets that you can from QML objects (well, you can if you rewrite the widgets in a lightweight framework like QGraphicsView or the scene graph, but then you would essentially have QML).

I don't know where you get your facts, but QML behaves very well in highly animated GUIs on fairly low-end embedded hardware. The fact that it is backed by a highly optimized scene graph engine that removed the overhead of QGraphicsView makes QML perform even better.

The comment above about Digia is greatly misleading. Digia focuses on the commercial license market which is a legacy business. The growing embedded market uses the LGPL version and gets support from the open source community. Companies like ICS and KDAB are growing at a very fast pace servicing this market. Digia has not been able to transition well to the embedded space.

If you rewrote QWidgets in a lightweight framework like QGraphicsView or a scene graph, at least it would be C++ instead of a web developer's language! At least you would have C++ bindings for the scene graph, instead of nothing at all! At least you would still have widgets, instead of having to reinvent the push button for every project!

QML is the result of a political decision by Stephen Elop and Lars Knoll. Widgets bad, get rid of them. C++ bad, get rid of it. Desktop bad, everything is now 100% mobile focus.

Like it or not, thanks to Windows 8 "real desktop application" and "Metro app" will increasingly mean the same thing in the future. QML is a result of Nokia's (failed) mobile efforts, but thanks to it, Qt (unlike pretty much any other toolkit) is actually able to create competitive interfaces, regardless of whether the competition is using traditional widget-based interfaces or Metro-style interfaces.

So yeah, I agree with GP here, QML is the best thing to happen to Qt in a long time.

QML is a result of Nokia's (failed) mobile efforts, but thanks to it, Qt (unlike pretty much any other toolkit) is actually able to create competitive interfaces, regardless of whether the competition is using traditional widget-based interfaces or Metro-style interfaces.

It's not the only toolkit - WPF is quite similar, and shares many of the same concepts. Qt is the only such C++ / native code toolkit, though, and it's a fair bit faster than WPF. And portable to boot.

They could have made widgets hardware accelerated and easily animated, but that would have required 'hard work', and it's more fun to just shove out new crap instead. Luckily, Digia seems to have people who realize what the real needs of the vast majority of Qt developers are, and so they continued to fix and improve widgets while the Nokia guys were having fun in their QML circle jerk into irrelevance.

They could have made widgets hardware accelerated and easily animated.

Yes. And that's how QML came to be: when you actually try to make "widgets" do all that, you end up with something that's not widgets anymore. Do you seriously believe the mindset was "let's come up with something new from scratch, we've got too little work to do"? The legacy widget model has performance issues that simply cannot be overcome within that model. If you don't understand that, you need to do some research first; perhaps actually try coding something up and convincing everyone how your supercool painter-based widget model keeps up with the competition.

There's no way to get good performance from a painter-based architecture that asks everyone to repaint their part when something changes. This model made sense for a while, because common graphics hardware was generally slow and had no acceleration to speak of for graph-based representations of the visuals. It doesn't make any sense anymore. When a window moves and is to be recomposited, you shouldn't have to transfer more than a command or two to the graphics card to change a couple of coordinates; it'll be picked up next time the rendering is done. In the painter-based model, at best you have back buffers for every window (even if a window has a flat background that could be represented by two flat-shaded triangles -- two dozen numbers at most, not a megabyte), and those back buffers have to be composited.

The widget model not only sucks performance-wise, it also sucks resource-wise: you need a lot more memory and a lot more memory bandwidth to render even fairly simple things.
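The difference can be sketched in a few lines of C++ (purely illustrative: the names Node, Scene and paintContents are made up, and this is not how Qt's scene graph is actually implemented). In a retained-mode design, moving an item only rewrites its coordinates; the cached rasterization is reused instead of repainted.

```cpp
#include <cstddef>
#include <vector>

// Toy retained-mode node: the renderer keeps the geometry, so moving
// a node only rewrites two coordinates -- it never re-rasterizes the
// node's contents.
struct Node {
    float x = 0, y = 0;      // position in the scene
    int paintCount = 0;      // how often the contents were rasterized
    void paintContents() { ++paintCount; }  // expensive in real life
};

struct Scene {
    std::vector<Node> nodes;
    // First frame: rasterize everything once.
    void buildFrame() { for (auto &n : nodes) n.paintContents(); }
    // Subsequent moves: just update coordinates, reuse cached output.
    void moveNode(std::size_t i, float dx, float dy) {
        nodes[i].x += dx;
        nodes[i].y += dy;    // no repaint triggered
    }
};
```

In a painter-based model, every moveNode would instead dirty the region and trigger another paintContents for everything underneath it.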

I wish to elaborate on why the painter model is inefficient with today's GPUs.

The painter employs an imperative approach that does not allow much freedom. Example: begin(), line(), text(), line(), end(). The two line() calls should be grouped together, but they cannot be, because then the result would not be the same (what if text() drew over the first line, and then the second line() call drew over the text?). The result is pretty bad: the underlying implementation has to perform tons of unnecessary shader switches (since font rendering most likely uses different shaders than the line-drawing code), and perhaps texture switches (if texture-based AA is used). In addition, every time the painter is used, a vertex buffer has to be filled with vertex data. It cannot be easily cached. And this applies to *every* begin..end painter sequence.
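A toy illustration of that cost (the function and the call-to-shader mapping are invented for the example, assuming each painter call maps to one shader): counting state switches in the order the calls were issued shows why the line/text/line sequence is expensive, while a reordered sequence would need one switch fewer.

```cpp
#include <string>
#include <vector>

// Count GPU state (shader) switches for a sequence of draw calls
// executed strictly in order, as an imperative painter must.
int stateSwitches(const std::vector<std::string> &shaderPerCall) {
    int switches = 0;
    std::string current;  // empty = no shader bound yet
    for (const auto &shader : shaderPerCall) {
        if (shader != current) { ++switches; current = shader; }
    }
    return switches;
}
```

The painter cannot do the reordering itself, because it must assume later calls may overdraw earlier ones.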

A declarative QML-like approach is a much, much better idea. The fundamental reasons are that (1) the renderer now always has a global picture of what the frame shall look like, (2) intermediate results are much easier to cache, and (3) no strict sequence of drawings is given, so the renderer is free to reorder and merge draw calls in any way it wishes. This benefits even pure CPU-based rendering: the Enlightenment Foundation Libraries [enlightenment.org] render using a graph and are extremely efficient (they clip and cull primitives early on, group primitives together, and IIRC can even detect accumulated opacity from several alpha-blended layers).
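Point (3) can be sketched as follows (hypothetical names; a real scene graph sorts by far more than a shader string, but the idea is the same): because the renderer sees the whole frame up front, it can group primitives by GPU state and issue one draw call per batch.

```cpp
#include <map>
#include <string>
#include <vector>

// A primitive in the frame, tagged with the shader it needs.
struct Primitive { std::string shader; int id; };

// With a global view of the frame, primitives can be merged into one
// batch per GPU state instead of one draw call per primitive.
int batchedDrawCalls(const std::vector<Primitive> &frame) {
    std::map<std::string, std::vector<int>> batches;
    for (const auto &p : frame) batches[p.shader].push_back(p.id);
    return static_cast<int>(batches.size());  // one call per batch
}
```

(A real renderer also has to check that merged primitives do not overlap in a way that changes the result; that test is omitted here.)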

C++ QML bindings would likely consist of an API that can modify the graph. Either way, the painter-based approach is gone.

The real awesomeness about QML is not the visual graph - for a high-level UI developer that's an implementation detail. What makes it awesome is declarative UI markup, and flexible data bindings to the model. It's what MVC should have been from day 1.

I can certainly understand the advantages to graphics developers. I'm just looking at it from my own angle, which is more about placing buttons together.

And yes, it's certainly a good idea overall. I was hoping that someone would do something like that ever since WPF came out and showed what's possible there (for all its perf flaws and XAML verbosity). QML takes the same ideas and pushes them even further. I only wish they had full-fledged widgets available there from day 1.

If all applications were about placing buttons together, you'd be right. But then people ask, "ah, well, how hard can it be to have translucent/non-rectangular windows?". And then they say, "well, now I'd like this one widget to be translucent, see". And boom. The painter-based model falls apart. Even if nothing is translucent, hardware-accelerated rendering is only effective with a non-painter rendering model.

Well, again, from the angle of the guy who places buttons together, all I care about is having a two-line way of saying "now make this translucent please". Then of course I'll also want various composable render transforms, preferably complete with declarative time- and event-driven animations, and so on. The precise graphics model that backs this is, again, an implementation detail from that perspective - one only cares that the features it enables are all there.

So is that why C++ was dumped in favor of Javascript? Because Javascript is faster?

It wasn't; you're confusing a few things. Javascript is used as the glue that ties a few events and methods together, but you still pass them down to your C++ program logic to do the work. If you just need to change a radio button when another one is pressed, that hardly needs any C++ logic; a little bit of JS is quite sufficient and makes no noticeable difference to performance.

Huh, a class that handles user events and draws to a rectangular region is too slow?

Yes. Because when the graphics card does the actual rendering on modern 3D hardware, you don't need the rectangular window anymore, nor its memory and bandwidth baggage. If you actually understood how it works, and what it takes to composite legacy rectangular widgets, you'd see that. As it is, you're talking out of your ass, demonstrably without any technical understanding of what's involved. Sorry.

There are areas to improve painting but that's a reason to fix QPainter.

It's got nothing to do with fixing QPainter; you're delusional if you think so. The model wh

This is an imperfect solution which assumes that redrawing inside the window (which is the OpenGL texture) is done quickly enough. It also splits the system into two parts: the hardware-accelerated window compositing part and the unaccelerated drawing part.

You do know you can use hardware acceleration to draw into an OpenGL texture, right?

I am a bit confused as to what you are arguing for. I agree that Qt widgets are vastly overweight, and a redesign that makes it practical to have many orders of magnitude more widgets representing much simpler objects such as lines would be more efficient and powerful, and this is what the newer designs are attempting. But it has nothing to do with moving the drawing to hardware acceleration.

Of course you can use framebuffer objects to draw into a texture. In fact, if I were to implement such a split solution (render textured quads with OpenGL for compositing the widgets, and use the CPU for drawing the widgets themselves), I would use FBOs. That still does not change the fact that drawing the widgets themselves remains unaccelerated.

You joke, but there's already a lightweight Qt-based desktop environment called Razor-QT that runs quite nicely on the RasPi, though it's a bit rough around the edges since it's still in development. If all else fails, the KDE project could probably assimilate it.

Maybe SUSE (Attachmate) could buy it, or even better, Canonical. SUSE could keep it going, but Canonical is trying to develop a toolkit from the ground up for Unity3D based on NUX, and it is really terrible compared to Qt; it will take them 5+ years to catch up. Forever in this business. It would make much more sense to move Qt in the direction they want to go.

I've wondered how long it would take to make a Unity (or Gnome 3) clone shell with Qt and Plasma. The advantage of Plasma is that you can easily swap shells on the fly and give users choice (though frankly I think the traditional KDE desktop is far more usable than Gnome 3 or Unity).

However, if anyone should purchase Qt, it should be Google. They can guarantee it will stay GPL. And Google themselves need to learn a few things about cross-platform apps. Apps like Picasa, Google Earth, Chrome, etc. probably should have used Qt from the beginning.

Having used both, VCL has nothing on Qt. It also doesn't hurt that Qt is free and cross platform while VCL costs a fortune and has only recently gained OS X support. However now that Nokia appears intent on offloading Qt, I'd worry about Qt's long term future.

GTK+ apps look out of place on Windows, even more so on Mac. In addition to that, Qt just integrates a lot better into the native tool chain (e.g. Visual Studio, Xcode). Prior to being bought out by Nokia, Trolltech were charging $1500 per developer, per platform for Qt. And Trolltech were profitable! It is *that* good a toolkit. It's benefited immensely from being backed commercially and it shows.

Will this continue after Nokia bails? Will the pace of development slow, to the extent that it no longer integrates as well with new tool chains and platforms? That is an unknown and I really hate unknowns...

While it should just work, for other reasons I've got "gtk+-bundle_2.24.10-20120208_win32" installed, set its \bin in my %PATH%, and even went so far as to symlink \gtk to it. This is all done because I've got several versions laying around and

You just started worrying? I started back when Trolltech was acquired. This only got worse when MS "acquired" Nokia. (And yes, I *do* know that officially that hasn't happened.) I will be quite worried about who they find to sell it to. This is all the more significant with the Gnome folk trashing their own libraries. It's enough to make one suspect outside influences.

I haven't used Qt so I can't speak to the comparison, but VCL is actually pretty awesome. I've always liked CBuilder. It was doing RAD well and correctly back when MS's solution was to make the DLLs in Visual C and the UI in Visual Basic. Remember that mess? I will always have a fond spot in my heart for CBuilder for being a better alternative.

So take VCL, and couple that with Project Jedi [delphi-jedi.org] and you've got a great dev environment. Scores of smart widgets, panels, sliders, panes, etc. If there is anythin

The problem is that "high quality software" should handle errors -- it should not crash when an error occurs (in the worst case, it should gracefully shut down), and it should not ignore errors. Error codes are fine as long as you actually handle them, but in practice the effort required to check the return value of every function call (and you think checked exceptions are annoying?) leads to some errors not being detected or handled. That is why exceptions are good in theory; but if in practice, an exception can cause a program crash even when there is an exception handler waiting to catch the exception (this is the case in C++), then they are not a good way to deal with errors.

As for embedded systems, if there is enough computational power to display a GUI, I really do not see how exceptions are problematic.

And high quality software does handle errors. You seem to be wrongfully blaming deficiencies in Qt's implementation on the language. I've written numerous pieces of complex software with Qt using exceptions and have never seen an exception fail to be caught or ever let errors go unchecked. Either way, I've used tons of crappy Java and .NET applications that have failed to catch exceptions or check errors and routinely crash because of it. As the person above said, you seem to be just another boring, ignora

You seem to be wrongfully blaming deficiencies in Qt's implementation on the language

I was originally replying to a post that claimed Qt was the "best" toolkit for writing "high quality" C++ programs. Qt uses error codes, not because error codes are a good thing (they are not), but because Qt is a C++ toolkit and C++ makes anything other than error codes unreliable. How is that an unfair criticism?

I've written numerous pieces of complex software with Qt using exceptions and have never seen an exception fail to be caught or ever let errors go unchecked

Except that preventing exceptions from crashing your program in C++ means preventing some exceptions from propagating -- and basically forces you to create programs that do not handle certain errors. In C++98, you could just risk a double exception fault, but it was considered bad practice; in C++11, you can't even take that risk, and so your destructors either have to handle the error properly or you need to find some other way to signal the error (or else let it go unhandled or just quit). On some level, you are either not using exceptions at all or else you are allowing some exceptions to be ignored -- that's a reality of C++ exception handling.

I've used tons of crappy Java and .NET

Did I say that Java or .NET are better? In all of these systems, exceptions could have been done better -- for example, by not destroying the stack before the exception handler executes. Java won't cause your program to abort when exceptions are thrown, but Java will cause exceptions to be "forgotten" under some circumstances:

try {} finally {throw new SomeException();}

That is not much better if you want "high quality" software.

You might want to update your C++ hater points beyond what you read on yosefk.com or other lame whiner sites.

Lame whiner sites? I programmed C++ for a decade, and I still have to write C++ code sometimes. My dislike of the language comes from experience, not from some website. Error handling is just one issue, one that I think is very much relevant if we are going to talk about "high quality" software. Programming in C++ requires knowledge of the long list of undefined behavior and the long list of patterns that have to be used to avoid that behavior (which hardly anyone deviates from, except for novices who have not yet learned the patterns), and debugging is as much about correcting bad program mechanics as it is about correcting bad program logic (and the majority of C++ code is not low-level code).

Yes, I know, programmers should just follow best practices; if that is the case, why not just make those practices standard, and create a special statement to disable that behavior? Why are we forcing programmers to explicitly say, "I do not want this program to crash," when we could be forcing them to be explicit in situations where they do want to write potentially unsafe code?

Except that preventing exceptions from crashing your program in C++ means preventing some exceptions from propagating -- and basically forces you to create programs that do not handle certain errors

I don't understand this - of course you have to prevent some exceptions from propagating - i.e. catching them. If you don't catch them anywhere, they end up in the default handler, which usually stops your program by design.

Exceptions are seen as magic error handlers by some people, but the truism that you should never throw an exception you're not prepared to catch is valid. Similarly, you could say you should never return an error code if you're not prepared to check it. The only difference is really in that

I don't understand this - of course you have to prevent some exceptions from propagating - i.e. catching them.

That is not what I mean; what I mean are things like destructor exceptions. Closing a file, for example, can result in an error condition, and exceptions are basically the only reliable mechanism to signal that. In C++, the iostream classes have that exact issue, and the standard dictates precisely what I said: if closing a file causes an exception to be raised, the destructor must catch it and do nothing, which is another way of saying that the error happens silently. There was, I am told, a very lengthy debate ab
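The dilemma can be reproduced with a toy RAII wrapper (the File class here is invented for illustration; std::fstream behaves analogously on close): an explicit close() lets the caller see the failure, while the destructor path must swallow it, because C++ destructors are not allowed to let exceptions escape.

```cpp
#include <stdexcept>

// Toy RAII file wrapper whose close() can fail, like flushing a real
// file to a full disk. The destructor must not let the exception
// escape, so the error is lost silently on that path.
struct File {
    bool open = true;
    bool failOnClose;
    explicit File(bool fail) : failOnClose(fail) {}
    void close() {
        open = false;
        if (failOnClose) throw std::runtime_error("flush failed");
    }
    ~File() {
        if (open) {
            try { close(); } catch (...) { /* error lost silently */ }
        }
    }
};
```

Calling close() explicitly before the destructor runs is the only way for the caller to observe the failure.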

Except that preventing exceptions from crashing your program in C++ means preventing some exceptions from propagating -- and basically forces you to create programs that do not handle certain errors

I don't understand this - of course you have to prevent some exceptions from propagating - ie catching them. If you don't catch them anywhere, they end up in the default handler which usually stops your program by design.

I think he's referring to the poor exception semantics in C++, in particular that there is no exception "top" type. If that's not what he means, at least that is something I've had to deal with when developing in C++. As you know, a throw can take anything, even an int. There is no exception type as a first-class type, and as a result, there is no "top" type for them.

But since anything can, in theory, be thrown at your code, your code must always have a catch(...) as the last catch of every try

OK, and I did not say it is impossible to do that -- just that it is needlessly difficult, and that as a result it is unusual. People have also written high quality software in assembly language, but if you needed to write high quality software, I doubt that assembly language would be your first choice.

I understand, and it does seem that a hierarchy of well-defined exceptions would be something the standard body could consider, but you'd still be stuck with backwards compatibility with throwing an int.

I guess you could attempt to build your own, rely on exceptions derived from std::exception, and then use a global handler to catch all other exceptions.

I'm a pragmatic chap, I know there is no perfect code, and that you can't build it in other languages that do have well-defined exception hierarchies (eg in

We have had very good results with assuming that std::exception is the "top" as he calls it.

The non-std::exceptions are limited and in fact indicate cases where we *must* catch it, or where they should not occur except inside a catch. One is boost::thread_interrupted, which should only be thrown inside a boost thread and thus inside a catch for it. Another is a special exception we have for a "cancelled" dialog box, which must be caught so that the user cancelling a dialog box does not pop up an error-repor

in C++11, you can't even take that risk, and so your destructors either have to handle the error properly or you need to find some other way to signal the error (or else let it go unhandled or just quit).

That's not true - destructors default to noexcept(true) in C++11, but you can explicitly override that.
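For example (a deliberately contrived sketch; throwing destructors remain a bad idea even when legal), marking the destructor noexcept(false) restores the C++98 behavior where the exception can propagate out, at the usual double-fault risk during stack unwinding:

```cpp
// In C++11, destructors default to noexcept(true): a throwing
// destructor calls std::terminate. Declaring it noexcept(false)
// explicitly opts back in to letting the exception propagate.
struct Risky {
    bool shouldThrow;
    explicit Risky(bool t) : shouldThrow(t) {}
    ~Risky() noexcept(false) {
        if (shouldThrow) throw 1;  // would terminate without noexcept(false)
    }
};
```

If another exception is already in flight when this destructor throws, std::terminate is still called, which is why this escape hatch is rarely used.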

Except that the behavior of a program following an unhandled error is essentially undefined -- the only well-defined behavior is to terminate. That's what exceptions give you; if the programmer does not define how an error should be handled, the program just stops. In terms of a state machine model, if the program enters an error state and has no transitions out of that state, it halts.

The difference is that not all of my errors are fatal, whereas by definition every exception is.

No, not all exceptions are fatal, at least not in languages that support restarts or continuations. You can have e

You may be right about it being a good idea for Qt itself to not throw exceptions.

However what programmers (at least me) want is to be able to throw exceptions in message handlers and have them caught by the code that sent the message. It is not Qt throwing them, I just want them to pass cleanly through.

There already is Qt for NaCl [qt-project.org]. It's a very interesting idea: you can deploy Qt apps over Chrome instead of having to target a native desktop. If you build for x86 and ARM, you've got a complete software stack for web-accessible native GUI apps that will run on any platform Chrome runs on (which apparently will soon include Android).

Firstly, MS would do well with it - there is a ton of old MFC code out there that will need to be updated to something jazzy, and with the MFC->Qt migration tools, all that old code could be made shiny and modern. This is especially important given MS's renewed interest in native development. It's also important given that MS's current GUI technology, WPF, is a bit shit [wordpress.com] - using way too many resources and performing really badly, especially so for anything using lots of fa

For all MS's faults, they do good documentation and put a lot of effort into making their tools work.

Right, that's why MSDN still has no facility to filter a search by programming language (finding the "c" version of a call is damn near impossible since it also brings up c++, c# & objective-c) and hitting recompile a second time in Visual Studio will fix some bugs in your software.

I say this as someone that did a fair bit of win32 programming in College (required, trust me it was NOT by choice!) and it was by FAR the buggiest piece of crap I have EVER worked with and I've done JavaScript on IE6!

actually the MSDN v4 help system was the dog's danglies, then v5 went all HTML and was... acceptable. The new one is a lot more rubbish... however, I was referring to the quantity of the stuff; they do not skimp on writing it - and besides, I use Google to search the online version nowadays, and that works :)

I don't have a published citation, but our entire CST class went by the motto (and I am NOT paraphrasing here!) "If at first you don't succeed, recompile!" This motto was ONLY ever said while working on win32 in VisualStudio.

Just a little background: I'm not talking about little clocks and stuff, we were writing low-level networking and GUI stuff such as streaming radio client/server setups, RFID readers and other fairly complicated applications.

All I can do is express my confusion. Nokia purchased Qt presumably with the intent of using it on their phones. They put out a couple of very good phones such as the N900 that leveraged Debian and Qt. All of that seemed like they were on the right path. Debian users practically swear by the N900.

And then... they announce plans to switch to a non-existent Windows platform. What? That was a total reversal of course away from what was previously a direction of free and open source software. Somewhere in the company I'm betting the reasoning given has to do with a spreadsheet of expected costs of development between the Qt and Windows platforms, and my personal bets are on those numbers being wrong and thus the wrong decision being made.

What matters to me personally is that Qt support structure survives this intact, because it's a very important framework. Thankfully Qt is GPL software, so the existing code will survive no matter what.

I do not understand why Microsoft Nokia is getting rid of Qt. One would think that their ability to control both GTK and Qt via Microsoft employees Miguel de Icaza and Stephen Elop would be a good thing.

the reason why they got the board to go with it is that they had spent something to the tune of a billion-plus dollars on being prepared to do "great things" with whatever tech was the jesus tech of the day - that was the real expense problem (and the structure of that development was such that it leaked lots of money to companies outside of Nokia).

what they should have done would have been to fire the people anyways and definitely not announce dropping the wh

you could ask why the hell these parts were in development in Australia

Three reasons, which date back to before Nokia bought Trolltech. The first is that a key developer in Norway had come from Australia, then went back, initially to work alone, but built up a team over a decade. I got the impression that Norwegian developers loved to spend time at the Australian branch (not far from the Gold Coast) during the Norwegian winter, so I think the two branches got a lot of contact. The second is with people in Norway

Probably in case of failure. If Nokia's handsets tank and the company goes into the red because no one wants a Windows phone - which seems to be the way things are going - then I suspect they hoped they could have a soft landing in the form of being bought out by Microsoft and saved from themselves. Whether this will happen or not - who knows. Though to me it looks like a case of from the frying pan into the deepest bowels of hell.

Microsoft has already thrown Nokia under the bus once when they denied the Lumia an upgrade path to Windows 8. So rather than rescue this dying company (and all its debts) with a buyout, they'll just wait for Nokia to crash and burn before picking up the useful pieces at fire sale prices.

Nokia bought Trolltech (the original Qt developers) in 2008. I vaguely remember the articles at the time saying the reason was indeed so that Nokia could develop new GUIs for their phones. The new CEO of Nokia, Stephen Elop, became so in late 2010. Not long afterward is when the announcements started about going toward the WP7, and one by one stopping the other phone OS projects. Guess where Elop worked before taking over Nokia? Microsoft.

Over the last few years, whenever I looked at a changelog for a new release of Qt, I noticed quite a bit of work was being done to support Symbian or Meego. When I went to their annual conference a couple of years ago, some of the stuff they were showing off (namely, basic UI control widgets for QML) seemed to be focused on Symbian or Meego first and maybe other platforms later. Meanwhile, I noticed that some releases of Qt (especially around 4.6.2) had some surprisingly bad bugs that I wouldn't have expected in the past. I wasn't alone. A friend of mine at Nokia doing Mac development with Qt admitted as much. The whole thing made me think that far more resources were going into Qt support for Nokia's platforms at the expense of Qt's traditional desktop platforms. That's an uncomfortable feeling to have when you're a software firm and you're paying Nokia (and now Digia) for commercial support for the toolkit. I'm hoping that what's going on now will refocus Qt development.

I think this is great. The devs got paid $135 million for all their hard work, from a big, stupid company. Now, Nokia will probably sell it at a low price. Google as an act of generosity could buy it for a low price, and give it to the C++ committee.

Sounds good to me. The patents it would seem are the really important part. The source code is GPL, I already have downloaded a complete copy of Qt source code, I advise others to do so, for safekeeping.

... I note with interest that Thiago Macieira - a hard-core Troll and formerly Qt's product manager - jumped to Intel a while back and works in their Open Source Technology Centre. He is still very heavily involved in Qt development.