The greater news concerns Qt Creator this time: the complete source code is publicly available under the GPL from now on. Everybody interested in the development of the latest addition to Qt's tool family should head over to the repository and take a look. Qt Creator is intended to make cross-platform development with Qt as easy as possible - especially for those who are new to developing Qt applications.

The source! I've been using Qt Creator on my work Mac for quite a few weeks now and really like it. But all that time I wanted to browse the source. (And on my private Linux ThinkPad, I'm using KDevelop 4, and I'm really liking that, too.)

I wonder if there are any plans from within Qt Software to cooperate with KDevelop? I've been building KDevelop 4 from SVN for some time, and the C++ support is beyond awesome :-) I haven't tried the debugger yet, but I've heard it still has some way to go, while the Qt Creator debugger seems to be quite far along already.

Trolltech should have got with the programme years ago and decided that they needed a fully fledged cross-platform IDE designed for Qt, using Qt. Instead what we got was half-baked measures like using Visual Studio with Qt (people use Qt to get away from MS dev tools), some half-baked integration with Eclipse, and half-baked, halfway-house applications like Qt Designer: a visual designer, but not an IDE - you needed to get that separately, and most sane people would expect a visual designer and an IDE to have a decent degree of integration. The framework is first rate, but it has always been let down by the default development tools, or lack of them.

I think the KDevelop developers are entitled to be a bit miffed that the Qt people have finally seen the need for a dedicated IDE for Qt development, that they didn't get any direction or code or component sharing on this from Trolltech years ago, and that Trolltech has only turned up now.

The performance improvements done in 4.5 have nothing to do with the compilation time. In fact, you can expect 4.5 to take 20 to 50% longer to compile than Qt 4.4, which was already 100% slower than 4.3.

So, it takes some more minutes to compile Qt 4.5, but you get improvements all over the board once it's up and running :-)

If the "4.2" you mentioned was KDE (not Qt), then, yes, there are improvements. I haven't seen any significant memory usage differences, but painting performance should be better. You can also try launching individual applications with the "-graphicssystem raster" or "-graphicssystem opengl" option and see if it makes a difference. (native, the default, is the slowest of the three graphics systems supported on X11)
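For example, to compare the backends (konsole is used here purely as an illustration; any Qt 4.5 application accepts the option):

```shell
# Launch the same application once per graphics system and compare
# scrolling/painting responsiveness:
konsole -graphicssystem native   # the X11 default
konsole -graphicssystem raster   # software rendering via QImage
konsole -graphicssystem opengl   # experimental OpenGL backend
```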

I'm not a graphics engineer, so I don't know the full details. But a high-level explanation of what the graphics system does: it is the entire paint engine (drawing primitives, etc.), plus backing store and a few more things. The three available graphics systems on X11 are:

The "native" engine is what we've always used. That means a QPixmap is actually an X11 pixmap, which is why you can't use QPixmap from outside the GUI thread. When you draw on a QPixmap or on a QWidget that is using the native engine, Qt will end up eventually making X11 calls to draw the primitives.

Now, the "raster" engine itself was already inside Qt: it's QImage. Using it is somewhat similar (but not fully) to calling widget->render(&image), then asking X11 to simply display that image. It's more than that because there's the backing store to be considered...

And the OpenGL one is the same, except that instead of using Qt's own drawing primitives, we ask the OpenGL library to do it. But you must note that OpenGL was *not* designed to draw pixel-perfect widgets, but instead 2.5D and 3D, where 1px differences don't really matter. So, in order to accomplish pixel perfection, you need good hardware, because Qt asks for a lot from your OpenGL implementation.

The interesting thing is that drawing everything in the Qt application then transferring the whole image to the X11 server is faster than asking the X11 server to draw. Go figure...

Considering that modern graphics hardware doesn't have any 2D video acceleration (unless you use the 3D hardware to do it), it doesn't surprise me at all that rendering everything in software on the client is faster than using X11.

Considering how variable (slow) some drivers (NVidia) seem to be on X11, maybe going for all-software drawing is a better idea. KDE 4 actually seems to run faster on my machine using the VESA driver than it does with NVidia's driver, despite not supporting any 2D acceleration at all.

As far as I understand, that's how things have always worked in Mac OS X - the contents of each window are drawn using software and sent to the graphics hardware as a big image, which is composited on-screen using the 3D hardware. I believe that Vista works this way too, and Microsoft's newer libraries (like GDI+ or .Net WinForms) do all rendering in software even on Windows XP.

"Considering that modern graphics hardware doesn't have any 2D video acceleration (unless you use the 3D hardware to do it), it doesn't surprise me at all that rendering everything in software on the client is faster than using X11."

That's a silly fallacy. It's exactly like saying you can't run email clients on the CPU, unless you use web browser hardware to do it.
Modern GPUs are highly programmable and have no problem accelerating any 2D graphics API.
One could claim that the latency of having simple alpha-blits run through the entire pipeline, instead of through dedicated silicon that shortcuts it, is a lot higher. But then someone will point out that modern 2D graphics also involves gradients, paths, masks, and extended blend modes, rendering the argument for dedicated 2D silicon moot. (Plus, optimizations for simple alpha-blends in the general pipeline would obviously benefit all cases of straightforward quad texturing, making them a lot more relevant.)

Fine. Modern graphics cards are nicely programmable, and you can do all kinds of neat 2D stuff with them very quickly. In theory. But this is reality, and things don't work that way unless you write every application in OpenGL.

The _only_ time I've seen fast 2D on a Linux system in recent years is using an older Intel onboard graphics chip, which still has dedicated 2D hardware. All the 2D operations exposed through X, including XRender, are pretty fast, and the desktop is very fast and responsive. Not only is it faster than NVidia graphics hardware on Linux, but it's faster than NVidia graphics hardware on Windows as well.

Hell, that was on an older EeePC, which is about as underpowered as you can get. It runs rings around my Nvidia-powered desktop machine.

The problem is that the 2D APIs we have (like X11, XRender, or GDI) are not designed to run on top of 3D hardware, which has a completely different programming model than older 2D hardware. Operations involving logical operations, or bitmasks, or colour palettes make absolutely no sense in terms of modern 3D hardware, but that's what our 2D system uses.

It is possible to implement this on top of a modern 3D card. However, that would take a huge amount of work, and until very recently, the likes of NVidia just haven't bothered. Even on Windows, every new generation of graphics card gets slower and slower at 2D operations, and on Linux it's far worse. These kinds of operations, particularly the more advanced XRender stuff, are so slow that a decent software implementation is several times faster.

In the future, we can use stuff like Cairo, or Qt's OpenGL-based painter to get performance improvements, but right now, I don't have any applications that use OpenGL to do 2D. They all use the old 2D APIs.

And the fact remains that I can get far better desktop performance by sticking an ancient GeForce 4 MX in my machine than I could from any new graphics card. It's odd that a bottom-end card from 6 years ago, which is in fact a rebadged GeForce 2 from 2000, can beat pretty much anything modern from Nvidia.

The 3.x to 4.x transition was more than just porting to Qt4; it was also used as an opportunity to rearchitect KDE as a whole, so it could continue advancing beyond the limits that were being reached. Unless the KDE project dies out, there will definitely be another major revision within 20 to 40 years... regardless of how forward-thinking you are, you can't predict how the world of computers will look in even a decade, much less two to four.

If Qt5 were to come anytime soon (which it doesn't seem it will - Qt4 is doing just fine =), it would be more like the Qt2->Qt3 change, causing something more like the KDE 2.2 -> KDE 3.0 transition than the KDE 1.1.2 -> KDE 2.0 or KDE 3.5 -> KDE 4.0 transitions.

I fully agree!
But all these tricks are rooted in historical necessity: there wasn't good support for templates in compilers at the time Qt was established. I hope that Qt's use of templates will change in the future, because modern compilers have good template support, and many "old" compilers are planned to become unsupported in future Qt versions. But I don't believe it will happen in Qt5.

QObject is not a workaround for templates, and I don't think the stuff QObject does could be done using templates.

STL and Boost are unrelated matters, since they don't provide the signal/slot mechanics. In fact, it's one of the things that takes Qt a notch above all others - it's only a slight pain at build time, but pays back generously in practical development (as opposed to e.g. observer pattern that quickly starts sucking when dealing with object ownership, N..N relationships etc). Notably, they don't really complicate debugging at all.