
werfu writes "Compiz 0.9.0, the first release of Compiz rewritten in C++, has been announced on the Compiz mailing list. See the announcement for more info." Compiz has for years been one of my favorite ways to make Windows users envious, despite my (Linux) systems' otherwise low-end graphics capabilities. Besides the switch to C++ from C, this release "brings a whole new developer API, splits rendering into plugins,
switches the buildsystem from automake to cmake and brings minor functionality improvements."

Truer words haven't been spoken! I am filled with jubilant delight to hear that the Compiz team could exploit the wildly successful merge of the object-oriented and functional programming paradigms of C++!

The language and dependency changes aside, how much do you want to bet there will be packaging problems in every distro?

After 2 and a half years of getting Compiz sorted in SuSE, RH, Slackware so you have a 50% or better chance of it working out of the box when you install a distro, not having to dig through massive tweaking to get it operating... I'm expecting a step or two backwards in the "installability" department for a while.

Nobody should be putting Compiz 0.9 into a shipping distribution. Hopefully by the time 0.10 comes out they'll have it unfucked again. Fedora might do it, of course. But I don't see it until some point releases have gone by.

The relevant words from the announcement are "complete rewrite". Or in simpler terms for users: you do not want to run this until it reaches 0.10 (also as per the article). This is a development release, not a stable one. (Sure would be nice if they would go 1.0 instead of 0.10 if it's going to be a stable release...)

Here's the stuff from the announcement interesting to users:

* Rendering framework split into the composite and opengl plugins, the former making compiz a compositing window manager and the latter performing that compositing using OpenGL. Such a split will allow new rendering plugins such as XRender and Clutter to be developed, as well as for compiz to run as a non-compositing window manager.

* Added support to drag windows to edges and have them fill the adjacent side of the screen.

* Added support for automatic wallpaper rotation.

* Added edge support to the grid plugin so windows can easily be resized by dragging to an edge or corner.

Since the enforced change from the ultra-fast, ultra-stable Beryl to the not-very-fast Compiz, I have not been very impressed with Compiz. The developers told me they didn't change anything when merging the Beryl fork back into Compiz, but the fact on _MY_ system is simple.

With Beryl I could run whatever effect I wanted and even multiple effects at the same time, and the CPU was barely used, about 98% of the work was offloaded to the graphics card. Now with C

You're not going to see any speed gain from *just* switching to C++ from C. A direct translation of code from C to some other language almost never accomplishes this. If it was just a language change, compiling Compiz will also be slower, anyway.***

*** Unless the authors also did a major refactor and performance enhancement job while they were sifting through the code, which is what I always strive to do when I have to refactor an entire project from scratch, but in a time crunch or to get new

Nothing useful. It's eye candy, like a turbo-charged Aero Glass with 3D effects.
I use the cube desktop switcher and that's it. For some reason I find the idea of a cube easier to map out in my mind when I have several windows open than a chain of 4 desktops.

So in other words, you find at least one aspect of it to be very useful. While some window effects are just pure eye-candy (e.g., wobbly windows), many of the added desktop effects provide various degrees of enhanced functionality. This includes:

Desktop presentation, be it cube, zooming, or task switching, can be molded and animated to allow the user to better understand and utilize the multiple desktops.

Transparency allows information to be literally overlaid, decreasing the intrusiveness of upper-stratum menus and windows.

Various effects can tag and categorize different applications or application states (active, inactive, shaded, etc.)

The added capabilities allow enhanced usability tools, like magnifiers and mouse location, to be well-integrated and seamless.

Don't dismiss the suite as just eye-candy; if the main perception of Compiz is that it exists only to make things more fun and prettier, then its overall value to the desktop is understated.

There are a number of plugins that increase productivity a lot though, namely the scale, desktop wall, expo, app switcher and zoom plugin. Problem is: the default configuration is not designed to be useful, but to be easy.

While installing new systems, I install the CompizConfig Settings Manager, and then set up the plugins for efficiency: I basically map common window functionality to screen edge/corner clicks with the mouse.

But my point was that the cube isn't useless for me. I can much more easily remember the faces of a cube than slide position. Plus, being able to move from space 1 to 4 instantly by moving left is super-handy.

Ah, sorry, wrong wording. Actually, I just wanted to expand your comment. In my experience (and I spent quite some time on it) much of the usefulness of compiz is a matter of configuration.

So I'd rephrase my leading comment to "the desktop cube is useless to me". But that's what I like about being able to bend compiz to my bidding. People are different, and I can adapt compiz to my preferences while not bothering you. :)

People should be aware of how they work and see how they can adapt the tools to make the

Fun fact: I knew somebody who added a preprocessor step to his compile process to make every class a friend of every other class, because he was tired of "not being able to use the pesky private stuff in coworkers' code".

The point is that it's niche. The range of situations where you need raw speed, yet by-the-book OOP doesn't slow you down too much, is very, very small. Games and large commercial desktop apps are basically it. Line-of-business apps will usually go .NET or Java, web apps will go PHP, .NET, Java, Perl, Python, whatever. Drivers will go C/assembly, specialized backend systems will go C/assembly, etc.

There are exceptions to everything and I realize this is a gross generalization, but overall it stands, leaving C++

I suspect the efficiency gap between C and C++ is smaller than you think. Even if you are very strict about encapsulation of objects, you'd be very unlikely to add more than 10% to the run time. And as others have pointed out, making use of features such as templating can actually help the compiler generate more efficient code.

C++ was designed so that it adds no overheads to imperative code, while the OOP constructs such as member functions have only one extra parameter (and one level of indirection for

I understand, but for speed I expect that C++ still outperforms Java, and while C should outperform both of them, C doesn't feature encapsulation, polymorphism and all the other goodies that OOP provides.

No, C is exactly as fast as C++. C++ only becomes slower if you use certain features that have a performance impact. Example: if you use exceptions, there is a performance penalty. If you don't, you don't get the penalty. That is one of the design principles of C++: nothing can be included in the language that slows down code that does not use or need it.

The main slowdowns you will see in your average C++ program, over the corresponding C, are the use of the string class as opposed to the nasty but fast strcpy and friends, and the extra indirect function calls due to virtual functions (which cause a branch misprediction and hence a pipeline flush on modern CPUs, costing you a bunch of clock cycles). Still, you only pay for virtual if you choose to use it, and manually implemented virtual function calls are used all over the place in good old C, with the same effect.

Furthermore, C++ templates allow code reuse with exactly zero performance loss, and while the error messages are ugly, they're still a whole load prettier than doing the same thing the C way with recursive includes and lots of preprocessor madness. And you can link to existing C code/libraries without any problems.
Frankly, there is no valid reason for starting a new program in C in this day and age.

C++ only becomes slower if you use certain features that have a performance impact.

And virtually every useful feature of C++ that is not in its common subset with C is one of those.

Example: if you use exceptions, there is a performance penalty.

And if you use operator new, you use exceptions.

The main slow downs you will see in your average C++ program, over the corresponding C, is the use of the string class

That and <iostream> [yosefk.com]. Once, I tried programming in GNU C++ for a system with an ARM7 CPU and 288 KiB of RAM. Even after applying all the link-time space optimizations I could find, Hello World statically linked against GNU libstdc++'s <iostream> still took 180 KiB [pineight.com]. (Dynamic linking wouldn't even have worked because libstdc++.so itself is bigger than RAM.)

"As I understand it, C++ compilers implement templates by making a copy of the object code for each type for which the template code is instantiated. Once you instantiate a template numerous times, your binary gets bigger, and it slows down because it has to keep loading data from storage instead of caching it in RAM."

Not really. GCC reuses the same code from different instantiations. And of course, if you follow ODR then you'll have at most 1 template instantiation for each combination of type parameters.

If you learn about C++ before you try to pass yourself off as an authority, you won't spout easily refutable misconceptions.

C++ only becomes slower if you use certain features that have a performance impact.

And virtually every useful feature of C++ that is not in its common subset with C is one of those.

What is the performance overhead of namespaces, typesafe object creation, references, function and operator overloading, use of const ints for array sizes (more efficient than C), non-virtual methods, STL (the word "virtual" does not appear anywhere in the STL sources), support for wide characters, protected/private modifiers, etc.? While features like templates and metaprogramming hav

As I understand it, the standard library uses throw new, not nothrow new. So if you use the standard library, you get the exception handlers linked in.

What is the performance overhead of namespaces, [...] references, [...] use of const ints for array sizes (more efficient than C), non-virtual methods, protected/private modifiers

True, these features allow one to use C++ as "a better C". But a lot of C++ fanboys will claim that if a program doesn't use virtual, throw, and <iostream>, it's not in the spirit [wikipedia.org] of C++.

typesafe object creation, STL (the word "virtual" does not appear anywhere in the STL sources)

Exception overhead. Or is the entire C++ standard library also available in a nothrow version?

As I understand it, the standard library uses throw new, not nothrow new. So if you use the standard library, you get the exception handlers linked in.

The standard library allows you to specify allocators for everything in it that requires memory allocation, precisely so that you can use your own allocation mechanisms. Writing one that does new(std::nothrow) is trivial.

Of course, this assumes that you want to ignore any OOM errors (which, given the existence of things such as Linux "OOM killer", is a reasonable default), since there's no way for, say, std::string to report a memory allocation error other than just propagating the exception. If you really

Sugar for functions that take this as their first argument. But as Micropolis showed, these are useful for taking legacy code that uses global or module-scope variables and allowing it to be instantiated multiple times. I'll grant you this one.

References

Sugar for pointers.

You have other problems when your code runs out of memory that often

Only if you consider running on a microcontroller or a handheld device a "problem". In such a case, running out of memory means the allocator has to purge items from the cache. Then you run into other classes that use new as their factory, for which

I would imagine that the biggest performance hit for C++ vs C is just the fact that most objects make extensive use of memory allocations. C++ makes this 'safer' than in C, and so most C++ users use it. In C, I tend to avoid memory allocation. You end up defining arrays sized to some reasonable maximum, but there's no performance penalty for that. Occasionally, this does cause problems when that maximum was underestimated, but most of the time it's pretty effective.

For example, our standard apps maintain state persistence by simply writing out one or more C structures to a temp file on disk.

Of course, the C standard explicitly states that the layout in memory of structures is implementation-dependent, so doing things like that sets yourself up for serious pain when you do things like change compiler versions, optimization options, or run on different platforms.

In my experience, a lot of programs run without crashing only through sheer luck.

C++ coders could continue to do this, of course, but they've assumed they needed to use objects for this purpose, leading to complex schemes for streaming those objects out to disk for persistence.

My PoV on C v C++ coding comes down to this kind of stuff. In C, you'll have a function that takes a struct parameter and writes it to file. In C++ you put that function inside the struct and remove the parameter.

No, C is exactly as fast as C++. C++ only becomes slower if you use certain features that have a performance impact.

Which would be every feature that isn't C with added syntactic sugar.

Frankly, there is no valid reason for starting a new program in C in this day and age.

Yes, there is: it's a simple language with very predictable behaviour, compiles fast, and the resulting binary can be trivially interfaced with pretty much every other language. There's no good reason to use C++: you don't get the benefits

As GP rightly noted, unless you use specific C++ features (exception, virtual), you get opcode-for-opcode identical code from C++ compared to C. Unless your microcontroller uses Tarot cards to determine the original language in which that MOV was written, I don't see how it's possible.

Because perhaps one is trying to work around the poor design of a class where useful functionality has not been exposed in the public interface. Using the class as intended would result in an abstraction inversion [wikipedia.org].

And in C you can have encapsulation, polymorphism and all the other goodies OOP provides. C++ just makes it easier. For example, many libraries don't expose the contents of structures in the exported header files: zlib's gzopen() returns a "gzFile", which is a typedef'd void*, and doesn't expose any internals.

That's not the fault of the feature itself, but of people using it incorrectly (at least in a particular environment).

It is still quite possible to retain full control over template instantiation by splitting template into header & implementation files (with header only containing function declarations and not definitions, and implementation containing their definitions), using extern template [open-std.org] in the header for all specializations that you need, and using explicit instantiations in the implementation fi

If you remove the templates by hand-instantiating them, you'd still have the same issue of code duplication.

The difference is that algorithms and containers in C or Java encourage the use of erasure to a higher type (e.g. void * or java.lang.Object). C++ templates can be used this way, but they can also be instantiated once for each T* (by pointer) or even once for each T (by value). I can think of a few things to watch out for when using templates:

Templates make compiler error messages hard for the programmer to read.

The ability to instantiate templates multiple times tempts the programmer to make unproductive

Fewer Viruses - check
Lower TCO - check
CLI is not working on Windows - wrong
Most FLOSS runs on it - check
Drivers for more hardware - check
No kernel panics (BSOD) - wrong
Not nearly as resource hungry - wrong, because tests indicate that Windows 7 is less hungry than Ubuntu
Penguins - what a BS
The easiest way of making a Windows user envious = getting the hottest chick on the planet

1. Windows 7 has better OpenGL performance no matter what hardware and what drivers you throw at it.
2. 1.5GB? Sorry, but I thought Windows 7 didn't use more than 200-300MB RAM and cached out wasted RAM space?
3. What are you running next to GNU+Linux?

Compiz doesn't actually use that much system resources, nor strain your hardware either. It uses your gfx card to do all the work, which otherwise would be doing 99% nothing in most other circumstances anyway.

Compiz doesn't actually use that much system resources, nor strain your hardware either.

I have a 3.2GHz tri-core Phenom II system with a GTS 240 (~400MHz, 96 stream processors), and Compiz will easily consume 5% or more of the CPU if you have a window with continual graphics updates, like a game or a video player. That's a lot of CPU! You can manually disable transforms on that window, but that requires a visit to the settings manager that would leave the average user dumbfounded.

I just used the middle click / cube shrinks and becomes semi-transparent and can be rotated... effect in Compiz, which immediately shot up the CPU usage for both cores of my processor from 20% to around 60% per core. Under Beryl the CPU usage changed about 2% over what the system was already running at. I would say that Compiz does not use the graphics card like Beryl did, and the Compiz devs deny there is a problem.

I use a variety of POSIX operating systems 95% of the time, at work through necessity, and at home through choice. And because I use them, rather than despite it, I am compelled to respond.

Fewer viruses

And drunken cheerleaders get date raped more than shut-in nerd chicks. Personally, I prefer nerd chicks, and you likely do too, but most people don't. Really, they don't, and there's no use telling them that their opinion is wrong.

Lower cost of ownership

If you don't value your time. For the latest of many, many examples down the years, I 'invested' 3 hours this weekend trying to get WiFi with WPA working again after upgrading my wife's box from Ubuntu 9.10 to 10.04. Verdict: the rt73usb driver has (yet again) returned to a state of porkage, so it was (yet again) ndiswrapper and Windows drivers for the eventual win.

CLI/scripting system that actually works

Until of course you try and run a script written for fooshell on barshell, i.e. when a distro changes its shell [ubuntu.com].

Most open source software runs on it

Can be made to run on it, given enough time.

Drivers for just about any piece of hardware ever built

If you limit "ever" to "older than two years or so". But sure, many of the drivers give the appearance of working tolerably well, for a surprising amount of the time! And when they don't, well, there's ndiswrapper, or we'll-fix-it-in-the-next-release, or you've-got-the-source-compile-a-previous-version-yes-we-know-it-doesn't-build-against-your-kernel-headers-or-gcc-version-fix-it-yourself-you-filthy-M$-shill.

No blue screen of death

Ain't seen one on Windows for years.

Not nearly as resource hungry (unless of course you use Compiz:-)

Granted. Oh, unless you've got a driver bug, which you almost certainly do if your hardware was designed this millennium. Then see above.

Penguins way cooler than butterflies

By that measure, that would mean...

But the easiest way of making a windows user envious is to use a mac

...that.

This is not the year of Linux on the desktop (or the netbook). I thought we were there with Ubuntu 10.04, but it's actually a regression from 9.10. I'd just recommend 9.10, but that's effectively abandonware now, just like all previous versions of all Linux distros, "LTS" included.

Again: I'm writing this from Ubuntu 9.10. I've got RHEL5 in that VM over there, SUSE 11 yonder, Solaris in that shell, and even SUA on Windows (tastes a bit like POSIX). I'm happy with POSIX OSen. But I would not recommend them to a Joe Windows user, ever, since I don't want to be their Support Guy from now until there's a distro that actually Just Works.

Yes, if you don't do things right, they won't work right. Wow, you are a genius. Perhaps bash is less forgiving than Windows crap, but I'd call that a feature, not a bug. The main problem with Windows is that it is so damn forgiving in every area that people can do stupid things and then require the OS to support them for years/decades to come, fucking everyone else over in the meantime.
Windows shell scripting still works from release to release because they simply don't change anything because it is s

If you were using #!/bin/sh and expecting bash specific code to work, you're doing it wrong. If you want bash, call it by its proper name and it will always work.

A more likely scenario is that a script written by someone else improperly references /bin/sh despite being chock-full of bashisms.

The real problem is that many people these days just assume Unix = Linux and can't even think of /bin/sh possibly not being bash (or something "compatible enough"). This is especially true of the "Linux on the desktop" crowd, as server admins typically know better.

And drunken cheerleaders get date raped more than shut-in nerd chicks. Personally, I prefer nerd chicks, and you likely do too, but most people don't. Really, they don't, and there's no use telling them that their opinion is wrong.

Do people prefer Windows? After actually trying Linux? Not in my experience.

If you don't value your time.

Most stuff works out of the box. Some stuff does not work out of the box on Windows or Mac either.

Until of course you try and run a script written for fooshell on barshell, i.e. when a distro changes its shell [ubuntu.com].

Dash is supposed to be compatible with Bash for the affected scripts (those that use #!/bin/sh) if you stuck to Debian policy - if you used bash-specific features you should have used #!/bin/bash. Any examples of stuff that breaks? BTW, Bash is still the login shell.

Can be made to run on it, given enough time.

Most stuff non-geeks use is in the major distros' repos and is easier to install

Here's why I WOULD recommend them to some people, in certain controlled instances (have, actually):

A) Have you actually tried to figure out how to secure a network, or even your Dad's computer, when doing so requires he have the ABSOLUTE LATEST version of Flash, Adobe Reader, and Java? Not to mention those RealPlayer and QT plugins that are sure to get exploited one of these days? Linux gets it right with centralized software updates; Windows is an absolute nightmare in this regard. There's WSUS, but oh

Fewer Viruses - While this is technically true, most viruses I've seen installed on users' machines are the result of users actively clicking and running an executable. While not running as root by default on Linux helps to limit the damage, I think a virus running as a regular unprivileged user could still cause plenty of harm. This is also ignoring the fact that the same incompetent users, if presented with a message asking them to perform administrator actions for no reason at all, would still click "Yes", as long as it promises smiley icons.

Lower cost of Ownership - Last time I went shopping for a computer, I didn't see any discounts for not having Windows installed from the get-go. Either you go with Dell/HP/Lenovo and they only offer Windows, or when they do offer Linux it's the same price, or only a little cheaper, but you get a lot less selection of machines. The other option is to build your own machine from off-the-shelf components. This is my favourite option, as you can get exactly what you want, but you will end up spending more.

CLI/Scripting system - Almost nobody except tech geeks cares about this. Also, Powershell on Windows isn't all that bad. It has its pluses and its minuses.

Most open source software runs on it - Almost all of the open source software that is worth running will run on Windows. Maybe not all of it, but most of the more important stuff. Conversely, almost no closed source software runs on Linux. Which might not matter to you, but if you're trying to get work done, having things like Photoshop, Outlook (hate it, but necessary for business), and many other closed source programs makes a big difference.

Drivers - Sure you get drivers for all the old stuff. But are you sure that shiny new piece of hardware that just came out last week will run to its full potential. Probably not. And there's also plenty of older hardware that I had that I couldn't run on Linux.

No Blue Screen - I haven't seen a blue screen on a Windows machine in many years. And when I do, it's usually because of bad RAM, causing something to get corrupted. Blue screens still exist, but they don't happen quite as often as they used to. I imagine most Linux systems would also crash pretty badly when they have bad memory.

I'm not some Windows zealot. I use Windows when it makes sense, and I use Linux where it makes sense. But I don't really think that any of the reasons you mentioned are valid. Especially if you're talking about home desktop use, which in the case of Compiz is exactly the kind of user we are talking about.

Oh come on... how, exactly, is the Mac platform (no, not the iPad, not the iPhone, the Mac, ie Mac OS X) "more closed than Windows"? At best it's exactly as closed, though I'd argue somewhat less so (thanks to the existence of Darwin, their work on the ObjC gcc backend, Webkit, etc).

Very, very true. Although PowerShell is quite powerful... but quite different from most shell scripting in the UNIX world.

You really expect any CLI, no matter how awesome, will make Windows users jealous? I definitely think Compiz is one of the few ways to make your average Windows user jealous of Linux, with perhaps your favourite package manager coming next. I remember reading that MS are building an app store for Windows though, so it won't be something to be jealous of for long!

Of course, trying to make other people jealous of you is pretty pathetic.

* Lower cost of ownership - BS, too much time is spent hacking up config files to make crap work or work right

On Windows, too much time is spent hacking up the registry to make crap work or work right. Just this last Thursday, I had to manually scan the registry to delete every reference to a printer driver that kept killing someone's spooler service... because the spooler service needed to be running to delete the printer normally. If it had been a unix system, I could have just edited a line in a file and been done.

* CLI/scripting system that actually works - BS, anything you can write and make work in Linux, I can in Windows

Using cygwin, bash compiled for Windows or DOS, or other scripting applications that are not guaranteed to be on every Windows system.

* Most open source software runs on it - Show me anything worthwhile that doesn't run in Windows or have a better alternative there

Well, Linux runs in Windows, so I'd say you've won this argument.

* Drivers for just about any piece of hardware ever built - BS, that's the primary thing most users have issues with, half baked drivers

Half-baked drivers in Windows XP, Vista, and 7. That printer driver mentioned above? It was an HP driver written for and installed in Win7 64bit.

* No blue screen of death - Agreed, but I haven't seen one yet in Win7

I haven't either, but I have seen a Win7 machine reboot constantly (the equiv of BSOD since Win7 is set to reboot on fail).

* Not nearly as resource hungry (unless of course you use Compiz:-) - Agreed, but neither was Win98 which is typically how Linux feels

I still have Win98se running on an old machine for old games. Win98se is actually snappier than modern Linux, which is in turn snappier than WinXP/7. How much window compositing did Win98se do? Firewalling? Multi-user? Even the 1998 version of Linux had multi-user support and ipchains.

Mod me down if you want to, but I've yet to have Windows drop me to a command prompt after an video card driver update

I've had it boot up to a BSOD, which looks worse than a command prompt, or a blank screen where I had to remote in or boot up in safe graphics mode.

I've had it boot up to a BSOD, which looks worse than a command prompt.

or had to recompile sound drivers after every OS update (Ubuntu on that one too).

I wish I could. Sometimes vendors take years to get their sound drivers working. Google realtek, imac, and Windows 64 bit.

My file manager will display in a column what date pictures were taken so I can categorize them accordingly, can yours do that? It couldn't the last time I checked.

This is the first time that I ever checked. No, it does not, but it could with a little quick editing. Right clicking and selecting properties shows that the Gnome file manager (didn't check KDE) can see the image properties, including "Date Taken", so the information is there. Linux users are probably just better mentally organized, and name their photo directories YYYY_MM_DD

That's my point. When moving photos from the camera to the OS I do maintain that exact directory structure, but in Windows I don't have to check every photo individually for the date taken, it's a column in the file manager.

No need to try to make Linux users smarter than they think they are though, Windows users and possibly even Mac users can be fairly mentally organized as well.

in Windows I don't have to check every photo individually for the date taken, it's a column in the file manager.

ls -lt *.jpg

If you want to automatically file them into directories based on date you can use --time-style=iso and pipe it into awk or perl and write a quick script you can use every time you do this. You definitely do not have to sort them by date, create a folder for each date, and drag and drop each group of files into its directory.

Again, Linux pushing me to a command prompt. The point was that the file managers in Linux can't handle this basic requirement useful to users at any skill level. We can all do things at the command prompt if we can write a little code, but most users want to use the GUI.

Linux defaults to the command line because the command line is better. There's a reason we moved beyond pointing and grunting into symbolic language. Writing a few lines of code is in fact easier than manually copying, renaming, converting, etc dozens of files. And when you're done you get a script you can use the next time such a task comes up.

If you really really want to use the GUI though, there's no shortage of file managers that will display the date in a column. Konqueror does it by default. So d

Not just date, date picture taken. The last time I checked, which was a year or so ago, it couldn't be done by setting any options, etc. My forum post is probably still at ubuntu.
The cmd line is better for some things, but not all. If so, do you ever use a GUI? Think your Mom is going to write that script? For you and I a cmd prompt has its advantages, but for the vast majority of users, it's useless.

in Windows I don't have to check every photo individually for the date taken, it's a column in the file manager.

ls -lt *.jpg

This isn't what GP was talking about. That's file modification time, not the date the photo was taken (which is data inside the image file, not in the filesystem about the file). The closest you could get with ls would be to re-touch all the timestamps to match the image date data first, then use ls.
find /image/directory/ -name '*.jpg' -exec sh -c 'touch -d "$(exiftime -tg "$1" | sed -e "s/Image Generated: *//" -e "s/:/-/" -e "s/:/-/")" "$1"' _ {} \;
or something similar. (Backticks won't do what you want inside -exec: the shell expands them once, before find runs, so you have to spawn a shell per file, as above.)

This isn't what GP was talking about. That's file modification time, not the date the photo was taken (which is data inside the image file, not in the filesystem about the file).

The filesystem time and the exif time should be the same when they're on the camera. Just pass -p to cp when you copy them over.

Yes, but when you edit the file (in Photoshop, say), the date taken stays the same and the filesystem timestamp changes.

It actually annoys me that Windows defaults to showing the exif date taken when it detects a directory of images - I'd much rather see the filesystem datestamp and sort by that, so I can see which I've already edited. I already organise the directory structure by date taken anyway.

That's a fair point. But at this stage it seems like we've moved beyond the field where a general purpose file manager is appropriate. There's no point in having a "date taken" column in a utility that many people will never use for photos. If you really need to sort your photos based on this photo specific metadata, there are photo managers for that.

Linux file managers can read that data just fine and display it. Maybe they don't show it by default for whatever reason. I'm sure you can just select the columns you want, such as "picture taken date", and you'll be fine and dandy.

Oh yes, cmake is really complicated compared to the autotools build stuff. I mean, I really enjoy fiddling around with all the *.in, *.m4, *.am, ltmain.sh, etc. just so I can do ./configure --help.

But then I guess you have never tried to use cmake; else you would not have made the ignorant statement about its incomprehensibility. If you have never used autoconf, automake, make, libtool, m4 and friends it would be just as incomprehensible.

Then I would have to say your toolchain is messed up in some fashion, your cmake installation is hosed, or you are just trolling. I have built opencv with cmake from the release that switched to it up to its current release; it has always compiled and installed just fine.

So you are saying that if you were to compile kde-4.4.x (they use cmake) you would convert it all to autotools?... I don't believe you at all, not for a second.