
mlimber writes "The New York Times is running a story about multicore computing and the efforts of Microsoft et al. to try to switch to the new paradigm: "The challenges [of parallel programming] have not dented the enthusiasm for the potential of the new parallel chips at Microsoft, where executives are betting that the arrival of manycore chips — processors with more than eight cores, possible as soon as 2010 — will transform the world of personal computing.... Engineers and computer scientists acknowledge that despite advances in recent decades, the computer industry is still lagging in its ability to write parallel programs." It mirrors what C++ guru and now Microsoft architect Herb Sutter has been saying in articles such as his "The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software." Sutter is part of the C++ standards committee that is working hard to make multithreading standard in C++."

It's not just making your app multithreaded, it's completely changing your algorithms so that they take advantage of multiple processors. I took a parallel programming course in university, so I'm by no means an expert, but I'll give what insight I have. You can't just take a standard sort algorithm and run it multithreaded. You have to change the entire algorithm. In the end, you end up with something that sorts faster than a single-threaded O(n log n) sort. However, doing this type of programming, where you break up the dataset, sort each set, and then gather the results, can be very difficult. Many debuggers don't deal well with multiple threads, so that adds an extra layer of difficulty to the whole problem. Granted, I don't think that we really need this level of multithreadedness, but I think that's what the article is referring to. I think that 10+ core CPUs will only really help those of us who like to do multiple things at the same time. I think it would even be beneficial to keep most apps tied to a single CPU so that a runaway app wouldn't take over the entire computer.
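The break-up/sort/gather pattern described above is language-neutral; here's a minimal sketch in Python (the chunk size and worker count are arbitrary illustration choices, not a tuned implementation): sort each chunk in a worker process, then do a sequential k-way merge of the sorted runs.

```python
from concurrent.futures import ProcessPoolExecutor
from heapq import merge

def parallel_sort(data, workers=4):
    """Break up the dataset, sort each chunk in its own process,
    then gather: merge the sorted runs back into one list."""
    chunk = max(1, len(data) // workers)
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        runs = list(pool.map(sorted, chunks))   # the parallel part
    return list(merge(*runs))                   # sequential k-way merge

if __name__ == "__main__":
    import random
    data = [random.randrange(10 ** 6) for _ in range(10 ** 4)]
    assert parallel_sort(data) == sorted(data)
```

Note that the merge step stays serial, which is exactly the kind of overhead that keeps the speedup well below the core count.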

Oh, good grief--since the original poster didn't specify units, and since it's highly unlikely the running time would be exactly n log n for any choice of units, and since it's pretty common to leave out the O() in casual conversation, the only sensible interpretation is that the O() was implied....

Sometimes it is as simple as adding multiple threads without changing the logic. It depends on where you are splitting your logic.

Let's take a binary search example: your bank accidentally left a back door in their database, and now the hackers/crackers want to grab their enemies' credit and account information. Which approach will let them get it faster?

The database is sorted:
1) Perform a binary search on the data with each thread doing 1/Nth the data, where N is the number of threads per search
2) Perform a bina
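Option 1 might look like the following Python sketch (the slicing scheme and thread count are illustrative; for a single lookup a plain binary search is already O(log n), so this mainly shows the partitioning pattern rather than a real win):

```python
import threading
from bisect import bisect_left

def parallel_contains(sorted_data, target, n_threads=4):
    """N threads, each binary-searching its own 1/Nth slice
    of already-sorted data; an Event signals any hit."""
    found = threading.Event()
    chunk = max(1, len(sorted_data) // n_threads)

    def search(lo, hi):
        i = bisect_left(sorted_data, target, lo, hi)
        if i < hi and sorted_data[i] == target:
            found.set()

    threads = [
        threading.Thread(target=search,
                         args=(lo, min(lo + chunk, len(sorted_data))))
        for lo in range(0, len(sorted_data), chunk)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return found.is_set()

accounts = list(range(0, 1000, 3))   # stand-in for the sorted database
assert parallel_contains(accounts, 42)
assert not parallel_contains(accounts, 43)
```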

EE Guy #1: We can't seem to build faster chips.
EE Guy #2: No problem. We'll just put tons of processor cores in instead.
EE Guy #1: But people have spent the past 30 years creating algorithms for single core machines. Almost none of the programmers have any experience writing multi-core algorithms!
EE Guy #2: Exactly! We'll be able to blame the programmers for being lazy and not wanting to learn new complicated algorithms that require an additional 4 years of university.
EE Guy #1: Brilliant! We should come up with a catchy headline like "The Free Lunch is Over" or something like that.
EE Guy #2: Yeah, and we could get Slashdot to post a link to the article. Slashdot users are sure to sympathize with our devious plans...

actually they divided the buck into 10 dimes and then passed them all in parallel.

Seriously though, it was only a few years ago that people were scoffing at the usefulness of dual-processor desktop machines, weighing the value of running multi-threaded apps and multiple apps faster against poorer performance on the vast majority of apps and games, which people were running in isolation. It doesn't seem like applications or operating systems have seen a major overhaul since that time.

More and more cores? Consumer desktops and laptops have gone up to a whopping two cores -- four cores only if you blow a wad of dough for bragging rights. Two processors is definitely not overkill for the average user, especially since most users have a browser full of Ajax-ridden web pages open 24/7. I doubt that four cores will be overkill, either, once we start to realize all the various ways we've crippled applications to make them well-behaved citizens of the vanishing single-core desktop. The massiv

I'd love to say, "Core 1, you will convert DVDs (or mp3s, or some other processor-intensive task). Core 2, run everything else."

You can, with processor affinity [wikipedia.org]. Unless you're saying your processor isn't dual-core. Even still you can just set your process priority / nice level (whatever OS you run) so that it's a lower priority so your other programs run OK.
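On Linux, the affinity part of this is scriptable from Python via `os.sched_setaffinity` (a real but Linux-only API; other OSes expose different calls, e.g. an affinity mask API on Windows). A minimal sketch that pins the current process to one core and then undoes it:

```python
import os

# Linux-specific: pin this process to a single allowed core,
# verify it took effect, then restore the original CPU mask.
original = os.sched_getaffinity(0)   # 0 means "the calling process"
one_cpu = {min(original)}            # pick one CPU we may legally use
os.sched_setaffinity(0, one_cpu)     # "Core N, run everything of mine"
assert os.sched_getaffinity(0) == one_cpu
os.sched_setaffinity(0, original)    # undo the pinning
```

The priority half of the suggestion is just `os.nice(n)` on POSIX systems (or the `nice`/`renice` shell commands).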

I don't know, I haven't owned a computer since 2003 where the processor was really a bottleneck anyway. Unless you're doing something specific like converting media files or running a distributed application (seti, folding, etc.) then normally the bottleneck is disk access

Actually, Itanium was a fairly good idea. That it didn't work out could just as well be ascribed to politics and real-world issues as to technical ones. For example, the requirements said it should be able to run x86 unmodified (why? if you want x86 you already know where to get it, right?). It was oversold (the next desktop processor) and underperformed (late delivery, bad performance). None of these issues indicate that explicitly parallel instruction computing (EPIC) is a bad idea. And they certainly h

There's a wide variance in what "parallel computing" means. For multicore, you've essentially just got a cheaper version of SMP (symmetric multiprocessing). This is worlds away from what occurs in a parallel computer and what most parallel programming algorithms deal with. With multicore and SMP you program mostly like you're doing multithreading on a single CPU. The algorithms programmers have to deal with here involve concurrency, and have been in use for decades by anyone writing an OS or device driver

This is very, very wrong. Data-set partitioning is certainly one way of achieving parallelism in programming, but it is hardly the only way- nor is it applicable to all domains, as many problems have solutions with too many inter-cell data dependencies. In addition, threads provide a wealth of benefits to application developers by allowing multiple unrelated tasks to be performed simultaneously.

There is, and will always be, overhead associated with parallelization. It may sound great to say "oh, we can farm out parts of this data set to other cores!", but that requires a lot of start-up and tear-down synchronization. It's not at all uncommon for overall performance to be improved by doing something *unrelated* at the same time, requiring less synchronization overhead.

Are threads perfect for everything? No. But calling them the second worst thing to happen to computing is, at best, disingenuous.

There is, and will always be, overhead associated with parallelization. It may sound great to say "oh, we can farm out parts of this data set to other cores!", but that requires a lot of start-up and tear-down synchronization.

I think what you meant to say is that with OS threads it requires a lot of effort and overhead. For example, on Tera/Cray's MTA it took basically no extra overhead at all to run a loop in parallel over N hardware threads. The only 'hard' part was letting the compiler know which loops to do in parallel.

The problem with OS threads is that the things that benefit the most from parallel processing are the finest-grained, but OS threads are only usable for the coarsest-grained problems. So, OS threads are

Yes it is something the app developer needs to deal with. The problem isn't creating threads- the OS does that. The problem is in protecting data from concurrent unsynchronized access (this needs to be done by the app, as the OS has no clue what can/can't be accessed synchronously, that's an application detail) and parallelizing algorithms (again, not something the OS can do).

Eventually, good parallel algorithm libraries will pop up. That will help some subset of problems. I'd expect frameworks to pop up as well, helping another. But in many cases it just comes down to changing how we write programs.

And you're right, this isn't really a desktop issue; it's mainly a server one. Desktops really don't need all the power they have now; perhaps one percent of users outside of gamers actually use it. That doesn't make it any less important a problem to solve. Although I expect in the end people will still end up disappointed: parallelization is not magic pixie dust, you can only get so much of a speedup. I wouldn't be surprised if those 8 cores only give a 2x speedup over a single core on many apps.

A lot of multi-threading up until now has been about keeping applications responsive, rather than breaking up tasks. That makes sense, since multi-core chips haven't been around that long in most people's homes. Another issue is that once you have more than one processor, two threads really can run at the same time, which can show up all kinds of bugs you would never notice on a single-core system. The main problem I can see is with testing for errors. With multiple threads it's up to the OS how it juggles t

The parent is exactly how I would have replied a couple of years ago. I was doing lots of threading work, and I found it easy to the point of being frustrated with other programmers who weren't thinking about threading all of the time.

I was wrong in two ways:

1. It's not that easy to do threading in the most efficient way possible. There's almost always room for improvement in real-world software.

2. There are plenty of programmers who don't write thread-safe/parallel code well (or at all) that are still quite useful in a product development context. Some haven't bothered to learn and some just don't have the head for it. Both types are still useful for getting your work finished, and, if you're responsible for the architecture, you need to think about presenting threading to them in a way that makes it obvious while protecting the ability to reach in and mess with the internals.

The first point is probably the most important. There are several things that programmers will go through on their way to being decent at parallelization. This is in no strict order and this is definitely not a complete list:

- OpenMP: "Okay, I've put a loop in OpenMP, and it's faster. I'm using multiple processors!!! Oh.. wait, there's more?" Now, to be fair, OpenMP is enough to catch the low-hanging fruit in a lot of software. It's also really easy to try out on your code (and can be controlled at run-time).

- OpenMP 2: "Wait... why isn't it any faster? Wait.. is it slower?" Are you locking on some object? Did you kill an in-loop stateful optimization to break out into multiple threads? Are you memory bound? Blowing cache? It's time to crack out VTune/CodeAnalyst.

- Traditional threading constructs (mutexes, semaphores): "Hey, sweet. I just lock around this important data and we're threadsafe." This is also often enough in current software. A critical section (or mutex) protecting some critical data solves the crashing problem, but it injects the lock-contention problem. It can also add the cost of round-tripping to the kernel, thus making some code slower.

- Transactional data structures: "Awesome. I've cracked the concurrency problem completely." Transactional mechanisms are great, and they solve the larger data problem with the skill and cleanliness of an interlocked pointer exchange. Still, there are some issues. Does the naive approach cleanly handle overlapping threads stomping on each other's write-changes? If so, does it do it without making life hell for the code changing the data? Does the copy/allocation/write strategy save you enough time through parallelism to make back its overhead?

Should you just go back to a critical section for this code? Should you just go back to OpenMP? Should you just go back to single-threading for this section of code? (not a joke)
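The "just lock around this important data" stage from the list above is worth seeing concretely. A minimal Python sketch (the counter and thread counts are arbitrary): without the lock, the read-modify-write on the shared counter could interleave and lose updates; with it, the result is deterministic, at the price of contention on that one lock.

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        with lock:         # critical section: one thread at a time
            counter += 1   # read-modify-write, now safe from peers

threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 40_000   # deterministic: no lost updates
```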

Perhaps as processors get faster by core-scaling instead of clock-scaling this will become less of a dilemma, but to say that "[to do multi-threaded programming effectively] is not that difficult" is akin to writing your first ray-tracer and saying that 3D is "not that difficult." Sometimes it is. At least at this point there are places where threading effectively is a delicate dance that not every developer need think about for a team to produce solid multi-threaded software.

That doesn't mean that I object to threading being a more tightly-integrated part of the language, of course.

As you know, multiple threads in a program do not actually execute concurrently - processing is still serial, it's just so fast that threads can appear to execute simultaneously - and it's not just about queuing execution either.

That holds only for multithreaded programming on a single core. As soon as there are multiple cores available, processing does, in fact, happen simultaneously.

The difference between parallel programming and multithreaded programming is this... with a parallel algorithm, different parts of one task/thread are done on separate CPUs, whereas with multithreaded programming each one thread/task is done entirely on one processor.

Wait... what? "Different parts of one thread are done on separate CPUs"?

In what (real world, non-research) system is a single thread run on multiple processors at the same time? And why are you claiming that running each thread on a single p

On modern systems, threads are themselves first-class constructs, and it runs somewhat like this:

A process has things like memory-tables for virtual memory, handles for objects, files, socket connections, etc. A process always contains at least one thread (this isn't always true while a process is being set up or torn down, but it's true when most anyone's code is running).

A thread generally has a stack (in the host-process's virtual address space, so everyone can read it), some thread-local storage to make life easier for some api's (you don't need to care about this in most cases), and lives in a process. This means that threads can use virtual addresses for memory interchangeably with other threads in the same process.

Additionally, some operating systems support fibers. A fiber is like a thread except that it has to be explicitly or cooperatively (not quite the same thing) multi-tasked. Fibers use even less memory than threads, and you really don't have to care about them.

When you're in, say, Visual Studio, there's a "threads" window for all of the threads of the process that you are debugging. You can end up stepping through code on one thread while other threads are running.

The modern hardware designs lead to interesting performance side-effects from cache and memory locality. It's not quite as hard as systems that have asymmetric access to resources (e.g. Playstation 2), but it makes for fun work.

Not in, say, Linux or Plan 9. Contexts of execution are first-class constructs there, and both threads and processes are special cases of COEs.

A process has things like memory-tables for virtual memory, handles for objects, files, socket connections, etc. A process always contains at least one thread (this isn't always true while a process is being set up or torn down, but it's true when most anyone's code is running).

When the first AMD X2 chips came out, the Linux kernel had issues with the clock on those chips. The clock would be several times (presumably 2 times?) faster than it should be; the cores' clocks were not synchronized for some reason, or the kernel would lose track... When you typed a letter it would repeat multiple times, as you described. :)

I've had the same problem on single processor machines running Linux for years. I don't know if it's a problem of the keyboard repeat rate being set too low, or something else to that effect, but I notice that a lot of the time on my Linux machines it seems to double/triple type a lot of letters.

I remember learning to write software for OS/2 back in the early 90's. Multi-threaded programming was *the* model there, and had it been more popular, it would be pretty much standard practice today, making scaling to multiple cores pretty effortless, I'd think. It's a shame that the single-threaded model became so ingrained in everything, including linux. For an example that comes to mind, why do I need to wait for my mail program to download all headers from the IMAP server before I can compose a new message on initial startup? Same with a lot of things in firefox.

I was working at a very small software shop when OS/2 came out. We would get a customer, who wanted something to work on an apollo workstation, another one wanted it for xenix, a third for Unix BSD 4.2 (my favorite), or Unix System V (ugh!), or Dos. So, we got a project to port something to OS/2 version 1.0, and I got it to work, and it used multi-threading which I thought was pretty cute and I was proud of myself for figuring it all out just from the manuals. Then the new revision of OS/2 came out and e

It's a shame that the single-threaded model became so ingrained in everything, including linux. For an example that comes to mind, why do I need to wait for my mail program to download all headers from the IMAP server before I can compose a new message on initial startup?

I'm of the opposite opinion; it's a shame that so many people equate parallel processing with threads. When there's not much shared data, using multiple processes keeps memory protection between your parallel "things", decreasing coupling, increasing isolation, and generally resulting in a more stable system (and for certain things where you can avoid some cache coherency problems, a faster system). Your example is perfect; there's really no good reason to use a thread for such lookups. Another process would do, or even better just use select() and avoid all the pain (and bugs) of a multithreaded solution.

OS developers spent a lot of engineering time implementing protected memory. Threads throw out a huge portion of that; a good programmer won't do that without very good reasons. Some tasks, where there really are tons of complicated data structures to be shared, are good candidates for threading. More commonly, though, threads are used either because the programmer doesn't know any better or because they allow you to be a slacker about defining exactly what is shared and mediating access to it. The latter is especially dangerous; defining exactly what (and how) things are shared goes most of the way toward eliminating multiprocessing bugs, and threads make it easy to slack off on that and get a "mostly working" solution that occasionally deadlocks, fails to scale, etc.

Use processes or state machines when you can, and threads when you must.

I'm of the opposite opinion; it's a shame that so many people equate parallel processing with threads.

I read that and wished I had mod points. Anyone who has programmed with a language designed for concurrency, like Erlang, Termite, or a few Haskell dialects hates using threads. Threads are something that two kinds of people should use; operating system designers and compiler writers. Everyone else should be using a higher-level abstraction.

The big problem is not the operating system designers, it's the CPU designers. They integrated two orthogonal concepts, protection and translation, into the same mechanism (page tables, segment tables, etc). The operating system wants to do translation so it can implement virtual memory. The userspace program wants to do protection so it can use parallel contexts efficiently. Mondrian memory protection would fix this, but no one has implemented it in a commercial microprocessor (to my knowledge).

Ugly, clumsy and complicated compared to Java's way. I know how to do threading in C++ on every platform I have used for development. It's just that the modern languages have an elegant system, with forethought given to threading while designing the platform/language. Why would anyone new want to learn how to do clumsy non-standard threading in C++? I think the options are to adapt, or continue riding the dinosaurs until they die out and be left behind. Sorry that I am sending mixed signals. I have always worked

The only significant thing that managed languages make easier with regard to multithreading, other than a more intuitive API, is garbage collection, so that you don't have to worry about using reference counting when passing pointers between multiple threads.

All of the same challenges that exist in C/C++ such as deadly embrace and dining philosophers still exist in managed languages and require the developer to be trained in multi-threaded programming.

Some things can be more difficult to implement like semaphores. You also have to be careful about what asynchronous methods and events you invoke because those get queued up on the thread pool and it has a max count.

I would say managed languages are "easier" to use but to be used effectively you still have to understand the fundamental concepts of multithreaded programming and what's going on underneath the hood of your runtime environment.

Garbage collection is a one size fits all solution, that is not appropriate for all the applications in the C++ problem space. Further there is a lot of C++ code already out there that does its own memory management. It would be difficult to retrofit this code to garbage collection.

Furthermore, many garbage-collected languages lack proper destructors. At best they have a finalize method. This interferes with the C++ idiom "object creation is

Yeah, but they do it really slowly 'cos you're tied to the framework that has to do it safely no matter what - even if you have 2 threads that never interact with each other, the framework will slap synchronisation all over them anyway. (I know - I had a discussion with a chap about C# thread-safe singleton initialisation. A simple app to test performance on my little laptop had a static initialised singleton taking 1.5 seconds, lock-based initialisation in 6 seconds. No big deal, we expect that, but then I

Some algorithms are inherently not amenable to parallelization. If you have eight cores instead of one, then the performance boost you can get can be anywhere from eight times faster to none at all.

So far, multiple cores have boosted performance mostly because the typical user has multiple applications running at a time. But as the number of cores increases, the beneficial effects diminish dramatically.

In addition, most applications these days are not CPU bound. Having eight cores doesn't help you much when three are waiting on socket calls, four are waiting on disk access calls and the last is waiting for the graphics card.

In addition, most applications these days are not CPU bound. Having eight cores doesn't help you much when three are waiting on socket calls, four are waiting on disk access calls and the last is waiting for the graphics card.

Processors don't "wait" on blocked IO calls. Your program waits while the processor switches to another task. When the processor switches back to your program, it checks to see if the blocked IO call has completed. If it has, it continues executing your program again. If not, your program continues to wait while the processor again switches to other tasks. So it is you (as the programmer) that determines if your program just sits and waits for blocked IO to complete. Or you could spawn a thread for blo
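The "spawn a thread for blocking IO" idea is simple to demonstrate. A Python sketch (the 0.2 s sleep is a stand-in for a blocking socket or disk call; the worker count is arbitrary): eight blocking waits overlap in a thread pool instead of running back to back.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io(i):
    time.sleep(0.2)        # stand-in for a blocking socket/disk call
    return i

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(fake_io, range(8)))
elapsed = time.perf_counter() - start

assert results == list(range(8))
# Eight 0.2 s waits overlap instead of summing to 1.6 s of wall time.
assert elapsed < 1.0
```

This is exactly why IO-bound programs can benefit from concurrency even on a single core: the threads spend their time blocked, not computing.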

Processors don't "wait" on blocked IO calls. Your program waits while the processor switches to another task.

That's his *point*, nimrod. Processors don't wait on blocked I/O calls, but processes do. Therefore, having umpteen processors doesn't do you much good if there are no processes ready to run because they're all waiting on something.

The bigger problem is that which the article mentioned: that programmers don't know how to take advantage of the parallel cores. There are two major parts to this:
1) Just because a given algorithm can't be implemented multi-threaded, doesn't mean there isn't another algorithm that does the same thing that can. So part of it is learning new ways of doing old things, or inventing new ways of doing things (we haven't discovered every possible algorithm).

So far, multiple cores have boosted performance mostly because the typical user has multiple applications running at a time. But as the number of cores increases, the beneficial effects diminish dramatically.
They diminish, but they never disappear. Even in algorithms where you completely have to wait for the results of previous computation to go on, you can still get a speedup with branch prediction. In essence, while your one core is crunching the numbers, other cores do the what-if work, and even if you mispredict in lots of cases, you can still get speedups with large datasets, because in some cases, when your first core comes up with a result, you will discover that the what-if computation started out with a right guess.
Hey, I hear they are doing essentially the same stuff with all those newfangled superscalar processors and branch prediction anyway.
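The what-if idea can be sketched in a few lines of Python (the predicate and branch functions are toy placeholders): while the condition is being computed, evaluate both branches in parallel and keep whichever one the condition selects. A real implementation would also cancel the losing branch rather than let it finish.

```python
from concurrent.futures import ThreadPoolExecutor

def speculate(predicate, if_true, if_false, x):
    """While the (possibly slow) predicate runs on one worker,
    evaluate BOTH branches on others; keep the selected result."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        cond = pool.submit(predicate, x)
        t = pool.submit(if_true, x)
        f = pool.submit(if_false, x)
        return t.result() if cond.result() else f.result()

assert speculate(lambda x: x > 0, lambda x: x * 2, lambda x: -x, 5) == 10
assert speculate(lambda x: x > 0, lambda x: x * 2, lambda x: -x, -4) == 4
```

Note the cost: half the speculative work is always thrown away, which is the trade-off hardware branch predictors make too.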

Your post is so vague that it is pointless. You might want to be more specific, because otherwise I could easily just say "no you can't" and I'd be just as right.

Grandparent post was specific: shared cache

You don't have to be an ass if you want specifics; you could just ask. In this case, I think GPP is referring to the upfront cost of memory access. Once one core pays that price (think of it as a fixed cost), the other cores will be able to access the memory from the shared cache without having to pay t

For now the biggest advantage of multiple cores is the ability to run multiple applications with each running at full speed. Within each application the problems get a lot more complex; using current algorithms, many tasks are not easily subdivided. With data that is inherently parallelizable it's pretty easy - each pixel on your display is relatively independent of the others and drawing on a common dataset. However, the majority of other areas are not so easy. Generally, how do you take an algorithm and di

Most of the jobs being created are not for achieving maximum speed but standards compliance. Companies want software which is easy to maintain & portable, but not necessarily the fastest. If it still was 1997 there would probably be ubiquitous implementations for SMP & vectored assembly language, but that's not the focus anymore.

There is currently no working concurrency model for standard C++. You want to make an atomic access to an object? Hope and pray that you have bug free system libraries and a compiler that doesn't optimize away your locking wrappers and do inappropriate speculative stores. Apparently the next C++ standard will address it, but it seems rather foolish to start a transition to massively multithreaded code without an actual standard.

Wait a second! Have you ever coded in C++? Even if threads are not in the standard library, you have Boost, you have Intel's TBB (Threading Building Blocks), besides the native threading library. Do you trust your library in Java? What if the VM screws everything up? As for the compiler "optimizing" everything, there is a little keyword, volatile, that just tells the compiler not to optimize memory access for that variable. I think the real problem is working in a new programming paradigm: have a problem with

"processors with more than eight cores, possible as soon as 2010 -- will transform the world of personal computing"

Exactly what areas of "personal computing" are requiring this horsepower? The only two that come to mind are games and encoding video. The video encoding part is already covered - that scales nicely to multiple threads, and even free encoders will use the extra cores to their full potential. That leaves gaming, which is basically proprietary. The game engine must be designed so that AI, physics, and other CPU-bound algorithms can be executed in parallel. This has already been addressed.

So this raises the question: exactly how will the average consumer benefit from an OS and software that can make optimum use of multiple cores, when the performance issues users complain about are not even CPU-bound in the first place?

You point out that few desktop tasks require parallel processing... but think about the flip-side of this: if we could speed-up many tasks, how would that affect desktop computing?

There are plenty of tasks that people do routinely on computers that are not "instantaneously" fast (spreadsheets, photo-editing, etc.). Furthermore there are many aspects of modern user interfaces that would be better if they were faster (generating thumbnail previews, sorting entries, rescanning music collections, searching,

Exactly what areas of "personal computing" are requiring this horsepower?

Video, audio, gaming, emulators, and VMs are starters. But I think you're missing some of the picture. Most computer users have one or two programs open at a time and end up quitting everything when they want to run something processor intensive like a game or photoshop. With the move towards multi-core and with a little work from developers, people might be able to leave 90% of the apps they use running, all the time. Multiple cores also provides something of a buffer. When a thread goes rogue, their ma

Well, you could parallelize recalculation of large spreadsheets. Create dependency trees for cells and split the branch recalculations among different threads. Some accountants and executives with large "what-if?"-type spreadsheets could find that quite useful. Browsers could have separate threads for data transfer and rendering. If the web site is using <div> tags and CSS, you could split the rendering work for each div to a separate thread. More rapid and frequent partial screen updating can provide today's gen
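The spreadsheet idea is a nice small example of dependency-driven parallelism. A Python sketch (the four-cell sheet and its formulas are invented for illustration; `graphlib` is in the standard library from Python 3.9): topologically sort the dependency graph, then recalculate each level of independent cells in parallel.

```python
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

# A toy spreadsheet: cell -> (formula, cells it depends on).
sheet = {
    "A1": (lambda v: 2, ()),
    "A2": (lambda v: 3, ()),
    "B1": (lambda v: v["A1"] + v["A2"], ("A1", "A2")),
    "B2": (lambda v: v["B1"] * 10, ("B1",)),
}

def recalc(sheet):
    """Walk the dependency graph level by level; cells in the same
    level are independent, so each level is computed in parallel."""
    ts = TopologicalSorter({cell: deps for cell, (_, deps) in sheet.items()})
    ts.prepare()
    values = {}
    with ThreadPoolExecutor() as pool:
        while ts.is_active():
            ready = list(ts.get_ready())
            results = pool.map(lambda c: sheet[c][0](values), ready)
            for cell, result in zip(ready, results):
                values[cell] = result
            ts.done(*ready)
    return values

assert recalc(sheet) == {"A1": 2, "A2": 3, "B1": 5, "B2": 50}
```

The available parallelism is bounded by the width of each dependency level, which is why long chains of dependent cells gain nothing.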

I agree with some of the previous posters that have faulted programmers for "the state of today." My feeling is that the divide between knowledge of hardware and knowledge of software is far too wide. In my experience, I have witnessed many programmers who spent more time organizing the readability of their code than analyzing the actual effectiveness of it: i.e. whitespace use vs algorithm optimization (be it processor method + instruction or I/O improvement). The end result: bloaty-pooh. I feel that by

For many large-scale software projects (I work in industry so I have some experience with this) it is far easier to find more cpu power than more programmers.

Making code easy to read and maintain is critical to maximizing the efficiency of the programmer. The efficiency of the code is generally a secondary issue, and is only a factor if the code in question is found to be a bottleneck.

Brian Kernighan once said,

"Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"

As someone who got a master's in computer science with a focus in high performance computing / parallel processing, and have taught on the subject, *yes*, it does take a bit of work to wrap one's mind around the concept of parallel processing, and to correctly write code with concurrency. But *no*, it's not really that hard. Once you get used to the idea of having computation and communication cycles over a processor geometry, it becomes little more difficult to write parallel code than serial. It's like of

I think the one thing that makes parallel computing more difficult, and quite a bit more so than recursion, is the fact that it makes your program non-deterministic. With a single-threaded application, it's pretty obvious when you've made your application non-deterministic: you reference the time or some resource external to your application. And those kinds of non-deterministic behaviors are much easier to understand; they're mostly just data. But if your application is running on multiple processors usi
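The classic concrete case of that non-determinism is a data race on a shared counter: two threads incrementing it without a lock can lose updates and give a different answer on every run. Here is a sketch of the lock-guarded version, using the draft-standard `std::thread`/`std::mutex`; the names and iteration count are illustrative:

```cpp
#include <mutex>
#include <thread>

// Two threads increment a shared counter. Without the mutex the result is
// non-deterministic (a classic data race); with it, the answer is always
// exactly 2 * kIters.
constexpr int kIters = 100000;

long counted_with_lock() {
    long counter = 0;
    std::mutex m;
    auto work = [&] {
        for (int i = 0; i < kIters; ++i) {
            std::lock_guard<std::mutex> guard(m);
            ++counter;
        }
    };
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    return counter;
}
```

Delete the `lock_guard` line and the program still compiles and usually still "works" in testing, which is exactly what makes these bugs so hard to find.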

Full disclosure: I am a Qt developer (user); I do not work for Trolltech.

The new Qt4.4 (due 1Q2008) has QtConcurrent [trolltech.com], a set of classes that make multi-core processing trivial.

From the docs:

The QtConcurrent namespace provides high-level APIs that make it possible to write multi-threaded programs without using low-level threading primitives such as mutexes, read-write locks, wait conditions, or semaphores. Programs written with QtConcurrent automatically adjust the number of threads used according to the number of processor cores available. This means that applications written today will continue to scale when deployed on multi-core systems in the future.

QtConcurrent includes functional programming style APIs for parallel list processing, including a MapReduce and FilterReduce implementation for shared-memory (non-distributed) systems, and classes for managing asynchronous computations in GUI applications:

* QtConcurrent::map() applies a function to every item in a container, modifying the items in-place.
* QtConcurrent::mapped() is like map(), except that it returns a new container with the modifications.
* QtConcurrent::mappedReduced() is like mapped(), except that the modified results are reduced or folded into a single result.
* QtConcurrent::filter() removes all items from a container based on the result of a filter function.
* QtConcurrent::filtered() is like filter(), except that it returns a new container with the filtered results.
* QtConcurrent::filteredReduced() is like filtered(), except that the filtered results are reduced or folded into a single result.
* QtConcurrent::run() runs a function in another thread.
* QFuture represents the result of an asynchronous computation.
* QFutureIterator allows iterating through results available via QFuture.
* QFutureWatcher allows monitoring a QFuture using signals-and-slots.
* QFutureSynchronizer is a convenience class that automatically synchronizes several QFutures.
* QRunnable is an abstract class representing a runnable object.
* QThreadPool manages a pool of threads that run QRunnable objects.
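For readers without Qt at hand, the mappedReduced() pattern can be roughly approximated in plain C++ with draft-standard `std::async` futures. This is only an analogue, not QtConcurrent itself: Qt additionally manages a thread pool sized to the core count, where this sketch naively launches one task per element, and the function names are made up:

```cpp
#include <future>
#include <vector>

// Map step: an illustrative per-element function.
int square(int x) { return x * x; }

// mappedReduced() analogue: apply `square` to every element on its own
// async task, then fold (reduce) the results into a single sum.
int mapped_reduced_sum(const std::vector<int>& xs) {
    std::vector<std::future<int>> futs;
    for (int x : xs)
        futs.push_back(std::async(std::launch::async, square, x));
    int sum = 0;
    for (auto& f : futs)
        sum += f.get();   // the reduce step runs serially in the caller
    return sum;
}
```

The appeal of the QtConcurrent style is visible even in the sketch: the caller never touches a mutex, because the map step shares nothing and the reduce step is serial.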

While you did say 'almost', I'm still going to take exception with that statement.

That is a very dangerous thing to say without reams of qualifications.

Programming (of any non-trivial nature) is not currently, nor is it likely to be any time soon, a 'no-brainer'. No library, no framework, no toolset, no abstraction takes away from the core fact that programming is hard. Sure, you can take away the boring/trivial stuff and give the programmers more

It's not easy... especially since things sort of halted at 4 GHz. What on earth am I typing about? Well... picture this... limitations... yes, they do exist... and sometimes it's important to think beyond what lies just straight ahead (such as the next cycle speed) and think into a second, maybe even a third dimension to expand your communication speed. I have for over 6 years been thinking of a 3-dimensional processor that cross-communicates over a diagonal matrix instead of the traditional serial and parallel communication model. Imagine this, folks: if your code could "walk" across a matrix of 10 x 10 x 10 instead of just 8 x 8, or 64 x 64 if you want... get the picture, no? Imagine that your data could communicate on a 3-dimensional axis - imagine that you had 10 stacks of cores on top of each other - and instead of just connecting their communication bus to a parallel or a serial model, they could in fact communicate on a diagonal basis. This would make it possible to send commands, data, etc. in a 3D space rather than just a "queue". This, of course, would demand a different "mindset" of coding... everything would have to be written from scratch... but the benefits would be tremendous. You could 10-fold existing computational speed by increasing the communication across processor cores - maybe even more! Even by today's technology standards. OK, OK... sounds far-fetched to you, doesn't it? Well, get this: this was my invention 6 years ago (maybe even 9 years ago... I am getting older so I don't really care... I do care for freedom of information and sharing, not so much wealth, so listen on). The theory of what I just wrote here on Slashdot (which has more implication on your life in the future than you will ever be capable of comprehending... yes, I am full of myself, ain't I? Who cares? You don't know me)... point is...
There was once a missing brick in the idea of diagonal cross-matrix computing... with yesteryear's technology it just would not be feasible to do it... but if you have ANY understanding of what I write here (yes, I am not kidding... this may change history as we know it... and I am drunk right now... and I don't want to keep a lid on it anymore), here we go... Please think about what I just wrote, and look up Frances Hellman's lecture on magnetic materials in semiconductors... and you WILL have your 4th link in the 3-B-E-C (Base, Emitter, Collector) construction to make the Cross Matrix Processor possible. Just understand this: JoOngle invented this... Frances made it possible... YOU read it from a drunk nobody on Slashdot.org. Now... go make it real!

I have for over 6 years been thinking of a 3-dimensional processor that cross-communicates over a diagonal matrix instead of the traditional serial and parallel communication model.

Six years, and you haven't discovered all the machines built to try that? This was a hot idea in the 1980s. Hypercubes, connection machines, and even perfect shuffle machines work something like that. There's a long history of multidimensional interconnect schemes. Some of them even work.

processors with more than eight cores, possible as soon as 2010 -- will transform the world of personal computing....

Translation:

Code will get even more inefficient and bloated and require faster hardware to do the same thing you are doing now. While I'm all for better/faster computer hardware, most, if not all, Jane and Joe Sixpack users will never need supercomputer power to surf the net, read e-mail, and watch videos.

The presentation itself (OGG Theora video available here [linux.org.au]) included an interesting quote from Tim Sweeney, creator of the Unreal Engine: "Shared state concurrency is hopelessly intractable."

The point expounded upon in the presentation is that when you have thousands of mutable objects, say in a video game, that are updated many times per second, and each of which touches 5-10 other objects, manual synchronization is hopelessly useless. And if Tim Sweeney thinks it's an intractable problem, what hope is there for us mere mortals?

The rest of the presentation served as an introduction to the Erlang model of concurrency, wherein lightweight threads have no shared state between them. Rather, thread communication is performed by an asynchronous, shared-nothing message-passing system. Erlang was created by Ericsson and has been used to build a variety of highly scalable industrial applications, as well as more familiar programs such as the ejabberd Jabber daemon.

This type of concurrency really looks to be the way forward to efficient utilization of multi-core systems, and I encourage everyone to at least play with Erlang a little to gain some perspective on this style of programming.
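The shared-nothing mailbox idea isn't tied to Erlang; even in C++ you can sketch the shape of it (all names here are invented, and real Erlang processes are far lighter than OS threads). The worker owns its state outright, and the only way to affect it is to send a message:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// A tiny mailbox in the Erlang spirit: senders enqueue messages, the
// owning thread dequeues them. No other state is shared.
class Mailbox {
    std::queue<int> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void send(int msg) {
        { std::lock_guard<std::mutex> lock(m_); q_.push(msg); }
        cv_.notify_one();
    }
    int receive() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        int msg = q_.front();
        q_.pop();
        return msg;
    }
};

// Worker: accumulates message values until it sees the stop sentinel (-1).
// Its running total is local state nobody else can touch.
int run_accumulator(Mailbox& box) {
    int total = 0;
    for (int msg = box.receive(); msg != -1; msg = box.receive())
        total += msg;
    return total;
}
```

Note the one mutex lives inside the mailbox plumbing; the application code on either side of it never locks anything, which is the whole point of the model.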

For a stylish introduction to the language from our Swedish friends, be sure to check out Erlang: The Movie [google.com].

Check out this article [oreilly.com] on O'Reilly's site. Threads are actually very low-level constructs (like pointers and manual memory management). Accordingly, the future belongs to languages that eliminate threads as the basis for concurrency. See Erlang and Haskell.

The fact is that programming, by and large, has gotten lazy, shiftless, and sloppy over time, not better or faster. Programmers really did rely on processing and memory architectures getting faster to overcome their coding bottlenecks. The words "optimized code" have little or no significance in today's programming shops because of budgets. Because of the push to get stuff out the door as quickly as possible, corners are cut all over the place.

There was once a time when debugging was part of your job. Now someone else does that, and at most the better coders do some unit testing to ensure their code snippet does what it is supposed to. There generally isn't any "standard" with regard to process, except in some houses that follow *recommended coding guidelines*, but these are few and far between. Old-school coders had a process in mind to fit a project as a whole and could see the end running program. Many times now, you are to code an algorithm without any regard for, or concept of, how it might be used. A lot of strange stuff going on out there in the business world with this!

If there is a fundamental change in the base of C++ et al., this could have a detrimental effect on the employment market, as there will be many who cannot conceptualize multi-threading methodologies, much less model some existing process in this paradigm, and who will leave the market.

I left the programming markets because of the clash of bean counters vs quality, and maybe this will have a telling change in that curve. I always did enjoy some coding over the years and maybe this would make an interesting re-introduction. I have personally not coded in a multi-threading project but have the concepts down. Might be fun!

I have little hope for the C++ standards committee. It's dominated by people who think really l33t templates are really cool. Everything has to be a template feature. They're fooling around with a proposal for declaring variables atomic through something like
atomic<int> n;
This allows really l33t programmers to write really l33t code using really l33t lockless programming. But without the proofs of correctness needed to make that actually work reliably.
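For what it's worth, the kind of code such a proposal enables looks roughly like the sketch below (written against the proposed `std::atomic` interface). A plain counter is exactly the easy case where lockless code is provably correct; the critique above is about everything harder than this:

```cpp
#include <atomic>
#include <thread>

// Two threads bump a shared counter with no lock at all. fetch_add is an
// atomic read-modify-write, so no increments are lost.
constexpr int kIters = 100000;

long atomic_count() {
    std::atomic<long> n(0);
    auto work = [&] {
        for (int i = 0; i < kIters; ++i)
            n.fetch_add(1);
    };
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    return n.load();
}
```

The trouble starts when you compose several atomics into a lockless data structure: each operation is atomic, but the sequence is not, and that is where the missing correctness proofs bite.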

It's also long been Stroustrup's position that concurrency is a library problem: as long as the OS provides threads and locking, it's not a language problem. This isn't good enough.

The fundamental problem is that, as currently defined, a C++ compiler has no idea which variables are shared between threads, and which are never shared. The compiler has no notion of critical sections. Fixing this requires some fundamental changes to the language. It's known what to do; Modula, Ada, and Java all have synchronization and isolation built into the language. But there's nothing like that in C++, and the designers of C++ don't want to admit their mistakes.

It's not just a C++ problem. Python has a similar issue. Python as a language doesn't deal with concurrency adequately. The main implementation, CPython, has a "global interpreter lock" that slows the thing down to single-CPU speed.

The problem is that a C++ compiler doesn't know what data is locked, and which data items are locked by which lock, because the language has no way to talk about that subject. OS-level primitives lock everything. The compiler has a hard time telling which data needs concurrency protection. Thus, the compiler can't diagnose race conditions.

If the language understood locking, one could do more checking at compile time.
One could ta

I know that languages like Erlang and Haskell are better for concurrent programming than more traditional languages. However, so far they have not been as popular as more traditional languages.

Will the new world of concurrency cause a shift in language popularity? Or will traditional languages remain more popular, perhaps with some enhancements? C++ is gaining concurrency enhancements; C++, Python, and many other languages work well with map/reduce systems like Google MapReduce; and even with no enhancements to the language, you can decompose larger systems into multiple threads or multiple processes to better harness concurrency.

If you know Haskell and Erlang, please comment: do those languages bring enough power or convenience for concurrency that they will rise in popularity? People grow very attached to their familiar languages and tools; to displace the entrenched languages, alternative languages need to not just be better, they need to be a lot better.

"Multi-CPU systems started becoming common in the mid 1990s, so developers being a decade behind the times is a little embarrassing, and there are many situations where the task is not completely serial."

So after a decade of poor adoption on the part of software developers, the chip makers have ignored the fact that the wisdom of the (programming) mob indicates that multi-processing is not an attractive solution. Chip makers have known for more than two decades that they were going to run into physical limits

Core one: the OS
Core two: anti-virus
Core three: anti-spyware / Windows Defender
Core four: firewall
Core five: Windows Update notifications and installations
Core six: Windows Genuine Advantage checks
Core seven: eye candy (Vista); with XP you get a bonus CPU
Core eight: whatever the user wants to run, except when you get a virus; then you have to share it with the spam bot.

When I first started programming, in BASIC on an Apple ][ (not IIe), I remember being baffled by the fact that the computer did not operate with multiple concurrent streams [blogspot.com]. To me, this seemed the point of making something that was "more than a calculator," and the only way we would be able to do the really interesting stuff with it.

When I first started writing object-oriented code, I was somewhat dismayed to find that OO was an extension of the same ol' linear programming. It seemed to me that objects should be able to exist as if alive and react freely, but really, they were just a fancy interface to the linear runtime. Color me disappointed yet again.

It's an important paradigm shift [chrisblanc.org] to recognize parallel computing. Maybe when the world realizes the importance of parallel computing, and parallel thinking, we'll have that singularity that some writers talk about. People will no longer think in such basic terms and be so ignorant of context and timing. That in itself must be nice.

Sutter's article hits home with all of this. His conclusion is that efficient programming, and elegant programming that takes advantage of, not conforms to, the parallel model is the future. Judging by the chips I see on the market today, he was right, 2.5 years ago. He will continue to be right. The question is whether programmers step up to this challenge, and see it as being as fun as I think it will be.

I see a lot of comments indicating that all a programmer needs to do to scale to more cores is just multithread your algorithms. If only that were true! Unfortunately, memory access patterns become extremely important for getting good performance, and that requires some pretty sophisticated knowledge about the hardware and proper tuning is almost a black art. Once large numbers of cores are in use, scaling your software optimally is going to be very difficult. Don't delude yourself. Talented programmers are going to be very much in demand, and I suggest starting to learn everything you can about it now. For starters, Ulrich Drepper has written an incredibly detailed and helpful article available at http://people.redhat.com/drepper/cpumemory.pdf [redhat.com] which should really help dispel any notions that this change to computing is going to be easy!

Perhaps I am the only person who thinks this, but it seems to me that threads are not a very good low-level primitive for concurrent programming. They inherently assume that whatever is running on the different processors is independent. As a result, writing a tightly coupled parallel algorithm is "hard".

I would much rather the operating system switch 4 or 16 synchronized cores completely over to me. Add prefixes to the assembly instructions so that I can explicitly execute instructions on processor 1, 2, 3, etc, in a shared memory model. Add logic similar to simultaneous multithreading to keep unused cores saturated with instructions from other threads when possible. This would help the programmer extract parallelism from tightly coupled algorithms. There seems to be no real multithreaded analogue to assembly language, and I think that is a big part of the problem. If we had such a thing it would be much easier to write tightly coupled parallel code, and higher level parallelization (from compilers) would follow inevitably.

Of course I'm not saying this is some sort of magic bullet. We would still need to split up computations and use threads as best as possible, but I think this is an obvious tool that we are missing.

Matlab isn't that smart; you still have to tell it that the for loop is parallelizable, for example. I might be wrong, but I don't think Java or C# do either. Their frameworks/VMs supply APIs for multi-threading; you simply call into them for the support you need. C has had pthreads for a long time (since it was standardized?); for some reason the C++ committees never agreed on an implementation.

My current major language (Igor Pro) will use all the cores automatically; how many languages multithread this way? Matlab(?), Octave(?)
LabVIEW, by its very nature [which is graphical, based on "G", the "Graphical" programming language], is kinda/sorta topologically self-threading: if a piece of LabVIEW code sits off in its own connected component, then [more or less] it gets its own thread.

Of course, all your ".h" & ".c" [or ".cc"] files [& their innards] might very well break down int

"But seriously, isn't the OS responsible for the heavy lifting with regards to task scheduling and concurrency? Oh, wait, this is Microsoft, right? Perhaps this is similar to their take on Security being somebody else's problem."

Huhhh? My guess is that you never wrote any code. Linux doesn't do any more heavy lifting for you than Windows does; I doubt that OS X does. So what are you talking about? An OS will never figure out what part of your program is going to need to be in which thread. A compiler MAY at som

Why should programmers be knocked for only using the tools they have? It's a lot easier to develop for something if you can actually get your hands on it. When this "nifty but underutilized" sort of hardware gets out there where everyone can use it, perhaps the problem will sort itself out. Not everyone has the resources to test their ideas in this area.

If you're going to knock anyone, knock the professors for ignoring this area of research and not capturing the attention of their students who are now "subst

I've read that, and the ideas it links to. What you are proposing is that everything be converted to a form of multi-branch pipeline programming, correct? That is, think of a standard Unix pipe. Then imagine a process having multiple inputs/outputs, where each output can be connected to an input on a different process. So once the base modules are done, programming would be a matter of connecting various inputs and outputs, like designing an electronic circuit.

I can see how this would help

The cure for all of parallel programming's problems (deadlocks, priority inversion, etc.) is the Actor model: each object is a separate thread, and calling a method does not invoke code; it only puts a request in the message queue of the called object. Then the thread behind the object wakes up and processes the requests.

If an object wants a result from another object, it obtains a future value that represents the result of the computation when it will be ready. When the caller wants the actual val
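The future-value half of that model already has a rough analogue in draft-standard C++ (`std::async`/`std::future`; the function names here are invented): the caller gets a handle back immediately and blocks only at the moment it demands the result.

```cpp
#include <future>

// Illustrative "expensive" computation standing in for a method call
// dispatched to another object's thread.
int expensive_sum(int a, int b) { return a + b; }

int call_with_future() {
    // The call returns immediately with a future, not the answer.
    std::future<int> fut = std::async(std::launch::async, expensive_sum, 40, 2);
    // ... the caller is free to do other work here ...
    return fut.get();  // blocks only if the result isn't ready yet
}
```

A full Actor system also needs the per-object message queue; the future is just the return channel.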

Actually, according to the latest Dr. Dobb's, Herb is the *chair* of the ISO C++ standards committee. (He had an article on lock hierarchies being used to avoid deadlock.)

He's really going to know what he's talking about, then.

As chair of the committee, I'd say there's a pretty fair chance that he *does*.

I really love people who bash things just because Microsoft is involved. Contrary to what seems to be a popular belief here, they have some incredibly intelligent people who are very good at what they do there.

I've always thought that; didn't realise there was a law for it. People used to optimise everything way back when, but now I suspect that most people just let the faster processor take care of things rather than trying to squeeze every nanosecond of performance out of their apps. :( At least graphics are still getting faster, just because they're adding more parallel processors to the chips.

People used to optimise everything way back when, but now I suspect that most people just let the faster processor take care of things rather than trying to squeeze every nanosecond of performance out of their apps. :(

Thank God for that.

I'm glad that coders today can use high-level tools and languages without having to spend half their time on performance tweaking.

Take as an example a game like Halo (or Guitar Hero, or World of Warcraft, or whatever your favorite modern game is). If the developers of these

...while all the clever folks have already started writing their scalable applications in something reasonable, like Erlang?

From the Erlang site:

1.4. What sort of problems is Erlang not particularly suitable for?

People use Erlang for all sorts of surprising things, for instance to communicate with X11 at the protocol level, but there are some common situations where Erlang is not likely to be the language of choice.

The most common class of 'less suitable' problems is characterised by performance being a prime requirement and constant-factors having a large effect on performance. Typical examples are image processing, signal processing, sorting large volumes of data and low-level protocol termination.