For the past year I have been working a lot on concurrency in Java and have built and worked on many concurrent packages, so in terms of development in the concurrent world I am quite confident. I am also very interested in learning and understanding more about concurrent programming.

But I am unable to answer for myself: what next? What else should I learn or work on to gain more skills related to multi-core processing? Is there any good book (I read and enjoyed 'Concurrency in Practice' and 'Concurrent Programming in Java') or other resource related to multi-core processing that would take me to the next level?

6 Answers

Since you've read Doug Lea's and Brian Goetz's books then you've definitely covered the best material out there to date.

Going forward, there are the new concurrency enhancements in Java 7, most notably the Fork/Join framework and the new asynchronous NIO APIs.
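To give a feel for the Fork/Join style, here is a minimal sketch of a divide-and-conquer sum (class name, threshold, and data are my own choices, not from any particular tutorial):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Recursively splits the range until chunks are small enough to sum directly;
// the pool's work-stealing scheduler spreads the subtasks across cores.
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] data;
    private final int lo, hi;

    SumTask(long[] data, int lo, int hi) {
        this.data = data;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(data, lo, mid);
        SumTask right = new SumTask(data, mid, hi);
        left.fork();                            // run the left half asynchronously
        return right.compute() + left.join();   // compute the right half here, then join
    }
}

public class ForkJoinDemo {
    public static void main(String[] args) {
        long[] data = new long[100_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum); // 4999950000
    }
}
```

The key idea is that fork/join tasks are much lighter than threads, so you can decompose a problem far more finely than one-thread-per-chunk would allow.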

Java 8 will introduce further concurrency improvements with lambdas/parallel collections.
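For reference, the shape those parallel collection operations eventually took in Java 8 looks like this (a trivial example of my own, not an official one):

```java
import java.util.stream.LongStream;

public class ParallelStreamDemo {
    public static void main(String[] args) {
        // Sum 1..1_000_000 with the work split across the common Fork/Join pool;
        // the lambda-based pipeline is the same whether sequential or parallel.
        long sum = LongStream.rangeClosed(1, 1_000_000)
                             .parallel()
                             .sum();
        System.out.println(sum); // 500000500000
    }
}
```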

Another thing to seriously look at is alternative ways of dealing with concurrency. To be blunt, Java's 'lock mutable objects' approach is always going to be error-prone, no matter how much the APIs are improved. So I recommend looking at Scala's actor model and Clojure's STM as alternative ways to deal with concurrency issues whilst maintaining interoperability with Java.
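The core idea behind actors can be sketched even in plain Java: all mutable state is confined to one thread that drains a mailbox, so no locks are needed. This is only an illustrative toy (names are mine; real actor libraries add supervision, routing, distribution, and so on):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal actor-style counter: messages are Runnables processed one at a time
// on a single thread, so 'count' never needs synchronization.
public class ActorSketch {
    private final BlockingQueue<Runnable> mailbox = new LinkedBlockingQueue<>();
    private final ExecutorService loop = Executors.newSingleThreadExecutor();
    private int count = 0; // touched only by the actor's own thread

    ActorSketch() {
        loop.submit(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    mailbox.take().run(); // process messages strictly in order
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
    }

    void send(Runnable msg) { mailbox.add(msg); }

    public static void main(String[] args) throws Exception {
        ActorSketch actor = new ActorSketch();
        for (int i = 0; i < 1000; i++) actor.send(() -> actor.count++);
        // 'Ask' pattern: request the current count as a message, reply via a future.
        CompletableFuture<Integer> reply = new CompletableFuture<>();
        actor.send(() -> reply.complete(actor.count));
        System.out.println(reply.get()); // 1000
        actor.loop.shutdownNow();
    }
}
```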

Ha, thanks a lot for the book :). Can you also please suggest some other good books, as your recommended book isn't available here (local edition) in India? PS: Concurrency in Practice is a gem of a book.
– Jatin, Sep 13 '11 at 15:52

@Martijn, Neat! I have been curious about Groovy and Scala for a while now and wanted to play around with it to learn more. Is your book geared towards beginners in these languages or does it assume prior experience?
– maple_shaft♦, Sep 13 '11 at 19:29

@Jatin Puri - I really don't know any other titles beyond 'Concurrency in Practice' and 'Concurrent Programming in Java'; there is Henry Wong's 'Java Threads' O'Reilly title, but that's about it.
– Martijn Verburg, Sep 14 '11 at 13:10

The D programming language provides two paradigms for concurrent programming, both of which have their uses and are rather interesting.

std.concurrency provides message passing with no default memory sharing. All global and static variables in D are thread-local by default, and spawn and send do not allow sending messages that contain mutable pointer indirection. Limited sharing can be obtained via the shared keyword, which entails additional checking by the type system. Outside the safe dialect of the language you can force classic C/Java-style global/shared variables using the __gshared keyword, but all bets are off then as far as race safety. This model is detailed in a free chapter of Andrei Alexandrescu's book "The D Programming Language".
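A rough Java analogue of the spawn/send idea (this does not capture D's type-system enforcement, only the discipline: the worker sees nothing but immutable messages delivered through queues; all names are mine):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Message passing instead of shared mutable state: the worker thread reads an
// immutable String from its inbox and replies with an immutable Integer.
public class MessagePassing {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> inbox = new LinkedBlockingQueue<>();
        BlockingQueue<Integer> replies = new LinkedBlockingQueue<>();

        Thread worker = new Thread(() -> {
            try {
                String msg = inbox.take();   // 'receive' an immutable message
                replies.add(msg.length());   // reply by message, not by shared state
            } catch (InterruptedException ignored) {
            }
        });
        worker.start();

        inbox.add("hello");                  // 'send'
        System.out.println(replies.take());  // 5
        worker.join();
    }
}
```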

std.parallelism is less safe but in some ways more flexible than std.concurrency and is geared specifically towards multicore data and task parallelism for increasing data processing throughput rather than general-case concurrency. It features a parallel foreach loop, asynchronous function calls, parallel reductions, etc. It provides mechanisms to make it easier to write race-safe code but doing so still requires some degree of discipline.
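The parallel foreach idea translates to Java roughly like this - split the index range into chunks and hand each chunk to a pool. Race safety here comes purely from discipline (each task writes disjoint indices), which mirrors the point above; the code and names are my own sketch:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Rough analogue of a parallel foreach: square each element in place, with
// each task owning a disjoint slice of the array (so no two tasks race).
public class ParallelForeach {
    public static void main(String[] args) throws Exception {
        double[] data = new double[8];
        Arrays.fill(data, 2.0);

        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Callable<Void>> chunks = new ArrayList<>();
        int chunk = 2;
        for (int start = 0; start < data.length; start += chunk) {
            final int lo = start, hi = Math.min(start + chunk, data.length);
            chunks.add(() -> {
                for (int i = lo; i < hi; i++) data[i] = data[i] * data[i];
                return null;
            });
        }
        pool.invokeAll(chunks); // blocks until every chunk is done
        pool.shutdown();
        System.out.println(Arrays.toString(data));
    }
}
```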

Clojure's approach to concurrency is very novel, and in my view a significant advance on what you see in Java and most other languages. Some key points:

Identity and state are separated - OOP complects object identity with its current state in the form of mutable member variables. Clojure strictly separates identity (managed references) and state (immutable data structures) in a way that significantly simplifies development of reliable concurrent programs.

Persistent immutable data structures - because everything is immutable, you can take a snapshot of data / state at any time and be assured that it won't get mutated underneath you. But better than that - they are persistent data structures that share structure with previous versions. As a result, operations are much closer to O(1) than the O(n) you would pay for a copy-on-write strategy for immutable data.
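Structural sharing is easiest to see with the simplest persistent structure, a cons list (Clojure's real collections are trees, so this is only a toy illustration in Java; the class is my own):

```java
// Tiny persistent list: "adding" returns a NEW list whose tail is the old
// list, shared and never mutated - an O(1) operation, not a full copy.
final class PList {
    final int head;
    final PList tail; // shared with every later version

    PList(int head, PList tail) {
        this.head = head;
        this.tail = tail;
    }

    PList cons(int x) { return new PList(x, this); }
}

public class PersistentDemo {
    public static void main(String[] args) {
        PList one = new PList(1, null);
        PList two = one.cons(2);   // [2, 1]
        PList three = one.cons(3); // [3, 1] - a sibling version, 'two' unaffected
        System.out.println(two.head + " " + two.tail.head);
        System.out.println(three.head + " " + three.tail.head);
        System.out.println(two.tail == three.tail); // true: both share 'one'
    }
}
```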

Software transactional memory - rather than using locks, you just enclose code in a (dosync ...) block and it is automatically run as a transaction. No risk of deadlocks, and no need to develop complex locking strategies. This is a massive win, especially when combined with the immutable data structures above. Effectively, Clojure implements multi-version concurrency control in its STM.

The functional programming paradigm is used to make it much easier to write reliable concurrent code. Basically, if you take an immutable data structure, run it through a pure function, and output a different immutable data structure, then your code is guaranteed to be safe for concurrency.
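That recipe - immutable in, pure function, immutable out - can be shown in a few lines of Java (my own trivial example):

```java
import java.util.List;
import java.util.stream.Collectors;

// A pure function over immutable data: the input list is never modified and a
// fresh list is returned, so any number of threads may call this concurrently.
public class PureDemo {
    static List<Integer> doubled(List<Integer> xs) {
        return xs.stream().map(x -> x * 2).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> input = List.of(1, 2, 3); // immutable list
        System.out.println(doubled(input));     // [2, 4, 6]
        System.out.println(input);              // [1, 2, 3] - unchanged
    }
}
```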

If you want to take it to a whole new level, you might want to look into programming with CUDA.

This allows you to distribute your algorithms over hundreds of processing cores on your graphics card rather than the few main CPU cores. There are even language bindings which apparently make it relatively easy to accelerate high-level languages like Python using GPGPU techniques.