At the end of the section “Parallel and distributed computing”, I wrote:

(It should be clear that I didn’t really try to read those texts, and just browsed through them.)

Well, I worked through the survey “Parallel Algorithms” by G. Blelloch and B. Maggs in the meantime, and finished the first quarter of “Thinking in Parallel: Some Basic Data-Parallel Algorithms and Techniques” by U. Vishkin. The terms EREW, CREW, CRCW (E=exclusive, C=concurrent, R=read, W=write) are explained, but their detailed relation to NC1, L, NL, SAC1, and AC1 is not discussed. I found such a discussion (including t…
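To make the EREW/CRCW distinction concrete, here is a minimal sketch (the function names and the sequential simulation of parallel steps are my own): computing the OR of n bits takes O(1) depth on a common-CRCW PRAM, because all processors holding a 1 may write the same value to the same cell, while on an EREW PRAM the standard lower bound forces Ω(log n) depth, realized here as a binary reduction tree.

```python
def crcw_or(bits):
    """Depth O(1) on a common-CRCW PRAM: concurrent writes of the
    same value to the same cell are allowed."""
    cell = 0
    for i, b in enumerate(bits):  # conceptually: all i in parallel
        if b:
            cell = 1              # every writer writes the same value
    return cell

def erew_or(bits):
    """Depth O(log n) on an EREW PRAM: combine disjoint pairs per step,
    so no cell is read or written twice in the same step."""
    bits = list(bits)
    while len(bits) > 1:
        if len(bits) % 2:
            bits.append(0)        # pad to even length
        # one parallel step: each pair is touched by exactly one processor
        bits = [bits[2 * i] | bits[2 * i + 1] for i in range(len(bits) // 2)]
    return bits[0]

print(crcw_or([0, 0, 1, 0]), erew_or([0, 0, 1, 0]))  # -> 1 1
```

Both compute the same function; the point is only the depth each model charges for it.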

@YOUSEFY Yes, there are such theorems. The text by Blelloch and Maggs contains representative and easy samples of such theorems. That text is a very good starting point, since it is both relatively short and easy to read, and parallel computation really is something basic. The more detailed and precise material which I copied from my original comment above is ...

... great on a technical level, but gives a wrong impression of the difficulty of parallel computation. Uzi Vishkin is right when he argues that parallel computation can be taught to high school students. The difficulty currently observed in practice is our own fault: we have let hardware architectures deviate too far from the informal work-depth model.

Or maybe it would be more precise to say that current processors try to exploit work-depth parallelism automatically on their own, without giving the programmer a chance to help them, and offload the task of locally distributed computing to the user, pretending it is parallel computing and not giving him proper tools for that task. But as long as FPGAs can beat the speed of general-purpose hardware by an order of magnitude for truly parallel tasks, ... well, there is room for improvement!
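The informal work-depth model mentioned above can be illustrated with a standard example (my own formulation of the classic recursive scheme): prefix sums with O(n) work and O(log n) depth. Each list comprehension below corresponds to one parallel step.

```python
def prefix_sums(xs):
    """Inclusive prefix sums in the work-depth style:
    work T(n) = T(n/2) + O(n) = O(n), depth O(log n)."""
    n = len(xs)
    if n == 1:
        return list(xs)
    # pair up neighbors: O(n) work, one parallel step
    pairs = [xs[2 * i] + xs[2 * i + 1] for i in range(n // 2)]
    if n % 2:
        pairs.append(xs[-1])
    sub = prefix_sums(pairs)  # recurse on half the size: depth + O(1)
    # expand back: odd positions copy a subresult, even positions add one element
    out = []
    for i in range(n):        # conceptually: all i in parallel
        if i % 2:
            out.append(sub[i // 2])
        else:
            out.append(xs[0] if i == 0 else sub[i // 2 - 1] + xs[i])
    return out

print(prefix_sums([1, 2, 3, 4, 5]))  # -> [1, 3, 6, 10, 15]
```

Note that the programmer specifies only work and depth; scheduling the parallel steps onto processors is left to the model, which is exactly the simplicity Vishkin argues for.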

Thank you @vzn and @ThomasKlimpel ! I have another question: in this paper (https://pdfs.semanticscholar.org/7096/cc76c02d6f11844c6c3d14a80139b20345e8.pdf) the author states the Feder-Vardi conjecture as follows: "In particular the widely believed Feder-Vardi dichotomy conjecture [FV98] states that every constraint satisfaction problem (CSP) is either NP hard or in P". Now, does it mean that we don't have the class NP? It seems that an NP-complete problem must either be in P or in PSPACE. What do you think about this result? What does it tell us?!

@YOUSEFY Note that every NP-complete problem is NP-hard. NP-hard is just a short way of saying "at least as hard as an NP-complete problem". You want to know whether I think that the Feder-Vardi dichotomy conjecture has been proven? Well, the proofs are in peer review now, one or two proofs have been shown incorrect, but the feeling is that most of the other proofs will survive, perhaps with minor corrections.

I have a question for you: Was your question about parallel and distributed computation motivated by trying to better understand quantum computing? Or was it more motivated by trying to better understand computer architecture and models of computation?

In computational complexity theory, a branch of computer science, Schaefer's dichotomy theorem states necessary and sufficient conditions under which a finite set S of relations over the Boolean domain yields polynomial-time or NP-complete problems when the relations of S are used to constrain some of the propositional variables. It is called a dichotomy theorem because the complexity of the problem defined by S is either in P or NP-complete, as opposed to being in one of the classes of intermediate complexity that are known to exist (assuming P ≠ NP) by Ladner's theorem.
Special cases of Schaefer's dichotomy...
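A hedged illustration of the "in P" side of the dichotomy (the construction is the standard implication-graph argument for 2-SAT; the function names are mine): 2-SAT is solvable in polynomial time, while adding a third literal per clause (3-SAT) makes the problem NP-complete. A clause (a ∨ b) yields the implications ¬a → b and ¬b → a, and the formula is satisfiable iff no variable lies in the same strongly connected component as its negation.

```python
from collections import defaultdict

def two_sat(n, clauses):
    """Variables are 1..n; a literal is +v or -v; each clause is a pair
    of literals. Returns True iff the 2-CNF formula is satisfiable."""
    graph, rgraph = defaultdict(list), defaultdict(list)
    for a, b in clauses:
        # (a or b) is equivalent to (~a -> b) and (~b -> a)
        graph[-a].append(b); rgraph[b].append(-a)
        graph[-b].append(a); rgraph[a].append(-b)
    lits = [v for v in range(1, n + 1)] + [-v for v in range(1, n + 1)]

    # Kosaraju's algorithm: SCCs of the implication graph
    seen, order = set(), []
    def dfs1(u):
        seen.add(u)
        for w in graph[u]:
            if w not in seen:
                dfs1(w)
        order.append(u)
    for u in lits:
        if u not in seen:
            dfs1(u)

    comp = {}
    def dfs2(u, c):
        comp[u] = c
        for w in rgraph[u]:
            if w not in comp:
                dfs2(w, c)
    for c, u in enumerate(reversed(order)):
        if u not in comp:
            dfs2(u, c)

    # satisfiable iff no variable shares an SCC with its negation
    return all(comp[v] != comp[-v] for v in range(1, n + 1))

print(two_sat(2, [(1, 2), (-1, 2), (1, -2)]))  # -> True (e.g. x1 = x2 = True)
print(two_sat(1, [(1, 1), (-1, -1)]))          # -> False (x1 and not x1)
```

The Feder-Vardi conjecture asks whether the same P-versus-NP-complete dichotomy, with no intermediate cases, extends from the Boolean domain to CSPs over arbitrary finite domains.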

it seems there is some important principle here, not fully understood, possibly related to the NP "transition point", maybe connected to other problems...

@enumaris Re DeepMind, there's a lot of media coverage of their latest AlphaGo-vs-chess iteration, have you seen it? What do you think? I was looking at their open positions; they have some in the US/CA on the applied engineering team: deepmind.com/careers