Berkeley discusses progress in parallel programming

Separately, Berkeley, MIT and at least four other universities are creating the Center for Energy Efficient Electronics Science. It will conduct research aimed at semiconductor circuits that can operate on a millivolt of power.

"We could be using a million times less energy to process information," said Eli Yablonovitch, a Berkeley professor who will work with the center.

In a panel discussion Yablonovitch and others called for a new generation of power engineers who can apply the techniques of the Internet to craft a smart electric grid.

"We need new thinkers to have an impact on this area," said Randy Katz, a Berkeley professor who helped launch a low-power research effort at the event in 2009.

"This is an opportunity to think about what is the right background—it's not the old Handbook of Power Engineering," Katz said. "It's an opportunity to train a new generation of people who understand both IT systems and how power moves around," he added.

David Culler, a Berkeley professor working on an initiative for energy-efficient buildings, said engineers need to understand a variety of mechanical, civil and electrical disciplines in this sector. "I really worry we are not training people for the wide range of issues coming up," Culler said.

He called for a smart grid that uses Internet-like techniques such as distributed services and separately-defined implementation layers that can evolve independently. "Just like we have virtual networks as overlays on the Net, there's no reason we can't have virtual private grids--that's how you evolve the infrastructure," Culler said.

Katz agreed, adding that new regulations including a carbon tax are needed to motivate utilities and power users. "In order to have the innovation take place the true cost of energy has to be reflected, it's the only way to get people to invest," he said.

Finally, Berkeley professor Michael Franklin formally announced the AMP Lab, a new research center seeking to drive cloud computing to the next level. The center aims to address what Franklin called the scalability problem involving algorithms, machines and people.

Machine learning algorithms and data analytics don't scale to increasingly large and complex data sets. Meanwhile cloud services lack crowd-sourcing tools to harness large groups of people over the Internet to tackle shared problems.

The lab is a spin-out of a Berkeley center developing software that will help individuals use cloud computing to launch new Web services. The new lab wants to enable many people to collaborate to collect, generate, clean and make sense of large data sets, he said.

In other words, UC Berkeley has made no progress at all in coming up with a solution to the parallel programming crisis. They are no closer now than they were when they first began more than a year ago. All those millions invested by Intel and Microsoft have simply been wasted. Bravo, Berkeley. Failure must be the name of the game in this business.
*
What's amazing about all this is that there is somebody at Berkeley who could have made a real difference in this research. His name is Edward Lee, the man who showed the world that multithreading is evil. What's even more frustrating is that the folks at Berkeley research have visited my site many times and have read my ideas on the crisis. Not once did any of them contact me to discuss these ideas. Well, neither will I contact them when my time in the limelight comes. Sooner or later, they'll come around. And they won't be able to say that nobody told them because I know they all read Rick Merritt's articles and the responses. Keep it up, Rick. You've been covering this crisis from the beginning and it will not go away anytime soon.
*
How to Solve the Parallel Programming Crisis:
http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html
*
UC Berkeley's Edward Lee: A Breath of Fresh Air:
http://rebelscience.blogspot.com/2008/07/uc-berkeleys-edward-lee-breath-of-fresh.html

Although it is no use being as frustrated as Mapou is, he is essentially right. I have witnessed this for at least 20 years: the best they can come up with is a crude form of reverse engineering and an ad hoc API, trying to extract parallelism from a program whose sequential programming language has already destroyed all the parallel information in the original problem domain. I call this the von Neumann syndrome.
Nevertheless, clean parallel programming paradigms have been developed since the '70s. I refer here to Hoare's CSP process algebra. Languages and processors have been built that use it as a basis (transputer, occam, ...). We took another approach, embedding the programming model in the services of a distributed RTOS (Virtuoso). One customer's system had as many as 12,000 processors (heterogeneous, using less than 1 MByte per node).
Recently we went further and redeveloped it from scratch using formal modeling. The code size is now around 5 to 10 KBytes. And yes, we have no thread model but a clean task model. It works completely transparently across any type of processor and any type of interconnect technology. The formal modeling made all the difference.
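For readers unfamiliar with the CSP style mentioned above, here is a minimal sketch of the idea in Python: tasks share no state and interact only by passing messages over channels. A `queue.Queue` stands in for an occam/transputer channel; this is an illustration of the concept, not of the Virtuoso or OpenComRTOS API.

```python
# CSP-flavored sketch: an isolated task receives work over a channel
# and never touches the sender's state directly.
import queue
import threading

chan = queue.Queue(maxsize=1)  # stand-in for a CSP channel
out = []                       # results collected by the worker task

def squarer():
    """Task: read values from the channel until a None sentinel arrives."""
    while True:
        x = chan.get()
        if x is None:
            break
        out.append(x * x)

t = threading.Thread(target=squarer)
t.start()
for v in (2, 3, 4):
    chan.put(v)      # send work over the channel
chan.put(None)       # sentinel: tell the task to stop
t.join()
print(out)           # [4, 9, 16]
```

The point of the pattern is that the only synchronization is the channel itself; neither task needs locks or shared mutable structures.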
*
http://www.altreonic.com/sites/default/files/Whitepaper_OpenComRTOS.pdf

Eric Verhulst,
*
Hoare's CSP is not the answer either, otherwise we would not be here discussing the crisis. Hoare, like most academics, is addicted to complexity. Occam is way too abstract and too complicated to be the solution the industry is looking for.
*
The big surprise in all this is that the solution to the crisis is not rocket science. It is a simple parallelizing concept that has been in use for decades. We already use it to simulate parallelism in video games, simulations and cellular automata. We just need to take the concept down to the instruction level within the processor itself and adopt a synchronous reactive software model.
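The parallelizing concept referred to here is, as far as the text describes it, the familiar synchronous (double-buffered) update used in cellular automata and game loops: every cell reads only the current generation and writes into a separate next-generation buffer, so all updates behave as if they occurred simultaneously, regardless of iteration order. A minimal sketch, using elementary rule 90 as an arbitrary example:

```python
# Synchronous update: read old buffer, write new buffer, then swap.
def step(cells):
    """One synchronous generation of rule 90 (each cell = XOR of its neighbors)."""
    n = len(cells)
    nxt = [0] * n
    for i in range(n):
        left = cells[(i - 1) % n]   # reads only the OLD generation
        right = cells[(i + 1) % n]
        nxt[i] = left ^ right
    return nxt

gen = [0, 0, 0, 1, 0, 0, 0]
gen = step(gen)
print(gen)  # [0, 0, 1, 0, 1, 0, 0]
```

Because no cell's update depends on another cell's *new* value, every cell could in principle be computed by a separate processing element in the same cycle.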
*
Folks, the days of Turing, Babbage and Lady Ada are coming to a quick end. It's time to wake up and abandon the flawed ideas of the baby-boomer generation and forge a new future. The boomers were wildly successful but this is a new age, the age of massive parallelism. The boomers need to retire and pass the baton to a new generation of computists.
Download the E-Book if you're interested:
http://rebelscience.blogspot.com/p/rebel-science-bookstore.html

tbbright,
*
Wow. You're absolutely right. We are in the mess that we are in because of academia. They invented it all. All the bright engineers and PhD research managers at Microsoft, Intel and elsewhere came from academia. It's painful to see the computer industry throwing money at the very people who gave us this mess in the first place and whose livelihood depends on the mess being with us forever. UC Berkeley, UIUC, Stanford and the others have a vested interest in seeing that this crisis lasts for as long as possible. Why? Because no crisis = no money. I'm sure this is not going to win me a lot of friends out there but I don't care. It's the truth, godd*mmit.

Hi,
How will the new programming platform (by Berkeley) compare with existing multicore programming tools such as those from Wind River Systems or QNX?
With those tools in particular, you can literally visualize all of the cores on your board and program each core separately, using the integrated RTOS semaphores for multitasking.
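For concreteness, the semaphore-based multitasking style described in the question looks roughly like this. Python's `threading.Semaphore` is used here as a stand-in for an RTOS semaphore API (commercial RTOSes expose analogous post/wait calls); the task names are purely illustrative.

```python
# Two tasks coordinated by a counting semaphore, RTOS-style.
import threading

sem = threading.Semaphore(0)   # starts at 0: consumer must wait
results = []

def producer():
    results.append("data ready")
    sem.release()              # signal the consumer (like a sem_post)

def consumer():
    sem.acquire()              # block until signaled (like a sem_wait)
    results.append("consumed: " + results[0])

t_consumer = threading.Thread(target=consumer)
t_producer = threading.Thread(target=producer)
t_consumer.start()
t_producer.start()
t_consumer.join()
t_producer.join()
print(results)  # ['data ready', 'consumed: data ready']
```

The semaphore guarantees the ordering: the consumer cannot run its second append until the producer has posted, which is exactly the hand-coded synchronization the commenter is contrasting with Berkeley's proposed higher-level approach.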