/r/technology is a place to share and discuss the latest developments, happenings and curiosities in the world of technology; a broad spectrum of conversation as to the innovations, aspirations, applications and machinations that define our age and shape our future.

Rules:

1. Submissions

Guidelines:

Submissions must be primarily news and developments relating to technology.

Submissions relating to business and politics must be sufficiently within the context of technology in that they either view the events from a technological standpoint or analyse the repercussions in the technological world.

Please do not submit the following:

i) Submissions violating the guidelines.

ii) Images, audio or videos: Articles with supporting image and video content are allowed; if the text is only there to explain the media, then it is not suitable. A good rule of thumb is to look at the URL; if it's a video hosting site, or mentions video in the URL, it's not suitable.

vii) Mobile versions of sites and URL shorteners: please submit the desktop version of a webpage directly in all cases.

2. Behaviour

Remember the human. You are advised to abide by reddiquette; it will be enforced when user behaviour is no longer deemed suitable for a technology forum. Remember: personal attacks, directed abusive language, trolling, and bigotry in any form are not allowed and will be removed.

3. Titles

Submissions must use either the article's title or a suitable quote, either of which must:

Removed threads will either be given a removal reason flair or comment response; please message the moderators if this did not occur.

All legitimate, answerable modmail inquiries or suggestions will be answered to the best of our abilities within a reasonable period of time.

Rule violators will be warned. Repeat offenders will be temporarily banned for a period of one to seven days. An unheeded final warning will result in a permanent ban. This may be reversed, however, upon evidence of suitable behaviour.

The main value of this type of device is in the educational space. It is to parallel programming what cheap entry-level FPGAs are to hardware design students.

For those thinking you're going to build 'supercomputers' or make a bitcoin mining rig that turns a profit, this is not the board you're looking for. As some have mentioned earlier ITT, it's less powerful than an entry-level GPU.

That being said, though it has essentially no commercial value, it's important because it is a cheap and relatively simple piece of hardware that demonstrates the key characteristic of a supercomputer cluster (a lot of independent threads). I can see this board being required equipment for a post-grad computer science course on parallel computing.

A GPU is a SIMD machine, typically with 2-4 wavefronts; each wavefront contains 32-128 threads all running the same instructions on different data. The Epiphany-IV, however, can run 64 independent programs at once, each taking its own path and doing its own thing.
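
To make the divergence point concrete, here's a toy C model of a divergent branch. The lane behaviour is the real mechanism, but the per-path cycle costs are invented for illustration; they are not actual GPU or Epiphany numbers.

    /* Toy model: cost of a divergent branch on SIMD vs. MIMD hardware.
     * The cycle counts below are invented for illustration. */
    #include <stdio.h>

    #define COST_IF   10   /* cycles for the 'if' path (assumed) */
    #define COST_ELSE 50   /* cycles for the 'else' path (assumed) */

    int main(void) {
        /* SIMD: the whole warp executes BOTH paths in lockstep,
         * masking inactive lanes, so divergence costs the sum. */
        int simd_cycles = COST_IF + COST_ELSE;

        /* MIMD (Epiphany-style): each core runs only its own path,
         * so the group finishes when the slowest core does. */
        int mimd_cycles = (COST_IF > COST_ELSE) ? COST_IF : COST_ELSE;

        printf("SIMD warp:  %d cycles\n", simd_cycles);  /* 60 */
        printf("MIMD cores: %d cycles\n", mimd_cycles);  /* 50 */
        return 0;
    }

The gap widens as the two paths get more unbalanced, which is the whole argument for MIMD on branchy code.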

GPUs are a "wide" architecture, while Epiphany is the first "grid-based" supercomputer for ~$99. GPUs are a very mature technology; Epiphany is still in the R&D stage.

I also don't see Epiphany being a competitor to GPUs in the graphics or bitcoin mining space. Instead, it could be used for robotics, AI research, or studying new programming paradigms that will be useful for future grid computing systems.

I never meant to imply it will suck. My point was that it will excel in its own usage domain. You say it will be "slow as snails in branching code"; well, that's all relative. The per-thread performance of branching code on Epiphany should be faster than a GPU's but slower than a modern superscalar CPU's. For some areas in AI, such as computer vision, speech recognition, and neural networks, I expect Epiphany will often outperform CPUs and GPUs.

Right now the real advantage is that, for $99, Parallella is giving everyone a chance to work with and shape the programming paradigm of the future. Around 2006, Intel hit the wall with single-threaded CPU scaling: performance has gone from doubling every two years to a few percent of improvement per year. We are at the beginning of a paradigm shift to massively multi-core CPUs, but the tools and the theory are still in their infancy.

With existing programming tools, when programmers try to parallelize a program, the speedup diminishes as complexity grows because of issues with concurrency, locking, and asynchronicity. Parallella will give programmers the opportunity to try different programming paradigms to tackle these problems, such as functional, reactive, or flow-based programming models.
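
As a tiny taste of what the flow-based style looks like, here's a single-threaded C sketch (the stages and queue size are invented): stages share no state and communicate only through queues, so there are no locks to get wrong. A real flow-based runtime would schedule the stages concurrently.

    /* Flow-based sketch: stages connected by queues, no shared state. */
    #include <stdio.h>

    #define QCAP 8
    typedef struct { int buf[QCAP]; int head, count; } Queue;

    static int q_push(Queue *q, int v) {
        if (q->count == QCAP) return 0;            /* full */
        q->buf[(q->head + q->count) % QCAP] = v;
        q->count++;
        return 1;
    }
    static int q_pop(Queue *q, int *v) {
        if (q->count == 0) return 0;               /* empty */
        *v = q->buf[q->head];
        q->head = (q->head + 1) % QCAP;
        q->count--;
        return 1;
    }

    int main(void) {
        Queue a = {0}, b = {0};
        int v;
        for (int i = 1; i <= 5; i++) q_push(&a, i);   /* stage 1: produce */
        while (q_pop(&a, &v)) q_push(&b, v * v);      /* stage 2: transform */
        while (q_pop(&b, &v)) printf("%d\n", v);      /* stage 3: consume */
        return 0;
    }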

I understand what you're saying, but algorithms have already been developed to take advantage of the 32 threads per warp of a GPGPU, meaning there is very little performance gain to be had from splitting branching into units of fewer than 32 threads. Not to mention there is no L2/shared cache (NO CACHE), no shared memory (NO SHARED MEMORY), and a distinct lack of a scheduler for thousands of threads. While a warp is waiting for data on a GPU, the scheduler can swap in another warp to execute during the waiting period.

I understand your sentiment about single-core scaling; I have taken courses in multi-core and many-core programming. But the fact is that the only way to speed up many-core is to feed it more data, and the Epiphany processor limits both data-set size and data bandwidth (Amdahl's law vs. Gustafson's law).
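
For concreteness, the two laws disagree about what the same core count buys you. A quick C sketch with an assumed 5% serial fraction (the 0.05 is arbitrary):

    /* Amdahl (fixed problem size) vs. Gustafson (problem grows with n). */
    #include <stdio.h>

    int main(void) {
        double s = 0.05;  /* serial fraction; an assumed value */
        for (int n = 1; n <= 64; n *= 4) {
            double amdahl    = 1.0 / (s + (1.0 - s) / n);
            double gustafson = s + (1.0 - s) * n;
            printf("n=%2d  Amdahl %6.2fx  Gustafson %6.2fx\n",
                   n, amdahl, gustafson);
        }
        return 0;
    }

With the problem size fixed, Amdahl caps the speedup near 1/s = 20x no matter how many cores you add; Gustafson's speedup keeps growing because the workload scales with n, which is exactly why limited data sets and bandwidth hurt a many-core chip.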

Parallella uses tools (OpenCL) designed for GPUs to do parallel tasks. So to say that it is going to teach parallel programming is disingenuous at best.

It isn't going to enable jack shit in the way of new paradigms, because it works like a GPGPU co-processor.

Yes, if people just use OpenCL to program for it, there is no advantage over using a GPU. And yes, the lack of a scheduler and shared memory makes it a poor choice for traditional multithreaded programming. But for large projects, traditional multithreaded programming is failing. Some of the nastiest bugs I've run into come from deadlock and other threading issues. This is why addressing concurrency is a primary aim of most new languages these days, like Go and Rust.
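
For anyone lucky enough not to have hit one, here's the classic lock-ordering deadlock in a few lines of pthreads C (illustrative only; compile with -pthread, and it will usually hang at the first join):

    /* Two threads take the same two mutexes in opposite order. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;

    static void *worker_a(void *arg) {
        pthread_mutex_lock(&m1);
        usleep(1000);               /* widen the race window for the demo */
        pthread_mutex_lock(&m2);    /* waits on B, which is waiting on us */
        pthread_mutex_unlock(&m2);
        pthread_mutex_unlock(&m1);
        return arg;
    }
    static void *worker_b(void *arg) {
        pthread_mutex_lock(&m2);
        usleep(1000);
        pthread_mutex_lock(&m1);    /* opposite order: deadlock */
        pthread_mutex_unlock(&m1);
        pthread_mutex_unlock(&m2);
        return arg;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker_a, NULL);
        pthread_create(&b, NULL, worker_b, NULL);
        pthread_join(a, NULL);      /* usually hangs here */
        pthread_join(b, NULL);
        puts("no deadlock this run");
        return 0;
    }

Message-passing designs like Go's channels sidestep this whole class of bug by never exposing two locks to take in the wrong order.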

The hope with Parallella is that it will help researchers come up with new tools on both the HW and SW sides of the equation. Also, as it's still in R&D, some of your concerns may be addressed in future HW.

It doesn't matter what you use; you're still using OpenCL or some encapsulation of OpenCL. I never said traditional multithreaded. Traditional multithreaded programming is very different from GPU programming, where you spawn thousands of threads to do individual units of work, split into groups of 256 threads. If you have a GPU program with some kind of deadlock, or even any kind of locking, you are most certainly doing GPU programming wrong. If one thread can lock 2,000 threads, it becomes a massive problem.

The difference between traditional multithreaded programming and GPGPU programming is that you have no control over the GPU program once it has begun. You can't talk to it, you can't tell it to stop, you can't send it messages. You can only wait for it to finish.
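
You can see that fire-and-forget model in the host API itself. Here's a minimal OpenCL host sketch (error checking omitted, the kernel and sizes are invented; link with -lOpenCL): after the enqueue, the host's only remaining move is to block until the kernel finishes.

    /* Launch thousands of work-items in groups of 256, then wait. */
    #include <stdio.h>
    #include <CL/cl.h>

    static const char *src =
        "__kernel void work(__global int *out) {"
        "  out[get_global_id(0)] = (int)get_global_id(0);"
        "}";

    int main(void) {
        cl_platform_id plat; cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "work", NULL);

        size_t global = 4096, local = 256;   /* groups of 256 threads */
        cl_mem out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                                    global * sizeof(int), NULL, NULL);
        clSetKernelArg(k, 0, sizeof(out), &out);

        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, &local, 0, NULL, NULL);
        /* From here the host can't signal, stop, or message the kernel; */
        /* the only interaction left is waiting for it to complete.      */
        clFinish(q);
        puts("kernel finished");

        clReleaseMemObject(out); clReleaseKernel(k); clReleaseProgram(prog);
        clReleaseCommandQueue(q); clReleaseContext(ctx);
        return 0;
    }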