
SELECT * FROM posts WHERE tid = 'PA PROGRAMMING THREAD'

Posts

I don’t think that’s what I was thinking of. I’ll have to look it up when I get home. Maybe it was something related to GUIs and not threading. I definitely remember a moment where it was like “you can extend this class to do this… or you can do it this totally different way”.

Maybe Runnable?

Yeah, now that you mention it. Looking at it again after a year, mine is sort of a cheat on Java's threading to make it mimic C#'s more closely.

Is there another way to do concurrency in Java, without extending Thread? I vaguely recall that.

Not sure what you mean by that. Threads are a fundamental concept in traditional concurrency. You can't really "not" deal with them in the sense that they are how you achieve concurrent computation.

If you're instead talking about synchronized/locks (which is what causes headaches when dealing with shared-memory concurrency) then as Bowen described above, java.util.concurrent provides a variety of lock-free data structures, futures, and atomic variables for you to use.

If you really mean alternatives to threads, then you are really talking about different models of concurrency, e.g., Hoare's channels (CSP) and tuple spaces. There are library implementations of these concepts (I believe), but they are not necessarily production-ready, well understood by the masses, or otherwise fit for commercial use. You'll need to go to other languages, e.g., Go, for those.
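
To make the java.util.concurrent point concrete, here's a minimal sketch using an atomic counter and a lock-free queue; the counting task itself is invented for illustration:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicsDemo {
    public static void main(String[] args) throws InterruptedException {
        // An atomic counter: no synchronized block needed for the increment.
        final AtomicInteger hits = new AtomicInteger(0);
        // A lock-free queue, safe for concurrent producers and consumers.
        final ConcurrentLinkedQueue<Integer> results = new ConcurrentLinkedQueue<Integer>();

        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(new Runnable() {
                public void run() {
                    for (int j = 0; j < 1000; j++) {
                        results.add(hits.incrementAndGet());
                    }
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        // 4 threads x 1000 increments each, with no lost updates.
        System.out.println(hits.get());
        System.out.println(results.size());
    }
}
```

No explicit locks appear anywhere; the atomic and the queue do the coordination.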

Java has some more recent (not super recent, just more recent than the basic Thread class) abstraction layers around threading that simplify it a lot. It handles thread pooling, hides a few details, etc. I can't for the life of me remember what the library is called, though. Might be this Runnable deal being mentioned. It's very similar to Android thread handling if I remember right... Android's threading is probably just a thin layer on top of that lib.
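
The half-remembered library sounds like the executor framework in java.util.concurrent (ExecutorService and friends), which owns the thread pool and hides raw Thread handling. A minimal sketch, with an invented squaring task:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorDemo {
    public static void main(String[] args) throws Exception {
        // A fixed pool: the executor owns the threads, callers just submit tasks.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Integer>> futures = new ArrayList<Future<Integer>>();
        for (int i = 1; i <= 5; i++) {
            final int n = i;
            futures.add(pool.submit(new Callable<Integer>() {
                public Integer call() {
                    return n * n; // the "work"; invented for the example
                }
            }));
        }
        int sum = 0;
        for (Future<Integer> f : futures) {
            sum += f.get(); // blocks until that task finishes
        }
        pool.shutdown();
        System.out.println(sum); // 1 + 4 + 9 + 16 + 25
    }
}
```

The caller never touches a Thread directly, which is the "hides a few details" part.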

Runnable is just an interface that captures a class that is supposed to act as a method that takes no parameters and returns no result. Thread implements Runnable, so you have the convenience of simply extending Thread with the task you want to execute. Alternatively, you can create a new Thread and pass it a Runnable, e.g.,
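
The example trailed off above; it was presumably something like this sketch contrasting the two styles (class names and messages invented):

```java
public class RunnableVsThread {
    // Style 1: extend Thread and override run().
    static class Greeter extends Thread {
        public void run() {
            System.out.println("hello from a Thread subclass");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Greeter();

        // Style 2: pass a Runnable (here an anonymous inner class) to Thread.
        Thread b = new Thread(new Runnable() {
            public void run() {
                System.out.println("hello from a Runnable");
            }
        });

        a.start();
        a.join(); // join between starts so the output order is deterministic
        b.start();
        b.join();
    }
}
```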

Yeah, the facilities that make dealing with shared-memory concurrency easier are largely in java.util.concurrent. But my point is that the question is ill-specified. It sounds like we were just talking about syntax, though.

Yep, in general, that's what anonymous inner classes are designed for (which makes them quite overkill for the tasks they're used for).

Of course, all of this threading talk kinda ignores the whole "spawn another process" method of parallelization, where you let your kernel do the hard parts. Depending on how much your problem resembles a Monte Carlo simulation, you CAN get very effective parallel processing, as long as you're not using MSFT OSes. (Processes in UNIX environments take very little overhead to spin up, and, well, the NT model is... a little heftier.)

It's a great way to do it if your datasets can be processed independently and then aggregated; then you don't have to deal with every single god-awful implementation of "threads" or "pthreads" or whatever monstrosity you want to use.

Of course, then you have IPC problems, but I've already said this is for Monte Carlo-like problems :-)

On edit: I think it was Stroustrup who said that whoever comes up with an efficient (technically and from an implementation angle) way to thread will have the language of the future. How many "cores" are in your box now? Yeah, and how many are being used? Why is your program using only one?

My sed skills are rusty (if they ever existed at all). How do I print lines until I run into a blank line?

echo "line1\nline2\nline3\n\nafter" | sed -ne "1,/\n\n/ p"

I'd like that to print just the lines before the double newline. But it actually prints everything. I know sed usually works line-by-line, but I'm betting there's a way to make it treat newlines like any other character.

Do named pipes in windows work sort of like sockets? If I set up a named pipe, and I want multiple clients connected, is everyone going to see what's shooting down a named pipe or is it per client/server type thing?

They work like sockets. It's a private channel; you have to call ConnectNamedPipe() once for each client that wants to connect.

That's what I thought, thanks phyphor.

One thing to potentially keep in mind: there's no parallel of listen()/accept(). Instead you create a pipe instance and wait for a connection on each one, so if someone tries to open the pipe when no instance is waiting, due to a near-simultaneous open, you'll get an error.

Couldn't you just match on an empty line instead of trying to match on double newline?

echo -e "line1\nline2\nline3\n\nafter" | sed -ne '/^$/q; p'

The q is telling sed to quit, but I don't know if that's useful or not if you wanted to have sed do something else too. I'm not really that well versed in sed either, so I just kinda made something up that matches your exact specification.

That's fine, I assumed it worked like non blocking sockets of some variety. If no one's listening on the pipe then whoops.

The real problem here is not threads vs. processes but rather how to manage shared memory or, more generally, how to manage interacting processes (in the general sense), i.e., it's a coordination problem related to, but not identical to, the problem of utilizing multiple cores.

Yeah, it's always the IPC that screws you with concurrency, that and busy-waits, but we've got philosophers and forks for all that.
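
The philosophers and forks are the classic dining philosophers problem. One standard way to avoid the deadlock is a global ordering on the forks; a minimal sketch (the counts and structure are invented for illustration):

```java
public class DiningPhilosophers {
    public static void main(String[] args) throws InterruptedException {
        final int N = 5;
        final Object[] forks = new Object[N];
        for (int i = 0; i < N; i++) forks[i] = new Object();
        final int[] meals = new int[N];

        Thread[] ph = new Thread[N];
        for (int i = 0; i < N; i++) {
            final int id = i;
            ph[i] = new Thread(new Runnable() {
                public void run() {
                    int left = id, right = (id + 1) % N;
                    // Deadlock avoidance: always grab the lower-numbered fork
                    // first, so no cycle of waiting philosophers can form.
                    int first = Math.min(left, right);
                    int second = Math.max(left, right);
                    for (int m = 0; m < 100; m++) {
                        synchronized (forks[first]) {
                            synchronized (forks[second]) {
                                meals[id]++; // "eat"
                            }
                        }
                    }
                }
            });
            ph[i].start();
        }
        for (Thread t : ph) t.join();

        int total = 0;
        for (int m : meals) total += m;
        System.out.println(total); // 5 philosophers x 100 meals each
    }
}
```

With naive "left fork then right fork" ordering, all five can grab their left fork at once and wait forever; the min/max ordering makes that cycle impossible.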

First, I did some time at an HPC company based around Beowulf technology, and while *in general* it is far more cost-efficient and often faster to use a distributed supercomputer, sometimes you really can't afford the setup time. Like when you're calculating the trajectory of an incoming nuke. So there will always be a supercomputer manufacturer in the US :-)

Second, I have a relative who works at one of the major slot-machine companies. A couple of years ago, there was a bug where a certain series of button presses, money inserts/denials, and other seemingly random stuff would cause the jackpot to pay out. Every time. Root cause? RAM was entirely shared, and nobody told each other where they were writing. It was an interaction between the video and the sound bits that made the jackpot go off...

It's so simple! We don't need programmers! I mean we still need functions and loops and exceptions, but when they are all in boxes instead of plain text... fuck fuck FUCK FUCK FUCK YOU!

Everyone knows the hard part of programming is all that typing, not the logic and thought process that goes into it. If we can make it drag and drop then everyone will be able to do it and we can pay less!

I'm in the process of setting up my own, small CMS. I'm trying to figure out the best way to do that "Newest/most important content with blurb overlayed over an image/adjacent to an image" thing that seems to be common (like here: http://bruins.nhl.com/ or here: http://www.ign.com/).

Specifically, I'd like to know: is the front page stuff uploaded separate from the actual articles? Or is it done all at once? The way I have my alpha set up at the moment is that I have an inline editor with which to write content, along with a file upload input for that larger front page image, and other form inputs for various other things (like manually setting the URL slug). I'm just not sure if uploading it all at once is the best way to go.

I don't have a lot of experience with 3rd party solutions. I try to stay away from WordPress because it's a POS under the hood. The only one I've ever really dabbled with is PHPFusion, which was pretty shitty. Any insight on how it's done for real would be appreciated.

Generally there is a way to do both. If I submit an article, admins can tag it as front-page worthy. Or all articles go to the front page on a FIFO-type system. Or articles are moved to the front page by popularity. Or there's a specific section to create front-page articles directly (or assign articles to the front page, as in the first one).

That's all up to the creator/content originator to decide which system they want.
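
The strategies above (editor flag vs. FIFO-by-date) can be sketched as pluggable selection functions over the same article list. This is an invented illustration, not any particular CMS's API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class FrontPageDemo {
    static class Article {
        final String title;
        final long postedAt;      // e.g. epoch millis
        final boolean editorPick; // admin's front-page flag
        Article(String title, long postedAt, boolean editorPick) {
            this.title = title;
            this.postedAt = postedAt;
            this.editorPick = editorPick;
        }
    }

    // Strategy 1: admins flag articles as front-page worthy.
    static List<Article> byEditorPick(List<Article> all) {
        List<Article> out = new ArrayList<Article>();
        for (Article a : all) if (a.editorPick) out.add(a);
        return out;
    }

    // Strategy 2: newest first, capped (the FIFO-style feed).
    static List<Article> byRecency(List<Article> all, int limit) {
        List<Article> out = new ArrayList<Article>(all);
        Collections.sort(out, new Comparator<Article>() {
            public int compare(Article x, Article y) {
                return Long.compare(y.postedAt, x.postedAt);
            }
        });
        return out.subList(0, Math.min(limit, out.size()));
    }

    public static void main(String[] args) {
        List<Article> all = new ArrayList<Article>();
        all.add(new Article("Old scoop", 100, true));
        all.add(new Article("Fresh news", 300, false));
        all.add(new Article("Middling", 200, false));

        System.out.println(byEditorPick(all).get(0).title);
        System.out.println(byRecency(all, 1).get(0).title);
    }
}
```

A popularity strategy would be the same shape with a view-count comparator; the site owner picks which function feeds the front-page template.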

The next decade is going to be interesting as far as distributed computing is concerned. For the last 40 years we have been making supercomputers look like single machines by depending on fat interconnects that allow any node to communicate with the entire system. The issue is that the mean time to failure on an exascale machine is comparable to its boot time. With that in mind we can't rely on classic MPI parallelization, and the cloud framework isn't a drop-in solution, as we would never get sufficient speed using MPI on those types of nodes. The move to heterogeneous computation is a start down the right path, but it really feels like we also need to push more of these issues down into the language. I have hopes that something like Julia can be one of the solutions.

Yes Ethea! I want to get my foot in the door of that side of things, but I really don't have as much experience with distributed computing as I would like. I have an interview Tuesday for an internship that might fix that, though.

I'm not sure how Julia will turn out, but Erlang is a more mature effort on the front of language+runtime support for distributed computing.

I haven't looked at Erlang's performance with regard to vector operations and other forms of SIMD. Julia has me interested as it can target GPUs once nvcc has the planned LLVM backend.
