Interesting results, thanks for sharing. I can perhaps shed some light on the performance differences.

> Buffer 4.259 5.006

In v0.10, buffers are sliced off from big chunks of pre-allocated memory. It makes allocating buffers a little cheaper but because each buffer maintains a back pointer to the backing memory, that memory isn't reclaimed until the last buffer is garbage collected.

Buffers in node.js v0.11 and io.js v1.x instead own their memory. It reduces peak memory (because memory is no longer allocated in big chunks) and removes a whole class of accidental memory leaks.

That said, the fact that it's sometimes slower is definitely something to look into.

> Typed-Array 4.944 11.555

Typed arrays in v0.10 are a homegrown and non-conforming implementation.

Node.js v0.11 and io.js v1.x use V8's native typed arrays, which are indeed slower at this point. I know the V8 people are working on them, it's probably just a matter of time - although more eyeballs certainly won't hurt.

These are very interesting findings. On a higher level, though: are there any significant performance differences between the APIs of node and io, e.g. TCP packet processing, file system access, etc.? I know that a lot of them are effectively C, and so independent of the V8 version.

And the point of this is what, exactly? The author advertises Go's "Easy and cheap concurrency", "Low memory overhead", and "Easy deployment", but the example does not show how Go will help in this particular case.

It might make sense to put some compute-intensive tasks in Go, or any other compiled language that's more efficient, but exec'ing and returning JSON isn't really a great idea.

If you were going to consider this seriously, then you should have the target app start once, listen for commands on some channel like a socket, and have it return information. JSON is probably a little too verbose for this kind of use case as well, but it depends how efficient Ruby would be at unpacking alternative serialization formats.

The requirement for pre-processing makes me a bit suspicious; it's going to be a pain to interact with other parts of the toolchain, so I would hope there's a very good reason for it. I would make this clear at the top of the README.md.

The same comment applies, to a lesser extent, to the C++14 requirement: does your framework really benefit from using C++14 features over, say, C++11? Again, I would make this clear at the top of the README.

If the author has seen Facebook proxygen[1] (another C++ http framework), a comparison to it would be helpful.

At first glance, it looks like the Silicon dependency microhttpd is somewhat analogous to proxygen. Is Silicon's value proposition the extra layers on top, such as auto-generation of client-side code, etc.?

The point of this kind of thing is that it can allow super-fast performance for critical endpoints:

Suppose your app consumes a JSON api and it turns out that one or two endpoints constitute most of the server load.

Rewriting one or two endpoints using a lightweight, fast framework can be a great solution, so long as you are already being smart about caching and there aren't other bottlenecks in your architecture.

Which game engines already implement, or plan to implement, path tracing? Path tracing looks like the future of gaming graphics, and it would do especially well with VR in a decade or so (hopefully by then path tracing will also be hardware-accelerated).

"Here's a concrete example: suppose you have millions of web pages that you want to download and save to disk for later processing. How do you do it? The cool-kids answer is to write a distributed crawler in Clojure and run it on EC2, handing out jobs with a message queue like SQS or ZeroMQ.

The Taco Bell answer? xargs and wget. In the rare case that you saturate the network connection, add some split and rsync. A "distributed crawler" is really only like 10 lines of shell script."
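A rough sketch of what those "10 lines of shell script" might look like (the urls.txt file, the parallelism level, and the worker hostnames are all assumptions; the fetch command defaults to a harmless echo so the sketch can be dry-run):

```shell
# Toy input; in real use urls.txt would hold millions of URLs.
printf 'http://example.com/a\nhttp://example.com/b\n' > urls.txt

# Fetch every URL, 8 at a time. FETCH is overridable so this can be
# dry-run; for real use set FETCH='wget -q --timeout=10 --tries=2'.
FETCH="${FETCH:-echo would-fetch}"
xargs -n 1 -P 8 $FETCH < urls.txt

# If one box's network link saturates, split the list and ship chunks
# to other machines (hostnames here are placeholders):
#   split -l 100000 urls.txt chunk.
#   for h in worker1 worker2; do rsync -a chunk.aa "$h:"; done
```

The whole "cluster" is `split`, `rsync`, and `xargs -P`; there is no job queue to operate.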

I'm becoming a stronger and stronger advocate of teaching command-line interfaces even to novice programmers. It's easier in many ways to think of data being worked on by "filters" and "pipes", and, more importantly, every time you try a step, something happens, which makes it much easier to iterate through a process interactively.

That it also happens to be very fast and powerful (when memory isn't a limiting factor) is nice icing on the cake. I moved over to doing much more on the CLI after realizing that something as simple as "head -n 1 massive.csv" to inspect the headers of corrupt multi-GB CSV files made my data-munging life substantially more enjoyable than opening them up in Sublime Text.

Perhaps I'm missing something. It appears that the author is recommending against using Hadoop (and related tools) for processing 3.5GB of data. Who in the world thought that would be a good idea to begin with?

The underlying problem here isn't unique to Hadoop. People who are minimally familiar with how technology works and who are very much into buzzwords will always throw around the wrong tool for the job so they can sound intelligent to a certain segment of the population.

That said, I like seeing how people put together their own CLI-based processing pipelines.

I think it is unsafe to parallelize grep with xargs as is done in the article because, beyond shuffling the delivery order, the output of the parallel greps can get mixed up (the beginning of a line comes from one grep and the end from a different grep, so, reading line by line afterwards, you get garbled lines).
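One way to sidestep that hazard (a sketch with made-up sample data): give each parallel grep its own output file and only merge once everything has finished, so partial lines can never interleave on a shared pipe.

```shell
# Made-up sample data standing in for the article's PGN files.
printf '[Result "1-0"]\n[Result "0-1"]\n' > a.pgn
printf '[Result "1-0"]\n' > b.pgn

# Risky: several greps write to one shared pipe, so lines can be torn:
#   ls *.pgn | xargs -P 4 grep -h '^\[Result' | sort | uniq -c

# Safer: one private output file per grep, merged only after `wait`.
for f in *.pgn; do grep -h '^\[Result' "$f" > "$f.out" & done
wait
cat ./*.out | sort | uniq -c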
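One way to sidestep that hazard (a sketch with made-up sample data): give each parallel grep its own output file and only merge once everything has finished, so partial lines can never interleave on a shared pipe.

```shell
# Made-up sample data standing in for the article's PGN files.
printf '[Result "1-0"]\n[Result "0-1"]\n' > a.pgn
printf '[Result "1-0"]\n' > b.pgn

# Risky: several greps write to one shared pipe, so lines can be torn:
#   ls *.pgn | xargs -P 4 grep -h '^\[Result' | sort | uniq -c

# Safer: one private output file per grep, merged only after `wait`.
for f in *.pgn; do grep -h '^\[Result' "$f" > "$f.out" & done
wait
cat ./*.out | sort | uniq -c
```

The parallelism is still there (one grep per file in the background); only the merge is serialized.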

The author begins with a fairly idiomatic shell pipeline, but in the search for performance the pipeline transforms into an awk script. Not that I have anything against awk, but I feel like that kind of runs against the premise of the article. The article ends up demonstrating the power of awk over pipelines of small utilities.

Another interesting note is that there is a possibility the script as-is could mis-parse the data. The grep should use '^\[Result' instead of 'Result'. I think this nicely demonstrates the fragility of the ad-hoc parsers that are common in shell pipelines.
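A quick illustration of the difference, with a made-up two-line PGN fragment:

```shell
# A PGN tag line looks like [Result "1-0"], but the bare word
# "Result" can also show up elsewhere, e.g. inside a comment.
printf '[Result "1-0"]\n{A Result mentioned in a comment}\n' > game.pgn

grep -c 'Result' game.pgn      # unanchored: counts both lines
grep -c '^\[Result' game.pgn   # anchored: counts only the tag line
```

The anchored pattern only matches lines that start with the literal tag, which is what the statistics actually want.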

Some have questioned why I would spend the time advocating against the use of Hadoop for such small data processing tasks as that's clearly not when it should be used anyway. Sadly, Big Data (tm) frameworks are often recommended, required, or used more often than they should be. I know to many of us it seems crazy, but it's true. The worst I've seen was Hadoop used for a processing task of less than 1MB. Seriously.

It's about how Joyent took the concept of a UNIX pipeline as a true powertool and built a distributed version atop an object filesystem with some little map/reduce syntactic sugar to replace Hadoop jobs with pipelines.

The Bryan Cantrill talk is definitely worth your time, but you can get an understanding of Manta with their 3m screencast: https://youtu.be/d2KQ2SQLQgg

The bottom line is: you do not need Hadoop until you cross 2TB of data to be processed (uncompressed). Modern servers (bare metal ones, not what AWS sells you) are REALLY FAST and can crunch massive amounts of data.

Just use proper tools: well-optimized code written in C/C++/Go/etc., not all the crappy Java framework-in-a-framework^N architecture that abstracts away thinking about CPU speed.

Bottom line, the popular saying is true: "Hadoop is about writing crappy code and then running it at a massive scale."

I had an intern over the summer, working on a basic A/B Testing framework for our application (a very simple industrial handscanner tool used inside warehouses by a few thousand employees).

When we came to the last stage, analysis, he was keen to use MapReduce, so we let him. In the end, though, his analysis didn't work well, took ages to process when it did, and didn't provide the answers we needed. The code wasn't maintainable or reusable. Shrug. It happens. I had worse internships.

I put together some command-line scripts to parse the files instead: grep, awk, sed, really basic stuff piped into each other and written to other files. They took 10 minutes or so to process and provided reliable answers. The scripts were added as an appendix to the report I provided on the A/B test, and, with formatting and explanations, took up a couple of pages.
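For flavor, here is the sort of thing such a script might look like; the log format, field positions, and numbers below are entirely invented, not the commenter's actual data:

```shell
# Hypothetical scan log: timestamp, A/B variant, scan duration (ms).
cat > scans.log <<'EOF'
2014-06-01T10:00:00 A 420
2014-06-01T10:00:01 B 390
2014-06-01T10:00:02 A 460
2014-06-01T10:00:03 B 370
EOF

# Mean scan time per variant: awk keeps a running sum and count,
# then prints the (integer) mean for each variant at the end.
awk '{ sum[$2] += $3; n[$2]++ }
     END { for (v in n) printf "%s %d\n", v, sum[v] / n[v] }' scans.log
```

A handful of lines like this is often all the "analysis framework" a small A/B test needs.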

This also isn't a straight either/or proposition. I build local command-line pipelines and do testing and/or processing with them. When the amount of data to be processed passes into the range where memory or network bandwidth makes the processing more efficient on a Hadoop cluster, I make some fairly minimal conversions and run the stream processing on the cluster in streaming mode. It hasn't been uncommon for my jobs to be much faster than the same jobs run on the cluster with Hive or some other framework. Much of the speed boils down to the optimizer and the planner.

Overall I find it very efficient to use the same toolset locally and then scale it up to a cluster when and if I need to.
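A sketch of what that minimal conversion can look like, using a toy word count as the job (the file names and streaming-jar path are placeholders, not the commenter's actual setup):

```shell
# Locally, "streaming mode" is just a pipe. Note the local run needs
# an explicit sort, because on the cluster Hadoop's shuffle phase
# does that sorting between the mapper and the reducer.
printf 'a b a\nb a\n' > input.txt
tr -s ' ' '\n' < input.txt | sort | uniq -c

# The same mapper/reducer commands run unchanged on the cluster
# (jar and HDFS paths are placeholders):
#   hadoop jar hadoop-streaming.jar \
#     -input /data/input -output /data/wordcount \
#     -mapper "tr -s ' ' '\n'" -reducer 'uniq -c'
```

The point is that the mapper and reducer are the very same shell commands in both places; only the plumbing around them changes.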

One common misconception is that you should use Hadoop whenever your data is large. The use of Hadoop should be driven more by the growth of the data than by its size.

I agree that for the given use case the solution is appropriate and works fine. The problem described in the post is not a Big Data problem.

Hadoop will be helpful if, say, millions of games are played every day and the statistics need to be updated daily, etc. In that case the given solution will hit a bottleneck, and some optimisation or code change will be needed to keep the code running.

Hadoop and its ecosystem are not a silver bullet and hence should not be used for everything. The problem has to be a Big Data problem.

I feel ag (the silver searcher, a grep-ish alternative) should be mentioned (even though he dropped it in his final awk/mawk commands), as it tends to be much faster than grep, and he cites performance throughout.

On a couple of GB this is true. Actually, if you have SSDs, I'd expect any non-compute-bound task to be faster on a single machine up to ~10GB, after which the disk parallelism should kick in and Hadoop should start to win.

I've had the pleasure and displeasure of working with small datasets (~7.5GB of images) in the shell. One often needs to send SIGINT to the shell when it starts to glob-expand or tab-complete a folder with millions of files. But besides minor issues like that, command-line tools get the job done.

About 5 years ago I worked at a company that took the "pile of shell scripts" approach to processing data. Our data was big enough and our algorithms computationally heavy enough that a single machine wasn't a good solution. So we had a bunch of little binaries that were glued together with sed, awk, perl, and pbsnodes.

It was horrible. It was tough to maintain -- we all know how hard even the best awk and perl are to read. It was difficult to optimize, and you always found yourself worrying about things like the maximum length of command lines, how to figure out what the "real" error was in a bash pipeline, and so on. When parts of the job failed, we had to manually figure out which parts had failed and re-run them. Then we had to copy the files over to the right place to create the full final output.

The company was a startup, and the next VC milestone or pivot was always just around the corner. There was never any time to clean things up. A lot of the code had come out of early tech demos that management asked us to "just scale up." But oops: you can't do that with a pile of shell scripts and custom C binaries. So the technical debt just kept piling up.

I would advise anyone in this situation not to do this. Yeah, shell scripts are great for making rough guesses about things in a pile of data. They are great for ad hoc exploration on small data or on individual log files. But that's it. Do not check them into a source code repo and don't use them in production. The moment someone tries to check in a shell script longer than a page, you need to drop the hammer. Ask them to rewrite it in a language (and ideally, framework) that is maintainable in the long term.

Now I work on Hadoop, mostly on the storage side of things. Hadoop is many things-- a storage system, a set of computation frameworks that are robust against node failures, a Java API. But above all it's a framework for doing things in a standardized way so that you can understand what you've done 6 months from now. And you will be able to scale up by adding more nodes, when your data is 2x or 4x as big down the line. On average, the customers we work with are seeing their data grow by 2x every year.

I feel like people on Hacker News often don't have a clear picture of how people interact with Hadoop. Writing MapReduce jobs is very 2008. Nowadays, more than half of our users write SQL that gets processed by an execution engine such as Hive or Impala. Most users are not developers, they're analysts. If you have needs that go beyond SQL, you would use something like Spark, which has a great and very concise API based on functional programming. Reading about how clunky MR jobs are just feels to me like reading an article about how hard it is to make boot and root floppy disks for Linux. Nobody's done that in years.

This kind of approach can probably scale out pretty far before actually needing to resort to true distributed processing. Compression, simple transforms, R, etc... You can probably get away with even more by just using a networked filesystem and inotify.

There is also an interesting and fun talk by John Graham-Cumming from CloudFlare: http://www.youtube.com/watch?v=woCg2zaIVzQ It uses Go instead of xargs. Kind of fits into "using the right tool for the job": there is no Big Data involved, but it shows a sweet spot where it might make sense (make it easier) not to use a shell script (e.g. retries, network failures).

What is missed in the article and in many of these comments is that Hadoop isn't always going to be the best tool for one job. It shines in its multitenancy: when many users are running many jobs, each developed in their favorite framework or language (a bash/awk pipeline? No problem), over datasets bigger than single machines can handle.

Here's a probably unpopular opinion: pipes make things a bit slow. A native, pipeless program would be a good bit faster -- including an ACID DB. Note that doing this in Python and expecting it to beat grep won't work...

The other thing is that Hadoop (and some others) is slow on big data (petabytes or more) compared with your own tools. They're necessary and used because of massive clustering (deploying 10x the hardware easily beats building your own, financially).

I suspect it's a general lack of understanding of the way computers work (hardware, OS, i.e. system architecture) versus "why care, it works, and Python/Go/Java/etc. are easy for me; I don't need to know what happens under the hood".

On a tangential note, sometimes I use slower methods for UI reasons: for example, avoiding blocking the UI, allowing the computation to be cancelled, or displaying partial results during the computation (that last one might completely trash the cache).

Hadoop is replacing many data-warehousing DBs like Netezza, Teradata, and Exadata. In the process, many data-warehousing developers have become Hadoop developers who write SQL code; after all, Hadoop got a SQL interface via Hive.

Shell commands are great for data processing pipelines because you get parallelism for free. For proof, try a simple example in your terminal.

sleep 3 | echo "Hello world."

That doesn't really prove anything about data processing pipelines, since echo "Hello world." doesn't need to wait for any input from the other process; it can run as soon as the process is forked.
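A cleaner demonstration that a pipeline's processes really do start and run concurrently is to time two sleeps (a rough timing sketch; exact numbers will vary, and `date +%s%N` assumes GNU date):

```shell
start=$(date +%s%N)
sleep 1 | sleep 1   # both processes are forked at once and run in parallel
elapsed_ms=$(( ($(date +%s%N) - start) / 1000000 ))
echo "elapsed: ${elapsed_ms} ms"   # roughly 1000 ms, not 2000
```

If the shell ran the stages back to back, the pipeline would take about two seconds instead of one.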

cat *.pgn | grep "Result" | sort | uniq -c

Does this have any advantage over the more straightforward version below?

grep -h "Result" *.pgn | sort | uniq -c

Either the cat process or the grep process is going to be waiting for disk I/Os to complete before any of the later processes have data to work on, so splitting it into two processes doesn't seem to buy you any additional concurrency. You would, however, be spending extra time in the kernel to execute the read() and write() system calls to do the interprocess communication on the pipe between cat and grep.

Also, the parallelism of a data processing pipeline is going to be constrained by the speed of the slowest process in it: all the processes after it are going to be idle while waiting for the slow process to produce output, and all the processes before it are going to be idle once the slow process has filled its pipe's input buffers. So if one of the processes in the pipeline takes 100 times as long as the other three, Amdahl's Law[1] suggests that you won't get a big win from breaking it up into multiple processes.

Great article! PS: some hardcore Unix guy would probably tell you that you are abusing cat. The first cat can be avoided, and you might gain even better performance. Also, using GNU grep seems to be faster.

Awk and Sed aren't very accessible to most people who did not grow up learning those tools.

The whole point of the tools built on top of Hadoop (Hive/Pig/HBase) is to make large-scale data processing more accessible (by hiding the map-reduce as much as possible). Not everyone will want to write a Java map-reduce job in Hadoop, but many can write a HiveQL statement or a Pig script. Amazon Redshift takes it even farther: it is a Postgres-compatible database, meaning you can connect your Crystal Reports/Tableau data-analysis tool to it and treat it like a traditional SQL database.

What's really important is that you should never keep a compromised system like this running, even if you think you found all the modifications the attacker made. You probably didn't. So save your configs and set this machine up from scratch.

I've been using Outlook since Gmail IMAP was blocked here in China, and it is working just fine. And as far as I can see, the block on Gmail IMAP is now gone again. Gmail is still a lot slower compared to Outlook, though.

Doesn't surprise me in the slightest, to be honest. Having worked on a customized fork of OpenStack that used a pure L3 networking model, I know that you are in for pain the moment you don't want to run everything on a single Ethernet segment.

It doesn't help that the Neutron data model, at the time I was working on it (say 12 months ago or so), was terrible and basically impossible to scale or make performant.

Inevitably you were then stuck with the deprecated and janky nova-network interface, which, while efficient and fast, was also old and missing tons of stuff -- meaning more monkey-patching and janking around. Not to mention the fact that, because of its deprecation, many completely ridiculous bugs befell it in later releases (Grizzly onwards, basically).

TBH I am so disillusioned with the project I hope I don't have to work in or around it again.

Sounds like these guys are doubling down on the IaaS model: 'premium bare metal'? Certainly there are a lot of people who'd like to run on bare metal, with a more configurable network, but how realistic is that at this time?

This. Network hardware vendors have no incentive to make their devices more easily automated, and in fact face a disincentive to do so.

Does anyone remember the excitement and promise around Google App Engine when it was first announced, before they changed the pricing model to per-instance? The ability to put your app on the cloud, scale up within the free tier, and then move beyond it on a paid plan if that's what you needed.

> General Clapper praised the food; his hosts later presented him with a bill for his share of the meal.

Not only are they evil, but they're cheap too.

But the fact is that the hosts would have billed for the meal because the U.S. government asked to be billed.

The USG requires that officials traveling on business not accept gratuities, gifts, dinners, or anything above a certain value (which is about US$100 -- it gets adjusted for inflation, so it might be higher today).[1]

There is an exemption to allow acceptance of gifts of travel expenses of more than $100 when officials travel outside the United States on business, but only if "such acceptance is appropriate, consistent with the interests of the United States, and permitted by the employing agency".[1]

In this case, General Clapper and his staff probably didn't want to deal with the question of whether it was "appropriate" or deal with reporting requirements, so they just asked for the bill. Or, their North Korean hosts, knowing U.S. policy, were proactive in making up a bill.

Either way, the NYT article should have mentioned the USG policy. If they can't get that little thing right, it makes me wonder about the accuracy of the rest of the article.

> Mr. Jang said that as time went on, the North began diverting high school students with the best math skills into a handful of top universities, including a military school specializing in computer-based warfare called Mirim University, which he attended as a young army officer.

I realize I'm not engaging the core topic being discussed, but stories like this are why I'm surprised people like Will Scott haven't gotten in trouble. (I don't want to single him out, but he's the best example I have at hand.) For the past two years, he's gone to North Korea to volunteer teaching computer science.[1][2] At best, his students' skills will be wasted on some silly Android apps praising the supreme leader. More likely, these students will go on to make software for less-than-ethical purposes: wargame simulation, nuclear explosion modeling, missile guidance systems, or network/server subversion.

I'm not saying this software shouldn't exist, just that the world would be better off if the DPRK had more difficulty writing it. And I'm surprised the State Department hasn't fined, or revoked the passport of, any American who has aided the DPRK in this manner.

"We realized there was another actor [South Korea] that was also going against them [North Korea] and having great success because of a 0 day they wrote. We got the 0 day out of passive and were able to re-purpose it. Big win."

NSA learned of a 0-day exploit being used by South Korea (not five eyes) and re-purposed it. They had knowledge of an exploit in the wild. Did they share this with anyone in order to close this security flaw? They exploited it. This is not a case of the NSA developing an exploit in house. They took this from the wild. This would seem to confirm suspicions that NSA is/was willing to allow active 0-days to fester, leaving the general public exposed.

We know that the NSA tapped into computer systems and the backbone of essentially every country on Earth - I don't see how NK would have somehow been excluded.

What's interesting is what information the New York Times includes that is not covered in the NSA document, presumably from unidentified officials and former officials.

The document on Der Spiegel speaks primarily about taking copies of intelligence from SK hacking efforts against NK, and also taking copies of intelligence from NK hacking efforts that had in turn been hacked by SK (and in turn by the NSA -- "fifth party collection").

The document mentions the NSA's unwillingness to rely on intelligence filtered through so many third parties, and says the agency made efforts to establish its own foothold.

Essentially none of the article is backed by the document as a first source and must have come from the unnamed sources.

This is the second NYTimes article I've seen that has suggested that the NSA was collecting information on a group while that group was planning an attack, but that the collection or the analysis was not sufficient to stop the attack. (The other article was on the Mumbai terrorist attack).

This is interesting and you could look at it a number of different ways:

- Collecting data is one thing, but understanding what it means is incredibly challenging and the NSA might not be doing a great job.

- Even when they can't prevent an attack, there is still value in having this data so that they can attribute the attack and understand something about the motives and methods of the attackers.

Might be me, but I'd be surprised if they hadn't. They hacked so many countries including China[1], Mexico[1], Belgium[1], Syria[3], Iran[4], etc. (after saying that a digital attack is an act of war[2]). I don't remember each and every leak and I don't feel like looking up everything, but they seem to have targeted loads of people in various countries. I doubt North Korea (which is not even an ally) is the exception.

Typical for the NYT to bury the strong countervailing evidence against the official war-mongering story in a couple of paragraphs 2/3rds of the way through the article.

Still, the sophistication of the Sony hack was such that many experts say they are skeptical that North Korea was the culprit, or the lone culprit. They have suggested it was an insider, a disgruntled Sony ex-employee or an outside group cleverly mimicking North Korean hackers. Many remain unconvinced by the efforts of the F.B.I. director, James B. Comey, to answer critics by disclosing some of the American evidence. ... it would not be that difficult for hackers who wanted to appear to be North Korean to fake their whereabouts.

If that's true, who's to say our guys didn't launch the attack from their computers? Why would they even admit to being in there? The NSA doesn't say anything unless 1) they have to, or 2) they want to. I don't see why they would make this claim.

According to the article, NSA noticed the first spear-phishing attacks against Sony in September. Yet they didn't realize admin credentials had been stolen until much later. Nor did they seem to notice terabytes of data being exfiltrated out of Sony. Fishy story.

In 5 years' time, when this tit-for-tat results in some massive disruption in the US (a power outage or something), people are going to be severely angry and say NK attacked them for no reason, etc. (i.e. 9/11).

The US yet again going around the world making enemies, and giving them perfectly valid reasons to retaliate.

I've read enough comments along the lines of "We already knew that blah blah blah..." and "What's interesting is that blah blah blah...". It seems you guys get used to the reality so fast that the only thing left to do is dig into the details of this kind of news, avoiding any discussion of whether this kind of thing is RIGHT or WRONG in the first place!

I'm planning to watch POI for the second time. May your god bless you, Americans, and may there be a real hero like Reese or Carter.

But we all know that most people are just as ordinary as Lionel; they don't have the courage to face the problem alone. So let's just wait for your bright future. LOL

"Online and off, some teams consistently worked smarter than others. More surprisingly, the most important ingredients for a smart team remained constant regardless of its mode of interaction: members who communicated a lot, participated equally and possessed good emotion-reading skills."

can be read as a variation on John Boyd's OODA loop. Boyd made the point, repeatedly, that in war and sometimes in business, victory typically goes to whoever can iterate through ideas more quickly, as new information comes in. The winners are not necessarily smarter, they simply iterate faster based on the information they have. And the same seems to be true of the teams being described here.

Finally, teams with more women outperformed teams with more men. Indeed, it appeared that it was not diversity (having equal numbers of men and women) that mattered for a team's intelligence, but simply having more women. This last effect, however, was partly explained by the fact that women, on average, were better at mindreading than men.

I wonder how this article would have been received if it had stated: 'Teams with more men performed better. This can be explained by the fact that men, on average, are better at solving certain problems than women.' I almost never hear about a scientific study that discovers men are better than women at something.

My guess is that what makes a group successful in the short run -- say, if you throw a group of strangers together and tell them to do tasks in a monitored environment for 5 hours -- may not be the same as what makes a group successful in the long run.

The importance of emotional intelligence should decrease as roles are carved out and members define their niches, and the importance of individual ability should increase as the group optimizes itself.

People overvalue (democratic) teamwork. I'd like to argue that a good leader with a team of followers is more effective than a team where everyone is equal. For example: the pyramids could not have been built by one man, but they wouldn't have existed without central leadership.

Take Steve Jobs, it was his vision that made Apple successful.

Teams need skill, but they must also be undivided. Democracy in teams essentially divides the team; those opposed and forced to act according to the majority will not be cooperative. The best results are those of a single visionary, with or without a team of followers.

I might be wrong, but I had a look at the original paper's methods, and the experimental process seems to be based on producing many p-values and then trawling for significance. They don't seem to correct for multiple testing, though. This resembles what's been described as p-hacking (http://www.nature.com/news/scientific-method-statistical-err...)

I could well be missing something though, so please correct me if I'm off the mark.

The inherent flaw of teamwork is that responsibility is actually atomic. As soon as you assign the same responsibility to a group of two or more persons, things will start to go wrong sooner or later. If you look more closely at successful teams of the past, you will notice that they succeeded because they actually worked as a well-coordinated group of individuals, with a clear separation of responsibilities. Ringo Starr never played the guitar, and John Lennon kept his hands from the drums. If a team is unable to assign responsibilities to its individual members, you don't get a team, you get a committee. And it is no accident that "design by committee" has a negative connotation. What you get out of a committee is not the greatest idea a single member had. What you get out of a committee is the lowest common denominator. And if that denominator happens to be low enough, you won't get any usable results at all.

I wonder how they measured the individual skill of the participants, because as far as I am aware, skill is what most helps teams succeed. And if everyone was of the same skill, it's not that surprising that those with other advantages, like better emotion reading, performed better.

I am surprised that they say more women lead to better performance. No female-dominated team that achieved exceptional performance comes to my mind. All the great achievements and inventions of humanity, top-notch startups, product teams, etc. -- all consisting of few or no women. Am I missing something here?

For the uninitiated, this solves a major (major!!) pain point in the Haskell community: the lack of record namespacing. If you declare two records with an 'id' field in the same module, you get conflicts. People have been either using prefixes (fooId and barId) or splitting up modules to work around this, both of which can be considered ugly hacks.

I am so glad the author has been able to pull this off; it is one of those "why didn't anybody think of this before, it is so obvious!" moments.

It seems to me that most of these issues could be solved at the syntactic level simply by allowing recordval.fieldname syntax. I truly don't understand why Haskell doesn't allow this - anyone have some insight?

When will the US DoJ hire a decent PR firm? Again and again during these three years they've made cock-ups that push people (like myself) towards supporting Dotcom even if we don't particularly like the guy, and even if we think for the actual charges he's likely in the wrong.

> In the forfeiture case, prosecutors will argue why Dotcom's claim on the frozen assets should not be allowed (and therefore forfeited to the US government) under the "doctrine of fugitive disentitlement." That idea posits that if a defendant has fled the country to evade prosecution, then he or she cannot make a claim to the assets that the government wants to seize under civil forfeiture.

Hooray, a loophole in the laws that might let them seize his assets: call him a fugitive! Even though he has never visited America a single time in his life, and even though he is still in the country he calls home, where he had been living for more than a year before this case started.

Maybe it's not the DoJ's fault, maybe Dotcom and his lawyers (and PR) are really, really good. But nearly every story in the past 3 years has made him look good and them look bad, which shouldn't be the case when you've got a government department charged with delivering justice against a guy with a track-record of being an asshole and breaking the law.

""Congress, initially as part of the War on Drugs but later expanded to include most federal offenses, criminalized almost every financial transaction that flows from funds that are the proceeds of specified unlawful activity," Bruce Maloy, an Atlanta-based attorney and an expert in US extradition law, told Ars by e-mail.

"In simplest terms, if you possess funds from a crime and do anything with the money other than bury it in the ground or hide it under the mattress, you have committed a new crime. Spending the money is a new crime, opening a bank account is a new crime. These expenditures do not have to be in furtherance of the original crime, but my recollection is that here it alleged that they are. In short, throwing in a money laundering allegation is quite common in US federal indictments."

Page Pate, another Atlanta-based defense lawyer who has also worked on international extradition cases, agreed. "It's almost automatic to add money laundering charges to any offense whether it's drug-related or not," he said. "I haven't seen it that often in criminal copyright cases. The US has been very aggressive in adding money laundering and forfeiture in criminal cases.""

I find this interesting for how it demonstrates recursion as a means of looping. Most simple code tutorials like this just follow the procedural style, and make you throw a set of instructions inside of a loop block.

CargoBot has given me hours of having to really think about how to structure my "code" to get the recursive calls just right. I haven't gotten deep into Lightbot yet, but it should be a fun puzzle if later levels have similar types of recursive challenges.

This was really good but there were a few levels that left a bit to be desired. Some of them were basically "find a way to mash this code into these functions without exceeding the instruction cap" while others encouraged quite elegant programming. The last one of the final stage was probably the best example, I thought it was quite elegant.

It's not that simple. My 6-year-old son is having a hard time, even with the Junior version on Android. Sometimes I have issues with the regular version as well; it's much easier to write code than to use the constructs.

I agree with his comments on HPKP. I looked into adding HPKP headers to a couple of my sites, and figured out how to do it, but I'm nervous about enabling it. It seems far too easy to make a mistake and lock people out of being able to visit your site. The trouble is, if you make a mistake, visitors aren't just locked out until you get around to fixing it; they're locked out until the expiry date you set in the HPKP headers, which could be months away.
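For concreteness, here is a sketch of what deploying such a header can look like. The pin values below are made-up placeholders, not real hashes, and a real deployment needs pins for both the current key and a backup key:

```nginx
# Hypothetical nginx config; pin values are made-up placeholders.
# max-age is in seconds: 5184000 = 60 days, which is how long visitors
# stay locked out if you ever serve a certificate that doesn't match a pin.
add_header Public-Key-Pins 'pin-sha256="PrimaryKeyHashBase64Placeholder="; pin-sha256="BackupKeyHashBase64Placeholder="; max-age=5184000; includeSubDomains';
```

The max-age value is exactly the tradeoff mentioned above: long enough to protect visitors, long enough to lock them out if you get it wrong.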

The main tenet of this blog post is that you should argue for your compensation based on the amount of value you add to the company.

That's nice in theory, but the techies' dilemma is that it's often difficult or impossible to put a hard number on the value they have added.

How many customers were retained because you decreased response times by 100ms? How many customers were gained because of that slick UI you created? How many discounts were not handed out because of downtime that wasn't suffered because of your cautious error handling and exemplary testing? How much money was saved because of that documentation you made that helped the new hire ramp up faster?

Even when you can put a hard number on work you've done, like decreasing hosting costs by $15k per month, isn't that why you're paid so handsomely already? How are you going to do that again next year? (Why haven't you done it already?) Wasn't it a group effort?

The reality is, you're basically going to get paid based upon what your employer has deemed everyone else in your position is earning, plus or minus some % based upon experience level, your reputation, and how badly the company needs the position filled. If you don't like that, time to go into management or sales.

The most important salary negotiation tip: DON'T TELL THEM HOW MUCH YOU ARE CURRENTLY BEING PAID OR GIVE THEM A RANGE.

Recruiters always ask this up front and INSIST that they must know. I have NEVER been denied the opportunity to interview for refusing to give a number upfront.

If you're applying at a company, it means you've done at least a little research on what they should be expected to pay and you see somewhere around that range as acceptable. You don't have to tell them that you've researched their rates and find them acceptable, because that too would be like giving them a range, instead the research is simply to avoid wasting your time. You wouldn't want to interview for a job that pays the position with compensation worth at most $60k when you're already making at least $100k.

This way, you have an advantage: you know roughly how much they pay but they have almost no idea how much (i.e. how little) you will accept as compensation. Best case scenario, they offer you MORE than what your research said they would, and you negotiate a little more on top of it and accept, assuming you actually like the job. Even if they say no to your counter offer, you're still ahead. Worst case scenario, they offer you less, they say no to your counter offers, and you have to decline. Either your research was wrong or they were lowballing you, either way you've got multiple other interviews in process (right?) so move on. If you find your research is repeatedly off the mark, find better sources.

No matter what, don't give them a number. Make them give you a number first and negotiate from there.

Another B.S. article about negotiating IT salaries. If IT salaries had kept pace since before the dot-com bust, everyone would be making $150k starting.

Here is why we don't:

IT has, for the most part, never been a money maker for most companies. They see it as a loss leader: "Oh boy, here comes the IT budget again." Unless of course your business is making software for the masses. But I have been at 3 companies like that, and IT was always the first on the chopping block. You are paid only as much as it takes to replace you; unless you walk on water, in which case you should be at Google or Facebook.

The only way to get a $20-25k raise is to find another job. Unless of course you are doing the job of 4 people, which in IT, 99.9% of the time, you are. Don't believe me? Go ask for a $20k raise; if their eyebrows shoot up like Mr. Spock's, then you know how much you are worth. I sure hope you have another job lined up, because that's the cue to them that you are looking.

This has been my experience and I have done exactly what I have stated above.

I now do security and compliance. No more wake-up calls, no more "you can't go on vacation, because October through January is when we make our money."

Oh, and that $100k salary mark gets harder to justify every year, especially when the CFO looks at the books and has a list of who is making over $100k per year. Unless of course you live in California or New York, where that's welfare wages.

i've ignored these tenets exactly once, and that remains an unpleasant memory for me. every other time, it's resulted in increases of 10-40% in comp & benefits. of course, you should balance comp with qualitative advantages, but the point is not to just let yourself get screwed. =)

That means, if you have 100 good cars selling at $2000, and 100 lemon cars selling at $1000, all the lemons will sell first, because the people making purchasing decisions aren't expert enough to tell the difference.

So it is very rare to be paid based on value, because why would a [CEO|CTO|hiring manager|whatever] pay $1M for this guy, when he can pay $150k for that guy, and he can't really tell the difference between them?

Just an idea, I haven't seen this discussed anywhere, what do you guys think?

Well, there's all sorts of factors involved, but I've found that the more you know about the company, the better you are at negotiating. That sounds like a no-brainer, but that's probably just as important as market value.

For example, I was contracting for a company that had most of their software development teams working as contractors. About 6 months in, I learned that they had made a strategic decision to try to hire everybody instead of contracting, to save money and, more importantly, improve retention.

So they would ask me to convert, and I would pretty much blow them off for a couple months, and they would ask me again a couple months later. So after a few times of this routine, I knew I was in a better position to start the negotiations. Also, I had waited for several other contractors to convert and got some tidbits from them.

At the end of the day, I negotiated for much more than average salary in my market.

If you're contracting, it pays to be contracting with a staffing firm that knows the decision makers in the company.

> Technology people are without a doubt the most inept group when it comes to negotiating for compensation.

I would rephrase it as:

People that love what they do are without a doubt the most inept group when it comes to negotiating for compensation.

I think it doesn't matter if you are in IT, an architect, or any other job. If you really like what you do, you don't mind doing it for minimal money, and business people will exploit that weakness. For example, in the architectural field, notable architects are sometimes invited to be guest teachers in colleges. Once one said during his class:

Nice advice, but I feel like articles like this make marginal improvements to my comfort and skill at negotiation.

I think perhaps it boils down to unfamiliarity: I've done it a very small number of times, and it feels alien and weird. Especially some of the language used. Compare that with a manager who has hired $x employees a month for a decade, and doesn't think anything of it.

I wonder: Are there any videos available of negotiations (real or realistically faked) where one could build familiarity with what a successful negotiation looks and sounds like? I think that might be a useful addition to textual "how to" guides like this.

There is one danger to taking stock that should be made more explicit - you weaken your future negotiation power with the company, since payment by stock already ties your hands some, especially if you haven't gotten past the 1 year cliff.

I'm about to ask for a raise for the first time but I'm not entirely sure how to handle it. I've been with this company for 7 years and only received 2-3% increases the past 3 years. I interviewed at a competitor and they offered me a 20% increase, which I turned down.

During negotiations, should I simply ask them to match? I don't want them to think I'm actively looking for a new job, but I want to be paid what I'm worth.

Every two to three years start looking for another job - chances are you are being underpaid and holding out for a meaningful raise or promotion is usually a bad idea. Few companies seem to keep salaries properly indexed to the market. At least in IT.

When you get an offer - always always counter it. No reasonable business is going to withdraw an offer if you counter. Figure out what you're worth and add 5-10%. If the first offer is already on the 'high end' of the spectrum still counter and ask for a signing bonus, options, better year end bonus, or more vacation. That will most likely be the most profitable email/phone-call you'll have - at least until you repeat the process in 2-3 years.

As an intermediate developer I've managed to quadruple my wage after going from full-time (40 hours) to contracting rates (30+ hours, 6-month term). I brought up the fact that my side projects are starting to gain traction, but I know the work I'm doing for the company is important and I'd like to finish what I started.

I also brought up that I've had a few job offers with highly reputable firms, and my current career growth had stagnated where I was currently positioned.

This post is a condensed version of an already short book I recently read - Breaking the Time Barrier [1]. I recommend reading both.

Like the author of this post says, calculating the value you're adding with your work is the only true way to accurately price yourself. It may not be as easy for you as it is for someone who works in sales, but it's worth your time to do so.

Shortly after reading BtTB, I had a new contract opportunity come my way. I doubled my hourly rate.

As long as you do not own what is being produced you will always be in a 'time for money' situation. You are trading your time for money (sometimes 5% more a year - or 15% more a year, if you strategically play chess with the job market and your skills niche).

I believe we are all much better off if we focus on building our own niche community/product (e.g. http://javascriptweekly.com - the number of subscribers is not as important as the quality of the subscribers).

If you want to know what companies offer as base salaries, check this website I made that uses H1B wage data. The tech companies have a very wide range for negotiation at each level, and it's important to know that when you go into salary negotiations. Although this is H1B data, for top companies these are the same base salaries that US residents are making, and they are pretty damn high. Look at Google, for example: http://salarytalk.org/search#%7B%22qcompanyName%22%3A%22goog...

I'm very happy to be with a company that has a standard policy of yearly 10-15% pay increases. We tend to do unexciting and sometimes just plain weird contract work for other companies, so my company has to pay us what we're worth to keep us around.

A tech employer in Vancouver once told me that if an engineer tries to negotiate a higher salary, that is a signal that they care only about money and that they wouldn't be a good hire, regardless of their experience and skillset. This is one of the "big" tech companies in Vancouver, BC. Never mind the hundreds of small sweatshops here.

In something like 25 years in this industry I have seen this opinion time and time again. "I am not being paid what I'm worth". You can negotiate to higher levels of compensation, but there is more to life (and your job) than money. I have often seen people price themselves out of a job.

A normal high tech company runs an R&D budget that is less than 10% of earnings. The rest of the money goes to cost of sales, infrastructure (building, chairs, etc) and various other things. This means that on average your contribution needs to pull in at least 10x your cost in order for you to be seen as being worth it. There are also a lot of costs for hiring employees -- employee tax, insurance, benefits, etc, etc. So if you make $100K in salary, then your contribution has to bring in maybe $1.2 - $1.5 million.
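The arithmetic behind that claim can be sketched out; the 35% overhead multiplier below is an assumption for illustration, not a figure from the comment:

```python
# Back-of-the-envelope: revenue an engineer must generate when
# R&D gets at most 10% of earnings. Integer math to keep it exact.
salary = 100_000
loaded_cost = salary * 135 // 100      # + ~35% for taxes, benefits, etc. (assumed)
rd_share_percent = 10                  # R&D budget as a share of earnings
required_revenue = loaded_cost * 100 // rd_share_percent
print(required_revenue)  # 1350000 -- inside the $1.2M-$1.5M range above
```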

You may think "Oh those bastard sales people are making way too much and aren't providing any benefit", but you will find that the company will still budget less than 10% of earnings for R&D. Whether it is justified or not, quite a few of the people in the "management side" identify a lot better with sales and understand their value a lot more. Unless you think of a way to significantly increase earnings, then you are depleting the pool of cash for R&D when you ask for a raise.

"Not my problem," you think.

Except Jane down the corridor appears to be very nearly as productive as you are (whether it is true or not is completely beside the point because everything will be judged by its appearances). She makes something like 60% of what you make. She's a freaking bargain! You, on the other hand, bitch and moan that you can't make ends meet on $100K and that you are living out of garbage bins. Plus you see yourself as the saviour of the company and without you everything will just collapse. Managers think, "God, please don't make me talk to that guy again".

The order comes down from above -- either 1) Our competitors are kicking our ass and we need to downsize R&D OR 2) We need to ramp up explosively to hit the next big business wave, so we need more programmers!

How will we reduce our expenditures or hire more programmers with the same amount of money? Easy! We'll do away with those bitchy-moany prima donnas and hire more of the absolute bargains that never complain.

Here's a secret I've learned. Being seen as worth significantly more than you are paid means your boss always approaches you with a sense of gratitude. In fact, creating a sense of solidarity with management in this respect shows loyalty. While it is true that, in general, companies do not return such loyalty, individuals in management will tend to select a handful of people that they trust and "cannot do without". Those people will not be the ones who constantly threaten to leave for greener pastures, or who constantly complain that they aren't appreciated.

I have never negotiated salary. I have either taken what has been offered and then worked hard to become an integral member of the team or I have refused the job. I have left jobs that I didn't like, but I have never left to make more money. Nor have I ever threatened to do so. I probably get paid less than I might if I pushed hard, but I can tell you that I enjoy the privileges of being "that dependable guy" much more than any salary could provide.

as a business owner/operator involved in everything from sales to hiring to vendor purchasing, here's the most important thing:

if you can't walk away from a negotiation, it's not really a negotiation. if you feel, at any point, compelled to stay even though your interests are not being met in a reasonable manner, you're going to get screwed - and it's your fault, not the other person's.

in a true negotiation both parties are attempting to find an optimum solution that solves for 2 sets of 'peer to peer' requirements. it's supposed to be a cooperative endeavor. if at any point it turns contentious, you back out immediately. the opportunity was never there. it was just an illusion. this is the hard thing for people to grasp.

unfortunately, for most people these are things you learn by doing. "not all that glitters is gold".

another angle: How much would the company lose if they had to recruit another person exactly like you? The headhunters get 20% of your salary the first year, then there's the cost in time and working hours to find a new employee, bring them up to speed, the uncertainty over whether the new hire is the right person, etc.

As a floor, you need to fund a modest life, and that means retirement, healthcare, personal improvement, food, home, etc.

Add some small margin onto that for quality of life.

Second, get business-minded. It helps to take an interest in the company, and to know its financials and its goals. This helps you position your value in direct, meaningful terms.

I very strongly agree with being ready to walk when it comes time for salary discussions. If you can't walk, you will get the lowest comp, unless you have some relationship or other leverage to play on.

The bigger the increase, the more this matters.

The idea that you are worth only the cost of a replacement plus training is false. It ignores you as a person, the relationships you have, etc. This is often the framing, but do not be afraid to expand the discussion.

Set goals and justify them. This is something sales people do all the time. They are motivated by those goals and communicate them easily and consistently.

E.g.: I need $200k this year to fund my travel and home investment plans.

A tech person may have identical goals, or maybe is wanting to build something, or send a kid to school.

Value these and frame your motivation to work in terms of your goals, company, love of the work, etc... Managers, and higher level people in the company understand and respect goal oriented people. Make sure your goals and the company align, or make basic sense and there are no conflicts.

This all speaks to the work being worth it, and to reasonable expectations as opposed to pure greed. Greed isn't bad, but clear, meaningful goals are easier to sell and for others to identify with. When others identify with your life purposes, they can value them and easily see why you are inclined to stay and work for them. They also neatly dodge the "how much is enough?" type of questions.

Get other offers and/or secure the relationships needed to know you can land on your feet should you need to make a change.

Be flexible. The company isn't seeing its goals play out all at once, and you won't either, but there should be a realistic path to get there.

All of this boils down to a "this is where I need to be and why it matters" conversation.

Shared and aligned goals are a great basis for a loyalty-type arrangement. People will work hard for others who take care of them, and seeing that play out is worth a lot.

Another advantage of goals is that there is sometimes more than one way to reach them besides cash. It's nice to have options on the table.

If it goes well, great. If not, you have your fall back.

You may be able to contract too. Where a company really cannot pay you what you need to realize your goals, perhaps a more flexible arrangement can leave you empowered to do it yourself.

This does not need to be a conflict of interest, particularly where you may have more skills not being used, or relationships where you can add value in atomic, easy to execute ways.

Another consideration is involvement with sales and marketing. If you can take some risk, you may find opportunities to capture nice rewards by being part of that process. This takes some people skills, but getting them is worth it.

Ask sales how valuable a tech person who can handle and understand the sales process is. They could be your most potent value advocates.

You help them close a big one, and it directly benefits them. You leaving will present an opportunity cost they will have zero problem justifying.

Of course there are spiffs and such potentially mixed up in this and it all depends on who you are. Taking some risk will differentiate you from other techs and that can be worth a lot.

The first time you walk it is hard. The first time you cultivate advocates is hard. The first time you take risk is hard, and the first time you get a nice increase is hard.

All worth it. Actually it all is as worth it as you think it is. And people count on those things being hard enough and not worth it enough to keep you inexpensive.

What is worth what? That is your primary question to answer. Sort your life goals out, value them, decide on risk and alternatives, and then proceed to have the dialog needed to get you there.

Once you start down this path, you do not stop. It becomes part of you and others will see that mindset and treat you accordingly.

interesting to read all these from the engineer's perspective; from the manager's side of the table things are quite different. If an employee comes to me and asks for a raise, then I begin the process of replacing them. We give reasonable raises and pay fair market value; an engineer might make a little more elsewhere, but they'd be giving up their RSUs and the opportunity to work on really cool stuff. If, however, they want something else, then good luck to them; people are all different and it's a free country. But asking for a raise means they aren't happy, so they will leave anyway. Either way, I begin looking.

when someone joins, unless they are at VP level they really don't have much negotiating opportunity; we make a decent offer and they either take it or leave it. we very seldom adjust the offer.

Hey, I work at a startup in Budapest (allmyles.com), I'll ask the team if we could teach him a thing or two by passing off some work. We mainly work with Python, and having him around (our office is next to Deák Ferenc tér) might prove fun for all of us. Remote works for us as well, of course.

Disclaimer: We're rather bootstrapped, so I can make no promises. (Jeez, we might not be able to afford a 15 year old programmer.) An even larger issue might be that I think it's illegal to give work to anyone under 16. I'll have to ask around and if you send me some contact info to bence.nagy@allmyles.com, I'll get back to you once we've got things figured out.

Learn programming fundamentals through TopCoder and HackerRank, which have programming puzzles and teach algorithm and data structure fundamentals

Then also get involved in an open-source project - just keep looking for interesting subjects until you find one you like. Much of programming is really about managing projects, and open-source work will give you something to put on your resume and talk about, let you meet new people, and lead to jobs.

Everything you need is online. "Pick a job and become the person that does it" - Mad Men

It seems like almost everybody who replied is a web developer, and assumes your son wants to build websites. Let me suggest something different.

Figure out which part of programming he's interested in, why he likes it. Is he good in maths/physics, does he want to write games, websites, apps, etc.

If he's just interested in "programming" in general, get him a nice book on algorithms and suggest different languages he can use to implement them (something C-like, a scripting language, a functional one, a modern Lisp dialect).

Oh right, get him to use Linux or BSD, as that will give him access to a ton of free development tools. I also mention free because I don't think he should focus on making money at 15, but find something he enjoys learning. If necessary, you could provide incentives to learn this properly, instead of following possibly short-lived market trends.

I suggest he start by creating profiles at odesk.com and freelancer.com. Initially he'll need to apply for smaller projects at significantly lower prices than the other bidders. He can build a good portfolio and start moving on to bigger, better-paying clients.

He could start with web development initially. It's the fastest way to get going. Then move to other subjects like algorithms, machine learning, Big data.

Ask him to build some stuff which can solve some real world problems that he faces in every day life. If other people also face such problems, they will pay to use his tools.

15 is late -- by mainstream reports, at that age you're either a billionaire already or you're never going to make it. </joking>

Back in my days, I'd have built websites for friends and some would turn into (badly) paid gigs. If he's got other interests (sports club etc) they could become engagements.

If he really wants to do websites, you'll have to start with JavaScript/jQuery, then graduate to server-side (Node, PHP, Python/Django, Ruby/Rails, pick your poison).

Consider that the real dazzle these days is mobile, and it's not that much harder than making websites (or he can make mobile-optimized websites, which can be almost as cool). It might be easier to get engagements on that side at the moment, since most people/clubs/businesses already have a website but they likely don't have an app.

I started along the same path 12 years ago, so I can share my experience.

* I started with PHP. My suggestion today is to start with Django (Rails is fine too, but Django has less magic, so things are a lot more explicit)

* Bootstrapping out of nowhere is difficult. He will have a lot to learn from the knowledge perspective. So, looking out for opportunities, negotiating payment etc. can be draining. This is where you can help so that he focuses on learning his craft first.

* It will be tempting to work for free. Advise him against that. He can work for a low fee, but getting paid puts the work in the right context.

* Stay away from portals like Elance and ODesk.

* Working as an intern at a company would really help him with the other meta aspects like planning, team collaboration etc. which are all important to pursue his long-term goals.

If it's solely for the purpose of making money (as opposed to learning for the challenge/curiosity), then do some research: freelancer.com, odesk, etc. I'd check which technologies are in demand and go from there. It's also important to play to your strengths; for instance, I find web development very finicky and tend to excel at meatier things like transaction processing. hth

If I were him, I would first ask around - family, local businesses, organizations, etc. to see if someone needs a website or internal database or something like that, and then learn what you need to learn to do it. Another option might be to find groups or small businesses or individuals that do coding and see if he can learn from them and help out.

What I wouldn't do is just try to learn programming for its own sake - such as taking a course or buying a book without any idea of how it might be useful. He'll just forget it and perhaps even decrease his interest in programming. Flip it around and find a project first, a reason to learn programming.

Generating money from programming is hard, really hard. I'm a professional developer (though a junior one) and I can't find ways to do it reliably.

If he's still interested in programming, here is my list of languages:

- Desktop: Python, obviously. You can make little graphical interfaces, write easy scripts, manipulate data in Excel or Word, and even do some remote automation, since there are network libraries.

- Web: I would recommend against full-blown web frameworks like Django or Rails. Start small with some simple static sites using HTML+CSS, then learn to build dynamic ones using PHP and JavaScript.
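To make the "manipulate data" suggestion concrete, here is the kind of small standard-library script a beginner can write early on (the CSV content is inlined just for the example):

```python
# Total a numeric column from CSV data using only the standard library.
import csv
import io

data = "item,price\npens,3.50\npaper,2.25\n"
reader = csv.DictReader(io.StringIO(data))
total = sum(float(row["price"]) for row in reader)
print(total)  # 5.75
```

Swap the inlined string for `open("expenses.csv")` and it becomes a genuinely useful chore-automation script.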

I would also add that there are other ways to make money than programming itself: I know a 17-year-old who rigged up a farm of Minecraft servers for his high school, and he's being paid by his classmates for the hosting.

How much does your son want to do programming? Does he do it for fun already?

What other interests does he have? Are there parts within these interests that could have relevance for programming? For example he might like bird watching, so perhaps a mobile app for birding might be good, or he might love platformer video games, etc etc.

In short, he should do what he is interested in, don't worry about the choice of technology - by far the most important thing is that there is some interest, passion and enthusiasm. The choice of technology is less important than the choice of what to do.

I started at 15 (15 years ago). Although today I am a Python guy and generally a little anti-PHP, I think the ease of getting PHP running on web servers translates almost immediately for beginners, and it will teach him to search for what else is out there. Plus, if he is going to be doing little church/synagogue websites, he'll start to grasp HTML/CSS/JavaScript at the same time. That's the same way I started.

Programming is a very competitive market. Python is probably a good choice for a first language. Alongside the programming language, I would recommend learning some database, perhaps Cassandra, RethinkDB or SQLite.

A $450 Seiki 50" 4k display makes an excellent teaching tool. It's so large you can see code from a higher elevation so to speak. Visualizing and "seeing" how a script flows become much easier. You can stand by the display and point to how a function call executes the code where the function is defined and things like that.

This is the stack I would set up with, optimized for ease of use, elegance, and market demand:

Debian sid or Ubuntu

Tiling window manager

Vim

Python flask on the backend

Bootstrap on the front end.

A hacker that is comfortable with Linux and the command line, python, html, css and js can find work anywhere.

It basically has zero value now, but perhaps my only claim to "fame" in the 90s was writing the first (at least popularly known) raycaster in QBasic which then spawned a ton of clones (the comp.lang.basic.misc community was strong then). It was absolutely hideous and naive but getting credited in far smarter people's larger creations as their inspiration was an eye opener for me :-) (And a feat I have not repeated, alas!)

I remember using BASIC during my demo-coding days to find the optimal algorithms. When everything is running at 0.5 fps it's easy to spot which implementation is the fastest. So after trying a few methods I would determine the fastest and implement it in asm.

IIRC the version bundled with DOS 6.22 was severely limited: it couldn't generate executables, and the built-in help covered only the IDE, with no help on the language at all. That's because MS was also selling a "pro" version of QBasic. It was basically impossible to buy any software then in my country, so I was stuck with plain QBasic for a few years, until I got an Internet connection and pirated a full version (it was still impossible to buy software here - no one bothered selling it at all until a bit later).

Still, bundling a quite capable IDE with an OS was a very nice practice, but it basically ended then, probably because MS also sold Visual Basic. Without QBasic I wouldn't have started programming at all - I wonder how it would be if I was starting right now. Is there any ubiquitous, simple language available basically everywhere and with a nice IDE by default? JavaScript almost fits the bill, but as a beginner who doesn't know any better you're likely to use Notepad, while with QBasic you had a real IDE from the beginning.

I'm wondering what impact the current state of programming will have on future programmers. Back when I was using QBasic there were very few distractions; as a self-taught programmer, if you wanted to build something you had to figure out how to do it yourself. No frameworks, no Google, no Stack Overflow noise - just a couple of libraries and information from the couple of books I had access to as a kid. I remember that on my first PC I knew the purpose of every single file on its 21 MB hard disk; try doing that today.

Shout-out to QB64 - an attempt at a modern version/compiler for QBasic code, with updated features. I used to do a fair amount of programming in QB64, and while the design of the language hampers it as a general-purpose language, it's a fun language for learning and writing in.

In the late 90's there were several great QBasic communities (NeoZones was one I would frequent the most). I think these more than the fondness of the language/IDE were what kept me interested in learning software. Most of the communities seemed to die along with popularity of the language in the early 2000's.

This is fantastic. My first game was in GW-BASIC, but I quickly upgraded once I discovered QBasic on DOS 6.22. I also loved the MS-DOS Shell (http://en.wikipedia.org/wiki/DOS_Shell) found on the supplemental disk of v6.22. It felt like multitasking...but wasn't.

I just want to say that this is a beautiful example of the hacker ethos being put to use. The creator got into a new field, saw a hole in the available information, and put together a neat, useful resource. Just perfect.

Interesting to read this thread to see that lots of people have to medicate their dogs. Is this a regional thing? I've had my dog 8 years and apart from the required injections it has never needed any medication.

I don't want to be a downer, but I don't like this concept. I understand the desire to educate yourself about the medicine you need for your pet, but the best thing you can do is find a good vet that you trust to give you this advice. In fact, the mechanism whereby you allow the user to buy said medication from Amazon makes it seem like you're putting this list out as an alternative to proper medical care.

There is much more to taking care of a pet than reading the packages of some medication and then buying it from Amazon.

If the place you adopted the dog from didn't give you information, you should talk to a vet about what parasites are common in your region and what treatment options there are. --Many times vets will offer a couple of options for non-prescription treatment and prevention.

If you're in a region that doesn't have fleas and ticks, there's no reason to treat your dog for them. The same goes for many of the other parasites listed.

I'm using NexGard (not listed - just approved in 2014) and it seems to be working fine so far, but since it's new there isn't much data on it. I preferred the oral over a topical so I didn't have to worry about topical applications transferring to the home, others, clothing, etc.

Trifexis is awesome and what I use for my active dog (and we have never had flea problems). The problem is that the pill smells very, very strongly of mold. I have to smash up the pill and mix it into peanut butter for my dog to even look at it. I also administer it outside and use gloves, because it will have your house smelling of mold for days. With a smaller dog (thus smaller dose/pill), you could force it down, but a 30+ lb dog will have too big of a pill and won't eat it normally for any reason.

Would be really awesome if there were some way to provide feedback on effectiveness. My experience with cat medicine showed that what it says on the package is not to be trusted (for example, all the "natural" stuff against fleas was completely useless).

In your research, did you explore other common immunizations like bordetella, parvo, rabies? (I realize these wouldn't fit into any overlapping things like you have here, I'm just curious if you might add it at some point.)

I've had 5 dogs in my lifetime that have lived to age 15+. All I do is take them to the vet yearly and feed them good quality dry dog food.

I have lived in rural Arkansas to the east bay California, as far as exposure to various things. I've had a half wolf dog in AR that would routinely kill and eat armadillos, opossums, terapins, and all kinds of small wildlife. He was probably exposed to all kinds of nasty things.

Never did I give him any of these dog preventative medications that seem to be so popular today.

A veterinarian with reasonable prices is greatly valued. So valuable, in fact, that with the right marketing I feel people would leave their estates to your business; I would. That's all I really have to say to veterinarians, but I feel a lot of you need to ask for some business advice. I see a lot of people skipping trips to the vet because they just can't afford the high-priced boutique veterinary practice. I know people are inherently cheap, but I think most just want to be treated fairly. I also know what happens when vet hospitals offer too much for free. (The SFSPCA offered a "no kill" policy; people started abandoning their animals, and it ruined a great organization.)

One thing I will pass along: I've always had big dogs, a Bullmastiff and a mixed-breed American Bulldog/Pit. They look sturdy, but they are fragile. The purebred Bullmastiff was always at the veterinarian. She had multiple problems, from huge paws that attracted foxtails/grass seeds to entropion. I had a great income then, so going to the vet was no problem. I now have a low income, and thank goodness for the mixed breeds. They are still fragile, but don't need to go to the vet as often.

I still hear vets telling big-breed dog owners about the benefits of exercise. Yes, exercise the dog, but let them choose when and where. All my dogs were over 100 lbs, and when I exercised them too much their bodies fell apart. For the Bullmastiff, a walk around a small lake on a summer day was too much, even with a gallon of water. She just dropped halfway around. I sat with her until dusk, and then we just made it back to the car. My point is that they, especially the Bulldog breeds, are fragile.

When I started working on https://trackif.com, I thought the premise was thin because prices couldn't fluctuate that much. I assumed everything gradually declined in price, and that it'd primarily be driven by store-A vs store-B price dropping.

They mention HDMI cables specifically. I just went into a Best Buy and asked for their cheapest HDMI cable. The salesman showed me one for $15. The Amazon basics cable is $5.49. If you've got Prime and you factor in the shipping costs of using another website, it's hard to beat Amazon's price.

It's not only Amazon that does this with loss-leader pricing; it's also my local grocery store with milk and bread.

For me, what gives the impression that Amazon has the lowest prices is their nearly nonexistent profits. Whatever they list may not be the cheapest, but it's always difficult to find something cheaper elsewhere, even if it does exist.

I'm sure Amazon is constantly adjusting their prices in order to maximize their sales and revenue; they even have some price automation tools as part of their inventory management system for people who sell their stuff through Amazon.

However I'm not sure these adjustments are meant to make people perceive that Amazon has the lowest prices. Instead it seems like they're meant to ensure that Amazon actually has the lowest prices on the most popular and high volume items. On those items they are pricing for high volume, while on the lower volume items they need a higher price to get an equivalent margin. That's what this looks like to me: maximizing margins across products with different sales volumes.
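The volume-vs-margin argument above can be made concrete with a toy calculation. All the numbers and the helper function here are invented purely for illustration; this is not how any real retailer prices:

```python
# Illustrative only: how a retailer might price a low-volume and a
# high-volume item to earn the same absolute profit per week.
# All figures are made up for the sake of the example.

def price_for_target_profit(unit_cost, weekly_volume, target_weekly_profit):
    """Price needed so that (price - cost) * volume hits the profit target."""
    return unit_cost + target_weekly_profit / weekly_volume

# A popular item can carry a razor-thin markup...
popular = price_for_target_profit(unit_cost=10.0, weekly_volume=1000,
                                  target_weekly_profit=500)
# ...while a slow mover needs a much fatter one for the same profit.
slow = price_for_target_profit(unit_cost=10.0, weekly_volume=20,
                               target_weekly_profit=500)

print(popular)  # 10.5 -> a 5% markup on the high-volume item
print(slow)     # 35.0 -> a 250% markup on the low-volume item
```

The high-volume item ends up looking dramatically cheaper relative to cost, which is exactly the effect that makes the popular items feel like the best deals in town.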

Surprisingly, most home-improvement items are cheaper at Home Depot than on Amazon; I learned this the hard way. Also, Amazon routinely displays "original" prices that are much higher than other places', so that even with the "discount" the price falls in the same range.

This is all old, old news. Back in the 1970's, a friend of mine was shopping for a nice SLR camera. He knew which camera he wanted, and diligently researched ad after ad, finally settling on one with the cheapest price. We all piled into his car to go get it.

Sure enough, he bought the camera body dirt cheap. But he walked out of the store with a lens, filter, case, flash, film, and a few other accessories. Back home, he ruefully discovered that the total price he shelled out was higher! He hadn't realized that the accessories were priced higher than the competition's. People simply are not price-sensitive to add-ons, and salesmen have known that for centuries.

Gillette is famous for pretty much giving away the razor and making money on the blades.

When I decided I didn't feel great about supporting Amazon any longer due to its reported treatment of its business partners, corporate employees, and warehouse employees, I started shopping around and was surprised to find it wasn't so hard to find deals just as good or better elsewhere.

Sometimes prices are just lower elsewhere, sometimes free shipping comes without a requirement to make a $35 order. (Or pay a high annual fee for free shipping that wouldn't amortize well for me.)

And sometimes Amazon still is the cheapest, but not by so much that it feels imperative to shop there if I have reasons not to.

I read on NPR a while ago that some guys found an arbitrage opportunity in book prices. One of them would track the most sought-after books, buy them when prices were low, usually around July/August, and then sell them back on Amazon when prices were high, around September and January. Makes sense.

Another common sales technique is to have 3 models in a line - the stripper, the standard, and the deluxe. The stripper was barely functional, and its sole purpose was to have a cheap price to attract customers to the showroom. The deluxe had every silly feature the manufacturer could think of, like pinstriping on a dishwasher. It had a very high price. Its sole purpose was to 'frame' the price of the standard model and make it look like a bargain.

The standard model was the one the manufacturer expected to sell. Of course, the rare price-insensitive customer would buy the deluxe, and the salesman was happy to sell that and collect the large commission.

I also like the simple but effective Keepa (http://keepa.com) for comparing Amazon prices in different countries at the same time. In Europe, for example, home-theater amps are 50% cheaper in Germany than in France, while France is cheaper on something else and the UK is cheaper on tools and sometimes projectors (depending on FX rates).

> The startup wants to help Amazon competitors think about pricing in as sophisticated a way as Amazon does.

The catch is that if several big retailers apply the Amazon strategy, a self-reinforcing feedback loop will drive the prices for popular products to zero and the prices for less popular products to +inf. This will make popular products even more popular, which further strengthens the effect. The question that this startup has to answer is thus how they are going to keep the market from exploding and how they can benefit several clients at the same time.
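The feedback loop described above can be sketched as a toy simulation. The undercut and markup factors are arbitrary assumptions chosen just to show the divergence, not a model of any real market:

```python
# A toy model of the self-reinforcing loop: each round, retailers
# undercut each other on popular items and raise prices on niche
# items to recover margin. Purely illustrative; all parameters invented.

def simulate(rounds, popular_price=100.0, niche_price=100.0,
             undercut=0.9, markup=1.1):
    """Apply the undercut/markup dynamic for a number of rounds."""
    for _ in range(rounds):
        popular_price *= undercut   # race to the bottom on popular SKUs
        niche_price *= markup       # margin recovered on the long tail
    return popular_price, niche_price

popular, niche = simulate(rounds=20)
print(round(popular, 2))  # 12.16  -> popular prices collapse toward zero
print(round(niche, 2))    # 672.75 -> niche prices balloon
```

Even this crude sketch shows why the strategy is unstable if everyone adopts it: both prices diverge geometrically rather than settling at an equilibrium.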

I'm curious why this lists but few of the Alexa top 10, such as Google, Yahoo!, Facebook, Twitter, and others. The first two are mega-sites and only the root domain would count most likely, but social sites constitute a lot of communication. (Even better would be to say whether app connections are secure, such as knowing whether Snapchat connections are over TLS or not, though that's probably out of scope.)

Shouldn't we be checking the pages that actually should be secure? E.g., Ubuntu is listed as Bad - why not check their login page, https://login.launchpad.net/, rather than launchpad.net? Perhaps once https://letsencrypt.org/ becomes available it will be worth the extra effort to encrypt everything. In the interim it's most likely a waste of funds, especially for projects that operate on donations.

Edit: I was surprised to see the WSJ listed as Bad. Checking their login form, something that should be encrypted, the POST goes to https://id.wsj.com - a secure page. I won't go through the entire list, but I expect most of the sites on it have a similar configuration.

Decades ago in grad school, I was under my 1969 VW bus doing maintenance. Had a set of combination wrenches at curbside. Ten year old blind kid who lived next door came out and wanted to know in detail what I was doing, and I talked him through what I was doing to get ready to pull the engine to install a new clutch plate & throwout bearing.

I could see the nuts and bolts, and knew what wrench size I needed, but when I'd reach for it, I'd often pick the wrong one (13mm felt pretty much like 12mm, etc.). He asked me to say (only once) the wrench size each time I picked one off the curb. Thereafter, he told me what size I was about to select as soon as I touched it, making it scrape ever so slightly against the concrete. Saved me some time by telling me when I was about to pick up the wrong wrench.

I wasn't surprised, as I'd seen him on his bike in the neighborhood, doing the click echolocation described in This American Life. I was in a neurophysiology program at the time, and was totally impressed with his living demo of neuroplasticity and audio world mapping.

Sorry if this is off-topic with respect to the content of the episode, but does anybody know what kind of software NPR uses to create these transcripts? There is a very specific structure to the layout and I'm wondering what the data entry UI looks like, if it's a commercial package, or if they rolled their own. Thanks.

As an educator the whole power of expectation thing probably was more interesting to me than things like the echolocation aspect. I mean yes, we know that students perform better when you expect more of them, but some of the aspects of how that translates into actual performance gains hadn't occurred to me before.

One of the things I was hoping I'd hear people comment on here is what Miller says towards the end: that perhaps Kish's aversion to physical closeness, perhaps to "loving" in general, and his desire to be independent were mutually exclusive. While I personally would agree there is an argument that being a helicopter parent does not help build a sense of independence in someone, I'm really curious whether a desire for closeness to others in general is really the opposite of that.

I guess it makes some sense naively (independence or apartness vs. dependence or togetherness)? But the way I think about it is that a desire to be independent needs to be tempered with a desire to socialize and be close with others. Maybe they are orthogonal axes in a person's "personality space," and a healthy balance of both gives a person a good norm overall. This seems opposite to what they hint at toward the end: that these two qualities may not be orthogonal but, in fact, anti-parallel.

Perhaps Daniel's aversion helped him overcome the enormous odds against him - an entire culture that put that "blind" label on him and would have pulled him far toward the more "dependent" side; I mean, that was the point of the whole rest of the episode. It was needed, given the extreme pressure he was under from this culture. Still, I hope that the many other people who do not have this extreme desire for independence need not be forced into it just so that the rest of us see them as equal. That certainly is not fair to them if they do not desire it.

I say all this because while I have a close love for a few close friends and family members, I usually like to be independent myself. However, I've learned as I've gotten older that sometimes you need to rely on others even when you think you won't, which isn't easy for me. A healthy balance seems better, as I've reluctantly accepted.

I am just a casual observer, but how did they narrow it down to the way the rats were touched? Couldn't it have just been the people "running" the experiments instead being more casual about their results? Maybe for a dumb rat you hit your stopwatch a second later or a second faster for a smart rat, for example.

I really enjoyed this episode (and pretty much everything This American Life does). The one fact from the podcast that really got me thinking was how many blind people will naturally turn to clicking to help them navigate.

I kept saying "RGB, RGB" and got to level 4 before I was bored. I'm also color blind, so I thought I was going to fail, but it wasn't that bad - probably because I wasn't distinguishing shades and such.

Could have completed level 10, maybe 11 with a little luck, if not for the lag (mostly due to all the things running on my computer at the moment, so it's mainly my fault) making the game ignore one click out of 3-4.

I'm considering staying with Yosemite and iOS 8, and haven't seen any significant breakage over previous versions.

If anything it's better than Mavericks. And Mail woes are 99% gone too.

Just to add another viewpoint, since only people with negative experiences tend to write.

Of course all software has bugs, but not everybody is bitten by all of them. Some are legitimate complaints. Others are from people who install every BS addon, haxie, etc. they find, have el-cheapo external peripherals, or blame third-party software issues on the OS maker.

(That said, I've had the "22 px sheet" bug, and since it's the second point release already (I run the beta), it should have been fixed by now. I also dislike how they abandoned Aperture.)

I bought my Mom an iPad over an Android tablet, telling her that everything would just work. After the iOS 8 update, I am shying away from answering her questions about bugs and questioning my decision about the iPad. People who are not tech-proficient form the biggest consumer base for Apple, and it is terrible that Apple is forgetting how it gained this loyal consumer base in the first place - through reliable software which "just works". It only makes business sense to go back to their original software quality, even if it requires dropping regular releases, because they will start losing customers (and probably already have) real soon if they don't.

The flickering menubar icons drive me insane. It's proof that Apple is either unwilling or unable to commit enough resources to OS X to ensure that its quality is consistent with what users have come to expect.

SJ used to be relentless about nitpicking to keep quality up. That's probably not happening consistently across all apps and platforms anymore. Tim may need to appoint a Quality Czar who is detail-oriented, accepts no bull, and has "wrath of God" authority to make folks take them seriously.

No question that Apple's quality has gone down. However, whose has gone up? Or better yet: who is actually building quality software systems?

Comparing only in the same problem domains as Apple: all of my Android devices have been rife with equivalently bad issues. Windows? Different quality issues, but just as bad. Google web systems? Same case - better in some aspects, worse in others.

Perhaps I'm old and jaded. Still, it seems like we've reached a point in software development where building quality systems is not possible with existing methodologies; for some problems, while we are able to develop 90% solutions, the last 10% might as well be impossible.

The even more jaded part of me wonders: does it even matter?

Most of the articles I've read on Apple software quality seem to describe industry-wide issues to me. For example, the iTunes issue mentioned in this article is a problem that every metadata/library-based media player has to deal with: if you let more than one app touch your audio files, you're pretty much guaranteed to have problems, since different apps/services may not write or sort on the same tags. No one's fault exactly; it's just the way things are.

The example of dictionary/thesaurus lookup moving to a system-wide text service is an instance of a feature clearly being improved - but if users aren't aware it changed, is that really an improvement? The entire industry sucks at user education. There's no good reason every major software developer shouldn't have hours and hours of free training/how-to videos available to help users cope with change.

As for Gmail's SMTP rejecting iWork file-format attachments, that's the industry-wide problem of users being stuck between the best interests of various companies. Apple wants to change/improve the iWork format, but Google wants to protect users from files it can't scan. Again, no one is really at fault; it's just the way things are.

Has anyone else found Safari under Yosemite and iOS 8 to be of "disappointing" quality? I know of DNS being broken in Yosemite, but my wife and I also find Safari extremely irritating: it just sits there at 20% of address-bar progress after entering an address and pressing Enter.

EDIT: And another thing - Spotlight now takes a significant amount of time to get results. I notice a large difference between my personal i7 2012 MBP with Yosemite and the 2008 single-CPU (quad core) Xeon running Mavericks at work. Maybe it's the disk difference, but I sometimes wonder if Spotlight is doing anything as there is no search indication / activity indication.

I think that Apple is a victim of its decision to release a new OS version every year. Users expect a lot of changes from iOS n+1 or OS X m+1, so you can't just fix all the bugs and release a new version. And constant feature work introduces new bugs, while the deadlines don't leave time to release properly tested fixes for the old ones.

I believe that feature-wise, OS X 10.10 and iOS 8 are quite nice. Apple really should adopt something like Intel's tick-tock strategy: release iOS 8 with new features in 2014; release iOS 8S with all bugs fixed in 2015; release iOS 9 with new features in 2016, but allow customers to downgrade to iOS 8S if they want to, for at least a year. They'd have to support two iOS versions, but people would have a choice between new features and stability.

Jean-Louis was always pretty honest about this stuff. The problem is Apple has too many things going on, and only the big stuff gets attention. Take Xcode, for example; please take it away before I shoot something.

I think Apple is facing the same problems a lot of other companies face: it's no longer green-field development, and the existing codebase means they're being weighed down by regressions. Maybe not enough automated tests. Some bugs are basic errors, though. Take the frequently-visited icons in mobile Safari: they keep getting the wrong favicon, and that's just shoddy programming and testing.

I currently have two Macs: a late-2013 13" rMBP, which works flawlessly, and a 27" iMac with a 680MX (late 2012?).

The iMac has felt wonky, for lack of a better word. It doesn't hang or crash more than it did on Mavericks, but f.lux causes WindowServer to crash randomly, which logs all users out in a microsecond. I've reported this, of course.

Another thing that troubles me is the amount of rubbish logging done by the system. Have Console.app open for a while and see what nonsense it barfs out. How can it work at all with all those problems?

I just rated Yosemite 1 star on the App Store, wiped everything, and installed Mavericks. It is so much better now. The last OS was just a shameless cloud infestation riddled with bugs.

I use Remote Screen for work, and EVERY time I disconnect, my MacBook Pro freezes completely for 1-3 minutes (no mouse, just a frozen desktop). Sometimes I need to hard-reset it. Screen sharing used to work nicely in 10.9, 10.8, 10.7 and so on. Why was it necessary to mess with something that worked?

If more developers knew what backward compatibility is, I'd be happily using Snow Leopard right now. The most frustrating part is there's nothing fundamentally new worth all the bugs and a constant envy for new hardware.

IMHO the worst bugs currently are in discoveryd, which replaces mDNSResponder for Bonjour.

If you remove a service on OS X 10.10, its removal will be broadcast. But it doesn't stop there. No... most of the time the service will be published again, and only after a second or so will it finally be removed. How the hell did this pass even the most basic QA checks?!

I had an issue with Mac Mail: after the upgrade it just stopped syncing my Exchange mail correctly, and that was the end of it. Called Apple support - very friendly guy, talked me through it, we tried a few fixes, and there was a workaround, but it was never the same again. Moved over to MS Outlook and haven't gone back. Kind of a shame.

I continue to be flabbergasted that so many otherwise savvy observers believe that a random assortment of software annoyances constitutes a crisis at Apple. Articles like this could have been, and were, written at any time in the past fifteen years.

Quite honestly Yosemite is the most buggy Apple product I've ever used - it has made working with OSX a chore. Apple has not fixed one of the bug reports I filed during the beta phase (which I'm not convinced has finished).

> but how could such an obvious, non-esoteric bug escape Apple's attention in the first place?

And the answer, if I recall correctly, was that this wasn't a bug in Apple's software at all. It was a consequence of the file format actually being a package - that is, really a directory. Apple's software all worked with packages just fine; you'll find that if you tried to use Mail.app to send it, it would all Just Work. The issue is that Gmail and other such services never even considered the idea that a user might want to send a whole folder, and did not have any way to support that.

So the "fix" was to change the file format to actually be a compressed archive of the package (I assume it was a zip file, but I don't know how to go back and check). This made it work with all of the stupid software out there that assumed users would only want to transfer individual files.
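For the curious, the general shape of that "fix" - flattening a package (which is really a directory) into a single zip archive that mail services will accept - can be sketched in Python. The function and file names here are hypothetical; this is not Apple's actual implementation:

```python
# Hypothetical sketch: turn a macOS "package" (really a directory)
# into one zip archive, so services that only handle single files
# can transfer it. Not Apple's code; just the general technique.
import os
import zipfile

def zip_package(package_dir, archive_path):
    """Walk the package directory and write every file into one zip."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(package_dir):
            for name in files:
                full = os.path.join(root, name)
                # Store paths relative to the package so it unzips cleanly.
                zf.write(full, os.path.relpath(full, package_dir))

# Hypothetical usage with made-up file names:
# zip_package("Report.pages", "Report.pages.zip")
```

The receiving end sees a single opaque file, and software that assumed "one attachment = one file" works without ever knowing a directory was involved.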

Sure, perhaps the Pages team could have foreseen this issue. But that doesn't make it a bug in their software, just a case of only prioritizing compatibility with other aspects of Apple's software ecosystem.