It depends on what you want to prove. If the claim is that the system is useful for automatic rating of images (not sure anybody claimed this, but they do give pictures ratings on their website), then one would expect a false negative rate of [fairly tiny number], and a single randomly picked counterexample is quite significant evidence against that.

Not saying it doesn't work "pretty well" in a sort of cool tech sense, though.

I don't think it's that narrow. At least for myself, the description of the GP is more or less accurate. I run Linux because it's just so much more convenient for pretty much everything I do compared to Windows. I used to have a dual boot configuration for games, but after I realised I hadn't booted into Windows for more than a year, I decided to just drop it. Once Steam started releasing games officially for Linux, I took up gaming again, buying more games than I ever have before. And for whatever reason, it has turned out that almost every game I have wanted to get so far has been released on Linux. I don't really care if the FPS is slightly lower because some developer hasn't optimised something properly, and if it's ever a huge issue I'll just buy a more powerful graphics card. If I can start and run a game at more or less decent performance, without jumping through hoops or rebooting the system, I'll buy it. If not, I'll find some other way to enjoy my leisure time. I don't mind spending money on games if they are available, but I don't need them.

Admittedly, my experience with TFS is a bit limited, but what you are describing is not the same thing. In fact, it highlights one of the problems I was talking about: With SVN, because dealing with branches is more cumbersome, people use the working copy as the place where they do their work. The result is that you have no version management of the work you are doing until you commit to the central repository.

This is not how you work with Git. With Git, you commit often. Every commit is tiny; it's the smallest possible atom of work which can't be divided further in any meaningful way. Before I started using Git, there was a whole lot of commenting out lines back and forth, committing only a selection of the dirty files in my working copy, and all kinds of similar things which effectively amounted to a kind of error-prone, small-scale revision management by hand. Now all this is handled by Git, and it does it much better. Every change can be traced, reordered, combined, split, and so on. So when I say that my topic branch is floating nicely on top of the master, I'm not talking about the stuff in my working copy, I'm talking about my local revision history.
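As a minimal sketch of what "combined" means in practice, here is one way to fold a later fix into an earlier commit with autosquash (throwaway repo, hypothetical file and commit names):

```shell
# Demo: small commits, then fold a fix into the commit it belongs to.
cd "$(mktemp -d)" && git init -q .
git config user.email dev@example.com && git config user.name Dev
echo one > a.txt && git add a.txt && git commit -qm "add feature"
echo two > b.txt && git add b.txt && git commit -qm "unrelated work"
echo fix > a.txt && git add a.txt
git commit -qm "fixup! add feature"                # mark this as a fix for the first commit
GIT_SEQUENCE_EDITOR=true git rebase -qi --autosquash --root
git log --oneline                                  # two clean commits; the fix is folded in
```

The `GIT_SEQUENCE_EDITOR=true` trick just accepts the autosquash-generated todo list so the interactive rebase runs unattended.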

Furthermore, what I'm saying is not that certain things are impossible to do with Subversion. I'm saying that the mechanics of those things are so different that it fundamentally changes how you think of and use revision control in practice. It took me a few months of intensive Git usage to realise that.

Unless you're ok with a source control system that's trivially capable of throwing away history and corrupting your entire repository irreparably [...]

I don't get what you people are doing to corrupt your repositories, or even to "throw away history" (whatever that means). To me, it sounds like saying "A bicycle is better than a car, in case you run yourself over with it". To really lose stuff in Git, you have to deliberately try to shoot yourself in the head, or be such an insane klutz that you shouldn't be let anywhere near a computer with write access to a central repository anyway.
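To illustrate how hard it is to actually lose a commit, here is a throwaway demo (hypothetical commit names) of recovering a "thrown away" commit via the reflog:

```shell
# Even a hard reset is recoverable, because the reflog keeps a trail.
cd "$(mktemp -d)" && git init -q .
git config user.email dev@example.com && git config user.name Dev
git commit -q --allow-empty -m "first"
git commit -q --allow-empty -m "precious work"
git reset -q --hard HEAD~1            # "throw away" the last commit
git reflog -n 2                       # the discarded commit is still recorded here
git reset -q --hard 'HEAD@{1}'        # and one command brings it back
git log --oneline -1                  # back to "precious work"
```

Reflog entries are eventually garbage collected, but by default you have weeks to notice a mistake.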

[...] all for the sake of making your history log "pretty".

That is because you think of the revision history as a chronological "trace" of what you have been doing (I notice how you even refer to it as a "log"), rather than a graph describing how different snapshots of the code base relate to each other logically. It has nothing to do with it being "pretty" and everything to do with maintainability. If you need to lift a feature developed in, for example, one fork of a project into another, it is much better if the development of that feature is clearly separated and logically related to other versions of that fork's code base, than if it is something that "happened" somewhere in a chronological history at some point. This is a fundamental philosophical difference between how most people work with SVN and how most people work with Git, and one of the strongest aspects of Git IMO.
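A rough illustration of that "lifting" (throwaway repo, hypothetical branch names): when the feature is a cleanly separated run of commits, moving it into a diverged fork is a single range cherry-pick.

```shell
# Lift a cleanly separated feature from one line of development into another.
cd "$(mktemp -d)" && git init -q .
git config user.email dev@example.com && git config user.name Dev
echo base > app.txt && git add app.txt && git commit -qm "base"
git branch -M master
git checkout -qb feature
echo part1 > f.txt && git add f.txt && git commit -qm "feature: part 1"
echo part2 >> f.txt && git commit -qam "feature: part 2"
git checkout -qb fork master                 # a diverging fork of the project
echo fork > fork.txt && git add fork.txt && git commit -qm "fork work"
git cherry-pick master..feature              # lift the whole feature in one step
git log --oneline
```

If the feature's commits were instead tangled into a chronological trunk history, there would be no range to pick; you would be untangling diffs by hand.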

SVN is good because it makes you deal with conflicts immediately. Git is bad because it delays dealing with conflicts until days, weeks, or even months later, at which point it all goes to hell.

I really don't get the argument why limiting my own options would be something good. If things "go to hell" for you because you don't merge/rebase often enough, then do it more often! There is nothing inherent in Git preventing you from using the SVN workflow. It is even the default behaviour if you just work on the master branch and do pulls.

In fact, I would argue that Git is actually much better than SVN in the respect of dealing with a moving target, since you have the rebase feature. With SVN, you either need to keep all your work in one huge uncommitted blob in the working copy, or you need to commit half-finished work into the central repository. With Git, you can keep your topic branch floating nicely on top of the master branch's head, while building it step by step in multiple commits. There is simply no sane way to do that with SVN.
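A minimal sketch of that floating topic branch (throwaway repo, hypothetical commit names), assuming master moves on underneath you:

```shell
# Keep a topic branch rebased on top of a moving master.
cd "$(mktemp -d)" && git init -q .
git config user.email dev@example.com && git config user.name Dev
echo base > base.txt && git add base.txt && git commit -qm "base"
git branch -M master
git checkout -qb topic
echo work > topic.txt && git add topic.txt && git commit -qm "topic: step 1"
git checkout -q master
echo more >> base.txt && git commit -qam "master: moved on"
git checkout -q topic
git rebase -q master        # replay the topic commits onto master's new head
git log --oneline           # topic: step 1 now sits on top of master: moved on
```

Run the rebase as often as you like; each time, any conflicts are dealt with immediately, in small pieces, which is exactly the property being attributed to SVN.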

I agree there are some things that could be improved with Git's submodule feature (I assume you are aware of it, though you called it something else), but is SVN's externals really that much better? In my experience, both works, each with it's own pros and cons, though neither of them is "extremely painful".

I don't think git and svn are even really comparable in that way. I used to be a proponent of svn, but since I learnt git properly, there is no going back to svn ever again. The entire philosophy is different at a fundamental level, completely changing the way I, at least, work with version control. Git is more like a flexible framework where I can juggle different versions and multiple development threads, reordering things, rebasing onto different branches, or even creating completely new workflows (such as a process for formal code review before merging with the master branch). Every commit is small and adds one meaningful unit of functionality to the code base, and you can clearly see how features are composed of isolated strings of such commits, and how those features, possibly developed in parallel by different people, are merged into e.g. a common master branch in a coherent way.

Subversion, in comparison, is more like a kind of central, static trail log of everything that has been going on, where each commit is huge and usually intertwined with different other activities. Sure, theoretically you could work in the same way with svn, but that would be like saying I could use email in place of irc to do interactive text chatting; practicality simply prevents it from working that way.

I don't really understand why you see the need for a cloud solution to host a git repository, or why you even think it is easier to set one up with svn. Hosting a git repository is insanely trivial: you just put it on a machine with an ssh service running, and you're done! In fact, where I work, we regularly use each other's development computers as "hosts" when we pull and push topic branches between each other, before they are ready to go into the central repository. There is simply no setup other than creating a user account.
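For the record, the entire "server-side setup" is one command. The ssh URL in the comment below is hypothetical; the demo uses a local path so it runs anywhere, but the mechanics over ssh are identical:

```shell
# A bare repository is all a git "host" really is.
cd "$(mktemp -d)"
git init -q --bare host/project.git        # the entire "server" setup
# over the network this would be: git clone ssh://user@devbox/path/to/project.git
git clone -q host/project.git work && cd work
git config user.email dev@example.com && git config user.name Dev
echo hello > README && git add README && git commit -qm "first"
git push -q origin HEAD                    # push back to the "host"
```

Anyone with an ssh account and read access to that directory can now clone, pull, and (with write access) push.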

What they have "invented" appears to be pretty ridiculous, yes, but you are attacking it from the wrong angle. What you are saying is that what they describe isn't suitable for a certain application, namely what you think of when you hear "cloaking". Science isn't about finding applications, though; it's about making discoveries and understanding how nature works. There might be other applications that you can't think of right now, and if science limited itself to what we already know is useful in some particular way, much of it would never be discovered.

However, here is the real problem: Where is the scientific discovery here? All they have done is place a series of lenses in a row, focusing the rays at certain points, which means you need to be closer to the principal axis to block them there. (Notice how they never cover the centre of the "cloaked" area.) Lenses do that, though: they focus rays. They have just named the volume surrounding the aperture "the cloaked region", fiddled with lenses to get it narrow, and written a paper about it. Pretty much any optical system containing lenses will have such a "cloaked region".

It seems scientific funding today has gotten so concentrated on quantitatively measurable output (meaning the number of published articles) that people publish any little trivial idea they have, preferably multiple times with slightly different wording, or in very small steps to extend it over as many articles as possible.

Well, I'm of the opinion that having good intentions shouldn't exempt someone from reasonable criticism. Besides, this is a technology site, so what's the problem of discussing how appropriate certain technological choices are? It's not like people are saying this company should be shut down or their product banned.

Furthermore (and this is actually a bit of an honest question), how much of a fucking breakthrough is this really? Have they actually done something unique, or are they just the first ones courageous (or overconfident?) enough to actually go through with trying to replace someone's heart completely? What's fundamentally different from other implants, such as ventricular assist devices, other than the application?

You do realise that there are a lot of people with excellent cognitive abilities dying of heart failure every day, and that many could have lived decades of high quality life had their hearts been healthy, right?

True. On the other hand, when designing something as critical as a heart, you'd better have extremely thorough quality assurance and testing to make as sure as humanly possible that faults are discovered before you make someone's life depend on it.

While I agree that requiring open heart surgery to reach the firmware probably is taking it too far, I wouldn't like to have an artificial heart installed, where the developers have had the luxury of thinking they can always fix problems later. The assumption should be that once you have connected someone's life to it, the firmware will not change.

...and to extract work while doing so. No? Which means converting it to some form where it has higher entropy.

Look, it seems like you are more or less just making stuff up based on your rather incomplete understanding of thermodynamics, which is why I recommended you actually read up on the subject. Setting aside the rather dubious claim that a perfectly isolated system can even exist, what you are describing is exactly how you could define a perpetual motion machine of the second kind: an isolated system that performs work using energy from a single heat reservoir, without transferring heat to an external cooler reservoir. Such a machine cannot exist, because it violates the second law of thermodynamics, which states that the entropy of an isolated(!) system never decreases. The system tends towards thermal equilibrium, where all the energy is converted to a uniform distribution of heat.

If you had bothered to look any of this up, you would have already known this, instead of speaking out of your ass based on what you think a perpetual motion machine is and why it must be impossible.

The key flaw in your reasoning is that you seem to think of energy as something that's equivalent regardless of its form. It is not so. In fact, whenever we use energy to perform some work, it actually isn't the energy in itself that we are using, but its state of being far from equilibrium. Its "order", for lack of a better word. The energy is just a carrier. And when we are using heat energy, we are actually not using the heat in itself, but exploiting the temperature differential between the heat reservoir and a cooler reservoir. That's why all heat-producing power plants need cooling water, and the reason jet engines get higher efficiency when flying through cooler air at high altitude (even if it's thinner). Conversely, it explains why a refrigerator requires external energy even though it is removing energy from its interior, and why a heat pump can have more than 100% heating efficiency whereas distributed heating can never reach 100%.
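To make the "temperature differential" point quantitative, here are the standard textbook Carnot limits for idealised reversible machines (the symbols T_h and T_c for the hot and cold reservoir temperatures are my notation, not from the discussion above):

```latex
% Maximum efficiency of a heat engine operating between T_h and T_c:
\eta_{\max} = 1 - \frac{T_c}{T_h}

% Maximum coefficient of performance of a heat pump delivering heat at T_h:
\mathrm{COP}_{\max} = \frac{T_h}{T_h - T_c} > 1
```

Note the asymmetry: as T_c approaches T_h, the engine's efficiency goes to zero (no differential, no work), while the heat pump's COP grows without bound, which is exactly why a heat pump can exceed 100% heating efficiency.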

One physicist who has written a lot about these things is Ilya Prigogine, if you are interested in reading more, although I'm sure you could find many others.

I think you should ask yourself what it actually means to "use" energy. What purpose can energy have that does not involve irreversibly transforming it to heat?

Or to put it another way: If you have a system that takes in a lot of useful energy, and it does not transform this energy to heat (which inevitably would be radiated as black-body radiation as the system's temperature increases), then you are either: 1) wasting energy, by not exploiting all the work it could have performed before releasing it, or 2) just storing it without actually using it (although the process of storing it would involve performing some work as well).

If, on the other hand, you have managed to build a magical system that can perform useful work without extracting it from the energy you are continually collecting, but can "reuse" energy like a perpetual motion machine, then why the fsck are you collecting more? You don't need it!

Honestly, though, I really think you should pick up a physics textbook that covers thermodynamics if you want to understand these things. From your responses (assuming you're not just trolling) it's evident that if you ever read one, you either didn't understand it, or you've forgotten some pretty basic principles and need to refresh. Right now the argument sounds more like "I don't really know, therefore aliens can do it. Easy peasy."