trawg writes: "In the throes of another holiday shopping season, with the year's top video game launches still setting records above all other entertainment mediums, gamers down under are still paying more than anyone else in the world, with many publishers adding an "Australia tax" to their latest titles. We take a look at the factors and forces responsible for the disparity in retail prices for games in Australia relative to other developed nations, and as a solution suggest that consumers support developers and publishers that don't discriminate, and import or circumvent the products of those that do."

cstacy writes: The Nassau County (New York) Police Department is "very concerned" about reports that shreds of police documents (with social security numbers, phone numbers, addresses, license plate numbers, incident reports, and more) rained down as confetti in the Macy's Thanksgiving Day Parade. The documents also unveiled the identities of undercover officers, including their SSNs and bank information, according to WPIX-TV. Macy's has no idea how this happened, as they use commercial, colored confetti, not shredded paper.

Hugh Pickens writes: "Salvatore Iaconesi, a software engineer at La Sapienza University of Rome, writes that when he was recently diagnosed with brain cancer, his first idea was to seek other opinions, so he immediately asked for his clinical records in digital format, converted the data into spreadsheets, databases, and metadata files, and published them on a web site called The Cure. "The responses have been incredible. More than 200,000 people have visited the site and many have provided videos, poems, medical opinions, suggestions of alternative cures or lifestyles, personal stories of success or, sadly, failures — and simply the statement, "I am here." Among them were more than 90 doctors and researchers who offered information and support." The geneticist and TED fellow Jimmy Lin has offered to sequence the genome of Iaconesi's tumor after surgery, and within one day Iaconesi heard from two different doctors who recommended similar kinds of "awake surgery," where the brain is monitored in real time as different parts are touched and a brain map is produced and used during a second surgery. "We are creating a cure by uniting the contributions of surgeons, homeopaths, oncologists, Chinese doctors, nutritionists and spiritual healers. The active participation of everyone involved — both experts and ex-patients — is naturally filtering out any damaging suggestion which might be proposed," writes Iaconesi. "Send us videos, poems, images, audio or text that you see as relevant to a scenario in which art and creativity can help form a complete and ongoing cure. Or tell us, "I am here!" — alive and connected, ready to support a fellow human being.""

kfogel writes: "First, the main thing: two thumbs up, and maybe a tentacle too, on Version Control with Git, 2nd Edition by Jon Loeliger and Matthew McCullough (O'Reilly Media, 2012). If you are a working programmer who wants to learn more about Git, particularly a programmer familiar with a Unix-based development environment, then this is the book for you, hands down (tentacles down too, please).

But there's a catch. You have to read the book straight through, from front to back. If you try to skip around, or just read the parts you feel you need, you'll probably be frustrated, because — exaggerating, but only slightly — every part of the book is linked to every other part. Perhaps if you're already expert in Git and merely want a quick reminder about something, it would work, but in that case you're more likely to do a web search anyway. For the rest of us, taking the medicine straight and for the full course is the only way. To some degree, this may have been forced on the authors by Git's inherent complexity and the interdependency of its basic concepts, but it does make this book unusual among technical guides. A common first use case, cloning a repository from somewhere else, isn't even covered until Chapter 12, because understanding what cloning really means requires so much background.

Like most readers, I'm an everyday user of Git but not at all an expert. Even this everyday use is enough to make me appreciate the scale of the task faced by the authors. On more than one occasion, frustrated by some idiosyncrasy, I've cursed that Git is a terrific engine surrounded by a cloud of bad decisions. The authors might not put it quite so strongly, but they clearly recognize Git's inconsistencies (the footnote on p. 47 is one vicarious acknowledgement) and they gamely enter the ring anyway. As with wrestling a bear, the question is not "Did they win?" but "How long did they last?"

For the most part, they more than hold their own. You can sometimes sense their struggle over how to present the information, and one of the book's weaknesses is a tendency to fall too quickly into implementation-driven presentation after a basic concept has been introduced. The explanation of cloning on p. 197 is one example: the jump from the basics to Git-specific terminology and repository details is abrupt, and forces the reader to either mentally cache terms and references in hope of later resolution, or to go back and look up a technical detail that was introduced many pages ago and is suddenly relevant again[1]. On the other hand, it is one of the virtues of the book that these checks can almost always be cashed: the authors accumulate unusual amounts of presentational debt as they go (in some cases unnecessarily), but if you're willing to maintain the ledger in your head, it all gets repaid in the end. Your questions will generally be answered[2], just not in the order nor at the time you had them. This isn't a book you can read for relaxation; give it your whole mind and you shall receive enlightenment in due proportion.

The book begins with a few relatively light chapters on the history of Git and on basic installation and local usage, all of which are good, but in a sense its real start is Chapters 4-6, which cover basic concepts, the Git "index" (staging area), and commits. These chapters, especially Chapter 4, are essentially a design overview of Git, and they go deep enough that you could probably re-implement much of Git based just on them. It requires a leap of faith to believe that all this material will be needed throughout the rest of the book, but it will, and you shouldn't move on until you feel secure with everything there.

From that point on, the book is at its best, giving in-depth explanations of well-bounded areas of Git's functionality. The chapter on git diff tells you everything you need to know, starting with an excellent overview and then presenting the details in a well-thought-out order, including an especially good annotated running example starting on p. 112. Similarly, the branching and merging chapters ensure that you will come out understanding how branches are central to Git and how to handle them, and the explanations build well on earlier material about Git's internal structure, how commit objects are stored, etc. (Somewhere around p. 227 my eyes finally glazed over in the material about manipulating tracking branches: I thought "if I ever need this, I know where to find it". Everyone will probably have that reaction at various points in the book, and the authors seem to have segregated some material with that in mind.) The chapter-level discussions on how to use Git with Subversion repositories, on the git stash command, on using GitHub, and especially on different strategies for assembling multi-source projects using Git, are all well done and don't shirk on examples nor on technical detail. Given the huge topic space the authors had to choose from, their prioritizations are intelligently made and obviously reflective of long experience using Git.

Another strength is the well-placed tips throughout the book. These are sometimes indented and marked with the (oddly ominous, or is that just me?) O'Reilly paw print tip graphic, and sometimes given inline. Somehow the tips always seem to land right where you're most likely to be thinking "I wish there were a way to do X"; again, this must be due to the authors' experience using Git in the real world, and readers who use Git on a daily basis will appreciate it. The explanation of --assume-unchanged on p. 382 appeared almost telepathically just as I was about to ask how to do that, for example. Furthermore, everything they saved for the "Advanced Manipulations" and "Tips, Tricks, and Techniques" chapters is likely to be useful at some point. Even if you don't remember the details of every tip, you'll remember that it was there, and know to go looking for it later when you need it (so it might be good to get an electronic copy of the book).

If there's a serious complaint to be made, it's that with a bit more attention the mental burden on the reader could have been reduced in many places. To pick a random example, in the "Branches" chapter on p. 90, the term "topic branch" is defined for the first time, but it was already used in passing on p. 68 (with what seems to be an assumption that the reader already knew the term) and again on pp. 80-81 (this time compounding the confusion with an example branch named "topic"). There are many similar instances of avoidable presentational debt; usually they are only distractions rather than genuine impediments to understanding, but they make the book more work than it needs to be. There are also sometimes ambiguous or not-quite-precise-enough statements that will cause the alert reader — which is the only kind this book really serves — to pause and have to work out what the authors must have meant (a couple of examples: "Git does not track file or directory names" on p. 34, or the business about patch line counts at the top of p. 359). Again, these can usually be resolved quickly, or ignored, without damage to overall understanding, but things would go a little bit more smoothly had they been worded differently.

Starting around p. 244 is a philosophical section that I found less satisfying than the technical material. It makes sense to discuss the distinction between committing and publishing, the idea that there are multiple valid histories, and the idea that the "central" repository is purely a social construct. But at some point the discussion starts to veer into being a different book, one about patterns for using Git to manage multi-developer projects and about software development generally, before eventually veering back. Such material could be helpful, but then it might have been better to offer a shallower overview of more patterns, rather than a tentative dive into the "Maintainer/Developer" pattern, which is privileged here beyond its actual prominence in software development. (This is perhaps a consequence of the flagship Git project, the Linux kernel, happening to use that pattern — but Linux is unusual in many ways, not just that one.)

The discussion of forking and of the term "fork", first from p. 259 and reiterated from p. 392, is confusing in several ways. It first uses the term as though it has no historical baggage, then later takes that historical baggage for granted, then finally describes the baggage but misunderstands it by failing to distinguish clearly between a social fork (a group of developers trying to persuade users and other developers to abandon one version and join another), which is a major event, and a feature fork (that is, a branch that happens to be in another repository), which is a non-event and which is all that sites like GitHub mean by forking. The two concepts are very different; to conflate them just because the word "fork" is now used for both is thinking with words, and doesn't help the reader understand what's going on. I raise this example in particular because I was surprised that the authors who had written so eloquently about the significance of social conventions elsewhere would give such an unsatisfactory explanation of this one.

Somewhat surprisingly, the authors don't review or even mention the many sources of online help about Git, such as the #git IRC channel on Freenode, the user discussion groups, wikis, etc. While most users can probably find those things quickly with a web search, it would have been good to point out their existence and maybe make some recommendations. Also, the book only covers installation of Git on GNU/Linux and MS Windows systems, with no explicit instructions for Mac OS X, the *BSD family, etc. (however, the authors acknowledge this and rightly point out that the differences among Unix variants are not likely to be a showstopper for anyone).

But this is all carping. The book's weaknesses are minor, its strengths major. Any book on so complicated a topic is bound to cause disagreements about presentation strategy and even about philosophical questions. The authors write well, they must have done cubic parsecs of command testing to make sure their examples were correct, they respect the reader enough to dive deeply into technical details when the details are called for, and they take care to describe the practical scenarios in which a given feature is most likely to be useful. Its occasional organizational issues notwithstanding, this book is exactly what is needed by the everyday Git user who wants to know more — and is willing to put in the effort required to get there. I will be using my copy for a long time.

Footnotes

[1] One of my favorite instances of this happened with the term "fast-forward". It was introduced on p. 140, discussed a little but with no mention of a "safety check", then not used again until page 202, which says: "If present, the plus sign indicates that the normal fast-forward safety check will not be performed during the transfer." If your memory is as bad as mine, you might at that point have felt like you were suddenly reading the owner's manual for an early digital wristwatch circa 1976.

[2] Though not absolutely always: one of the few completely dangling references in the book is to "smudge/clean filters" on p. 294. At first I thought it must be a general computer science term that I didn't know, but it appears to be Git-specific terminology. Happy Googling.

[3] (This is relegated to a floating footnote because it's probably not relevant to most readers.) The book discusses other version control systems a bit, for historical perspective, and is not as factually careful about them as it is about Git. I've been a developer on both CVS and Subversion, so the various incorrect assertions, especially about Subversion, jumped out at me (pp. 2-3, p. 120, pp. 319-320). Again, this shouldn't matter for the intended audience. Don't come to this book to learn about Subversion; definitely come to it to learn about Git.

[4] As long as we're having floating footnotes, here's a footnote about a footnote: on p. 337, why not just say "Voltaire"?

[5] Finally, I categorically deny accusations that I gave a positive review solely because at least one of the authors is a fellow Emacs fanatic (p. 359, footnote). But it didn't hurt."

SternisheFan writes: The structure of the universe and the laws that govern its growth may be more similar than previously thought to the structure and growth of the human brain and other complex networks, such as the Internet or a social network of trust relationships between people, according to a new study. “By no means do we claim that the universe is a global brain or a computer,” said Dmitri Krioukov, co-author of the paper, published by the Cooperative Association for Internet Data Analysis (CAIDA), based at the San Diego Supercomputer Center (SDSC) at the University of California, San Diego. “But the discovered equivalence between the growth of the universe and complex networks strongly suggests that unexpectedly similar laws govern the dynamics of these very different complex systems,” Krioukov noted.
Having the ability to predict – let alone trying to control – the dynamics of complex networks remains a central challenge throughout network science. Structural and dynamical similarities among different real networks suggest that some universal laws might be in action, although the nature and common origin of such laws remain elusive. By performing complex supercomputer simulations of the universe and using a variety of other calculations, researchers have now shown that the causal network representing the large-scale structure of space and time in our accelerating universe is a graph that shows remarkable similarity to many complex networks such as the Internet, social, or even biological networks. “These findings have key implications for both network science and cosmology,” said Krioukov.
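For readers unfamiliar with the network models involved, here is a hedged sketch of one classic complex-network growth rule, preferential attachment, in which new nodes tend to link to already well-connected nodes. It illustrates the general kind of network the study compares against, not the paper's actual causal-set construction:

```python
import random
from collections import Counter

# Toy preferential-attachment growth: each new node links to an existing
# node chosen with probability proportional to its degree. (Illustrative
# sketch only; the study's cosmological network is built differently.)

def grow(n, seed=0):
    random.seed(seed)
    targets = [0, 1]          # multiset: each node appears once per link end
    edges = [(0, 1)]
    for new in range(2, n):
        old = random.choice(targets)  # degree-proportional choice
        edges.append((new, old))
        targets += [new, old]
    return edges

edges = grow(1000)
deg = Counter()
for a, b in edges:
    deg[a] += 1
    deg[b] += 1
print(max(deg.values()))  # early nodes become heavily connected "hubs"
```

Running this produces a few hub nodes and many sparsely connected ones, the heavy-tailed degree structure typical of the Internet and social networks.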

Hugh Pickens writes: "The Orlando Sentinel reports that a Google search was made for the term "foolproof suffocation" on the Anthony family's computer the day Casey Anthony's 2-year-old daughter Caylee was last seen alive by her family — a search that did not surface at Casey Anthony's trial for first degree murder. In the notorious 31 days which followed, Casey Anthony repeatedly lied about her and her daughter's whereabouts, and at Anthony's trial her defense attorney argued that her daughter drowned accidentally in the family's pool. Anthony was acquitted on all major charges in her daughter's death, including murder. Though computer searches were a key issue at Anthony's murder trial, the term "foolproof suffocation" never came up. "Our investigation reveals the person most likely at the computer was Casey Anthony," says investigative reporter Tony Pipitone. Lead sheriff's Investigator Yuri Melich sent prosecutors a spreadsheet that contained less than 2 percent of the computer’s Internet activity that day and included only Internet data from the computer’s Internet Explorer browser — one Casey Anthony apparently stopped using months earlier — and failed to list 1,247 entries recorded on the Mozilla Firefox browser that day — including the search for “foolproof suffocation.” Prosecutor Jeff Ashton said in a statement to WKMG that it's "a shame we didn't have it. (It would have) put the accidental death claim in serious question.""

An anonymous reader writes: Game designer Tadhg Kelly has an article discussing where the games industry has gone over the past several years. Gaming has become more of a business, and in doing so, become more of a science as well. When maximizing revenue is a primary concern, development studios try to reduce successful game designs to individual elements, then simply seek to add those elements to whatever game they're working on, like throwing spices into a stew. Kelly points out that indie developers who are willing to experiment often succeed because they understand something more fundamental about games: fun. Quoting: 'The guy who invented Minecraft (Markus “Notch” Persson) didn’t just create a giant virtual world in which you could make stuff, he made it challenging. When Will Wright created the Sims, he didn’t just make a game about living in a virtual house. He made it difficult to live successfully. That’s why both of those franchises have sold millions of copies. The fun factor is about more than making a game amusing or full of pretty rewards. If your game is a dynamic system to be mastered and won, then you can go nuts. If you can give the player real fun then you can afford to break some of those format rules, and that’s how you get to lead rather than follow the market. If not then be prepared to pay through the nose to acquire and retain players.'

An anonymous reader writes: Vaccines, contrary to opinions from the anti-science crowd, are some of the most effective tools in modern medicine. For some diseases, a single shot is all it takes for lifetime immunity. Others, though, require booster shots, to remind your immune system exactly what it should prepare to fight. Failure to get these shots threatens an individual's health, and the herd immunity concept as well. Scientists are now looking into 'self-boosting' vaccines in order to fix that problem. Some viruses are capable of remaining in the human body for a person's entire lifetime. If researchers can figure out a way to safely harness these, it may be possible to add genes that would create proteins to train the immune system against not just one, but multiple other viruses (abstract). This is a difficult problem to solve; changing the way we do vaccinations will itself have consequences for herd immunity. It also hinges on finding a virus that can survive the immune system without causing uncomfortable flare-ups from time to time.

An anonymous reader writes: The amusing “but does it run Crysis?” question has a cousin: “but does it run Minecraft?” The makers of Raspberry Pi can now officially say that yes, yes it does. Called Minecraft: Pi Edition, the latest flavor of the popular game carries “a revised feature set” and “support for several programming languages,” so you can code directly into Minecraft before or after you start playing. That means you can build structures in the traditional Minecraft way, but you can also break open the code and use a programming language to manipulate things in the game world.

stern writes: The internet may be contributing to divorces (thanks, Facebook!) but it's also reducing the pain, especially the bitter fighting associated with joint custody. Calendars are now much easier to coordinate, and if one parent denies a court-ordered phone call to another, there's no way to hide the fact that the call didn't happen. Because of these and other technologies, divorce has changed radically in the last ten years.

Frosty Piss writes: Scientists have discovered a new smell, but you may have to go to a laboratory to experience it yourself. The smell is dubbed "olfactory white," because it is the nasal equivalent of white noise, researchers report in the journal Proceedings of the National Academy of Sciences. Just as white noise is a mixture of many different sound frequencies and white light is a mixture of many different wavelengths, olfactory white is a mixture of many different smells. In a series of experiments, they exposed participants to hundreds of equally mixed smells, and what they discovered is that our brains treat such smells as a single unit, not as a mixture of compounds to break down, analyze and put back together again. The web site LiveScience talks about it here.
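The white-noise analogy can be sketched numerically: blends built from many randomly chosen components come to resemble one another, while small blends do not. This is a toy illustration of the analogy, not the study's actual method or data:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000  # size of a toy "component" space (sounds, wavelengths, or odors)

def mixture(k):
    """Uniform blend of k distinct, randomly chosen components."""
    v = np.zeros(N)
    v[rng.choice(N, size=k, replace=False)] = 1.0 / k
    return v

def cosine(a, b):
    """Similarity between two blends (1.0 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

small = cosine(mixture(5), mixture(5))      # few components: little overlap
large = cosine(mixture(400), mixture(400))  # many components: heavy overlap
print(round(small, 3), round(large, 3))     # large blends overlap far more
```

The more components a blend contains, the more any two independent blends overlap, which is why very rich mixtures converge toward a single "white" percept.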

slashchuck writes: One of the drawbacks of Google's Nexus 4 was its lack of support for 4G LTE. Now comes a report from AnandTech that LTE is supported on the Nexus 4.

It seems that a simple software update can allow the Nexus 4 smartphone to run on LTE Band 4. All users have to do is dial *#*#4636#*#* (INFO) or launch the Phone Info app. After that, choosing to connect to AWS networks should allow the Nexus 4 to run on LTE networks on Band 4.

The AnandTech report states explicitly that the LG Nexus 4 only works on LTE Band 4, on 1700/2100MHz frequencies, and supports bandwidths of 5, 10, and 20 MHz.

An anonymous reader writes: The Register has a BlackBerry 10 preview up. "BlackBerry users have a love-hate relationship with their phones. The devices were often forced upon users rather than chosen. At the same time, the handhelds were the most usable and useful communications gadgets you could put in your pocket." For a publication with an openly pro-Microsoft bias, the review is surprisingly positive. It goes on to look at BB10's hub feature, finding it "utilitarian" and efficient compared to Windows Phone, which shows "style and novelty" while being "a bit limiting"; BlackBerry's approach may actually improve the system rather than detract from it. With BlackBerry providing a Qt environment (compatible with the Sailfish OS we discussed earlier) and having managed to maintain BlackBerry's third place in the mobile OS market, there may be a chance of a real three-way competition between Qt, Android and iOS in the mobile market.

An anonymous reader writes: Advances in an artificial intelligence technology that can recognize patterns offer the possibility of machines that perform human activities like seeing, listening and thinking.... But what is new in recent months is the growing speed and accuracy of deep-learning programs, often called artificial neural networks or just 'neural nets' for their resemblance to the neural connections in the brain. 'There has been a number of stunning new results with deep-learning methods,' said Yann LeCun, a computer scientist at New York University who did pioneering research in handwriting recognition at Bell Laboratories. 'The kind of jump we are seeing in the accuracy of these systems is very rare indeed.' Artificial intelligence researchers are acutely aware of the dangers of being overly optimistic.... But recent achievements have impressed a wide spectrum of computer experts. In October, for example, a team of graduate students studying with the University of Toronto computer scientist Geoffrey E. Hinton won the top prize in a contest sponsored by Merck to design software to help find molecules that might lead to new drugs. From a data set describing the chemical structure of 15 different molecules, they used deep-learning software to determine which molecule was most likely to be an effective drug agent.
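For readers curious what a neural net actually computes, here is a minimal sketch of a feedforward network's forward pass. The layer sizes and random weights are illustrative assumptions, not any system described in the article:

```python
import numpy as np

# Minimal feedforward "neural net": stacked layers of weighted sums passed
# through a nonlinearity. Deep-learning systems compose many such layers
# and learn the weights from data; here the weights are just random.

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input x through a list of (weights, bias) layers."""
    for w, b in layers[:-1]:
        x = relu(x @ w + b)   # hidden layers use the nonlinearity
    w, b = layers[-1]
    return x @ w + b          # linear output layer

rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(4, 8)), np.zeros(8)),  # 4 inputs -> 8 hidden units
    (rng.normal(size=(8, 3)), np.zeros(3)),  # 8 hidden -> 3 output scores
]
scores = forward(rng.normal(size=(2, 4)), layers)
print(scores.shape)  # (2, 3): two inputs, three output scores each
```

Training (adjusting the weights to reduce errors on examples) is what the "deep-learning" advances in the article are about; the forward pass itself is this simple.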

An anonymous reader writes: Alex Norton is the man behind Malevolence: The Sword of Ahkranox, an upcoming indie action-RPG. What makes Malevolence interesting is that it's infinite. It uses procedural generation to create a world that's actually endless. Norton jumped into this project without having worked at any big gaming studios, and in this article he shares what he's learned as an independent game developer. Quoting: 'A large, loud portion of the public will openly hate you regardless of what you do. Learn to live with it. No-one will ever take your project as seriously as you, or fully realise what you’re going through.... The odds of you making money out of it are slim. If you want to succeed, you’ll likely have to sell out. Just how MUCH you sell out is up to you.' He also suggests that new game devs avoid RPGs for their first titles, have a thorough plan before beginning (i.e. game concepts explained well enough that a non-gamer could understand them), and consider carefully whether the game will benefit from a public development process.
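The core trick behind an "endless" procedurally generated world can be sketched in a few lines: derive each map cell deterministically from a world seed and its coordinates, so any region can be recreated on demand without ever being stored. This illustrates the general technique, not Malevolence's actual engine:

```python
import hashlib

# Deterministic "infinite world": a cell's contents are a pure function of
# (seed, x, y), so the same coordinates always regenerate the same tile.
# Tile names here are illustrative placeholders.

TILES = ["floor", "wall", "water", "treasure"]

def cell(seed, x, y):
    """Deterministically pick a tile for coordinates (x, y)."""
    h = hashlib.sha256(f"{seed}:{x}:{y}".encode()).digest()
    return TILES[h[0] % len(TILES)]

print(cell(42, 10, -3))  # always the same tile for this seed and position
```

Because generation is a pure function, the world needs no storage and has no edges: coordinates can grow without bound, and revisiting a region reproduces it exactly.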

An anonymous reader writes: An article at BusinessWeek highlights an issue most corporate workers are familiar with: the flood of useless reply-all emails endemic to any big organization. Companies are beginning to realize how much time these emails can waste in aggregate across an entire company, and some are looking for ways to outright block reply-all. "A company that’s come close to abolishing Reply All is the global information and measurement firm Nielsen. On its screens, the button is visible but inactive, covered with a fuzzy gray. It can be reactivated with an override function on the keyboard. Chief Information Officer Andrew Cawood explained in a memo to 35,000 employees the reason behind Nielsen’s decision: eliminating 'bureaucracy and inefficiency.'" Software developers are starting to react to this need as well, creating plugins or monitors that restrict the reply-all button or at least alert the user, so they can take a moment to consider their action more carefully. In addition to getting rid of the annoying "Thanks!" and "Welcome!" emails, this has implications for law firms and military organizations, where an errant reply-all could have serious repercussions.
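The kind of check such a plugin might perform is simple to sketch; the threshold here is an illustrative assumption, not Nielsen's actual policy:

```python
# Toy reply-all guard: warn when a reply would reach many mailboxes.

WARN_THRESHOLD = 10  # assumed cutoff; a real plugin would make this configurable

def should_warn(to, cc):
    """Return True if a reply-all would hit an unusually large audience."""
    recipients = set(to) | set(cc)  # deduplicate across To: and Cc:
    return len(recipients) > WARN_THRESHOLD

to = [f"user{i}@example.com" for i in range(8)]
cc = [f"team{i}@example.com" for i in range(5)]
print(should_warn(to, cc))  # True: 13 unique recipients exceeds the cutoff
```

Real implementations hook this check into the mail client's send action, either blocking the send outright (as Nielsen does by graying out the button) or prompting the user to confirm.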

ryzvonusef writes: The start page for the majority of Pakistanis – when they first visited it this morning – was found hacked and defaced. Yes, Google.com.pk, along with 284 other .PK domains, was hacked today (and is still defaced).

According to Irfan Ahmed, an expert on Pakistani websites and web servers, the defacement is due to a change in the DNS entries for the 284 .PK domains, which are managed by MarkMonitor.

So far no one has claimed responsibility for the incident, but the defaced pages, including Google.com.pk, display a message in Turkish, hinting that the hacker could be Turkish in origin.

Unusually, the hacker hasn't left a message addressed to anyone in particular, contrary to the norm of using such defacements to convey a message.

There is, however, a phrase saying “Downed Pakistan”, a sign of victory hackers leave when they deface a website.