destinyland writes: O'Reilly and Associates just announced that they're offering a 50% discount on every ebook they publish for Cyber Monday. Use the code CYBERDAY when checking out to claim the discount (which expires at midnight). Amazon has also discounted their Kindle Fire tablets to just $129. Due to a production snafu, they've already sold out of the new Kindle Paperwhite, and won't be able to ship any more until December 21.

amkkhan writes: Elon Musk, founder of the private spaceflight company SpaceX, has his eye on forming a Mars colony, and you can be one of the first Martian explorers for only $500,000. The colony would be part of a Mars settlement program, and Musk envisions ferrying up to 80,000 people to the red planet.

The Mars settlement program would start with 10 people, who would journey to Mars on a reusable SpaceX rocket powered by liquid oxygen and methane, according to Yahoo! News.

"At Mars, you can start a self-sustaining civilization and grow it into something really big," Musk said, according to Space.com.

Lucas123 writes: Next year, smartphones will begin shipping with the ability to have dual identities: one for private use and the other for corporate use. Hypervisor developers, such as VMware and Red Bend, are working with system manufacturers to embed their virtualization software in the phones, while IC makers, such as Intel, are developing more powerful and secure mobile device processors. The combination will enable mobile platforms that afford end users their own user interface, secure from IT's prying eyes, while in turn allowing a company to secure its data using mobile device management software. One of the biggest benefits dual-identity phones will offer is enabling admins to wipe corporate data from phones without erasing end users' profiles and personal information.

Aaron Portnoy, the vice president of research at Exodus, said that finding the flaws wasn't even remotely difficult.

"The most interesting thing about these bugs was how trivial they were to find. The first exploitable 0day took a mere 7 minutes to discover from the time the software was installed. For someone who has spent a lot of time auditing software used in the enterprise and consumer space, SCADA was absurdly simple in comparison. The most difficult part of finding SCADA vulnerabilities seems to be locating the software itself," Portnoy said in a blog post.

Portnoy said that he plans to suggest to ICS-CERT that the group consider developing a repository of SCADA software to make it easier for security researchers to do their work.

Nerval's Lobster writes: "This is the story of the comparison that just wasn’t meant to be. It’s a story of everything that can go wrong in the customer end of the software world, and some thoughts on what needs to be done, especially in an area known as Installers. I’m a software engineer with 25 years of experience, and for years I’ve wanted to point out some of the shortcomings of my own industry to help make it better for everyone involved—not only for the end-users, but also for the IT people who have to support the products; the salespeople who have to sell and later, possibly, apologize for the software; for the executives whose hands are tied because they don’t have the technical knowledge to roll up their sleeves and help fix problems in the code; and for the programmers themselves who might get stuck with what some consider the absolute worst position for a programmer: maintenance of crappy code written by programmers who have long since left the organization."

sturgeon writes: A report out this morning pegs Amazon with a whopping 14% share of all daily Internet users — almost twice that of its nearest competitor, eBay. And this number does not include all the shopping sites absorbed by the growing Amazon empire.

The original report has interesting graphics comparing Amazon to other retailers like Best Buy.

Barence writes: "When it comes to programming, the classroom is moving online. A new wave of start-ups has burst onto the scene over the last year, bringing interactive lessons and gamification techniques to the subject to make coding trendy again. From Codecademy — and its incredibly successful Code Year initiative — to Khan Academy, Code School and Udacity, online learning is now sophisticated and high-tech — but is it good enough to replace the classroom? “We are the first five or six chapters in a book,” says Code School's Gregg Pollack in this exploration of online code classes, but with the number of sites and lessons growing by the week that might not be the case for long."

dw writes: In an interview with European Magazine, Vint Cerf predicts that in the early 22nd century 'Freshwater will be the new oil' and 'Dystopia will be hard to fend off with resource shortages and changes in arable land,' and he explains how he's been confronted with some confusion over the meaning of his title, 'Chief Internet Evangelist.'

This new 1/12-scale Autobot is made using a custom 3D printer (built by Kenji himself), and Transformers fans around the world will finally be able to buy one.

The official price has not been disclosed, and for now production is limited to 10 pieces. Buyers can choose the color of the robot, which comes built, programmed, and complete with a wireless controller in a numbered case.

trawg writes: "In the throes of another holiday shopping season, with the year's top video game launches still setting records above all other entertainment media, gamers down under are still paying more than anyone else in the world, with many publishers adding an "Australia tax" to their latest titles. We take a look at the factors and forces responsible for the disparity in retail prices for games in Australia relative to other developed nations, and as a solution suggest that consumers should support developers and publishers that don't discriminate, and import and circumvent the products of those that do."

cstacy writes: The Nassau County (New York) Police Department is "very concerned" about reports that shreds of police documents (with social security numbers, phone numbers, addresses, license plate numbers, incident reports, and more) rained down as confetti in the Macy's Thanksgiving Day Parade. The documents also revealed the identities of undercover officers, including their SSNs and bank information, according to WPIX-TV. Macy's has no idea how this happened, as they use commercial, colored confetti, not shredded paper.

Hugh Pickens writes: "Salvatore Iaconesi, a software engineer at La Sapienza University of Rome, writes that when he was recently diagnosed with brain cancer his first idea was to seek other opinions, so he immediately asked for his clinical records in digital format, converted the data into spreadsheets, databases, and metadata files, and published them on a web site called The Cure. "The responses have been incredible. More than 200,000 people have visited the site and many have provided videos, poems, medical opinions, suggestions of alternative cures or lifestyles, personal stories of success or, sadly, failures — and simply the statement, "I am here." Among them were more than 90 doctors and researchers who offered information and support." The geneticist and TED fellow Jimmy Lin has offered to sequence the genome of Iaconesi's tumor after surgery, and within one day Iaconesi heard from two different doctors who recommended similar kinds of "awake surgery," where the brain is monitored in real time as different parts are touched, and a brain map is produced and used during a second surgery. "We are creating a cure by uniting the contributions of surgeons, homeopaths, oncologists, Chinese doctors, nutritionists and spiritual healers. The active participation of everyone involved — both experts and ex-patients — is naturally filtering out any damaging suggestion which might be proposed," writes Iaconesi. "Send us videos, poems, images, audio or text that you see as relevant to a scenario in which art and creativity can help form a complete and ongoing cure. Or tell us, "I am here!" — alive and connected, ready to support a fellow human being.""

kfogel writes: "First, the main thing: two thumbs up, and maybe a tentacle too, on Version Control with Git, 2nd Edition by Jon Loeliger and Matthew McCullough (O'Reilly Media, 2012). If you are a working programmer who wants to learn more about Git, particularly a programmer familiar with a Unix-based development environment, then this is the book for you, hands down (tentacles down too, please).

But there's a catch. You have to read the book straight through, from front to back. If you try to skip around, or just read the parts you feel you need, you'll probably be frustrated, because — exaggerating, but only slightly — every part of the book is linked to every other part. Perhaps if you're already expert in Git and merely want a quick reminder about something, it would work, but in that case you're more likely to do a web search anyway. For the rest of us, taking the medicine straight and for the full course is the only way. To some degree, this may have been forced on the authors by Git's inherent complexity and the interdependency of its basic concepts, but it does make this book unusual among technical guides. A common first use case, cloning a repository from somewhere else, isn't even covered until Chapter 12, because understanding what cloning really means requires so much background.

Like most readers, I'm an everyday user of Git but not at all an expert. Even this everyday use is enough to make me appreciate the scale of the task faced by the authors. On more than one occasion, frustrated by some idiosyncrasy, I've cursed that Git is a terrific engine surrounded by a cloud of bad decisions. The authors might not put it quite so strongly, but they clearly recognize Git's inconsistencies (the footnote on p. 47 is one vicarious acknowledgement) and they gamely enter the ring anyway. As with wrestling a bear, the question is not "Did they win?" but "How long did they last?"

For the most part, they more than hold their own. You can sometimes sense their struggle over how to present the information, and one of the book's weaknesses is a tendency to fall too quickly into implementation-driven presentation after a basic concept has been introduced. The explanation of cloning on p. 197 is one example: the jump from the basics to Git-specific terminology and repository details is abrupt, and forces the reader either to mentally cache terms and references in hope of later resolution, or to go back and look up a technical detail that was introduced many pages ago and is suddenly relevant again[1]. On the other hand, it is one of the virtues of the book that these checks can almost always be cashed: the authors accumulate unusual amounts of presentational debt as they go (in some cases unnecessarily), but if you're willing to maintain the ledger in your head, it all gets repaid in the end. Your questions will generally be answered[2], just not in the order or at the time you had them. This isn't a book you can read for relaxation; give it your whole mind and you shall receive enlightenment in due proportion.

The book begins with a few relatively light chapters on the history of Git and on basic installation and local usage, all of which are good, but in a sense its real start is Chapters 4-6, which cover basic concepts, the Git "index" (staging area), and commits. These chapters, especially Chapter 4, are essentially a design overview of Git, and they go deep enough that you could probably re-implement much of Git based just on them. It requires a leap of faith to believe that all this material will be needed throughout the rest of the book, but it will, and you shouldn't move on until you feel secure with everything there.
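The index is the concept from those chapters that trips up most newcomers, so a concrete illustration may help. This is a generic sketch, not an example from the book; the repository location and file name are invented:

```shell
# Minimal sketch of Git's index (staging area): `git add` copies a file's
# current content into the index, and `git commit` records only what the
# index holds -- not the latest working-tree state.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "one" > notes.txt
git add notes.txt            # stage the file: "one" now lives in the index
echo "two" >> notes.txt      # further edits are NOT staged automatically

git commit -q -m "first"     # commits the staged version only
git show HEAD:notes.txt      # prints "one"; the working tree has both lines
```

Seeing that the committed content and the working-tree content can differ is, in miniature, the design point Chapter 5 spends its pages on.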

From that point on, the book is at its best, giving in-depth explanations of well-bounded areas of Git's functionality. The chapter on git diff tells you everything you need to know, starting with an excellent overview and then presenting the details in a well-thought-out order, including an especially good annotated running example starting on p. 112. Similarly, the branching and merging chapters ensure that you will come out understanding how branches are central to Git and how to handle them, and the explanations build well on earlier material about Git's internal structure, how commit objects are stored, etc. (Somewhere around p. 227 my eyes finally glazed over in the material about manipulating tracking branches: I thought "if I ever need this, I know where to find it". Everyone will probably have that reaction at various points in the book, and the authors seem to have segregated some material with that in mind.) The chapter-level discussions on how to use Git with Subversion repositories, on the git stash command, on using GitHub, and especially on different strategies for assembling multi-source projects using Git, are all well done and don't skimp on examples or technical detail. Given the huge topic space the authors had to choose from, their prioritizations are intelligently made and obviously reflective of long experience using Git.
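For readers who haven't yet internalized the workflow those branching and merging chapters teach, the core loop is small enough to sketch here. This is a generic illustration, not taken from the book; the branch and file names are invented:

```shell
# Branch-and-merge in miniature: do a piece of work on its own branch,
# then merge it back. With no other commits on the original branch,
# the merge is a fast-forward.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "base" > app.txt
git add app.txt && git commit -q -m "base"

git checkout -q -b topic      # start a topic branch for one piece of work
echo "feature" >> app.txt
git commit -q -am "add feature"

git checkout -q -             # back to the original branch
git merge -q topic            # fast-forwards: no divergent history to reconcile
```

Everything the book adds on top of this, from tracking branches to merge strategies, is elaboration of that basic shape.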

Another strength is the well-placed tips throughout the book. These are sometimes indented and marked with the (oddly ominous, or is that just me?) O'Reilly paw print tip graphic, and sometimes given inline. Somehow the tips always seem to land right where you're most likely to be thinking "I wish there were a way to do X"; again, this must be due to the authors' experience using Git in the real world, and readers who use Git on a daily basis will appreciate it. The explanation of --assume-unchanged on p. 382 appeared almost telepathically just as I was about to ask how to do that, for example. Furthermore, everything they saved for the "Advanced Manipulations" and "Tips, Tricks, and Techniques" chapters is likely to be useful at some point. Even if you don't remember the details of every tip, you'll remember that it was there, and know to go looking for it later when you need it (so it might be good to get an electronic copy of the book).
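For the curious, the --assume-unchanged tip looks roughly like this. It's a sketch of my own, not reproduced from the book, and the file name is made up:

```shell
# --assume-unchanged tells Git to stop checking a tracked file for local
# modifications -- handy for a config file you tweak locally but never
# intend to commit.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "db=prod" > settings.conf
git add settings.conf && git commit -q -m "config"

git update-index --assume-unchanged settings.conf
echo "db=local" > settings.conf    # local tweak...
git status --porcelain             # ...prints nothing: Git assumes no change

git update-index --no-assume-unchanged settings.conf   # undo the flag
```

Note the flag is a performance hint, not a lock: operations like checkout can still overwrite the file, which is exactly the kind of caveat the book's tips tend to spell out.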

If there's a serious complaint to be made, it's that with a bit more attention the mental burden on the reader could have been reduced in many places. To pick a random example, in the "Branches" chapter on p. 90, the term "topic branch" is defined for the first time, but it was already used in passing on p. 68 (with what seems to be an assumption that the reader already knew the term) and again on pp. 80-81 (this time compounding the confusion with an example branch named "topic"). There are many similar instances of avoidable presentational debt; usually they are only distractions rather than genuine impediments to understanding, but they make the book more work than it needs to be. There are also sometimes ambiguous or not-quite-precise-enough statements that will cause the alert reader — which is the only kind this book really serves — to pause and have to work out what the authors must have meant (a couple of examples: "Git does not track file or directory names" on p. 34, or the business about patch line counts at the top of p. 359). Again, these can usually be resolved quickly, or ignored, without damage to overall understanding, but things would go a little bit more smoothly had they been worded differently.

Starting around p. 244 is a philosophical section that I found less satisfying than the technical material. It makes sense to discuss the distinction between committing and publishing, the idea that there are multiple valid histories, and the idea that the "central" repository is purely a social construct. But at some point the discussion starts to veer into being a different book, one about patterns for using Git to manage multi-developer projects and about software development generally, before eventually veering back. Such material could be helpful, but then it might have been better to offer a shallower overview of more patterns, rather than a tentative dive into the "Maintainer/Developer" pattern, which is privileged here beyond its actual prominence in software development. (This is perhaps a consequence of the flagship Git project, the Linux kernel, happening to use that pattern — but Linux is unusual in many ways, not just that one.)

The discussion of forking and of the term "fork", first from p. 259 and reiterated from p. 392, is confusing in several ways. It first uses the term as though it has no historical baggage, then later takes that historical baggage for granted, then finally describes the baggage but misunderstands it by failing to distinguish clearly between a social fork (a group of developers trying to persuade users and other developers to abandon one version and join another), which is a major event, and a feature fork (that is, a branch that happens to be in another repository), which is a non-event and which is all that sites like GitHub mean by forking. The two concepts are very different; to conflate them just because the word "fork" is now used for both is thinking with words, and doesn't help the reader understand what's going on. I raise this example in particular because I was surprised that the authors who had written so eloquently about the significance of social conventions elsewhere would give such an unsatisfactory explanation of this one.

Somewhat surprisingly, the authors don't review or even mention the many sources of online help about Git, such as the #git IRC channel on Freenode, the user discussion groups, wikis, etc. While most users can probably find those things quickly with a web search, it would have been good to point out their existence and maybe make some recommendations. Also, the book only covers installation of Git on GNU/Linux and MS Windows systems, with no explicit instructions for Mac OS X, the *BSD family, etc. (however, the authors acknowledge this and rightly point out that the differences among Unix variants are not likely to be a showstopper for anyone).

But this is all carping. The book's weaknesses are minor, its strengths major. Any book on so complicated a topic is bound to cause disagreements about presentation strategy and even about philosophical questions. The authors write well, they must have done cubic parsecs of command testing to make sure their examples were correct, they respect the reader enough to dive deeply into technical details when the details are called for, and they take care to describe the practical scenarios in which a given feature is most likely to be useful. Its occasional organizational issues notwithstanding, this book is exactly what is needed by the everyday Git user who wants to know more — and is willing to put in the effort required to get there. I will be using my copy for a long time.

Footnotes

[1] One of my favorite instances of this happened with the term "fast-forward". It was introduced on p. 140, discussed a little but with no mention of a "safety check", then not used again until page 202, which says: "If present, the plus sign indicates that the normal fast-forward safety check will not be performed during the transfer." If your memory is as bad as mine, you might at that point have felt like you were suddenly reading the owner's manual for an early digital wristwatch circa 1976.
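For the record, the "safety check" in question is the non-fast-forward check on ref updates, and a leading "+" in a refspec disables it. A self-contained sketch, assuming nothing from the book (the repository paths and branch name are invented):

```shell
# The "+" in a refspec tells Git to update the destination ref even when
# the update is not a fast-forward of its old value.
remote=$(mktemp -d)
git init -q --bare "$remote"
work=$(mktemp -d)
cd "$work"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "a" > file.txt
git add file.txt && git commit -q -m "first"
git remote add origin "$remote"
git push -q origin HEAD:refs/heads/main        # creates main on the remote

git commit -q --amend -m "rewritten"           # history no longer fast-forwards
git push origin HEAD:refs/heads/main 2>/dev/null \
  || echo "rejected: not a fast-forward"       # the safety check fires
git push -q origin '+HEAD:refs/heads/main'     # "+" skips the check, like push -f
```

Had the book's page 140 discussion mentioned this much, the callback on page 202 would have landed.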

[2] Though not absolutely always: one of the few completely dangling references in the book is to "smudge/clean filters" on p. 294. At first I thought it must be a general computer science term that I didn't know, but it appears to be Git-specific terminology. Happy Googling.
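Since the book leaves that reference dangling, a short illustration may save you the search: a clean filter rewrites content on its way into the repository, and a smudge filter rewrites it on checkout. This sketch is my own; the filter name and files are invented:

```shell
# smudge/clean filters, configured per-path via .gitattributes.
# Here a clean filter redacts a token as content enters the index.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
git config filter.redact.clean "sed s/SECRET/XXX/"   # applied on add/commit
git config filter.redact.smudge cat                  # applied on checkout (no-op here)
echo '*.cfg filter=redact' > .gitattributes

echo "token=SECRET" > app.cfg
git add .gitattributes app.cfg
git commit -q -m "cfg"
git show HEAD:app.cfg        # stored content reads "token=XXX"
```

The classic real-world use is keyword expansion or line-ending munging; the working-tree file keeps its original content while the repository stores the cleaned version.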

[3] (This is relegated to a floating footnote because it's probably not relevant to most readers.) The book discusses other version control systems a bit, for historical perspective, and is not as factually careful about them as it is about Git. I've been a developer on both CVS and Subversion, so the various incorrect assertions, especially about Subversion, jumped out at me (pp. 2-3, p. 120, pp. 319-320). Again, this shouldn't matter for the intended audience. Don't come to this book to learn about Subversion; definitely come to it to learn about Git.

[4] As long as we're having floating footnotes, here's a footnote about a footnote: on p. 337, why not just say "Voltaire"?

[5] Finally, I categorically deny accusations that I gave a positive review solely because at least one of the authors is a fellow Emacs fanatic (p. 359, footnote). But it didn't hurt."

SternisheFan writes: The structure of the universe and the laws that govern its growth may be more similar than previously thought to the structure and growth of the human brain and other complex networks, such as the Internet or a social network of trust relationships between people, according to a new study. “By no means do we claim that the universe is a global brain or a computer,” said Dmitri Krioukov, co-author of the paper, published by the Cooperative Association for Internet Data Analysis (CAIDA), based at the San Diego Supercomputer Center (SDSC) at the University of California, San Diego. “But the discovered equivalence between the growth of the universe and complex networks strongly suggests that unexpectedly similar laws govern the dynamics of these very different complex systems,” Krioukov noted.
“Having the ability to predict – let alone trying to control – the dynamics of complex networks remains a central challenge throughout network science. Structural and dynamical similarities among different real networks suggest that some universal laws might be in action, although the nature and common origin of such laws remain elusive. By performing complex supercomputer simulations of the universe and using a variety of other calculations, researchers have now proven that the causal network representing the large-scale structure of space and time in our accelerating universe is a graph that shows remarkable similarity to many complex networks such as the Internet, social, or even biological networks. These findings have key implications for both network science and cosmology,” said Krioukov.

Hugh Pickens writes: "The Orlando Sentinel reports that a Google search was made for the term "foolproof suffocation" on the Anthony family's computer the day Casey Anthony's 2-year-old daughter Caylee was last seen alive by her family — a search that did not surface at Casey Anthony's trial for first-degree murder. In the notorious 31 days which followed, Casey Anthony repeatedly lied about her and her daughter's whereabouts, and at Anthony's trial her defense attorney argued that her daughter drowned accidentally in the family's pool. Anthony was acquitted on all major charges in her daughter's death, including murder. Though computer searches were a key issue at Anthony's murder trial, the term "foolproof suffocation" never came up. "Our investigation reveals the person most likely at the computer was Casey Anthony," says investigative reporter Tony Pipitone. Lead sheriff's Investigator Yuri Melich sent prosecutors a spreadsheet that contained less than 2 percent of the computer’s Internet activity that day and included only Internet data from the computer’s Internet Explorer browser — one Casey Anthony apparently stopped using months earlier — and failed to list 1,247 entries recorded on the Mozilla Firefox browser that day, including the search for “foolproof suffocation.” Prosecutor Jeff Ashton said in a statement to WKMG that it's "a shame we didn't have it. (It would have) put the accidental death claim in serious question.""