Tuesday, May 31, 2016

His next title, Homo Deus: A Brief History of Tomorrow, is not out until September but early copies have begun to circulate. Its cover states simply: “What made us sapiens will make us gods”. It follows on from where Sapiens ends, in a provocative, and certainly speculative, gallop through the hopes and dreams that will shape the future of the species.

And the nightmares. Because even as the book has humans gaining godlike powers, that is only one eventuality Harari explores. It might all go pear-shaped, of course: we sapiens have a knack for hashing things up. Instead of morphing into omnipotent, all-knowing masters of the universe, the human mob might end up jobless and aimless, whiling away our days off our nuts on drugs, with VR headsets strapped to our faces. Welcome to the next revolution.

Harari calls it “the rise of the useless class” and ranks it as one of the most dire threats of the 21st century. In a nutshell, as artificial intelligence gets smarter, more humans are pushed out of the job market. No one knows what to study at college, because no one knows what skills learned at 20 will be relevant at 40. Before you know it, billions of people are useless, not through chance but by definition.

[---]

None of this puts us in the realm of the gods. In fact, it leads Harari to even more bleak predictions. Though the people may no longer provide for the state, the state may still provide for them. “What might be far more difficult is to provide people with meaning, a reason to get up in the morning,” Harari says. For those who don’t cheer at the prospect of a post-work world, satisfaction will be a commodity to pay for: our moods and happiness controlled by drugs; our excitement and emotional attachments found not in the world outside, but in immersive VR.

All of which leads to the question: what should we do? “First of all, take it very seriously,” Harari says. “And make it a part of the political agenda, not only the scientific agenda. This is something that shouldn’t be left to scientists and private corporations. They know a lot about the technical stuff, the engineering, but they don’t necessarily have the vision and the legitimacy to decide the future course of humankind.”

I know nothing about economics and—from evolutionary logic—could not have predicted a thing about the collapse of 2008, but I have disagreed for thirty years with an alleged science called economics that has resolutely failed to ground itself in underlying knowledge, at a cost to all of us.

Monday, May 30, 2016

What do you think about future job security of machine learning engineers?

Everyone has to keep learning. I am 100% convinced that even the top people in machine learning won’t have a job 10 years from now if they don’t move on. There will be other, future hot topics in Silicon Valley. Never stand still, never be complacent.

[---]

What skills would you focus on if you could go back to your teen years?

Be fearless, be curious, and develop a growth mindset. For those who learn, there is no such thing as failure. It is the failure to learn that is the true failure in life.

I think we can reasonably conclude that complex life will be rare in the universe – there is no innate tendency in natural selection to give rise to humans or any other form of complex life. It is far more likely to get stuck at the bacterial level of complexity. I can’t put a statistical probability on that. The existence of Parakaryon myojinensis might be encouraging for some – multiple origins of complexity on earth means that complex life might be more common elsewhere in the universe. Maybe. What I would argue with more certainty is that, for energetic reasons, the evolution of complex life requires an endosymbiosis between two prokaryotes, and that is a rare random event, disturbingly close to a freak accident, made all the more difficult by the ensuing intimate conflict between cells. After that, we are back to standard natural selection.

Sunday, May 29, 2016

“We understand the human much better than other humans understand each other,” said Faception chief executive Shai Gilboa. “Our personality is determined by our DNA and reflected in our face. It’s a kind of signal.”

Faception has built 15 different classifiers, which Gilboa said evaluate certain traits with 80 percent accuracy. The start-up is pushing forward, seeing tremendous power in a machine’s ability to analyze images.

Yet experts caution there are ethical questions and profound limits to the effectiveness of technology such as this.

“Can I predict that you’re an ax murderer by looking at your face and therefore should I arrest you?” said Pedro Domingos, a professor of computer science at the University of Washington and author of “The Master Algorithm.” “You can see how this would be controversial.”

[---]

Faception recently showed off its technology at a poker tournament organized by a start-up that shares investors with Faception. Gilboa said that Faception predicted before the tournament that four players out of the 50 amateurs would be the best. When the dust settled two of those four were among the event’s three finalists. To make its prediction Faception analyzed photos of the 50 players against a Faception database of professional poker players.

There are challenges in trying to use artificial intelligence systems to draw conclusions such as this. A computer that is trained to analyze images will only be as good as the examples it is trained on. If the computer is exposed to a narrow or outdated sample of data, its conclusions will be skewed. Additionally, there’s the risk the system will make an accurate prediction, but not necessarily for the right reasons.

All photographs are memento mori. To take a photograph is to participate in another person’s (or thing’s) mortality, vulnerability, mutability. Precisely by slicing out this moment and freezing it, all photographs testify to time’s relentless melt.

Saturday, May 28, 2016

The “Science Against Slavery” Hackathon was an all-day hackathon aimed at sharing ideas and creating science-based solutions to the problem of human trafficking. Data scientists, students and hackers homed in on data that district attorneys would otherwise never find. Many focused on automating processes so agencies could use the technology with little guidance. Some focused primarily on generating data that could lead to a conviction—which is much easier said than done. One effort from EPIK Project founder Tom Perez included creating fake listings. They could then gather information on respondents, including real-world coordinates. Other plans compared photos mined from escort ads and sites to those from missing person reports. Web crawling could eventually lead to geocoding phone numbers or understanding the distribution of buyers and sellers, as well as social network analysis.

We obtained the risk scores assigned to more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014 and checked to see how many were charged with new crimes over the next two years, the same benchmark used by the creators of the algorithm.

The score proved remarkably unreliable in forecasting violent crime: Only 20 percent of the people predicted to commit violent crimes actually went on to do so.

When a full range of crimes were taken into account — including misdemeanors such as driving with an expired license — the algorithm was somewhat more accurate than a coin flip. Of those deemed likely to re-offend, 61 percent were arrested for any subsequent crimes within two years.

We also turned up significant racial disparities, just as Holder feared. In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways.

The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.

White defendants were mislabeled as low risk more often than black defendants.

Could this disparity be explained by defendants’ prior crimes or the type of crimes they were arrested for? No. We ran a statistical test that isolated the effect of race from criminal history and recidivism, as well as from defendants’ age and gender. Black defendants were still 77 percent more likely to be pegged as at higher risk of committing a future violent crime and 45 percent more likely to be predicted to commit a future crime of any kind.
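The disparity described here is, concretely, a gap in false positive rates between groups. A minimal sketch of how that metric is computed, with entirely invented cohorts rather than the Broward County data:

```python
# Illustration of the fairness metric used above: the false positive rate
# (defendants flagged high-risk who did NOT re-offend, as a share of all
# who did not re-offend), computed separately per group. Numbers are made up.
def false_positive_rate(records):
    """records: list of (flagged_high_risk, reoffended) boolean pairs."""
    false_pos = sum(1 for flagged, reoffended in records
                    if flagged and not reoffended)
    negatives = sum(1 for _, reoffended in records if not reoffended)
    return false_pos / negatives

# Two hypothetical cohorts: similar overall hit rates can hide very
# different error profiles between groups.
group_a = [(True, False)] * 45 + [(False, False)] * 55 + [(True, True)] * 50
group_b = [(True, False)] * 23 + [(False, False)] * 77 + [(True, True)] * 50

print(false_positive_rate(group_a))  # 0.45
print(false_positive_rate(group_b))  # 0.23
```

In this toy setup group A's non-reoffenders are wrongly flagged at almost twice group B's rate, which is the shape of the disparity the analysis reports.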

Friday, May 27, 2016

For many years a tree might wage a slow and silent warfare against an encumbering wall, without making any visible progress. One day the wall would topple--not because the tree had suddenly laid hold upon some supernormal energy, but because its patient work of self-defense and self release had reached fulfillment. The long-imprisoned tree had freed itself. Nature had had her way.

Monday, May 23, 2016

A summary definition of some 70 descriptions of intelligence provides a definition for all other organisms including plants that stresses fitness. Barbara McClintock, a plant biologist, posed the notion of the 'thoughtful cell' in her Nobel prize address. The systems structure necessary for a thoughtful cell is revealed by comparison of the interactome and connectome. The plant root cap, a group of some 200 cells that act holistically in responding to numerous signals, likely possesses a similar systems structure agreeing with Darwin's description of acting like the brain of a lower organism. Intelligent behavior requires assessment of different choices and taking the beneficial one. Decisions are constantly required to optimize the plant phenotype to a dynamic environment and the cambium is the assessing tissue diverting more or removing resources from different shoot and root branches through manipulation of vascular elements. Environmental awareness likely indicates consciousness. Spontaneity in plant behavior, ability to count to five and error correction indicate intention. Volatile organic compounds are used as signals in plant interactions and being complex in composition may be the equivalent of language accounting for self and alien recognition by individual plants. Game theory describes competitive interactions. Interactive and intelligent outcomes emerge from application of various games between plants themselves and interactions with microbes. Behavior profiting from experience, another simple definition of intelligence, requires both learning and memory and is indicated in the priming of herbivory, disease and abiotic stresses.

The soil is the great connector of lives, the source and destination of all. It is the healer and restorer and resurrector, by which disease passes into health, age into youth, death into life. Without proper care for it we can have no community, because without proper care for it we can have no life.

Sunday, May 22, 2016

If dogs could talk, Melody Jackson knows what they would say. Or at least, what she'd like them to say.

Jackson, an associate professor at the Georgia Institute of Technology, has developed technology that is giving dogs a voice, an ability she says is crucial for search and rescue, bomb detection and therapy dogs. The dogs wear vests equipped with sensors that can send either audible cues or text notifications to a smartphone.

[---]

"A bomb-sniffing dog has pretty much one alert that says, 'Hey, I found an explosive.' But that dog knows what explosive is in there. ... They know if it's something stable like C4 or something unstable and dangerous like TATP that needs to be handled carefully," Jackson says. The problem is "they have no way to tell their handler."

Jackson and her research team have also developed a medical alert vest that allows a dog to find a missing or trapped person, activate a sensor, and let that person know that help is on the way. This task could be instrumental during an earthquake or disaster rescue where a trapped or injured person is in need of assistance. This vest is being beta tested by a real service dog team in California, Jackson says.

Georgia Tech is also working to develop a vest that allows the handler to track the dog wearing it. When the dog finds its target, the dog activates a sensor that sends GPS coordinates back to the handler. The dog then tells the person in jeopardy that help is on the way, and the rescue canine does not have to leave the victim's side.

Saturday, May 21, 2016

Deep learning works well across many applications when there is a lot of data, but what about one-shot or zero-shot learning, in which it is necessary to transfer and adapt knowledge from other domains to the current domain? What kinds of abstractions are formed by deep networks, and how can we reason with these abstractions and combine them? Networks can be fooled by adversarial inputs; how do we defend against these, and do they represent a fundamental flaw, or an irrelevant trick?

How do we deal with structure in a domain? We have recurrent networks to deal with time, and recursive networks to deal with nested structure, but it is too early to tell whether these are sufficient.

So I'm excited about Deep Learning because so many long-standing fields are excited about it. And I'm interested in understanding more because there are many remaining questions, and answers to these questions will not only tell us more about Deep Learning, but may help us understand Learning, Inference, and Representation in general.

[---]

Is there any place for software engineers that do not learn AI or Machine Learning in the next 10 years or does everyone have to learn it?

Machine Learning will be (or perhaps already is) such an important part of software engineering that everyone will have to understand where it fits in. But just like, say, database administration or user interface design, that doesn’t mean every engineer will have to be an expert in doing machine learning—it will be acceptable to work with others who are expert. But the more you know about machine learning, the better you will be at architecting a solution.

I also think that it will be important for machine learning experts and software engineers to come together to develop best practices for software development of machine learning systems. Currently we have a software testing regime where you define unit tests with calls to methods like assertTrue or assertEquals. We will need new testing processes that involve running experiments, analyzing the results, comparing today’s results to past results to look for drift, deciding if the drift is random variation or non-stationarity of the data, etc. This is a great area for software engineers and machine learning people to work together to build something new and better.
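The drift test described in this paragraph can be sketched in a few lines. The accuracy history and the three-sigma threshold below are invented for illustration:

```python
import statistics

# A minimal sketch of an "experiment as test": instead of assertEquals,
# compare today's model metric against the history of past runs and flag
# drift when it falls outside an expected band (three standard deviations
# here, an arbitrary choice).
def check_for_drift(past_scores, today_score, num_stdevs=3.0):
    mean = statistics.mean(past_scores)
    stdev = statistics.stdev(past_scores)
    return abs(today_score - mean) > num_stdevs * stdev

history = [0.941, 0.943, 0.940, 0.944, 0.942, 0.941]  # past accuracy runs
print(check_for_drift(history, 0.942))  # False: within normal variation
print(check_for_drift(history, 0.905))  # True: likely drift, investigate
```

Deciding whether a flagged run reflects random variation or genuine non-stationarity in the data is exactly the judgment call such a test hands back to the engineer.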

Here is what power does to just about every human being. It’s going to make you not pay attention to people as well as you used to pay attention to them. You may find yourself swearing at a colleague or telling them that their work is horseshit. You will be a little less careful in the language you use. You will be a little less thoughtful about how things look from their perspective. So just practise a little gratitude. Listen empathetically. It shouldn’t be that difficult.

The experiment the AI performed was the creation of a Bose-Einstein condensate, a hyper-cold gas, the process for which won three physicists the Nobel Prize in 2001. It involves using directed radiation to slow a group of atoms nearly to a standstill, producing all manner of interesting effects.

The Australian National University team cooled a bit of gas down to 1 microkelvin — that’s a millionth of a degree above absolute zero — then handed over control to the AI. It then had to figure out how to apply its lasers and control other parameters to best cool the atoms down to a few hundred nanokelvin (billionths of a degree), and over dozens of repetitions, it found more and more efficient ways to do so.

“It did things a person wouldn’t guess, such as changing one laser’s power up and down, and compensating with another,” said ANU’s Paul Wigley, co-lead researcher, in a news release. “I didn’t expect the machine could learn to do the experiment itself, from scratch, in under an hour. It may be able to come up with complicated ways humans haven’t thought of to get experiments colder and make measurements more precise.”
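The article doesn’t specify the algorithm, but the shape of such a loop can be sketched with a simple keep-the-best search. The cost function below is a made-up stand-in for the measured atom temperature, and the parameters stand in for laser powers:

```python
import random

# Toy stand-in for an experiment-optimization loop: perturb the current
# best control settings, "run the experiment" (here, a fake cost function
# with a floor of 0.1 at parameters (0.3, 0.7)), and keep any improvement.
def measured_temperature(p1, p2):
    return (p1 - 0.3) ** 2 + (p2 - 0.7) ** 2 + 0.1

random.seed(0)
best_params = (0.5, 0.5)
best_temp = measured_temperature(*best_params)
for _ in range(200):  # each iteration is one "run" of the experiment
    candidate = tuple(p + random.gauss(0, 0.05) for p in best_params)
    temp = measured_temperature(*candidate)
    if temp < best_temp:
        best_params, best_temp = candidate, temp

print(round(best_temp, 3))  # converges toward the 0.1 floor
```

The real system reportedly used more sophisticated machine learning than this hill climb, but the loop is the same: the optimizer never needs a physical model of the apparatus, only repeated measurements.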

Monday, May 16, 2016

From within the dark confines of the skull, the brain builds its own version of reality. By weaving together expectations and information gleaned from the senses, the brain creates a story about the outside world. For most of us, the brain is a skilled storyteller, but to spin a sensible yarn, it has to fill in some details itself.

“The brain is a guessing machine, trying at each moment of time to guess what is out there,” says computational neuroscientist Peggy Seriès.

Guesses just slightly off — like mistaking a smile for a smirk — rarely cause harm. But guessing gone seriously awry may play a part in mental illnesses such as schizophrenia, autism and even anxiety disorders, Seriès and other neuroscientists suspect. They say that a mathematical expression known as Bayes’ theorem — which quantifies how prior expectations can be combined with current evidence — may provide novel insights into pernicious mental problems that have so far defied explanation.

Bayes’ theorem “offers a new vocabulary, new tools and a new way to look at things,” says Seriès, of the University of Edinburgh.

Experiments guided by Bayesian math reveal that the guessing process differs in people with some disorders. People with schizophrenia, for instance, can have trouble tying together their expectations with what their senses detect. And people with autism and high anxiety don’t flexibly update their expectations about the world, some lab experiments suggest. That missed step can muddy their decision-making abilities.
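A toy version of the guessing process described above, with invented probabilities: a strong prior expectation pulls an ambiguous sensory reading toward the expected interpretation, which is exactly the combination Bayes' theorem quantifies:

```python
# Bayes' theorem as the brain's guessing rule:
# P(state | evidence) is proportional to P(evidence | state) * P(state).
def posterior(prior, likelihoods, evidence):
    unnormalized = {state: prior[state] * likelihoods[state][evidence]
                    for state in prior}
    total = sum(unnormalized.values())
    return {state: p / total for state, p in unnormalized.items()}

# Hypothetical numbers: a strong prior that people smile, and sensory
# evidence (a curled lip) that actually favors a smirk.
prior = {"smile": 0.8, "smirk": 0.2}
likelihoods = {"smile": {"curled_lip": 0.4},
               "smirk": {"curled_lip": 0.6}}
print(posterior(prior, likelihoods, "curled_lip"))
# the prior wins: smile ~0.73, smirk ~0.27
```

Flattening the prior to 50/50 flips the verdict toward "smirk," a crude analogue of how an absent or rigid prior can change what the brain concludes from the same ambiguous input.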

Given the complexity of mental disorders such as schizophrenia and autism, it is no surprise that many theories of how the brain works have fallen short, says psychiatrist and neuroscientist Rick Adams of University College London. Current explanations for the disorders are often vague and untestable. Against that frustrating backdrop, Adams sees great promise in a strong mathematical theory, one that can be used to make predictions and actually test them.

“It’s really a step up from the old-style cognitive psychology approach, where you had flowcharts with boxes and labels on them with things like ‘attention’ or ‘reading,’ but nobody having any idea about what was going on in [any] box,” Adams says.

Applying math to mental disorders “is a very young field,” he adds, pointing to Computational Psychiatry, which plans to publish its first issue this summer. “You know a field is young when it gets its first journal.”

Sunday, May 15, 2016

Just print the money! Well to be honest, a politician – and a central banker – should admit that increasing joblessness must be paid for somehow. (1) Raising taxes (not lowering them, Donald) is one way. (2) Issuing more and more debt via the private market is another (not a good idea either in this highly levered economy). (3) A third way is to sell debt to central banks and have them finance it perpetually at low interest rates that are then remitted back to their treasuries. Money for free! Well not exactly. The Piper that has to be paid will likely be paid for in the form of higher inflation, but that of course is what the central banks claim they want. What they don’t want is to be messed with and to become a government agency by proxy, but that may just be the price they will pay for a civilized society that is quickly becoming less civilized due to robotization. There is a rude end to flying helicopters, but the alternative is an immediate visit to austerity rehab and an extended recession. I suspect politicians and central bankers will choose to fly, instead of die.

Private banks can fail but a central bank that can print money acceptable to global commerce cannot. I have long argued that this is a Ponzi scheme and it is, yet we are approaching a point of no return with negative interest rates and QE purchases of corporate bonds and stock. Still, I believe that for now central banks will print more helicopter money via QE (perhaps even the U.S. in a year or so) and reluctantly accept their increasingly dependent role in fiscal policy. That would allow governments to focus on infrastructure, health care, and introduce Universal Basic Income for displaced workers amongst other increasing needs. It will also lead to a less independent central bank, and a more permanent mingling of fiscal and monetary policy that stealthily has been in effect for over 6 years now. Chair Yellen and others will be disheartened by this change in culture. Too bad. If there is an answer, the answer is that it’s just that way.

Investment implications: Prepare for renewed QE from the Fed. Interest rates will stay low for longer, asset prices will continue to be artificially high. At some point, monetary policy will create inflation and markets will be at risk. Not yet, but be careful in the interim. Be content with low single digit returns.

You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage.

- John von Neumann, Quoted by M. Tribus — Scientific American, 225, “Energy and Information”, p. 180, 1971, Suggestion of von Neumann to Shannon regarding the name of his new uncertainty function
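Shannon's uncertainty function, under the name von Neumann suggested, is easy to compute:

```python
import math

# Shannon entropy: H = sum over outcomes of -p * log2(p),
# the expected surprise of a probability distribution, in bits.
def entropy(probabilities):
    return sum(-p * math.log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))            # 1.0 bit: a fair coin, maximal uncertainty
print(entropy([1.0]))                 # 0.0 bits: a certain outcome, no surprise
print(round(entropy([0.9, 0.1]), 3))  # 0.469 bits: a biased coin, less uncertain
```

The zero-probability terms are skipped because p log p tends to 0 as p goes to 0, the standard convention.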

Saturday, May 14, 2016

On April 6, 1922, Einstein met a man he would never forget. He was one of the most celebrated philosophers of the century, widely known for espousing a theory of time that explained what clocks did not: memories, premonitions, expectations, and anticipations. Thanks to him, we now know that to act on the future one needs to start by changing the past. Why does one thing not always lead to the next? The meeting had been planned as a cordial and scholarly event. It was anything but that. The physicist and the philosopher clashed, each defending opposing, even irreconcilable, ways of understanding time. At the Société française de philosophie—one of the most venerable institutions in France—they confronted each other under the eyes of a select group of intellectuals. The “dialogue between the greatest philosopher and the greatest physicist of the 20th century” was dutifully written down. It was a script fit for the theater. The meeting, and the words they uttered, would be discussed for the rest of the century.

The philosopher’s name was Henri Bergson. In the early decades of the century, his fame, prestige, and influence surpassed that of the physicist—who, in contrast, is so well known today. Bergson was compared to Socrates, Copernicus, Kant, Simón Bolívar, and even Don Juan. The philosopher John Dewey claimed that “no philosophic problem will ever exhibit just the same face and aspect that it presented before Professor Bergson.” William James, the Harvard professor and famed psychologist, described Bergson’s Creative Evolution (1907) as “a true miracle,” marking the “beginning of a new era.” For James, Matter and Memory (1896) created “a sort of Copernican revolution as much as Berkeley’s Principles or Kant’s Critique did.” The philosopher Jean Wahl once said that “if one had to name the four great philosophers one could say: Socrates, Plato—taking them together—Descartes, Kant, and Bergson.” The philosopher and historian of philosophy Étienne Gilson categorically claimed that the first third of the 20th century was “the age of Bergson.” He was simultaneously considered “the greatest thinker in the world” and “the most dangerous man in the world.” Many of his followers embarked on “mystical pilgrimages” to his summer home in Saint-Cergue, Switzerland.

[---]

Bergson found Einstein’s definition of time in terms of clocks completely aberrant. The philosopher did not understand why one would opt to describe the timing of a significant event, such as the arrival of a train, in terms of how that event matched against a watch. He did not understand why Einstein tried to establish this particular procedure as a privileged way to determine simultaneity. Bergson searched for a more basic definition of simultaneity, one that would not stop at the watch but that would explain why clocks were used in the first place. If this, much more basic, conception of simultaneity did not exist, then “clocks would not serve any purpose.” “Nobody would fabricate them, or at least nobody would buy them,” he argued. Yes, clocks were bought “to know what time it is,” admitted Bergson. But “knowing what time it is” presupposed that the correspondence between the clock and an “event that is happening” was meaningful for the person involved so that it commanded their attention. That certain correspondences between events could be significant for us, while most others were not, explained our basic sense of simultaneity and the widespread use of clocks. Clocks, by themselves, could not explain either simultaneity or time, he argued.

If a sense of simultaneity more basic than that revealed by matching an event against a clock hand did not exist, clocks would serve no meaningful purpose:

They would be bits of machinery with which we would amuse ourselves by comparing them with one another; they would not be employed in classifying events; in short, they would exist for their own sake and not serve us. They would lose their raison d’être for the theoretician of relativity as for everybody else, for he too calls them in only to designate the time of an event.

The entire force of Einstein’s work, argued Bergson, was due to how it functioned as a “sign” that appealed to a natural and intuitive concept of simultaneity. “It is only because” Einstein’s conception “helps us recognize this natural simultaneity, because it is its sign, and because it can be converted into intuitive simultaneity, that you call it simultaneity,” he explained. Einstein’s work was so revolutionary and so shocking only because our natural, intuitive notion of simultaneity remained strong. By negating it, it could not help but refer back to it, just like a sign referred to its object.

Bergson had been thinking about clocks for years. He agreed that clocks helped note simultaneities, but he did not think that our understanding of time could be based solely on them. He had already thought about this option, back in 1889, and had quickly discounted it: “When our eyes follow on the face of a clock, the movement of the needle that corresponds to the oscillations of the pendulum, I do not measure duration, as one would think; I simply count simultaneities, which is quite different.” Something different, something novel, something important, something outside of the watch itself needed to be included in our understanding of time. Only that could explain why we attributed to clocks such power: Why we bought them, why we used them, and why we invented them in the first place.

A doctoral student at the University of Pennsylvania has identified a new species of fossil dog: Cynarctus wangi. The specimen, found in Maryland, would have roamed the coast of eastern North America approximately 12 million years ago, at a time when massive sharks like megalodon swam in the oceans. The coyote-sized dog was a member of the extinct subfamily Borophaginae, commonly known as bone-crushing dogs because of their powerful jaws and broad teeth. Fossils from terrestrial species from this region and time period are relatively rare, thus the find helps paleontologists fill in important missing pieces about what prehistoric life was like on North America’s East Coast.

[---]

This new dog gives us useful insight into the ecosystem of eastern North America between 12 and 13 million years ago.

Friday, May 13, 2016

Four years ago, I published a book called Life’s Ratchet, which explains how molecular machines create order in our cells. My main concern was how life avoids a descent into chaos. To my great surprise, soon after the book was published, I was contacted by researchers who study biological aging. At first I couldn’t see the connection. I knew nothing about aging except for what I had learned from being forced to observe the process in my own body.

Then it dawned on me that by emphasizing the role of thermal chaos in animating molecular machines, I encouraged aging researchers to think more about it as a driver of aging. Thermal motion may seem beneficial in the short run, animating our molecular machines, but could it be detrimental in the long run? After all, in the absence of external energy input, random thermal motion tends to destroy order.

This tendency is codified in the second law of thermodynamics, which dictates that everything ages and decays: Buildings and roads crumble; ships and rails rust; mountains wash into the sea. Lifeless structures are helpless against the ravages of thermal motion. But life is different: Protein machines constantly heal and renew their cells.

In this sense, life pits biology against physics in mortal combat. So why do living things die? Is aging the ultimate triumph of physics over biology? Or is aging part of biology itself?

Using deep neural networks, SyntaxNet and similar systems do take syntactic parsing to a new level. A neural net learns by analyzing vast amounts of data. It can learn to identify a photo of a cat, for instance, by analyzing millions of cat photos. In the case of SyntaxNet, it learns to understand sentences by analyzing millions of sentences. But these aren’t just any sentences. Humans have carefully labelled them, going through all the examples and carefully identifying the role that each word plays. After analyzing all these labeled sentences, the system can learn to identify similar characteristics in other sentences.

Though SyntaxNet is a tool for engineers and AI researchers, Google is also sharing a pre-built natural language processing service that it has already trained with the system. They call it, well, Parsey McParseface, and it’s trained for English, learning from a carefully labeled collection of old newswire stories. According to Google, Parsey McParseface is about 94 percent accurate in identifying how a word relates to the rest of a sentence, a rate the company believes is close to the performance of a human (96 to 97 percent).
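The per-word figure quoted here is essentially attachment accuracy: for each word, did the parser pick the correct head word? A sketch of that scoring, using a hypothetical hand-annotated four-word sentence in the spirit of the labeled newswire data described above:

```python
# Gold (human-annotated) and predicted head words for each token in
# "Parsey parses sentences quickly"; None marks the sentence root.
gold_heads = {"Parsey": "parses", "parses": None,
              "sentences": "parses", "quickly": "parses"}
predicted_heads = {"Parsey": "parses", "parses": None,
                   "sentences": "parses", "quickly": "sentences"}

# Attachment accuracy: fraction of words whose head was predicted correctly.
correct = sum(1 for word in gold_heads
              if predicted_heads[word] == gold_heads[word])
accuracy = correct / len(gold_heads)
print(accuracy)  # 0.75: three of four words got the right head
```

Real evaluations also check the dependency label on each arc, not just the head, but the 94-percent-style number is this same per-word ratio computed over a large test set.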

Death is always death, and in real life, especially in the world of the hospital, sudden death, whether violent and gruesome or unbelievably prosaic, is unsettling. What can one do? Go home, love your children, try not to bicker, eat well, walk in the rain, feel the sun on your face, and laugh loud and often, as much as possible, and especially at yourself. Because the antidote to death is not poetry, or miracle treatments, or a roomful of people with technical expertise and good intentions—the antidote to death is life.

Thursday, May 12, 2016

When Webb arrived in the Top End, it was difficult to find a saltwater crocodile. After the Second World War, overzealous hunters had blasted them for skins and didn’t stop even after the supply had collapsed and they were teetering on the edge of extinction. By the time salties were protected in 1971, just three to five percent of the original population remained.

Once salties began recovering, scientists and policymakers were faced with the next big obstacle: convincing people to keep them around. Persuading people to coexist with deadly predators over the long term, Webb asserts, is one of the world’s greatest conservation challenges. It’s easy to get people to rally around a predator when it has nearly vanished and become a romantic notion, but people are fickle: “If protection works in terms of increasing numbers, crocodiles eat more people, and then people want to get rid of them again,” he says. That’s what happened in 1979 and ’80, when crocodiles killed two people and badly injured two others. During the same time, an old crocodile named Sweetheart began flipping tourist fishing boats. The 5.1-meter, 780-kilogram brute—as long as a medium-sized dinosaur—would wrestle with the boat as the panicked passengers swam to shore. Sweetheart had likely mistaken the sound of the motor for the growl of another crocodile, Webb says, but these incidents threatened to unravel the conservation efforts and topple the NT’s nascent tourism industry. For people to willingly tolerate crocodiles, Webb reasons, they need to benefit from the situation—it’s not realistic to expect a community to conserve an animal out of appreciation for its intrinsic value alone.

Inspired by similar programs in Zimbabwe and Louisiana, Webb and his colleagues crafted the NT’s first formal incentive-driven management program in the early 1980s on behalf of the territorial government. It spawned what is now a multifaceted and far-reaching industry that works like this: people drop from helicopters and beat through the swamps to gather wild eggs—roughly 52,000 a year—which they sell to local crocodile farms. (In the wild, egg mortality, often caused by flooding, runs at 75 to 80 percent, and hatchling survival is density dependent, so collecting has no apparent impact on the wild population.) In turn, landowners receive a royalty from egg collection on their property—most harvesting takes place in remote aboriginal communities—which helps compensate them for livestock they lose to crocodiles and motivates them to retain habitat. Farms raise the hatchlings and sell their skins; rangers are employed to manage crocodiles in public areas; and independent permit holders make a living collecting animals from the wild, either for a farm’s breeding program or for trade. It all amounts to an AU $25-million industry.

Salties also play a significant role in the NT’s $1.61-billion tourism industry—the biggest employer in the region. Visitors pose for cheesy photos with hatchlings, dip into a croc’s tank within a “cage of death,” watch feedings, take croc-spotting excursions, and shop boutiques and souvenir shops for coveted wallets and belts or kitschy beer cozies and hatbands.

People are constantly making enormous life decisions (marriage, children, etc) for all of the wrong reasons.

Certain people -- some of whom are in positions of enormous power -- just do not give a damn about other human beings. A certain head of state in Syria comes to mind.

Often, the most important and consequential moments of our lives (chance encounter, fatal car accident, etc) happen completely at random and seemingly for no good reason.

Your sense of inhabiting a fully integrated reality is an illusion, and a privilege. Take the wrong drug, suffer a head injury, or somehow trigger a latent psychotic condition like schizophrenia -- and your grip on reality can be severed in an instant. Forever.

Tuesday, May 10, 2016

It is my strong conviction that a realist conception of human nature should be made a servant of an ethic of progressive justice and should not be made into a bastion of conservatism, particularly a conservatism which defends unjust privileges.

I currently use Ubuntu Linux, on a standalone laptop - it has no Internet connection. I occasionally carry flash memory drives between this machine and the Macs that I use for network surfing and graphics; but I trust my family jewels only to Linux.

Sunday, May 8, 2016

I just learned something about you and it is blowing my goddamned mind.

This is not a joke. It is not “blowing my mind” à la BuzzFeed’s “8 Things You Won’t Believe About Tarantulas.” It is, I think, as close to an honest-to-goodness revelation as I will ever live in the flesh. Here it is: You can visualize things in your mind.

If I tell you to imagine a beach, you can picture the golden sand and turquoise waves. If I ask for a red triangle, your mind gets to drawing. And mom’s face? Of course.

You experience this differently, sure. Some of you see a photorealistic beach, others a shadowy cartoon. Some of you can make it up, others only “see” a beach they’ve visited. Some of you have to work harder to paint the canvas. Some of you can’t hang onto the canvas for long. But nearly all of you have a canvas.

I don’t. I have never visualized anything in my entire life. I can’t “see” my father’s face or a bouncing blue ball, my childhood bedroom or the run I went on ten minutes ago. I thought “counting sheep” was a metaphor. I’m 30 years old and I never knew a human could do any of this. And it is blowing my goddamned mind.

[---]

How do you imagine things?

First I think of a noun in my mind voice: cupcake. Then I think of a verb: cough. Finally an adjective: hairy. What if there was a hairy monster that coughs out cupcakes? Now I wonder how he feels about that. Does he wish he was scarier? Is he regulated by the FDA? Does he get to subtract Weight Watchers points whenever he coughs? Are his sneezes savory or sweet? Is the flu delicious?

If I don’t like the combination of words I’ve picked, I keep Mad Libbing until the concept piques my interest.

This has always struck me as an incredibly inefficient way to imagine things, because I can’t hold the scene in my mind. I have to keep reminding myself, “the monster is hairy” and “the sneeze-saltines are sitting on a teal counter.” But I thought, maybe that’s just how it is.

Saturday, May 7, 2016

Even though there's a lot of hype about AI and a lot of money being invested in AI, I feel like the field is headed in the wrong direction. There's been a local maximum where there's a lot of low-hanging fruit right now in a particular direction, which is mainly deep learning and big data. People are very excited about the big data and what it's giving them right now, but I'm not sure it's taking us closer to the deeper questions in artificial intelligence, like how we understand language or how we reason about the world.

[---]

The natural language understanding is coming along slowly. You wouldn't be able to dictate this conversation into Siri and expect it to come out with anything whatsoever. But you could get most of the words right, and that's a big improvement. It turns out that it works best with a lot of brute force data available. When you're doing speech recognition on white males, who are native language speakers, in a quiet room, it works pretty well. But if you're in a noisy environment, or you're not a native speaker, or if you're a woman or a child, the speech recognition doesn't work that well. Speech recognition is brute force. It's not brute force in the same way as Deep Blue, which considered a lot of positions; it's brute force in the sense that it needs a lot of data to work efficiently.

[---]

What we're trying to address is what I call the problem of sparse data: If you have a small amount of data, how do you solve a problem? The ultimate sparse data learners are children. They get tiny amounts of data about language and by the time they're three years old, they've figured out the whole linguistic system. I wouldn't say that we are directly neuroscience-inspired; we're not directly using an algorithm that I know for a fact children use. But we are trying to look to some extent at how you might solve some of the problems that children do. Instead of just memorizing all the training data, how might you do something deeper and more abstract in order to learn better? I don't run experiments, at least very often, on my children, but I observe them very carefully. My wife, who's also a developmental psychologist, does too. We are super well calibrated to what the kids are doing, what they've just learned, what their vocabulary is, what their syntax is. We take note of what they do.

[---]

I did want to say just a little bit about neuroscience and its relation to AI. One model here is that the solution to all the problems that we've been talking about is we will simulate the brain. This is the Henry Markram and Ray Kurzweil approach. Kurzweil made a famous bet with the Long Now Foundation about when we will get to AI. He based his bet on when he felt we would get to understand the brain. My sense is we're not going to understand the brain anytime soon; there's too much complexity there. The models that people build are like one or two kinds of neurons, and there are many of them and they connect together. But if you look at the actual biology, we have hundreds or maybe thousands of kinds of neurons in the brain. Each synapse has hundreds of different molecules, and the interconnections in the brain are vastly more complicated than we ever imagined.

Rather than using neuroscience as a path to AI, maybe we use AI as a path to neuroscience. That level of complexity is something that human beings can't understand. We need better AI systems before we'll understand the brain, not the other way around.

Friends are sometimes a big help when they share your feelings. In the context of decisions, the friends who will serve you best are those who understand your feelings but are not overly impressed by them.

Friday, May 6, 2016

What propels an embryo from one stage to the next, and makes one species different from another, is not a blueprint but rather an enormous autonomous library of instructions contained within its genome. Each gene does double duty, specifying both a recipe for a protein and a set of regulatory conditions for when and where it should be built. Taken together, suites of these IF-THEN genes give cells the power to act as parts of complicated improvisational orchestras. Like real musicians, what the cells play depends on both their own artistic impulses and what the other members of the orchestra are playing. As we will see in the next chapter, every bit of this process, from the Cellular Big 4 to the combination of regulatory cues, holds as much for the development of the brain as it does for the body.

Thursday, May 5, 2016

Thousands of years after Aristotle’s seminal work on causality, hundreds of years after Hume gave us two definitions of it, and decades after automated inference became a possibility through powerful new computers, causality is still an unsolved problem. Humans are prone to seeing causality where it does not exist and our algorithms aren’t foolproof. Even worse, once we find a cause it’s still hard to use this information to prevent or produce an outcome because of limits on what information we can collect and how we can understand it. After looking at all the cases where methods haven’t worked and researchers and policy makers have gotten causality really wrong, you might wonder why you should bother.

[…]

Rather than giving up on causality, what we need to give up on is the idea of having a black box that takes some data straight from its source and emits a stream of causes with no need for interpretation or human intervention. Causal inference is necessary and possible, but it is not perfect and, most importantly, it requires domain knowledge.

Why: A Guide to Finding and Using Causes by Samantha Kleinberg. Beautiful book, full of insights not only for ML/AI aficionados but also for anyone who wants to improve their knowledge of the world around them.

The main thing to realize is that there is not just one method for all causal inference problems. None of the existing approaches can find causes without any errors in every single case (leaving a lot of opportunities for research). Some make more general claims than others, but these depend on assumptions that may not be true in reality. Instead of knowing about one method and using it diligently for every problem you have, you need a toolbox. Most methods can be adapted to fit most cases, but this will not be the easiest or most efficient approach.

Given that there is not one perfect method, possibly the most important thing is to understand the limits of each. For instance, if your inferences are based on bivariate Granger causality, understand that you are finding a sort of direct correlation, and consider the multivariate approach. Bayesian networks may be a good choice when the causal structure (the connections between variables) is already known and you want to find its parameters (probability distributions) from some data. However, if time is important for the problem, dynamic Bayesian networks or methods that find the timing of causal relationships from the data may be more appropriate. Whether your data are continuous or discrete will narrow down your options, as many methods handle one or the other (but not both). If the data include a large number of variables, or you do not need the full structure, methods for calculating causal strength are more efficient than those that infer models. However, when using these, consider whether you will need to model interactions between causes to enable prediction. Thus, what causes are used for is as important as the available data in determining which methods to use. And finally, recognize that all the choices made in collecting and preparing data affect what inferences can be made.
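To make the bivariate Granger idea concrete, here is a minimal sketch of the classic test, not taken from the book: fit an autoregression of the effect on its own lags (restricted model), then add lags of the candidate cause (unrestricted model), and compare residuals with an F-statistic. The function name, lag choice, and simulated data are all my own illustration; only numpy is assumed.

```python
import numpy as np

def granger_f(cause, effect, lag=2):
    """F-statistic for the null 'cause does not Granger-cause effect'.

    Compares a restricted AR model (effect regressed on its own lags)
    against an unrestricted model that also includes lags of the cause.
    """
    n = len(effect)
    rows = n - lag
    ones = np.ones(rows)

    def lags(series):
        # Columns are series[t-1], ..., series[t-lag] for t = lag..n-1.
        return np.column_stack([series[lag - k : n - k] for k in range(1, lag + 1)])

    def rss(X, y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return float(resid @ resid)

    y = effect[lag:]
    X_r = np.column_stack([ones, lags(effect)])               # restricted
    X_u = np.column_stack([ones, lags(effect), lags(cause)])  # unrestricted
    rss_r, rss_u = rss(X_r, y), rss(X_u, y)
    df_u = rows - X_u.shape[1]
    return ((rss_r - rss_u) / lag) / (rss_u / df_u)

# Simulated example: x drives y with a one-step delay, but not vice versa.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.3 * rng.normal()

f_xy = granger_f(x, y)  # large: lags of x sharply reduce y's residuals
f_yx = granger_f(y, x)  # small: lags of y add nothing for predicting x
```

Note how this makes Kleinberg's caveat visible: the test only asks whether one series improves prediction of another, so a hidden common driver of both x and y would produce a large F just as readily as a genuine cause.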

Wednesday, May 4, 2016

It’s not unusual for experts to totally miss the point of Ramanujan’s formulae. That happens over and over again. Everyone has four or five favorite examples when they’ll say, ‘I thought I understood this formula. I wrote papers on it, only to discover, five years later, that I’d missed the point.’

Tuesday, May 3, 2016

More and more, as I near the end of my career as a heart surgeon, my thoughts have turned to the consideration of why people should suffer. Suffering seems so cruelly prevalent in the world today. Do you know that of the 125 million children born this year, 12 million are unlikely to reach the age of one and another six million will die before the age of five? And, of the rest, many will end up as mental or physical cripples.

My gloomy thoughts probably stem from an accident I had a few years ago. One minute I was crossing the street with my wife after a lovely meal together, and the next minute a car hit me and knocked me into my wife. She was thrown into the other lane and struck by a car coming from the opposite direction.

During the next few days in the hospital I experienced not only agony and fear but also anger. I could not understand why my wife and I had to suffer. I had eleven broken ribs and a perforated lung. My wife had a badly fractured shoulder. Over and over, I asked myself, why should this happen to us? I had work to do, after all; there were patients waiting for me to operate on them. My wife had a young baby who needed her care. My father, had he still been alive, would have said: “My son, it is God’s will. That’s the way God tests you. Suffering ennobles you, makes you a better person.”

But as a doctor, I see nothing noble in a patient’s thrashing around in a sweat-soaked bed, mind clouded in agony. Nor can I see any nobility in the crying of a lonely child in a ward at night.

I had my first introduction to the suffering of children when I was a little boy. One day my father showed me a half-eaten, mouldy biscuit with two tiny tooth marks in it. And he told me about my brother, who had died several years earlier. He told me about the suffering of this child, who had been born with an abnormal heart. If he had been born today, probably someone could have corrected that heart problem, but in those days they didn’t have sophisticated heart surgery. And this mouldy biscuit was the last biscuit my brother had eaten before his death.

As a doctor, I have always found the suffering of children particularly heartbreaking, especially because of their total trust in doctors and nurses. They believe you are going to help them. If you can’t, they accept their fate. They go through mutilating surgery, and afterwards they don’t complain.

One morning, several years ago, I witnessed what I call the Grand Prix of Cape Town’s Red Cross Children’s Hospital. It opened my eyes to the fact that I was missing something in all my thinking about suffering – something basic that was full of solace for me.

What happened there that morning was that a nurse had left a breakfast trolley unattended. And very soon this breakfast trolley was commandeered by an intrepid crew of two: a driver and a mechanic. The mechanic provided motor power by galloping along behind the trolley with his head down, while the driver, seated on the lower deck, held on with one hand and steered by scraping his foot on the floor. The choice of roles was easy, because the mechanic was totally blind and the driver had only one arm.

They put on quite a show that day. Judging by the laughter and shouts of encouragement from the rest of the patients, it was much better entertainment than anything anyone puts on at the Indianapolis 500 car race. There was a grand finale of scattered plates and silverware before the nurses and ward sister caught up with them, scolded them and put them back to bed.

Let me tell you about these two. The mechanic was all of seven years old. One night, when his mother and father were drunk, his mother threw a lantern at his father, missed, and the lantern broke over the child’s head and shoulders. He suffered severe third-degree burns on the upper part of his body, and lost both his eyes. At the time of the Grand Prix, he was a walking horror, with a disfigured face and a long flap of skin hanging from the side of his neck to his body. As the wound healed around the neck, his jaw became gripped in a mass of fibrous tissue. The only way this little boy could open his mouth was to raise his head. When I stopped by to see him after the race, he said, “You know, we won.” And he was laughing.

The trolley’s driver I knew better. A few years earlier I had successfully closed a hole in his heart. He had returned to the hospital because he had a malignant tumor of the bone. A few days before the race, his shoulder and arm were amputated. There was little hope of recovery. After the Grand Prix, he proudly informed me that the race was a success. The only problem was that the trolley’s wheels were not properly oiled, but he was a good driver, and he had full confidence in the mechanic.

Suddenly, I realized that these two children had given me a profound lesson in getting on with the business of living. Because the business of living is joy in the real sense of the word, not just something for pleasure, amusement, recreation. The business of living is the celebration of being alive.

I had been looking at suffering from the wrong end. You don’t become a better person because you are suffering; but you become a better person because you have experienced suffering. We can’t appreciate light if we haven’t known darkness. Nor can we appreciate warmth if we have never suffered cold. These children showed me that it’s not what you’ve lost that’s important. What is important is what you have left.

Monday, May 2, 2016

Ask the experimenters why they experiment on animals, and the answer is: "Because the animals are like us." Ask the experimenters why it is morally okay to experiment on animals, and the answer is: "Because the animals are not like us." Animal experimentation rests on a logical contradiction.
