At Ambit, we spend a lot of time reading articles that cover a wide gamut of topics, including investment analysis, psychology, science, technology, philosophy, etc. We have been sharing our favourite reads with clients under our weekly ‘Ten Interesting Things’ product. Some of the most interesting topics covered in this week’s iteration are related to ‘Who we are and how we got here’, ‘economics as an evolutionary science’, and ‘why do we get tanned’.

Here are the ten most interesting pieces that we read in the week ended April 13, 2018.

1) Book review: Who we are and how we got here by David Reich [Source: Financial Times] DNA, the helical molecule that carries our genetic inheritance, turns out to be far more durable than organic chemists once imagined. After death, recoverable traces of DNA can remain within bones for tens of thousands of years under the right conditions. The analysis of ancient DNA and its comparison with the DNA of people living today are revealing migration patterns that had not been detected through traditional archaeology and paleontology. Findings range from interbreeding between Neanderthals and our own ancestors to the explosive expansion of the so-called Yamnaya people across Europe from the Russian steppe between 5,000 and 4,500 years ago. While many recent findings of ancient DNA research have been reported extensively in the media, David Reich, professor of genetics at Harvard, is the first leading practitioner to pull everything together into a popular book.

“Who We Are and How We Got Here” provides a marvelous synthesis of the field: the technology for purifying and decoding DNA from old bones; what the findings tell us about the origins and movements of people on every inhabited continent; and the ethical and political implications of the research. The overall conclusion is that there has been far more mobility and mixing of populations around the world, through migration and interbreeding, than paleontologists had ever imagined. Modern humans moved north out of Africa through Egypt and Sinai in several waves. A migration about 130,000 years ago took them into the Middle East and perhaps further into Asia. Then 60,000 to 50,000 years ago came a push both westward into Europe and eastward through Asia into Australia. By 35,000 years ago, Homo sapiens was the only human species left on Earth. The genetic evidence proves that on each migration modern humans interbred with the people they encountered — not only Neanderthals but also Denisovans, the mysterious group living in Asia who are known mainly through DNA decoded from a finger bone discovered in a Siberian cave.

Genome comparison also implies interbreeding with another ancient human population living in Asia for which no direct fossil evidence has yet been discovered. In addition, a very ancient relict species of human dwarfs, the famous “hobbits” of Flores, lived in Indonesia until a few tens of thousands of years ago. “These five groups of humans and probably more groups still undiscovered who lived at that time were each separated by hundreds of thousands of years of evolution… greater than the separation times of the most distantly related human lineages today,” Reich writes.

Reich challenges the conventional view of one-way traffic out of Africa – the original cradle of humanity. He proposes an alternative scenario: the first migration of hominins (Homo erectus) into Eurasia 1.8m years ago led to substantial evolution of archaic humans there too. Some of them then moved back into north Africa, where they became the primary founders of the population that later evolved into modern humans. While this is unproven, Reich writes, “the evidence for many lineages and admixtures should have the effect of shaking our confidence in what to many people is now an unquestioned assumption that Africa has been the epicentre of all major events in human evolution.” The biggest movement of people in European prehistory was the “tide from the East” that swept away a relatively settled farming culture from 2500 BC. People associated with the pastoralist Yamnaya culture that arose in the steppe of western Asia surged across the continent and into Britain, where they had largely replaced the previous megalith-building population by 2000 BC.

The analysis of DNA from ancient graves since 2015 demonstrates what Reich calls “a magnitude of population replacement that no modern archaeologist, even the most ardent supporters of migrations, had dared to propose”. Some discoveries made through ancient DNA have unwelcome political resonance, which Reich is not shy to discuss in the book. However, he dismisses fears that new DNA evidence about differences between peoples represents a return of racism in genetic clothing. On the contrary, the overall finding of ancient DNA research is the previously unsuspected degree of mixing throughout human history, which renders any old-fashioned idea of race, let alone racial purity, absurd.

2) How babies learn and why robots can’t compete [Source: The Guardian] In this piece the author, Alex Beard, explains why machines can’t learn like babies. The main reason is that robots are interactive without being adaptive. The author discusses the case of Deb Roy, an AI and robotics expert, and Rupal Patel, an eminent speech and language specialist. For years, Deb and Rupal had been planning to amass the most extensive home-video collection ever. They put up 25 devices in total throughout the house – 14 microphones and 11 fish-eye cameras – part of a system primed to launch on their return from hospital, intended to record their newborn’s every move. Deb had painstakingly taught Toco, a robot, to distinguish words and concepts within the maelstrom of everyday speech. Asked to pick out the red ball among a range of physical items, Toco could do it. Rupal ran an infant lab in Toronto, and Deb flew up there to see what he could learn. Observing the mothers and babies at play, he realised he’d been teaching Toco badly. “I hadn’t structured my learning algorithm correctly,” he explained to Wired magazine in 2007.

His robot had been searching through every phoneme it had ever heard when it was learning a new object, but Deb tweaked its algorithm to give extra weight to its most recent experiences, and began to feed it audio from Rupal’s baby lab recordings. Suddenly, Toco began to build a basic vocabulary at a rate never seen before in AI research. His dream of “a robot that can learn by listening and seeing objects” felt closer than ever. But it needed to feed on recordings, and these were hard to find. Before pressing record, Deb and Rupal agreed some ground rules. The recordings would be available only to their most trusted inner circle of researchers. If at any time they felt uncomfortable with the filming, they would junk the footage. When privacy was required, the system could be temporarily shut down. It was a leap of faith, but they agreed it was worth it. Their experiment, which they named the Human Speechome Project, had the power to unlock new insight into the workings of the infant mind.

A professor of early-childhood development at Temple University in Pennsylvania, Kathy Hirsh-Pasek, had written that “just as the fast food industry fills us with empty calories, what we call the ‘learning industry’ has convinced many among us that the memorisation of content is all that is needed for learning success and joyful lives”. She had also written an influential book that laid out her reservations about the word-rush: Einstein Never Used Flashcards: How Children Really Learn and Why They Need to Play More and Memorize Less. Until recently, scientists had tended to think of infants as irrational, illogical and egocentric. In “The Principles of Psychology” (1890), William James had described babies’ experience of sensory overload: “The baby, assailed by eyes, ears, nose, skin, and entrails at once, feels it all as one great blooming, buzzing confusion.” This understanding had contributed to a mechanistic view of learning, and the idea that the sheer repetition of words was what mattered most. But it wasn’t true.

Even in utero, babies are learning. At that stage, they pick up sounds. One-hour-olds can distinguish their mother’s voice from another person’s. They arrive in the world with a brain primed to learn through sensory stimulation. We are natural-born explorers, readymade for scientific inquiry. We have to understand this if we are to realize our learning potential. “We arrive ready to interact with other humans and our culture,” said Hirsh-Pasek. The real genius of human babies is not simply that they learn from the environment – other animals can do that. Human babies can understand the people around them and, specifically, interpret their intentions. Some of Hirsh-Pasek’s experiments are aimed at closing developmental gaps between rich and poor kids. Others cover topics such as language development and spatial awareness, and all use technology in different ways. “What the machine can’t do is be a partner,” she said.

Looking back, the Human Speechome Project seemed a quirk of turn-of-the-millennium enthusiasm about artificial intelligence. In all, they had captured 90,000 hours of video and 140,000 hours of audio. The 200 terabytes of data covered 85% of the first three years of their son’s life (and 18 months of his little sister’s). Since then, the footage has been gathering dust. “I still have the whole collection,” Deb said. “I’m waiting for his wedding day, just to bore the hell out of everyone.” Deb gave up building robots that would compete with humans and instead turned his attention to the augmentation of human learning. What had changed his mind was the process of actually raising a child. “I guess, putting on my AI hat, it was a humbling lesson,” he continued. Deb recently started working with Hirsh-Pasek, following her insight that machines might augment learning between humans, but would never replace it. He had discovered that human learning was communal and interactive. For a robot, the acquisition of language was abstract and formulaic.

3) Big banks on notice as tech groups ramp up pressure [Source: Financial Times] Technology companies are set to take a big chunk of customers from banks, as they intensify their challenge to traditional lenders across a range of mass-market financial services. Carlo Messina, chief executive of Intesa Sanpaolo, Italy’s biggest bank by market capitalization, expects to lose market share to digitally focused rivals in many mainstream areas such as payments. “It is clear that there could be a threat and in our plan we have already embedded a portion of revenues coming from payments that can be reduced.” But he added that most older, wealthy Italian clients would be reluctant to entrust their money to tech groups — prompting the bank to focus on areas such as insurance and asset management. Research predicts that North American banks could lose more than a third of revenues from traditional savings, lending and investment activities to tech-based rivals — including those backed by the banks themselves.

The banking market in the US and Canada will be more profoundly disrupted than elsewhere in the world by the new entrants and emerging technologies. By 2025, North American banks could lose 34% of revenue from payments, investments, personal lending, SME lending and business lending to “disrupters” including fintechs, technology companies and the banks’ own start-ups. The only part of the North American market that is expected to escape this dramatic upheaval is credit card lending, with disrupters’ share at 17% by 2025. Peer-to-peer lenders such as Funding Circle and Lending Club have already taken business from banks in the global loans market, while payments apps like TransferMate and Revolut have eroded banks’ foreign exchange commissions. Banks in Europe are being challenged by intensifying competition from tech-savvy rivals, spurred by new “open banking” regulation forcing lenders to give them access to the accounts of clients who authorize it.

Banks have tried to tackle the encroachment of disrupters by setting up innovation labs themselves, taking stakes in start-ups and partnering with tech groups: JPMorgan is in talks about helping Amazon launch bank accounts, for example.

4) Imagine economics as an evolutionary science [Source: evonomics.com] In 1898, the great American institutional economist Thorstein Veblen published an article entitled: “Why is economics not an evolutionary science?” By “evolutionary”, Veblen explained that he meant an economics that took full account of the impact of Darwinism on the social and behavioral sciences. More than a century has passed, and since 1898 there has been a notable increase in the use of evolutionary ideas in economics. Important examples include the large literature inspired by Richard Nelson and Sidney Winter’s path-breaking 1982 book, “An Evolutionary Theory of Economic Change”, and the vibrant development of evolutionary game theory. But what might the future development of an evolutionary perspective mean? In his 2004 book entitled The Evolution of Institutional Economics, Geoffrey Hodgson, research professor at Hertfordshire Business School, outlined what he called “the principle of evolutionary explanation”. This is the idea that any behavioural assumption, including in the social sciences, must be capable of causal explanation in evolutionary terms, or at least be consistent with a scientific understanding of human evolution. This principle is found in Veblen’s work.

Veblen made a radical point, which is still difficult for many economists to swallow. He argued that the idea of the utility-maximizing individual is inconsistent with the principle of evolutionary explanation. This point remains pertinent, because the idea of a fixed utility function, even if it is one that has “social” or “altruistic” preferences, lacks a clear evolutionary and causal explanation of its origins. It is simply assumed. Some economists have tried to show why humans evolved to maximize their utility. But these claims are rather empty, because all possible exhibited behavior can be made consistent with some utility function. Utility functions are summaries of observed behavior, rather than true causal explanations of it. Fitting a utility function to data is not the same as providing an evolutionary and causal explanation. We need to explain the evolution of the particular traits and dispositions that make us human. Yet it is widely argued by economists that other species are utility maximizers as well. Modern behavioral economics, however, relaxes the assumption of strict utility maximization in pursuit of a “more realistic” theory.

Veblen took the view that humans were driven by habit. Habits are guided by both inherited propensities, called instincts, and existing institutions. A habit is a learned capacity to act or think in a particular way. Instead of beliefs being prime movers, they too are based on habits. As the pragmatist philosopher John Dewey argued eloquently in his 1922 book “Human Nature and Conduct”, deliberate choices occur when our habitual propensities clash and we are forced to make a decision between them. Generally, habit drives reason and choice, rather than the other way round. This way of putting instinct first, habit second, and reason third is consistent with our understanding of human evolution. It is also consistent with the way in which they develop in each human individual, from infanthood to adulthood. This evolutionary perspective on human agency is very different from the mind-first, or beliefs-first, perspectives that still dominate economics and much of social science.

The evolutionary perspective on human agency is important for another reason, noted by Veblen, stressed by the dissident British economist John A. Hobson, and researched today by leading scholars. Consistent with Darwin’s account in The Descent of Man, these writers argue that humans have developed propensities for moral judgment. Moral systems evolve in societies because they enhance group cohesion and survival. Humans are both selfish and capable of acquiring and heeding moral values. Sometimes these two come into conflict – we face dilemmas between self-interested behavior and morally “doing the right thing”. Utility-maximizing models in mainstream economics have been adapted to take on “altruistic” behavior and “social preferences”. But even if the individual is “altruistic” in these models, he or she is still maximizing his or her own utility. The individual is always “selfish” in that sense. As moral philosophers such as Richard Joyce argue, utility-maximizing models have difficulty accommodating genuine altruism or morality.

Veblen saw a further extension of the evolutionary perspective in economics and the social sciences more generally. While there was competition and cooperation between individuals in the struggle for survival, there were also social processes that led to some institutions being more successful than others: a “natural selection of institutions”. This insight is important because it opens up the possibility of a dynamic theory of social change, involving both the selection and development of institutions, entailing human agency but never entirely by design. The key point here is that the implications of evolutionary thinking for economics and the social sciences have only partially been explored. Economics, in particular, is not yet an evolutionary science. To take economics forward would require a widening perspective, where ideas and approaches from several other disciplines were taken into account.

5) Friends of Putin: Russia’s western elite networks [Source: Financial Times] In early 1961, the “Portland spy ring” was uncovered in Britain. Its agents included a middle-aged American couple living in a bungalow in suburban Ruislip, who posed as antiquarian booksellers while sending submarine secrets to Russia. That April, double agent George Blake was given 42 years in jail, the longest sentence in British history at the time. By 1963 his fellow traitor Kim Philby was on a boat to Moscow, and Brits were obsessing over the Profumo affair (a sex scandal masquerading as a security scandal). Prime Minister Harold Macmillan, who hated each of these embarrassments, and referred disparagingly to the “so-called security service”, eventually resigned.

Simon Kuper, the author of this piece, is writing a book on cold war spying and sees parallels to the 1960s today. There’s French far-right leader Marine Le Pen, who has been largely funded from Russia, dropping in on Vladimir Putin. “Moscow will help Le Pen win the election,” boasted the Kremlin-friendly broadcaster Life News, before deleting the tweet minutes later. Meanwhile in Washington, the FBI is investigating whether Moscow helped Donald Trump win. According to Kuper, cold war spying was child’s play by comparison, but nonetheless provides insights into Russian influence today.

Brits and Americans in the 1950s and 1960s worried too much about Reds under the bed. The 1940s atomic spies probably did help speed the USSR’s building of the bomb, but most subsequent western traitors were small fry. The information they sent to Moscow was usually piddling, distrusted or ignored. And the Kremlin had few western political friends other than no-hope national Communist parties. Each side in the 1950s was fumbling in the fog, unsure whether the other was planning an attack. That made it tempting to strike first. Today, by contrast, hacking gives Moscow ample information. Russia has upgraded lately from information-gathering to influence-gathering. This is facilitated by the Russian presence in international business — unimaginable during the cold war. Today, behind every great Russian fortune there is a great friendship with the Kremlin. Trump and Le Pen aren’t outliers but part of a continuum of Russian-western elite networks. Trump in New York real estate, his secretary of state Rex Tillerson in oil, and his former campaign manager Paul Manafort in political consulting come from industries where Russian money was normal.

For years western politicians took Russian money with impunity because few voters cared. People worried about a supposed “fifth column” of Muslims at the bottom of society, but not about a fifth column of friends of Putin at the top. That explains why Trump’s associates barely bothered to hide meetings with Russian officials. After “Russiagate” the public has started paying attention, but possibly too late. If Le Pen (who denies Russia invaded Crimea) becomes French president in May, the Kremlin will have friends running two of the west’s three main military powers.

The biggest damage spies did to western countries wasn’t by stealing secrets. It was by getting caught. Each time a British official was exposed as a spy — almost a ritual in the 1946-1963 period — Britons’ trust in their society crumbled a bit more. Security officials eyed each other and wondered, “Are you a KGB agent?” Paranoid dysfunction infected the state. In the US, Joseph McCarthy’s 1950s spy-hunting did the Soviets’ work for them by creating national paranoia. According to Kuper, that is happening again today, but now the suspects are more senior, and Russian paranoia creation more deliberate. Russian hacking of the US elections was “unusually loud”, says the FBI’s director James Comey, presumably because Moscow’s aim was “freaking people out”. It’s working. Six months ago, populist voters thought the elites were rotten. Today, as rightist leaders cosy up to Putin, left-of-centre voters are coming to the same conclusion.

6) Why driverless cars may mean jams tomorrow [Source: The Economist] The most distractingly unrealistic feature of most science fiction—by some margin—is how the great soaring cities of the future never seem to struggle with traffic. Whatever dystopias lie ahead, futurists seem confident we can sort out congestion. Hope that technology will fix traffic springs eternal, but history suggests otherwise. Transport innovation, from railways to cars, reshaped cities and drove economic advance. But it also brought crowded commutes. Now, as tech firms and carmakers aim to roll out fleets of driverless cars, it is worth asking: might this time be different?

Alas, artificial intelligence (AI) is unlikely to succeed where steel rails and internal-combustion engines failed. Congestion is a near-inevitable side-effect of urban growth. Cities exist because being near to other people brings enormous advantages. Proximity allows people to find friends, mates and business partners, to discuss ideas and generate new ones, and to trade. Regrettably, clumping leads to crowding: the more people an area houses, the greater the competition for its scarce resources, from seats at a hot new restaurant to space on public roadways. Each new arrival enhances a city’s magic but also adds to congestion. Cities grow until costs outstrip benefits. Mass-transit railways and highways allowed big cities to get bigger. But their congestion-easing benefits inevitably proved temporary. In a paper published in 2011, Gilles Duranton, of the University of Pennsylvania, and Matthew Turner, of Brown University, identified a “fundamental law of road congestion”: namely, that building more highways does not alleviate congestion. Rather, it attracts more residents, leads to more driving by existing residents and boosts transport-intensive economic activity, until roads are once again crammed.

Driverless cars should cut traffic, other things being equal. Lower accident rates will mean fewer crash-related hold-ups, while AIs that can pilot cars more closely together will boost road capacity. But reductions in traffic will make living in currently congested areas more attractive and hence more populous. Miles travelled per person might also rise, since self-driving technology frees passengers to use travel time for work or sleep. And just as new highways prompt a rise in transport-intensive business, driverless vehicles could generate lots of new road-using activity. Where now a worker might pop into the coffee shop before going to work, for example, a latte might soon be delivered in a driverless vehicle. The technology of driverless cars may make us safer and more productive, but not necessarily less traffic-bound.

It might, however, improve traffic by making it easier, politically, to impose tolls on roads. Jams occur because a scarce resource, the road, is underpriced, so more people drive than it can accommodate. But tolls could favour use of the roadway by those who value it most. Some places already use such charges—London and Singapore are examples—but they are rarely popular. Some drivers balk at paying for what they once got for nothing, and others are uneasy about the tracking of private vehicles that efficient pricing requires. People however seem not to object to paying by the mile when they are being driven—by taxis and services like Uber and Lyft—and the driverless programmes now being tested by Waymo and GM follow this model. If a driverless world is one in which people generally buy rides rather than cars, then not only might fewer unnecessary journeys be made, but also political resistance to road-pricing could ease, and congestion with it.

That might lead to a different kind of dystopia: one in which fast, functional transport is available only to those who can pay. Luckily, history also suggests a solution: mass transit. Ride-hailing services might introduce multi-passenger vehicles and split travel costs across riders. Or, as Daniel Rauch and David Schleicher of Yale University argue, governments might instead co-opt the new transport ecosystem for their own purposes. They might subsidise the travel of low-income workers, or take over such systems entirely. Municipal networks of driverless cars might prove less efficient than private ones, particularly if cars are rationed on a first-come-first-served basis rather than by price. But in the past city governments have felt that providing equal-opportunity access to centres of economic activity was worth the cost.

7) Cambridge Analytica and Facebook – is anybody actually liable under Indian law? [Source: The Wire] The controversy around data harvesting by Cambridge Analytica from Facebook users refuses to die. To begin with, Alexander Kogan, a university researcher, had developed an application called “thisisyourdigitallife” offering a personality prediction for users. The application was hosted on Facebook and was downloaded by 270,000 people who took the personality test. In the process, the application scooped up not only the data of the 270,000 people who took the tests but also the data of their friends. In all, it appears that the exercise resulted in the data of 50 million people being scooped up by the app. This data was allegedly then sold/licensed by Kogan to Cambridge Analytica and Eunoia Technologies. In the Indian context, in the absence of a statutory data protection law, the only legal liability that flows out of the law is from the contractual agreements between the various parties. There are three contracts here. The first is between Kogan and the Facebook users. The second is between Facebook and its users. The third is between Facebook and Kogan.

As per Facebook’s statement, available on its website, Kogan “gained access to this information in a legitimate way and through the proper channels that governed all developers on Facebook at that time”. Simply put, Kogan informed users that their information and that of their friends would be collected if they used the application. The statement put out by Facebook, however, also states that Kogan violated Facebook’s platform policies by passing on the collected data to Cambridge Analytica and Eunoia Technologies. It is not clear if Facebook’s policies were legally binding on Kogan. Nor is it clear whether Kogan violated his contractual arrangement with users when he gave this information to third parties. If Kogan informed users that there was a possibility of this data being transferred into the hands of others, then Kogan is simply not liable under the law. The question that then arises is whether Facebook violated the terms of its contract with its users and whether it can be held liable for Kogan’s activities.

It is very likely that Facebook in 2014 had a clause informing users of the risks involved in using external apps and absolving itself of any liability. To further complicate matters, it is possible that Facebook is simply an intermediary that facilitated transactions between app developers and users. If Facebook is deemed to be a platform facilitating automatic transactions between others, it follows that the company cannot be held liable for violations of the law by the people using its platform. This is because Section 79 of the Information Technology Act, as amended in 2008, created a safe harbor for all intermediaries by ensuring their immunity for acts committed by their users. This immunity from legal liability is not absolute and depends in large part on the level of knowledge that Facebook had with regard to the activities of its users. The law is structured in such a way that the less knowledge Facebook has of its users’ activities, the less it is liable for those activities, provided that it fulfills a minimum ‘due diligence’ requirement under Section 79.

Also, there has been some talk that Facebook can perhaps be held liable under Section 43A of the Information Technology Act, or that a future data protection law will empower Indian authorities to take action against Facebook. But can India, via a future data protection law, force a foreign entity like Facebook Inc., which has no physical presence on Indian soil, to change the ‘choice of law’ governing its terms of use with citizens of the said jurisdiction? Parliament can certainly try to do so by tying a data protection law to other financial legislation which Facebook and Google cannot avoid. But policymakers should be aware that any such unilateral attempt will invite reciprocal action from the US Trade Representative. Ideally, in situations involving contracts spread over multiple jurisdictions, the only credible and workable solution is an international treaty forcing all countries to adopt similar rules based on principles of reciprocity. The other solution is to threaten a blockade of Facebook until it agrees to change the governing law of its contracts to Indian law. That is, however, not a likely option for the Indian Government given the popularity of the platform in India.

8) The economics of privacy in the digital world [Source: Livemint] We just can’t imagine our lives without Facebook, Google and other social media sites. But the world is now waking up to privacy concerns as these sites encroach into hitherto intimate areas of our lives. Information capture sits at the heart of important parts of the digital economy. The transaction in online services is radically different from what we usually encounter: we voluntarily pay in personal data rather than cash. This unique contract creates several complications as far as privacy goes. Digital exchange—personal information for free access to platforms—often means that privacy comes at a cost. Hal Varian, now chief economist at Google, argued in 1996 that customers are better off sharing information about themselves with marketers because it makes life easier. Junk email or unsolicited phone calls that are an annoyance to consumers become less so when the company can target consumers better through data analysis. However, the potential for personal data to be abused—for discrimination, manipulation and censorship—is a huge cause for concern.

So why do people have to share their data with the digital behemoths? The simple answer is that they choose to. In some cases, consent to collect information is presumed, and the degree of privacy a user enjoys is a function of self-help: you can disable some surveillance if you can figure out how. In other cases, customers explicitly agree to privacy policies that essentially define the control they don’t have. Whether or not to consent is a complicated question, but users succumb to instant gratification, undervaluing their privacy in the process. The potential for an individual’s personal data to be used against him is the defining feature of the contemporary privacy debate. The puzzle starts with how companies get an ongoing right to private data in the first place: how is it that the default rule gives the company rights over user data, with the user able to opt out, instead of the user owning the data and the company soliciting permission to use it by opting in?

Economists such as Richard Posner have defended this arrangement on utilitarian grounds. Since businesses value the data more, imposing onerous “opt-in” rules creates a significant transaction cost. This could jeopardize the ability of digital companies to provide services and significantly degrade the user experience. The efficient solution, on this view, is to award initial ownership of the data to the business, but let users opt out if they want to. The experience of the past decade or so has shown that this argument has several flaws. For one, consent is meaningless unless it is informed consent, and the structure of digital services and apps today means that for the average user it often isn’t. Admittedly, it is debatable to what extent fully informed consent would change user choices, but a granular opt-out model instead of an opt-in model would provide greater security regardless. Moreover, if you sell your car, the buyer cannot legitimately influence your life after the transaction concludes. Personal data, by contrast, can be used to manipulate people in ways they do not recognize at the time of sharing it. Current systems, designed to facilitate a one-time transfer of personal information to the digital company without the individual’s subsequent involvement in decisions about the use of the collected data, do not take this into account.

In this context, the framing of intellectual property rights is a good example of an encumbrance to trade that works for everyone: it provides the necessary incentives to producers, and balances progress with the public distribution of intellectual goods. The same technologies that enable distributed rights management could enable privacy protections that travel with the data. Traditionally, private property has been the main barrier to privacy invasion. As monitoring and recording capabilities become embedded in our surroundings, there is a need to redefine private spaces that will not be infringed. Governments and businesses should start by adopting privacy-by-design principles in their data-accumulation practices. Governments and supreme courts all over the world will have to rethink their positions in order to secure citizens’ privacy and control over their data, and the meaning of words such as “property” and “consent” in relation to personal-data sharing. The drive to accumulate data alone cannot dictate the public debate on privacy.

9) Obituary: Linda Brown – Campaigner for equality in education, 1943-2018 [Source: Financial Times] In 1951, Linda Brown was denied admission to Sumner Elementary School in Kansas because of the colour of her skin. She had to attend an all-black elementary school two miles across town, a journey that took her more than an hour. Brown later became the face of the watershed civil rights case, Brown v Board of Education, that ended legal segregation in schools. Encouraged by the National Association for the Advancement of Colored People, her father, Oliver, filed a class-action lawsuit along with 13 other parents against the state of Kansas. They claimed that the “separate but equal” doctrine, enshrined in US case law in 1896 and permitting racial segregation provided that facilities were equivalent, was unconstitutional. The Board of Education’s defence argued that separation in elementary schools (junior and high schools were not segregated in Kansas) prepared black children for later life. The Kansas District Court ruled in the board’s favour.

In 1952, the case, along with others from Delaware, Washington DC, South Carolina and Virginia, reached the Supreme Court. It took the court, guided by the newly instated pro-reform Chief Justice Earl Warren, two more years to rule, in a unanimous decision, against racial separation in schools. “In the field of public education the doctrine of ‘separate but equal’ has no place,” Warren’s ruling noted. The precedent for other institutions had been set; one year later, Rosa Parks refused to give up her seat on an Alabama bus. By the time she heard the ruling on 17 May 1954, Brown was attending a fully integrated junior high school. In a PBS documentary she recalled coming home to find her mother overjoyed. “When she shared the news with me, I felt a joy too because I felt that my sisters wouldn’t have to walk so far to school the next fall,” she said.

Brown, who was born on February 20, 1943, grew up to become an educational consultant and put both her children through Topeka’s schools. Together, the sisters founded The Brown Foundation in 1988 to steward the legacy of Brown v Board of Education. The foundation funds scholarships for minority students and research into equal educational opportunity. Brown regularly spoke about the issue of segregation in schools. This autumn, an ebook of essays written by plaintiffs in the case will be published. Ms Brown Henderson, one of the sisters, says that her family did not realise the significance of the case at the time. “History doesn’t become history until 50 years have passed so it was hard to know,” she says.

It has now been 64 years since the ruling, yet a segregation of sorts remains in American schools; it is now less a legal problem than a social one. A report by the US Government Accountability Office found that, between 2001 and 2014, the number of schools considered “intensively segregated”, with more than 90% low-income pupils or students of colour, more than doubled. Brown’s sister says the problem is endemic but that she will continue the fight: “Without good education, you can’t expect to be a good citizen.”

10) Why do we get tanned? [Source: Scienceabc.com] Summer is here, and so is the associated fear of getting tanned as we step out in the sun. But is tanning really bad? And why does it happen? The short answer is that a tan is the body’s way of signalling that damage is being done to our skin cells, as well as a way of protecting ourselves from further harm. As for the long explanation: during your relaxing day at the beach, you probably don’t feel like you’re being bombarded by radiation, but that is precisely what sunlight is! In addition to visible light and heat, sunlight carries three types of ultraviolet radiation: UVA, UVB, and UVC. We can essentially ignore UVC, as it is absorbed by the atmosphere and never reaches the surface of the planet (or our skin). UVA and UVB radiation, however, do reach our sun-exposed skin, with differing effects.

When UVA radiation strikes our skin, it immediately engages the melanocytes (the pigment cells in our skin), causing a release of the melanin they have already stored, which produces what we know as a “tan”. As UVA radiation penetrates deeper into the skin, it can damage skin cells in the epidermis, leading to various types of skin cancer. UVB radiation is slightly different: it penetrates only the top few layers of the skin and is primarily responsible for sunburns rather than sun tans. This makes UVB less of a danger for deep-layer skin cancers, but it can still contribute to melanoma and those uncomfortable sunburns. Radiation of any kind penetrating the skin can damage the DNA in affected cells, which is why humans evolved melanin: to repair and protect the body from that damage. When UVA radiation penetrates the skin, it darkens the existing melanin through oxidative stress, but does not stimulate the production of more. This colour change is not long-lasting, and a “tan” from UVA rays will usually fade within a few days.

UVB radiation is the key component in the second stage of the tanning process. The damage caused by UVB rays stimulates melanogenesis, the body’s natural response to radiation: the production of more melanin. This type of tan lasts much longer and actually protects your skin from further radiation damage, as the melanin produced absorbs that radiation. UVB radiation can typically be blocked by sunscreen, whereas UVA rays are more difficult to protect against; fortunately, natural and synthetic fibres (clothing) have been shown to block the majority of UVA rays. The melanin produced and released by melanocytes comes in two pigment forms: eumelanin (brown) and phaeomelanin (yellow and red). Depending on a combination of your hair colour, skin tone, genetics, and previous exposure to sunlight, the production levels of these two pigments may differ. For example, a fair-skinned Irishman with red hair may produce less eumelanin than phaeomelanin, making it almost impossible for him to get a “tan” in the traditional sense. On the other hand, a Mediterranean woman with dark hair and an olive complexion may tan very easily, as her melanocytes produce more eumelanin than phaeomelanin.

Those with naturally darker skin are particularly fortunate in this respect, as their melanin production is almost continuous, ensuring a consistently darkened skin tone and much more protection from radiation. For this reason, the occurrence of skin cancer in those populations is much lower. If you want a truly excellent tan, short bursts of exposure over the course of 5-7 days are recommended, as this activates the melanocytes (through UVB rays) and starts building up a protective layer of melanin. This will not only protect you from additional DNA damage and lower your chances of skin cancer, but also give you that sexy, toasted-in-the-sun look you’ve been dreaming of all winter!