Technology allows us a "read later" mentality. We don't seem to want it.


This morning, the save-for-later service Pocket (formerly Read It Later) posted some highlights from a year's worth of user data. Among the stats: Users -- who now number 7.4 million -- saved 240 million pieces of digital content over the year (compared to 170 million in the span between the service's launch in 2007 and 2011). And they save that content at a rate of 10.4 items per second.

Perhaps you are one of those users, and perhaps your mouse is hovering over a save-for-later button right at this moment. Before you click it, though, let me just say one thing: Those numbers are remarkable. And not just because they suggest the growth of the save-for-later mentality, but also because that mentality has the potential to shift, just a little bit, the way we relate to all the stuff -- the videos, the essays, the listicles, the treatises, the cats -- that crosses our paths every day online. A defining psychic feature of the Internet is its immediacy, its urgency, its implicit demands on our time. Hereisthisthingyoushouldseerightnow. Alsothatthingisacatvideo.

That one feature, Internet as scheduler, shapes the web as a social space. Because the same tendency that makes 20 minutes a long time to take to reply to an email, and two minutes a long time to reply to a tweet, also means that, generally, the content that lives on it has an extraordinarily short shelf life. And that's true not just of "content" as in news stories, the stuff that loses most of its value when the term "new" no longer applies to it. It's also true of content as a more general category: long stories, deeply reported narratives, richly researched essays -- stuff that aims to endure. The stock of the Internet.

The sharing economy could bring about the end of capitalism: that’s the provocative claim made by economic journalist Paul Mason, among others. But my ongoing research indicates that there are many possible futures for the sharing economy: it could transform the world of work as we know it – or it could gradually fade from the public eye.

The exact nature and impacts of the sharing economy are still disputed. The organisers of social movements, entrepreneurs, established businesses and politicians all have very different ideas of what the sharing economy is, and what it should become. For example, Share The World's Resources (a not-for-profit civil society organisation) talks about building a sharing economy based on "shared" public services, which are funded by taxation.

Meanwhile, the UK government speaks of building a sharing economy based on online peer-to-peer platforms, which enable citizens to become micro-entrepreneurs by renting out assets such as homes, driveways and pets. So it seems that a diverse range of actors can see their own hopes, fears and values reflected in the sharing economy. But one thing is for sure: online platforms such as Airbnb and Uber have grown from Silicon Valley startups to global corporations, and this trend will probably continue.

Research on the economic, environmental and social impacts of these enterprises is scarce. As a result, there is very little evidence to help us understand how the sharing economy will develop. So I analysed approximately 250 sharing-economy-related articles and reports, which contained contrasting views from advocates and critics. Based on this evidence, I mapped out four possible paths for the sharing economy: and only one of them predicts that the sharing economy will bring capitalism to its knees, as Mason holds.

Researchers from the University of California, San Diego (UCSD) are taking inspiration from nature in the search for new materials that could one day be used to create more effective body armor. The study, which was supported by the US Air Force, focuses on the unique structure and strength of the hexagonally-scaled shell of the boxfish.

The idea of looking to nature for inspiration when it comes to next-gen armor isn't anything new. We've seen numerous studies over the last few years that focus on that same idea, including efforts to copy the structure of overlapping fish scales and even the properties of sea sponges to develop strong yet flexible armor.

Following in the footsteps of that research, the UCSD team decided to focus on the boxfish, an animal that's been able to survive for some 35 million years in an environment dominated by aggressive fish that are often much larger than it.

To unravel the secrets of the ancient creature, the team used electron microscopy to study its carapace and took cross sections of the fish, using micro-computed tomography (micro-CT) to characterize its structure.

Finding Earth 2.0 has profound implications for and impact upon technology, knowledge and systems across the spectrum of human skills, capacities, experiences and perspectives on the future.

100 Year Starship Public Symposium 2015 explores what capabilities and systems — scientific, technical and social — will be needed over the next 5–25 years not merely to suggest or catalog Earth-analogue candidate exoplanets, but to identify at least one definitive Earth 2.0 — and to consider how such a discovery will itself impact our world and space exploration.

In the last few decades, lasers have become an important part of our lives, with applications ranging from laser pointers and CD players to medical and research uses. Lasers typically have a very well-defined direction of propagation and very narrow and well-defined emission color. We usually imagine a laser as an electrical device we can hold in our hands or as a big box in the middle of a research laboratory.

Fluorescent dyes have also become commonplace, routinely used in research and diagnostics to identify specific cell and tissue types. Illuminating a fluorescent dye makes it emit light with a distinctive color. The color and intensity are used as a measure, for example, of concentrations of various chemical substances such as DNA and proteins, or to tag cells. The intrinsic disadvantage of fluorescent dyes is that only a few tens of different colors can be distinguished.

In a combination of the two technologies, researchers know that if a dye is placed in an optical cavity – a device that confines light, such as a pair of facing mirrors – they can create a laser.

Can’t remember the names of the two elements that scientist Marie Curie discovered? Or who won the 1945 UK general election? Or how far the sun is from the earth? Ask Google.

Constant access to an abundance of online information at the click of a mouse or tap of a smartphone has radically reshaped how we socialise, inform ourselves of the world around us and organise our lives. If all facts can be summoned instantly by looking online, what’s the point of spending years learning them at school and university? In the future, it might be that once young people have mastered the basics of how to read and write, they undertake their entire education merely through accessing the internet via search engines such as Google, as and when they want to know something.

Some educational theorists have argued that you can replace teachers, classrooms, textbooks and lectures by simply leaving students to their own devices to search and collect information about a particular topic online. Such ideas have called into question the value of a traditional system of education, one in which teachers simply impart knowledge to students. Of course, others have warned against the dangers of this kind of thinking and the importance of the teacher and human contact when it comes to learning.

Such debate about the place and purpose of online searching in learning and assessments is not new. But rather than thinking of ways to prevent students from cheating or plagiarising in their assessed pieces of work, maybe our obsession with the “authenticity” of their coursework or assessment is missing another important educational point.

Imagine it’s the 1950s and you’re in charge of one of the world’s first electronic computers. A company approaches you and says: “We have 10 million words of French text that we’d like to translate into English. We could hire translators, but is there some way your computer could do the translation automatically?”

At this time, computers are still a novelty, and no one has ever done automated translation. But you decide to attempt it. You write a program that examines each sentence and tries to understand the grammatical structure. It looks for verbs, the nouns that go with the verbs, the adjectives modifying nouns, and so on. With the grammatical structure understood, your program converts the sentence structure into English and uses a French-English dictionary to translate individual words.

For several decades, most computer translation systems used ideas along these lines — long lists of rules expressing linguistic structure. But in the late 1980s, a team from IBM’s Thomas J. Watson Research Center in Yorktown Heights, N.Y., tried a radically different approach. They threw out almost everything we know about language — all the rules about verb tenses and noun placement — and instead created a statistical model.
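A much cruder cousin of that statistical idea can be sketched in a few lines: estimate word translations purely from co-occurrence counts in a parallel corpus, with no grammar rules at all. This is a toy illustration with invented data, not IBM's actual method, which used expectation-maximization over word alignments rather than raw counts:

```python
from collections import Counter, defaultdict

# Toy parallel corpus (invented example data, for illustration only).
corpus = [
    ("le chat noir", "the black cat"),
    ("le chien", "the dog"),
    ("un chat", "a cat"),
]

# Crude co-occurrence counts: every French word in a sentence is counted
# alongside every English word in that sentence's translation.
cooc = defaultdict(Counter)
for fr, en in corpus:
    for f in fr.split():
        for e in en.split():
            cooc[f][e] += 1

def translate_word(f):
    """Pick the English word that co-occurs most often with f."""
    return cooc[f].most_common(1)[0][0]

print(translate_word("chat"))  # "cat" co-occurs with "chat" in two pairs
print(translate_word("le"))    # "the" co-occurs with "le" in two pairs
```

Even this crude heuristic picks out "cat" and "the" correctly, because the statistics of the corpus, not any encoded grammar, carry the signal. IBM's insight was that far more sophisticated versions of this counting could beat hand-written rules.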

The research: Yale doctoral candidate Matthew Fisher and his colleagues Mariel Goddu and Frank Keil asked people a series of questions that seemed answerable but were actually difficult. The questions concerned things people assume they know but actually don’t—such as why there are phases of the moon and how glass is made. Some people were allowed to look up the answers on the internet, while others were not. Then the researchers asked a second set of questions on unrelated topics. In comparison with the other subjects, the people who’d been allowed to do online searches vastly overestimated their ability to answer the new questions correctly.

Deep into the Arctic Circle in the far north of Norway, Finland, Sweden and north-west Russia, a few thousand indigenous minority people known as the Saami continue to follow a lifestyle of reindeer husbandry. But their profession is increasingly under threat from a number of developments ranging from climate change to globalisation.

We travelled north to spend some time with this marginalised group to try to understand how they cooperate with each other, from an evolutionary standpoint. The results may help us understand how they can best protect their lifestyle from being crowded out in the future, like many other traditional cultures across the world.

A lifestyle under threat

The Saami are pastoralists who work in groups formed from a mixture of family and others sharing the burden of herding by keeping an eye on each other’s reindeer, protecting them from predators and working the land.

Over the years, this traditional way of life has absorbed many non-traditional features, from snowmobiles to GPS and from smartphones to Game of Thrones (I watched it for the first time with my Saami field assistant).

Jet-setting stallions and high-flying hounds at New York’s Kennedy airport can look forward to a new luxury terminal that will handle the more than 70,000 animals flying in and out every year.

The ARK at JFK, its name inspired by Noah’s biblical vessel, will more than measure up to terminals for humans: horses and cows will occupy sleek, climate-controlled stalls with showers, and dogs will lounge in hotel suites featuring flat-screen TVs. A special space for penguins will allow them mating privacy.

The ARK is billed as the world’s first air terminal for animals.

Set to open next year, the $48m, 178,000-square-foot (16,500-square-meter) shelter and quarantine facility will take in every kind of animal imaginable — even an occasional sloth or aardvark. From The ARK, they’ll head to barns, cages, racetracks, shows and competition venues in the United States and abroad.

Many arriving animals are quarantined for a period of time (for horses, it’s normally about three days) to make sure they’re not carrying contagious diseases. And The ARK is designed to make their stay as pleasant as possible, with hay-lined stalls for up to 70 horses and 180 head of cattle, plus an aviary and holding pens for goats, pigs and sheep.

In the past few years, even as the United States has pulled itself partway out of the jobs hole created by the Great Recession, some economists and technologists have warned that the economy is near a tipping point. When they peer deeply into labor-market data, they see troubling signs, masked for now by a cyclical recovery. And when they look up from their spreadsheets, they see automation high and low—robots in the operating room and behind the fast-food counter. They imagine self-driving cars snaking through the streets and Amazon drones dotting the sky, replacing millions of drivers, warehouse stockers, and retail workers. They observe that the capabilities of machines—already formidable—continue to expand exponentially, while our own remain the same. And they wonder: Is any job truly safe?

Futurists and science-fiction writers have at times looked forward to machines’ workplace takeover with a kind of giddy excitement, imagining the banishment of drudgery and its replacement by expansive leisure and almost limitless personal freedom. And make no mistake: if the capabilities of computers continue to multiply while the price of computing continues to decline, that will mean a great many of life’s necessities and luxuries will become ever cheaper, and it will mean great wealth—at least when aggregated up to the level of the national economy.

It’s been 50 years since Gordon Moore, one of the founders of the microprocessor company Intel, gave us Moore’s Law. This says that the complexity of computer chips ought to double roughly every two years.

Now the current CEO of Intel, Brian Krzanich, is saying the days of Moore’s Law may be coming to an end, as the time between innovations appears to be widening:

The last two technology transitions have signalled that our cadence today is closer to two and a half years than two.

So is this the end of Moore’s Law?

Moore’s Law has its roots in an article by Moore written in 1965, in which he observed that the complexity of integrated circuits, measured in components per chip, was doubling each year. This was later modified to become:

The number of transistors incorporated in a chip will approximately double every 24 months.
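Taken literally, a 24-month doubling is easy to sanity-check. Starting from the Intel 4004's roughly 2,300 transistors in 1971, strict two-year doubling lands in the billions by the mid-2010s, which is the right order of magnitude for modern chips. A back-of-the-envelope sketch, not a claim about any particular product:

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project a transistor count under a strict doubling every
    `doubling_years` years, from a chosen baseline chip."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# 1971 -> 2015 is 44 years, i.e. 22 doublings on a two-year cadence:
# 2,300 * 2**22 is roughly 9.6 billion transistors.
print(f"{transistors(2015):.2e}")

# Krzanich's slower ~2.5-year cadence compounds to far fewer doublings
# over the same span, which is why the cadence shift matters so much.
print(f"{transistors(2015, doubling_years=2.5):.2e}")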

As we drove to our local cinema to see Inside Out, my five-year-old son asked me: “So what is this film going to be about?” “Feelings,” I said, “the feelings that live inside our heads”. He thought for a moment, before replying: “That sounds pretty boring.” It’s true that I could have done better with the pitch, but the film held his attention, and mine, and gave us both a few laughs. While my son giggled at the good old-fashioned slapstick, I could chuckle knowingly at references to Freud, evolutionary psychology and the emotional turmoil of puberty.

Inside Out is the tale of 11-year-old Riley and her traumatic move from Minnesota to a new home in San Francisco. It’s a pretty run-of-the-mill story, but there’s a twist: it’s all seen through the eyes of the five emotions that control the girl’s mental life, from a console inside her brain. Riley’s mental steering committee is headed up by Joy at the start, but as the narrative unfolds, Joy, who has previously tried to keep the four more negative emotions – Anger, Fear, Disgust, and Sadness – away from the controls, gradually learns the special value and importance of sadness.

The psychological model used by the film is essentially the one already popularised with stunning success over several decades by the American psychologist Paul Ekman, the leading proponent of the theory that all human beings, regardless of their historical and cultural milieu, share a repertoire of identical “basic emotions”. Quite understandably for the purposes of an animated movie aimed at children, Inside Out simplifies things further still.

Ekman’s list of cross-cultural basic emotions is longer, including, in addition to the five in the film: contempt, surprise, shame, amusement, satisfaction, contentment and relief, among others. Ekman’s particular concern has been to show that there are certain innate facial expressions, whose emotional meaning can be discerned by anyone, regardless of their culture and education. In this special interest in the bodily and facial accompaniments of feeling, Ekman is a descendant of pioneer emotion theorists of the 19th century, including Charles Darwin and William James.

When people with very high IQs are given moderately difficult tasks, their brains work more efficiently than those of people with slightly above-average IQs.

To describe the effect, Elsbeth Stern, a professor at ETH Zurich, uses the analogy of two cars, one more fuel-efficient than the other: “When both cars are traveling slowly, neither car consumes very much fuel. If the efficient car travels at maximum speed, it also consumes a lot of fuel. At moderate speeds, however, the differences in fuel consumption become significant.”

Scientists refer to this as the neural efficiency hypothesis, although it is by no means undisputed among experts.

While working on her doctoral thesis in Stern’s work group, Daniela Nussbaumer found evidence of this effect in a group of people possessing above-average intelligence for tasks involving what is referred to as working memory.

“We measured the electrical activity in the brains of university students, enabling us to identify differences in brain activity between people with slightly above-average and considerably above-average IQs,” says Nussbaumer.

Past studies of neural efficiency have generally used groups of people with extreme variations in intelligence.

Scientists analysing the latest data from Comet 67P Churyumov-Gerasimenko have discovered molecules that can form sugars and amino acids, which are the building blocks of life as we know it. While this is a long, long way from finding life itself, the data shows that the organic compounds from which life on Earth eventually arose existed in the early solar system.

The results are published as two independent papers in the journal Science, based on data from two different instruments on comet lander Philae. One comes from the German-led Cometary Sampling and Composition (COSAC) team and one from the UK-led Ptolemy team.

The data finally sheds light on questions that the European Space Agency posed 22 years ago. One of the declared goals of the Rosetta mission when it was approved in 1993 was to determine the composition of volatile compounds in the cometary nucleus. And now we have the answer, or at least, an answer: the compounds are a mixture of many different molecules. Water, carbon monoxide (CO) and carbon dioxide (CO2) – this is not too surprising, given that these molecules have been detected many times before around comets. But both COSAC and Ptolemy have found a very wide range of additional compounds, which is going to take a little effort to interpret.

Jono Williams has taken the concept of the man-cave to new heights. Looking something like a giant steel lollipop, Williams' Skysphere is a solar-powered, Android-controlled hideaway perched high above the New Zealand countryside that would put even the most painstakingly decked-out shed to shame.

Designed and built by plastics engineer and graphic designer Jono Williams, the Skysphere was originally conceived as a treehouse. But after toying with his designs for some months, Williams decided the potential for weather damage and difficulties created by tree growth meant that a steel tower would be a better option, with more flexibility in terms of location.

The main structure itself comprises a circular room surrounded by curved hoops and supported by a tall hollow column. Access to the tower-top space is provided by way of welded ladder rungs inside the central column, which is accessed via a motorized door with fingerprint entry at the bottom. Two further doors provide access to the room at the top and to what Williams calls a "rooftop starview platform." The whole structure is mounted upon 17 cu m (600 cu ft) of concrete foundations.

The Skysphere provides a 360-degree viewing window that is 2 m (7 ft) high and has a circumference of 14 m (46 ft). "Too many times I’ve seen tree houses with the really small windows and I'm like 'what’s the point building something in a tree when you can’t appreciate the surroundings?'" Williams explains on the Skysphere website. "With my window, you won’t be short of a nice view."

Spiny grass and scraggly pines creep amid the arts-and-crafts buildings of the Asilomar Conference Grounds, 100 acres of dune where California's Monterey Peninsula hammerheads into the Pacific. It's a rugged landscape, designed to inspire people to contemplate their evolving place on Earth. So it was natural that 140 scientists gathered here in 1975 for an unprecedented conference.

They were worried about what people called “recombinant DNA,” the manipulation of the source code of life. It had been just 22 years since James Watson, Francis Crick, and Rosalind Franklin described what DNA was—deoxyribonucleic acid, four different structures called bases stuck to a backbone of sugar and phosphate, in sequences thousands of bases long. DNA is what genes are made of, and genes are the basis of heredity.
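Those four bases also pair deterministically (A with T, G with C), which is what makes a sequence copyable and heredity possible in the first place. A minimal sketch of reading off a strand's partner:

```python
# The four DNA bases pair in a fixed way: A-T and G-C. Copying a strand
# amounts to reading off each base's partner, which is why a base
# sequence can carry heritable information at all.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """Return the complementary strand, base by base."""
    return "".join(PAIR[b] for b in strand)

print(complement("ATGC"))  # TACG
```

Complementing twice returns the original sequence, which mirrors how one strand can serve as the template for rebuilding the other.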

Preeminent genetic researchers like David Baltimore, then at MIT, went to Asilomar to grapple with the implications of being able to decrypt and reorder genes. It was a God-like power—to plug genes from one living thing into another. Used wisely, it had the potential to save millions of lives. But the scientists also knew their creations might slip out of their control. They wanted to consider what ought to be off-limits.

If you visit Softbank’s flagship store in downtown Tokyo, you may be greeted by a charming, slightly manic new member of the staff: a gleaming white humanoid robot that gestures dramatically, cracks odd jokes, and occasionally breaks out dancing to music emanating from its own body. If you laugh at these antics, and the robot can see your face, it will quite likely giggle along with you.

With significant progress currently being made toward safer and more intuitive industrial robotics, it’s perhaps unsurprising that the idea of personal home robots is gaining momentum. Toy robots that interact with people in simple ways have been around for some time. Now, several companies are developing more capable robots designed to live in the home. Although these machines don’t do physical chores, they aim to win your affections with a mix of charm and a stilted social intelligence.

Softbank sold a thousand of its robots, called Pepper, in Japan in less than a minute this June; a thousand more will go on sale next week. “We have people who are interested in testing them in their businesses, for welcoming people; we have families, also elderly people interested in companionship,” says Magali Cubier, global marketing director for Aldebaran, the French company that developed Pepper for Softbank.

A haul of planets from Nasa's Kepler telescope includes a world sharing many characteristics with Earth.

Kepler-452b orbits at a very similar distance from its star, though its radius is 60% larger.

Mission scientists said they believed it was the most Earth-like planet yet.

Such worlds are of interest to astronomers because they might be small and cool enough to host liquid water on their surface - and might therefore be hospitable to life.

Nasa's science chief John Grunsfeld called the new world "Earth 2.0" and the "closest so far" to our home.

It is around 1,400 light years away from Earth.

John Jenkins, Kepler data analysis lead at Nasa's Ames Research Center in California, added: "It's a real privilege to deliver this news to you today. There's a new kid on the block that's just moved in next door."

The new world joins other exoplanets such as Kepler-186f that are similar in many ways to Earth.

Determining which is most Earth-like depends on the properties one considers. Kepler-186f, announced in 2014, is smaller than the new planet, but orbits a red dwarf star that is significantly cooler than our own. Kepler-452b, however, orbits a parent star which belongs to the same class as the Sun: it is just 4% more massive and 10% brighter. Kepler-452b takes 385 days to complete a full circuit of this star, so its orbital period is 5% longer than Earth's.
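Those quoted figures hang together under Kepler's third law, which in solar units reads a³ = M·P². A quick consistency check, simplified by neglecting the planet's own mass and any orbital eccentricity:

```python
# Kepler's third law in solar units: a^3 = M * P^2, with the semi-major
# axis a in AU, the period P in years, and the stellar mass M in solar
# masses (the planet's own mass is neglected).
P = 385 / 365.25   # 385-day orbit in years: about 5% longer than Earth's
M = 1.04           # host star quoted as 4% more massive than the Sun

a = (M * P ** 2) ** (1 / 3)
print(f"a = {a:.2f} AU")  # ~1.05 AU, a near Earth-like orbital distance
```

A 5% longer year around a slightly heavier star works out to an orbit only about 5% wider than Earth's, which is exactly why the mission scientists describe the distance as "very similar".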

The news that the search for extraterrestrial intelligence is to receive increased funding and data through the $100m (£64m) Breakthrough Listen project is welcome for astrobiologists like myself. Launched by Stephen Hawking, the project particularly helps to allay growing concerns in the field about having too narrow a focus in our search for life in the universe.

Last week I attended the Pathways Towards Habitable Planets conference in Switzerland, where leading scientists in the search for habitable planets shared their results and ideas for the future. What was especially interesting was the relatively strong consensus on the problems with our definition of the habitable zone – the area around a star which is neither too hot nor too cold for orbiting planets to support liquid water on the surface. Even its name is misleading, as we’ll see in a moment. If we aren’t careful, obsessing about this zone could prevent us from reaching our ultimate goal of finding extraterrestrial life.

For as long as we have considered planets orbiting other stars, we have speculated over their propensity to host living organisms in the way that the Earth does. The habitable zone concept has helped astronomers to define where, in all those quintillions of acres of galactic real estate, we should search for planets that might be inhabited.

It may seem sensible to look for extraterrestrial life in regions where any Earth-like planet would have liquid water on the surface. Liquid water is an essential solvent for the chemical reactions that Earth biology relies on. If we find planets with liquid water, they satisfy a key criterion for being conducive to life as we know it.
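To first order, the habitable zone simply scales with stellar brightness: a planet receives Earth-like flux where distance grows with the square root of the star's luminosity. A rough sketch of that scaling, in which the 0.95 and 1.67 AU solar-system bounds are illustrative assumptions (published estimates of the zone's edges vary):

```python
import math

def habitable_zone(luminosity, inner_sun=0.95, outer_sun=1.67):
    """Rough habitable-zone bounds in AU for a star of the given
    luminosity (in solar units), scaling assumed solar-system bounds
    by sqrt(L) so the stellar flux at each edge stays the same."""
    scale = math.sqrt(luminosity)
    return inner_sun * scale, outer_sun * scale

# A star with a quarter of the Sun's luminosity: the zone sits at half
# the distance, since flux falls off with the square of distance.
inner, outer = habitable_zone(0.25)
print(f"{inner:.3f} -- {outer:.3f} AU")  # 0.475 -- 0.835 AU
```

The simplicity of this calculation is part of the problem the article describes: it says where liquid water is energetically possible, and nothing about atmospheres, magnetic fields or orbital stability.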

Yet being in the zone neither automatically means that a planet will have water, nor that it could support life. It needs to have a “healthy” atmospheric composition – usually assumed to mean similar to Earth’s – and ideally a healthy magnetic field to shield it from high-energy particles belched forth by its parent star.

We might also demand that the planet’s orbit and rotation is stable and that any planetary neighbours kindly leave it alone. We don’t have enough data on the planets we have found to date to know if they meet all these criteria. Even if we did, we would most likely have to run a sophisticated computer simulation to model their climate before we could determine what conditions were really like on the surface.

The restoration of natural ecosystems – “rewilding” – ought to be a chance to create inspiring new habitats. However, the movement around it risks becoming trapped by its own reverence for the past: an overly nostalgic position that makes rewilding less realistic and harder to achieve.

The recent launch of Rewilding Britain is certainly exciting and timely. However, George Monbiot’s vision of bringing back 15 iconic species falls short of the rewilding visions being discussed in universities.

These are emerging from advances in functional ecology and Earth system science. The vision of rewilding is more ambitious: it is about restoring ecological processes through reassembling the species that drive them. For example, rooting by wild boars has repercussions throughout a woodland ecosystem. Such animals shouldn’t be reintroduced simply because they were once there, but because they could do something productive in future.

Don’t go native

Monbiot’s quest to restore “lost” species harks back to a past age. However, many conservation scientists are more relaxed concerning the question of “nativeness”. They are willing to consider introducing non-native species if they contribute a functional role in ecosystems, and they view the past not as a benchmark to preserve or replicate but as an inspiration for ecosystem restoration.

For instance, “Monbiot’s 15” omits the aurochs and tarpan, which are classed as extinct. However, in the 1980s, progressive Dutch ecologists realised that their functional analogues survived as cattle and ponies, and that their ecological role could be restored through “de-domestication”.

They set about de-domesticating them at the famous Oostvaardersplassen reserve, a 40-minute drive from Amsterdam. This produced a “Serengeti-like” landscape: a type of nature unknown to Europe since humans settled down and started farming.

Today there are high hopes for technological progress. Techno-optimists expect massive benefits for humankind from the invention of new technologies. Peter Diamandis is the founder of the X-prize foundation whose purpose is to arrange competitions for breakthrough inventions. His aim is “a world of nine billion people with clean water, nutritious food, affordable housing, personalized education, top-tier medical care, and nonpolluting, ubiquitous energy”. The Internet is a special focus for techno-optimists. According to the Google executives Eric Schmidt and Jared Cohen “future connectivity promises a dazzling array of ‘quality of life’ improvements: things that make you healthier, safer and more engaged”. K. Eric Drexler’s preferred instrument of universal prosperity is nanotechnology. He envisages a future in which miniature robots produce “a radical abundance beyond the dreams of any king, a post-industrial material abundance that reaches the ends of the earth and lightens its burden.”

When you think about it, kissing is strange and a bit icky. You share saliva with someone, sometimes for a prolonged period of time. One kiss could pass on 80 million bacteria, not all of them good.

Yet everyone surely remembers their first kiss, in all its embarrassing or delightful detail, and kissing continues to play a big role in new romances.

At least, it does in some societies. People in western societies may assume that romantic kissing is a universal human behaviour, but a new analysis suggests that less than half of all cultures actually do it. Kissing is also extremely rare in the animal kingdom.

So what's really behind this odd behaviour? If it is useful, why don't all animals do it – and all humans too? It turns out that the very fact that most animals don't kiss helps explain why some do.
