People are notoriously bad at recognizing important trends in innovation. This most commonly shows up as the dismissal of some new technology or service as unimportant. Over and over again, people seem to think that the world is static and...

Technology has the potential to transform how we think about our health. The current medical system in most countries is reactive: the patient goes to the doctor with existing symptoms, the physician diagnoses a condition, then prescribes drugs and/or recommends a treatment. In this dynamic, the medical professional makes decisions based on his or her knowledge, and the patient follows orders.

What happens when technology comes into play? What if the patient uses wearables and sensors to monitor vital signs and health parameters? Or what if they crowdsource medical information through patient communities or social media? With the rise of digital technologies such as artificial narrow intelligence, robotics, virtual and augmented reality, telemedicine, 3D printing, portable diagnostics, health sensors and wearables, the entire structure of healthcare, as well as the roles of patients and doctors, will fundamentally shift from the current status quo: from reactive to preventive medicine, and from a hierarchical patient-doctor relationship to a partnership.
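As a toy illustration of this preventive shift, a wearable's continuous readings can be screened against a personal baseline long before a symptom sends anyone to a clinic. This is a minimal sketch with hypothetical thresholds and made-up data, not any vendor's actual algorithm:

```python
# Hypothetical example: flag resting heart-rate readings that drift
# far outside a person's own baseline, as a preventive-monitoring
# wearable might.

def baseline_stats(readings):
    """Mean and standard deviation of a list of bpm readings."""
    mean = sum(readings) / len(readings)
    spread = (sum((r - mean) ** 2 for r in readings) / len(readings)) ** 0.5
    return mean, spread

def flag_anomalies(history, new_readings, k=3.0):
    """Return new readings more than k standard deviations from baseline."""
    mean, spread = baseline_stats(history)
    return [r for r in new_readings if abs(r - mean) > k * spread]

history = [62, 64, 61, 63, 65, 62, 60, 63]  # past resting heart rate (bpm)
today = [63, 61, 88, 62]                    # one reading is unusually high
print(flag_anomalies(history, today))       # -> [88]
```

A real system would use far richer models and clinical validation, but the principle is the same: the data arrive before the doctor's visit, not after.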

While digital health is capable of amazing results (we could enumerate at least 60 things), it does not merely mean the utilization of disruptive technologies; it is, above all, a cultural transformation. It’s a change in attitude, in policy and in the entire system. The transformation described above will not just happen to us; we need to be proactive drivers of such systemic change. Patients who ask questions, who prepare for their visits to the doctor’s office with their own vital signs and parameters, could drive this change. Patients who are not waiting for others to save them, but act as superheroes. And make no mistake, anyone could be a superhero. Starting tomorrow.

Facebook, Google, and Amazon are aiming for new horizons. The playing field of the technology markets alone must be too small for them. They certainly have the capacity to move into new fields. As The Economist writes, their huge stock market valuations suggest that investors are counting on them to double or even triple in size in the next decade.

So, where do they want to utilize their power? Recent moves show they have ambitions in healthcare. Google has made steps forward in the field with Calico. Human Longevity Inc. joined forces with Cleveland Clinic for a human genomics collaboration aimed at disease discovery and making aging a chronic condition. In September 2017, Microsoft announced the launch of its new healthcare division at its Cambridge research facility, to use its artificial intelligence software to enter the health market. Its research plans include monitoring systems that can help keep patients out of hospitals and large studies into conditions such as diabetes.

And what about Amazon? According to CNBC in January 2018, the Seattle-based giant made one of its most high-profile health hires to date: Martin Levine, who previously worked for Iora Health, which focuses on Medicare patients in six US markets. He could be joining Amazon’s internal healthcare group, known as 1492, which is testing a variety of secretive projects. Many analysts suspect that Amazon is considering selling prescription drugs online, as rumored in autumn 2017, or that it might open drug stores in its Whole Foods chain. Some analysts even see Amazon’s popular digital assistant, Alexa, as a possible digital doctor of the future. Amazon, Berkshire Hathaway, and JPMorgan Chase also announced a partnership to cut healthcare costs and improve services for their US employees.

So, US consumers might one day find themselves logging in to Amazon Healthcare Prime, or asking Dr. Alexa what they should do about their cold. But what if we go even further than that? Let’s do a thought experiment. What if Amazon decided to open a clinic in the future?

Interacting with modern-day Alexa, Siri, and other chatterbots can be fun, but as personal assistants, these chatterbots can seem a little impersonal. What if, instead of asking them to turn the lights off, you were asking them how to mend a broken heart? New research from Japanese company NTT Resonant is attempting to make this a reality.

Talking to a machine can be a frustrating experience, as the scientists who’ve worked on AI and language over the last 60 years can attest.

Nowadays, we have algorithms that can transcribe most of human speech, natural language processors that can answer some fairly complicated questions, and Twitter bots that can be programmed to produce what seems like coherent English. Nevertheless, when they interact with actual humans, it is readily apparent that AIs don’t truly understand us. They can memorize a string of definitions of words, for example, but they might be unable to rephrase a sentence or explain what it means: total recall, zero comprehension.

Advances like Stanford’s Sentiment Analysis attempt to add context to the strings of characters, in the form of the emotional implications of the word. But it’s not fool-proof, and few AIs can provide what you might call emotionally appropriate responses.
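To see why context matters so much here, consider the simplest possible approach: assign each word a fixed sentiment score and add them up. This hypothetical toy scorer (not Stanford's actual model, which composes sentiment over parse trees) shows where word-level scoring breaks down:

```python
# Hypothetical bag-of-words sentiment scorer, illustrating why
# word-level sentiment without context fails: negation flips the
# meaning of a sentence, but the word sum never changes.

LEXICON = {"good": 1, "great": 2, "bad": -1, "terrible": -2}

def naive_sentiment(text):
    """Sum per-word scores, ignoring word order and context entirely."""
    return sum(LEXICON.get(word, 0) for word in text.lower().split())

print(naive_sentiment("the movie was good"))      # -> 1
print(naive_sentiment("the movie was not good"))  # -> 1 (negation ignored)
```

Both sentences score identically, which is exactly the kind of failure that context-aware models attempt to fix.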

The real question is whether neural networks need to understand us to be useful. Their flexible structure, which allows them to be trained on a vast array of initial data, can produce some astonishing, uncanny-valley-like results.

On Monday, a Tesla Model S slammed into the back of a stopped firetruck on the 405 freeway in Los Angeles County. The driver apparently told the fire department the car was in Autopilot mode at the time. The crash highlighted the shortcomings of the increasingly common semi-autonomous systems that let cars drive themselves in limited conditions.

This surprisingly non-deadly debacle also raises a technical question: How is it possible that one of the most advanced driving systems on the planet doesn't see a freaking fire truck, dead ahead?

Tesla didn't confirm the car was running Autopilot at the time of the crash, but its manual does warn that the system is ill-equipped to handle this exact sort of situation: “Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles, especially in situations when you are driving over 50 mph (80 km/h) and a vehicle you are following moves out of your driving path and a stationary vehicle or object is in front of you instead.”

Can you imagine travelling to work in a robotic “Johnny Cab” like the one predicted in the cult Arnold Schwarzenegger movie Total Recall? The image from 1990 is based on science fiction, but Mercedes-Benz does have a semi-autonomous Drive Pilot system that it aims to install in the next five years, and Uber is also wagering on a self-driving future. Its partnership with Volvo has been seen as a boost to its ambitions to replace a fleet of self-employed drivers with autonomous vehicles.

The Johnny Cab might belong to futurology, but if MIT academics Erik Brynjolfsson and Andrew McAfee are right, we may all be rejoicing at the prospect of extended leisure time, as robotic technologies free us from the drudgery of work. Except for the fact that big business will be keeping its eye on the bottom line and will often be opting for fast and cheap alternatives.

No work, more play? These are not new concepts. Karl Marx argued technology would help free workers from harsh labour and lead to a “reduction to working time”. In the 1930s Bertrand Russell wrote of the benefits of “a little more idleness” and the economist John Maynard Keynes predicted that automation could enable a shorter working week of less than 15 hours.

At the American Museum of Natural History’s Senses exhibit in New York, there is a room that is completely wallpapered in squiggly black lines, including the floor and ceiling. It’s otherwise a normal room: six sides, two doors. One door is an entrance that warns visitors of potential dizziness, which I scoff at. Me, dizzy? In a room? Pssh. I stepped in without a second thought.

On that cold January afternoon, the squiggly room owned me. As soon as I entered, the squiggly black lines came to life, snaking and pulsing and making me trip. I’m clumsy, sure, but this was a flat room. The lines weren’t uniform, and depending on how they were spaced, some “moved” more than others. Intellectually, I knew the room wasn’t moving. But something deep within me disagreed viciously, and I stumbled out of the room before I threw up.

Robert DeSalle, a genomics expert who curated the exhibit and wrote Our Senses: An Immersive Experience, said that my overconfidence entering the room of squiggly lines was indicative of the fact that we humans are way too confident about the superiority of our senses.

DeSalle and Graziano both agree that technology is affecting humans and will continue to do so. “We’re doing a giant experiment on ourselves,” Graziano said. “People become unmoored socially in cyberspace, where there is no such thing as personal space.”

Picture, if you will, 10 years into the future. What do you see? A modern society where human and machine work hand in hand to solve history’s greatest conundrums, or the imminent collapse of human civilization at the hands of our robot overlords? The second scenario is a little dramatic (and less likely) but nonetheless helps to illustrate the two seemingly polar views of artificial intelligence (AI) – as either a boon for humankind or a creator of even more problems. We might actually see something in between those two scenarios: AI and robots will cause societal growing pains, but both will largely prove valuable in many industries and applications such as health care, cybersecurity and transportation.

One area in particular where AI is poised to make a huge difference (and frankly, already has) is the field of cybersecurity. A recent study from Frost & Sullivan and (ISC)² found that the global cybersecurity workforce will have more than 1.5 million unfilled positions by 2020. It’s clear there’s a huge need for skilled cyber workers in the public and private sectors, but there aren’t enough to go around.

News spread Monday of a remarkable breakthrough in artificial intelligence. Microsoft and Chinese retailer Alibaba independently announced that they had made software that matched or outperformed humans on a reading-comprehension test devised at Stanford. Microsoft called it a “major milestone.” Media coverage amplified the claims, with Newsweek estimating “millions of jobs at risk.”

Those jobs seem safe for a while. Closer examination of the tech giants’ claims suggests their software hasn’t yet drawn level with humans, even within the narrow confines of the test used.

Rewriting Life
2017 Was the Year of Gene-Therapy Breakthroughs
Gene-fixing treatments have now cured a number of patients with cancer and rare diseases.
by Emily Mullin, January 3, 2018

It was a notable year for gene therapy. The first such treatments in the U.S. came to market this year after winning approval from the Food and Drug Administration. Meanwhile, researchers announced more miraculous cures of patients with rare and life-threatening diseases who were treated with experimental therapies.

Decades in the making, gene therapy—the idea of modifying a person’s DNA to treat disease—represents a major shift in medicine. Instead of just treating symptoms like the vast majority of drugs on the market, gene therapy aims to correct the underlying genetic cause of a disease. Doctors and scientists hope these treatments will be a one-shot cure.

Last year, we wrote that 2016 was gene therapy’s most promising year. But 2017 proved to be even bigger.

What technological trends can listeners and artists expect in 2018? Scott Wilson stares into his crystal ball to discover how tech may change the way we consume and make music in 2018, wondering what changes are coming to Spotify, whether SoundCloud will survive and whether Eurorack gear will continue to inspire musicians.

From the insidious rise of “fake news” to the increasing prevalence of AI in our everyday lives, 2017 was actually a pretty terrifying year in terms of technology’s impact on society. In the music industry, streaming continued to dominate the headlines, as SoundCloud struggled to stay afloat and artists pushed back against the allegedly meagre royalties doled out to smaller artists and labels by companies like Spotify.

Technology’s impact on music in 2017 wasn’t all bad. For music-makers at least, the year brought a slew of innovative new apps and gadgets for production, while blockchain technology started to be taken seriously as a way of making sure musicians and everyone involved in the music production and distribution process get paid properly and fairly.

So what technological developments and trends might 2018 hold for artists and listeners? We’ve made some predictions on what the next 12 months might bring to the music industry – the good things and the bad.

1. SoundCloud will survive 2018, but its influence and usability will wane
2. Big changes at Spotify and beyond will impact its users
3. Cryptocurrency hype will hit the music industry, and probably not in a good way
4. The synth clone wars are just getting started
5. Music-making will become easier for beginners than ever

In just two hours, Amazon erased $30 billion in market value for healthcare’s biggest companies

Written by Preeti Varathan

Amazon has disrupted fashion, books, furniture, food, cloud-based storage services, and much else besides. Now, it’s coming for one of the biggest, most complex industries in the US: healthcare.

Today (Jan. 30), Amazon, Berkshire Hathaway, and JPMorgan announced a vague but market-moving plan to launch an independent company that will offer healthcare services to the companies’ employees at a lower cost. The venture, which will be managed by executives from the firms, will be run more like a non-profit than a for-profit entity.

The market value of 10 large, listed health insurance and pharmacy stocks dropped by a combined $30 billion in the first two hours of trading. At the time of writing, insurer MetLife was the hardest hit, down nearly 9% for the day.

Some of Google’s top AI researchers are trying to predict your medical outcome as soon as you’re admitted to the hospital.

A new research paper, published Jan. 24 with 34 co-authors and not peer-reviewed, claims better accuracy than existing software at predicting outcomes like whether a patient will die in the hospital, be discharged and readmitted, and their final diagnosis. To conduct the study, Google obtained de-identified data of 216,221 adults, with more than 46 billion data points between them. The data span 11 combined years at two hospitals, University of California San Francisco Medical Center (from 2012-2016) and University of Chicago Medicine (2009-2016).

While the results have not been independently validated, Google claims vast improvements over traditional models used today for predicting medical outcomes. Its biggest claim is the ability to predict patient deaths 24-48 hours before current methods, which could allow time for doctors to administer life-saving procedures.

Secret Domination
The ocean is crowded. As many as 10 million viruses can be found squirming in a single millilitre of its water, and it turns out they have friends we never even knew about.

Scientists have discovered a previously unknown family of viruses that dominate the ocean and can’t be detected by standard lab tests. Researchers suspect this viral multitude may already exist outside the water — maybe even inside us.

Over the past year, it's become pretty clear that machines can now beat us in many straightforward zero-sum games. A new study from an international team of computer scientists set out to develop a new type of game-playing algorithm – one that can play games that rely on traits like cooperation and compromise – and the researchers have found that machines can already deploy those characteristics better than humans.

Chess, Go and Poker are all adversarial games where two or more players are in conflict with each other. Games such as these offer clear milestones to gauge the progress of AI development, allowing humans to be pitted against computers with a tangible winner. But many real-world scenarios that AI will ultimately operate in require more complex, cooperative long term relationships between humans and machines.

Lifeguards testing out new drone technology in Australia have saved two people stranded off the coast of New South Wales state, as spotted by Quartz. The drone footage shows a bird’s-eye view of the ocean before the drone ejects the yellow flotation device, which inflates when it hits the water. The two teenage boys were caught about 700 meters (0.4 miles) offshore at Lennox Head in a swell of around three meters (9.8 feet). They were able to grab onto the flotation device and swim to shore.

“I was able to launch it, fly it to the location, and drop the pod all in about one to two minutes,” lifeguard supervisor Jai Sheridan told reporters. A government official confirmed the rescue took only 70 seconds, compared to the average six minutes it would take for a lifeguard to reach the swimmers. The drones were reportedly only unveiled that morning before being put to use, according to the Australian Broadcasting Corporation.

In 2018, blockchain will create a new wave of major disruption in media-content distribution. The immutability and "trustless" nature of the blockchain means that it can be used in instances where record-keeping and auditable data is key, including data about who owns what assets, such as music and movies. Once you have verified the validity of an asset entered into the "chain" in the first place, continuity is ensured from then on.

Already, blockchain's potential to disrupt content rights distribution is coming to fruition in the music business. Streaming services such as Spotify and Deezer require an additional layer of intermediaries to ensure that the artists' rights management process is conducted properly. As a result, content creators need different contracts in each jurisdiction - often via multiple intermediaries - to protect their copyright and to enable distribution of their content.

But putting content on a blockchain, and having the connectivity for peer-to-peer transactions - via a digital currency such as Bitcoin, or smart contracts on a platform such as Ethereum - allows complete transparency and automation of execution, as well as direct payments to copyright holders. With these elements all in place, new players such as Musicoin and Revelator propose using the blockchain to simplify digital-rights management by bypassing the usual intermediaries, thus enabling micro-payments.
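The "immutability" these proposals rely on comes from a simple mechanism: each record commits to the cryptographic hash of the record before it. A minimal, hypothetical sketch of such a hash chain (greatly simplified, with made-up ownership records and none of the consensus machinery a real blockchain adds):

```python
# Hypothetical minimal hash chain: each entry stores the hash of its
# predecessor, so any retroactive edit invalidates every later link.

import hashlib
import json

def entry_hash(prev_hash, payload):
    """Canonical SHA-256 hash of an entry's link and payload."""
    blob = json.dumps({"prev": prev_hash, "payload": payload},
                      sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def make_entry(prev_hash, payload):
    return {"prev": prev_hash, "payload": payload,
            "hash": entry_hash(prev_hash, payload)}

def verify(chain, genesis="genesis"):
    """True if every entry still hashes correctly and links to its predecessor."""
    prev = genesis
    for e in chain:
        if e["prev"] != prev or e["hash"] != entry_hash(e["prev"], e["payload"]):
            return False
        prev = e["hash"]
    return True

chain = [make_entry("genesis", {"track": "Song A", "owner": "Artist X"})]
chain.append(make_entry(chain[-1]["hash"], {"track": "Song A", "owner": "Label Y"}))
print(verify(chain))                        # -> True
chain[0]["payload"]["owner"] = "Imposter"   # tamper with an ownership record
print(verify(chain))                        # -> False
```

Once the first record is verified, every later record inherits that guarantee, which is what the article means by continuity being ensured from then on.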

Mymanu's Clik+ headphones come with a big promise: live translation between 37 languages. We saw something similar recently from Google and Bragi, but both of those are operating as a middleman, serving up the audio with an app doing the heavy lifting. Let's be clear, Mymanu also uses an app for translation, but the Clik is designed to bring us one step closer to the app-free translation device we really want.

Prototypes of the Clik have been around for a while, but here at CES we were able to finally test it for ourselves. After a successful Indiegogo campaign last fall, the headset is poised to go into production, with an expected delivery date of March this year.

In an interview at Exponential Medicine in San Diego, Singularity University faculty and speaker Dr. Divya Chander takes a look at how emerging technologies are letting us peer inside the human brain like never before.

As an anesthesiologist and neuroscientist at Stanford University, Chander specializes in measuring brain activity and depth-of-consciousness in patients using tools like high-frequency EEG technology.

During her interview, Chander outlined how CRISPR gene editing and stem cells are being applied in neuroscience. She said, “We are beginning to rewire the brain from the inside out. We’re cutting out things that don’t work at the level of the nucleus. We’re actually correcting diseases before they even express themselves.”

As excited as Chander is about the advances in her field, she’s well aware of the precautions we need to be taking while innovating in neuroscience.

Chander believes this is an ethical conversation that needs to happen across the board and in every country. She warns we can’t just leave the conversation to neuroscientists or entrepreneurs alone.

“One of our biggest ethical problems is that all of this technology that’s hacking the neural code can non-invasively read brainwaves in a way we’ve never been able to do before,” Chander said. “There’s a group at the University of Alabama that actually found that if you’re wearing an EEG cap and someone’s typing in a password, you can hack the password. Using optogenetics we can implant false memories into mice.”
